Dataset schema:
  entry_id          string    (length 33)
  published         string    (length 14)
  title             string    (length 18-175)
  authors           sequence  (length 1-1.12k)
  primary_category  string    (114 classes)
  categories        sequence  (length 1-8)
  text              string    (length 5-364k)
http://arxiv.org/abs/2407.13415v1
20240718113825
Empirical Analysis of Sri Lankan Mobile Health Ecosystem: A Precursor to an Effective Stakeholder Engagement
[ "Kenneth Thilakarathna", "Sachintha Pitigala", "Jayantha Fernando", "Primal Wijesekera" ]
cs.CR
[ "cs.CR", "cs.CY", "cs.HC", "cs.SE" ]
^1University of Colombo School of Computing, ^2University of Kelaniya, ^3Heritage Partners, ^4ICSI, ^5UC Berkeley § ABSTRACT Sri Lanka recently passed its first privacy legislation covering a wide range of sectors, including health. As a precursor for effective stakeholder engagement in the health domain to understand the most effective way to implement legislation in healthcare, we have analyzed 41 popular mobile apps and web portals. We found that 78% of the tested systems have third-party domains receiving sensitive health data with minimal visibility to the consumers. We discuss how this will create potential issues in preparing for the new privacy legislation. Empirical Analysis of Sri Lankan Mobile Health Ecosystem: A Precursor to an Effective Stakeholder Engagement Kenneth Thilakarathna,^1, 2 Sachintha Pitigala,^2 Jayantha Fernando,^3 Primal Wijesekera,^4,5 July 22, 2024 ============================================================================================================== packed_enum packed_item § INTRODUCTION Sri Lanka's mature and effective public health ecosystem has proven equally efficient compared to major economies. Given the open and unrestricted nature of the health ecosystem, patients can consult any medical practitioner at will without worrying about insurance or the cost. Inadvertently, this has also created a vibrant digital ecosystem where consumers in Sri Lanka have long enjoyed the comfort of scheduling appointments over the phone digitally. As the first in the region, Sri Lanka recently passed its first nationwide privacy legislation <cit.>. Europe and the US, despite having regulations in place for some time, are facing continuous threats of breaching consumer privacy expectations and regulatory violations. Research at the intersection of Tech, Policy, and Legal has shown various reasons, such as lack of awareness, miscomprehension of the regulatory requirements, miscommunication, the lack of transparency in the third-party code and systems used by developers, and finally, lack of a strong institutional framework due to budgetary and bureaucratic constraints. This long list of reasons has produced the notion of law in the books vs. law in the code. Sri Lanka is uniquely placed in this journey since the country is at an early phase of setting up the institutional framework for the implementation of privacy legislation enacted recently. The core objective of the work is to utilize the golden period and ensure that relevant authorities and stakeholders in the health domain are properly informed by empirical evidence and aware of their responsibilities towards a privacy-conscious ecosystem. § RELATED WORK Prior studies have investigated potential regulatory violations in mobile apps in the US <cit.> while another line of work examines the compliance of mobile health apps in the context of GDPR <cit.>. The focus of privacy studies has delve into the healthcare ecosystem through multiple subsystems such as analyzing privacy of femtech health <cit.>, mobile-based COVID-19 tracing applications for their privacy implications <cit.>, operation of mobile therapy apps, a subset of the telehealth app ecosystem <cit.> to name a few. Another research trajectory assesses the availability, scope, and transparency of mobile health app privacy policies <cit.> where there are studies focused on understanding the developers' role in health privacy and challenges faced in producing complied software solutions <cit.>. 
In the US, legal authorities such as the FTC and HHS scrutinize mobile health apps for compliance <cit.>. FTC has used the HBNR <cit.> and FTC Act <cit.> to pursue these actions as these apps traditionally fall outside the scope of HIPAA <cit.>. § BACKGROUND §.§ What is a health app? Definitions of "health app," "medical app," or "wellness app" can overlap, and there is no universally accepted definition for these terms. For the context of this work, we define a health app as an Android-based mobile app or a mobile-ready website that lets consumers (patients) communicate with a healthcare provider and schedule an appointment or give specific information or guidance on a specific health condition such as diabetes, or online pharmacy. §.§ Applicable Privacy Legislation In 2022, the government of Sri Lanka passed its Personal Data Protection Act, No. 9 of 2022 (PDPA), which is a commendable starting point for a privacy ecosystem <cit.>. PDPA is a comprehensive privacy legislation such as CCPA or GDPR; however, Health data is defined under a special category with extra protection. § METHODOLOGY §.§ Mobile Analysis Tool We focused our analysis on the Android ecosystem, given it is the most prevalent mobile operating system <cit.> and our in-house tools on Android. In our custom Android platform (based on v9.0-_r39), we modified the platform to enable the real-time monitoring of apps' access to protected resources (e.g., location data, address book contacts, etc.) and sharing over the network. We have also implemented stringent guards to hide our instrumentation from the latest array of anti analysis techniques <cit.>. Our network interception occurs at two different points: one at the default Android network stack at conscrypt library level just before the SSL_read and SSL_write, and one at the webview library level intercepting XHR requests. Our instrumentation does not have any false positives, but there is a probability of false negatives due to the coverage in app execution – the data should be treated as the absolute lower bound of what is happening in the real world. §.§ App Corpus We looked for Android apps and websites that let a patient make an appointment with a physician, provide information or guidance on specific conditions, online pharmacy systems, rehabilitation apps, or sites for leading hospitals in the Sri Lanka Google Play Store from 01 June 2024 until 07 June 2024 and in the Google website. We ended up testing 41 Android-based apps and websites. The dataset includes 5 Android apps and 37 websites accessed through Android: 11 hospitals, 14 medical clinics, four information sites targeting specific conditions, seven online pharmacies, and five physician scheduling portals. The data set also has different types of health apps focusing on fertility, diabetes, cancer, mental health, etc. From here onward, we refer to both apps and websites as health systems. §.§ Testing Procedure We tested the apps and websites using Pixel 3a phones running our instrumented Android. We explored the apps/websites as much as possible go over as many options and links as possible. Whenever possible, we also searched for specific conditions or a specific physician to see whether such information is shared with third-parties. Android offers more sensitive resources and functionalities than the desktop environment. Hence, understanding how these websites behave in the mobile ecosystem is important. 
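To illustrate the post-processing that such an instrumented setup enables, the following is a minimal sketch of scanning decoded network traces for the synthetic identifiers and search terms entered during testing, and flagging the domains that received them. The JSON-lines log format, field names, marker strings, and first-party domain are our assumptions for illustration, not the authors' tooling.

```python
import json
from collections import defaultdict

# Synthetic values entered during manual testing (illustrative examples).
SYNTHETIC_MARKERS = [
    "Nimal Perera",        # synthetic patient name
    "help on anxiety",     # sensitive page visited
    "metformin",           # synthetic search query
]

FIRST_PARTY_SUFFIXES = (".example-hospital.lk",)  # hypothetical first-party domain


def scan_traces(path):
    """Scan a JSON-lines dump of decoded requests for synthetic test strings.

    Each line is assumed to look like:
      {"app": "...", "host": "...", "url": "...", "body": "..."}
    """
    hits = defaultdict(set)  # (app, host) -> markers observed
    with open(path) as fh:
        for line in fh:
            rec = json.loads(line)
            payload = " ".join(str(rec.get(k, "")) for k in ("url", "body"))
            for marker in SYNTHETIC_MARKERS:
                if marker.lower() in payload.lower():
                    hits[(rec.get("app", "?"), rec.get("host", "?"))].add(marker)
    return hits


if __name__ == "__main__":
    for (app, host), markers in scan_traces("decoded_traces.jsonl").items():
        kind = "first-party" if host.endswith(FIRST_PARTY_SUFFIXES) else "third-party"
        print(f"{app}: {host} received {sorted(markers)} ({kind})")
```

A table of third-party recipients can then be assembled by grouping the flagged hosts per health system.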
During the app testing, the researcher recorded all the sensitive data used, such as synthetic names, the synthetic health data entered into questionnaires, and sensitive pages visited, such as "help on anxiety." Once the network traces are decoded, we use a script to look for any transaction containing the specific strings used in the testing. §.§ Health Data The definition of health data under PDPA <cit.> is "personal data related to the physical or psychological health of a natural person, which includes any information that indicates his health situation or status;" and HHS recently issued guidance <cit.> identifying that even sharing the app name along with a unique ID of the consumer would fall within the definition of Personal Health Information (PHI), as it identifies the individual's past, present or future health, health care, or payment. Keeping these new legal principles in mind, we labeled the following items as health data, since any of them can be used to infer a patient's health conditions or health interests: conventional health data, app usage (apps targeting specific medical conditions), search queries within health apps (searching for a medical condition, a medication, etc.), and navigation within the app (viewing pages focusing on a specific medical condition). §.§ Ethical Considerations Based on a number of prior conversations with our Institutional Review Board on large-scale mobile app analysis for compliance, we determined that we do not require an IRB review for this study. We are not examining any human subjects, only the app execution. §.§ Ecological Validity Android apps are likely to detect their geolocation based on the IP address and might change their behavior accordingly. All Sri Lankan-based apps were executed in Sri Lanka to preserve the ecological validity of the test environment. Only one app stopped its execution after detecting the underlying custom Android OS. § DATA SHARING PRACTICES Physician Information: Since the pandemic, systems that allow scheduling appointments online have soared, and most of them went unchecked until recently. Online physician scheduling is quite popular in Sri Lanka, and these systems play a crucial role in the health compliance ecosystem. Our dataset has seventeen systems that allow patients to search for physicians, get scheduling information, or make an appointment directly. We observed that ten health systems (58.82% of the 17 health systems with the scheduling feature) shared sensitive physician information with third parties. Physician information is a highly sensitive data point because it can divulge a patient's condition. Table <ref> lists all the third-party recipients of such sensitive information. The last column denotes whether the respective recipient will treat the health data without violating any regulatory or consumer privacy expectations. Usage Information: Event reporting in online systems is usually harmless for consumers. In a health system, however, it can disclose sensitive information, such as a potential interest or a condition a patient has. For example, repeated visits to infertility pages can expose a consumer's highly sensitive condition. Hence, usage information should be shared carefully and with masking. This concern is confirmed by HHS (in the US) as per their latest guidance on regulatory expectations <cit.>. In our dataset, we have 27 (65.85% of our test-pool) health systems sharing highly sensitive usage information with third parties who are likely to use such information for user tracking and profiling.
This usage information includes visits to infertility treatment pages, mental illness tests, physician pages specializing in specific conditions. In an ideal setup, patients are highly unlikely to share their interests and why they visit such pages. Table <ref> lists all the third-party recipients of such sensitive information. The last column denotes whether the recipient will treat the health data without violating regulatory or consumer privacy expectations – this is determined based on their public documents. We observed that nine health systems shared the search queries we used during the testing with third-party recipients—Google Analytics (8 apps) and Facebook (2 apps) are the most common recipients. Similar to app usage, search queries are sensitive, such as physicians, symptoms, and a particular medicine, all of which could expose sensitive conditions associated with the patient. We also observed one health system sending sensitive health information over the Internet unencrypted, jeopardizing the confidentiality and integrity of the patients' health data. Out of the 33 health systems sharing data with third parties, only nine health systems (27.27%) have acknowledged third-party data sharing in their privacy policies, and, overall, 27 ( 65% of our test-pool) health systems did not have a privacy policy. Except for two Android apps (which shared AAID with Facebook and Google Analytics), none of the other systems shared soft or persistent IDs with third parties. From a developer perspective, no sensitive health data shared had an ID linking to the patient. The biggest caveat, however, is the patient's IP address. Even HIPAA labels the IP address as one of the eighteen HIPAA identifiers that can be used to link to a specific person. The extent of the linkage depends on the nature of the mobile phone's connection. If it is a home WIFI, it is easy to link each data transfer to a specific person along with the IP address and many other meta information. § REGULATORY PREPARATION The main objective of this work is to understand the current status of the mobile health ecosystem as a precursor to understanding the cost, responsibilities, and challenges faced by developers and health organizations in complying with the new Privacy legislation. The preparation has to be done in two ways: by managing consent and by providing data subject rights. We recently conducted a focus group to understand stakeholder perspectives on the new privacy legislation <cit.>. Participants raised both of these tasks as sources of cost in the process of complying with new legislation. None of the health systems we analyzed have any sort of consent management (except for one website that had cookie consent). This will be one of the first major changes for health systems to properly manage consumers' consent. There are several challenges to effectively obtaining an informed consent from patients. PDPA requires the controller to properly convey one or more predefined purposes to the data subject before obtaining consent unless exempted under PDPA guidelines such as legal obligations. Given the widespread use of third parties receiving sensitive health data, it will be a challenge for health systems to set a predefined purpose properly. Especially once a data controller shares data with the likes of Facebook and Google Analytics, it is hard to dictate how they are going to use the health data. Another major change would be to obtain consent for cross-border data transfer. 
All third-party data recipients are not based in Sri Lanka; hence, as per PDPA clauses, health systems need to obtain consent from the users of the health systems properly. The heavy use of third-party trackers such as Google Analytics, Facebook, and DoubleClick further complicates since controllers need to properly expose how the data recipients are going to profile customers based on health data. Apart from the knowledge that most of such data is used in Advertising, it is a black box for outsiders to understand how the whole mobile ad ecosystem behaves, leaving the data controllers in Sri Lanka in the dark. PDPA has a separate clause for profiling, especially when using special category data such as health. PDPA (similar to the clauses in GDPR and CCPA) emphasizes providing data subjects (patients in this context) with an array of rights: access, erase, and withdraw consent. Our prior work on implementation of data subject requests (DSR) in CCPA <cit.> showed controllers have a hard time accurately responding to DSR because controllers are not fully aware of how third parties collected data from data subjects while executing within the controller's app. Given the widespread use of third-party trackers, Sri Lankan health systems will face the same issue. PDPA sets clear guidelines on what information should be available for the consumer for transparency. Privacy policy is one of the key techniques to properly convey data practices, purposes, and other relevant information. Most health systems in Sri Lanka do not have any privacy policy. We believe this will be an easy first step for many organizations to publish an accurate privacy policy. Still, this step will also be affected by third-party data collection's opaque nature and their purposes. Literature has looked into how developers understand their regulatory responsibilities and the gaps in their comprehension of the regulations <cit.>. Further work is needed to understand why developers share health data with third parties that have publicly asked not to share heath data with them [https://support.google.com/analytics/answer/13297105?hl=en] [https://web.facebook.com/business/help/361948878201809?id=188852726110565&_rdc=1&_rdr]. Literature has proposed solutions such as the use of SBOM to communicate compliance restrictions <cit.>. This is one of the overarching objectives of this work, i.e., to work with stakeholders to understand how to effectively implement privacy legislation while helping the likes of developers of health systems providing proper guidance and tools. The Data Protection Authority of Sri Lanka is keen to work with stakeholders to understand their perspectives and figure out the best way to roll out the implementation with the help of relevant parties such as health organizations, patients, developers of health systems, and legal practitioners. We have already conducted one stakeholder engagement to understand the cost, challenges, and opportunities that lie ahead in the compliance process. We hope this work will set an effective precursor for engaging with professionals in the healthcare domain to figure out their costs and challenges in preparing for the new legislation. The objective of this is not to blame anyone but to figure out where the help is most needed. This work was supported by the U.S. National Science Foundation (under grant CNS-2055772 & CNS-2217771 ). plain
http://arxiv.org/abs/2407.12386v1
20240717080740
An Empirical Extinction Curve Revealed by Gaia XP Spectra and LAMOST
[ "Ruoyi Zhang", "Haibo Yuan", "Bowen Huang", "Tao Wang", "Lin Yang", "Gregory M. Green", "Xiangyu Zhang" ]
astro-ph.GA
[ "astro-ph.GA" ]
0000-0003-1863-1268]Ruoyi Zhang Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China; yuanhb@bnu.edu.cn Department of Astronomy, Beijing Normal University, No.19, Xinjiekouwai St, Haidian District, Beijing 100875, China Max Planck Institute for Astronomy, Königstuhl 17, Heidelberg D-69117, Germany 0000-0003-2471-2363]Haibo Yuan 0000-0002-1259-0517]Bowen Huang 0000-0002-4878-1227]Tao Wang Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China; yuanhb@bnu.edu.cn Department of Astronomy, Beijing Normal University, No.19, Xinjiekouwai St, Haidian District, Beijing 100875, China 0000-0002-9824-0461]Lin Yang Department of Cyber Security, Beijing Electronic Science and Technology Institute, Beijing 100070, China 0000-0003-3112-3305]Xiangyu Zhang Max Planck Institute for Astronomy, Königstuhl 17, Heidelberg D-69117, Germany § ABSTRACT We present a direct measurement of extinction curves using corrected XP spectra <cit.> of the common sources in DR3 and LAMOST DR7. Our analysis of approximately 370 thousand high-quality samples yielded a high-precision average extinction curve for the Milky Way. After incorporating infrared photometric data from 2MASS and WISE, the extinction curve spans wavelengths from 0.336 to 4.6 μm. We determine an average R_55 of 2.730 ± 0.007, corresponding to = 3.073 ± 0.009, and a near-infrared power-law index α of 1.935 ± 0.037. Our study confirmed some intermediate-scale structures within the optical range. Two new features were identified at 540 and 769 nm, and their intensities exhibited a correlation with extinction and . This extinction curve can be used to investigate the characteristics of dust and enhance the extinction correction of Milky Way stars. A Python package for this extinction curve is available <cit.>. § INTRODUCTION The impact of dust on starlight in the universe is profound. Dust grains absorb electromagnetic radiation across a wide spectrum, ranging from ultraviolet (UV) to infrared (IR) wavelengths, as well as X-ray wavelengths. Subsequently, they re-emit the absorbed energy in the form of infrared and microwave radiation <cit.>. Re-radiation in the IR band could redistribute more than 30% of the energy of starlight in the universe <cit.>. Moreover, dust grains scatter photons in the X-ray, UV, optical, and near-infrared bands. The combined effect of these obscuration mechanisms on starlight, i.e., absorption and scattering, is known as dust extinction. The variation of extinction with wavelength is the extinction curve or the extinction law. Our understanding of interstellar dust largely relies on the study of extinction and its wavelength dependence. The overall shape (particularly the slope) of the extinction curve provides crucial insights into the size distribution of dust grains, while its spectral features reflect the chemical composition of the dust. Furthermore, the extinction curve serves as an indispensable correction tool for understanding the intrinsic properties of stars and galaxies. OB stars are excellent probes of the extinction curve because their luminosity allows them to be observed out to large distances and through significant dust extinction. Additionally, their spectra contain few spectral lines, thereby matched stellar templates are easy to find. Consequently, the majority of extinction curve measurements to date have relied on the “pair method" for OB stars. 
Pioneering studies by <cit.> used UV spectra from the International Ultraviolet Explorer (IUE) satellite for 45 reddened OB stars and 10 standards to parameterize the UV extinction curve and the 2175 Å bump for the first time. Subsequently, <cit.> analyzed about 30 OB stars with IR photometry from <cit.>. They found that the total-to-selective extinction ratio could account for most variations in the extinction curves from UV to optical wavelengths. The extinction law was further validated by a much larger sample of 417 OB stars across diverse interstellar environments <cit.>. Subsequently, numerous studies have refined the Galactic extinction model using varied samples and wavelength ranges, employing diverse data analysis techniques <cit.>. These models, parameterized by the total-to-selective extinction ratio, have progressively revealed more detailed structures such as intermediate-scale structures (ISS). Some studies found that ISSs are independent of this ratio <cit.>, while <cit.> reached the opposite conclusion based on the same data. These studies were carried out using multi-wavelength spectroscopic and photometric data from dozens to hundreds of OB stars. However, the relatively limited number of sightlines somewhat constrains our ability to extensively sample various interstellar environments. Consequently, it becomes challenging to conduct comprehensive statistical analyses that explore the relationships between the properties of dust, interstellar environments, and other aspects of the interstellar medium. With the advent of large-scale sky surveys, measuring this ratio by combining spectroscopic and photometric data for a much larger sample of stars has overcome many limitations. <cit.> utilized APOGEE spectra in conjunction with optical and near-infrared photometry from Pan-STARRS 1, 2MASS, and WISE to map its distribution across 150,000 sightlines near the Galactic midplane. They found that it does not correlate with dust column density but exhibits a negative correlation with the far-infrared dust emissivity index. <cit.> estimated its value for approximately 3 million sightlines using datasets from LAMOST and multiple photometric sky surveys, thereby constructing a high-resolution 2D map that covers two-fifths of the sky. They observed significant spatial correlations between this ratio and molecular clouds, while also verifying that it is generally uncorrelated with dust extinction. These investigations enhance our comprehension of dust characteristics in various interstellar conditions and provide accessible maps for accurate extinction corrections. However, utilizing photometric data to calculate this ratio only offers an indirect assessment of dust properties, as it does not directly provide the extinction curve. Additionally, extinction measurements derived from broadband photometry are impacted by bandwidth effects <cit.>, wherein the observed spectral energy distribution (SED) of stars influences the derived values. Hence, there remains a need for large-scale extinction curve measurements to diminish selection biases and statistically explore the commonality of dust properties. This study leverages the most recent XP (BP and RP) slitless spectral data, along with stellar parameters from LAMOST, to ascertain the extinction curve of the Milky Way. This paper is structured as follows: Section <ref> outlines the data sources utilized in our analysis.
In Section <ref>, we compute the intrinsic spectra and extinction curve using the star-pair method. Section <ref> presents the calculation and validation of a median extinction curve and the identification of several intermediate-scale structures. Our findings are summarized in Section <ref>. § DATA The Data Release 3 <cit.> catalog was obtained by the first 34 months of continuous all-sky scans of the mission of the European Space Agency. DR3 has released mean low-resolution BP/RP spectra for approximately 220 million sources, representing the largest dataset of low-resolution spectra to date. The XP (BP and RP) spectra cover the wavelength range of 336 – 1020 nm with a spectral resolution of R ∼ 20 – 70. <cit.> corrected the systematic errors in the Gaia XP spectra by comparing them against external spectral libraries. After comprehensively correcting the systematic errors depending on color BP-RP, G, and reddening, they achieved an internal precision of 1–2 percent. We have incorporated their systematic error corrections in our analysis. LAMOST DR7 accurately measures stellar parameters with the LAMOST Stellar Parameter Pipeline (LASP; ). For our calculations, we utilized the effective temperature , surface gravity , and metallicity parameters from LAMOST. § METHOD The star-pair algorithm, when applied to photometric survey data, can be used to obtain the intrinsic colors and reddening of stars based on the assumption that stars with the same stellar atmosphere parameters have the same intrinsic color <cit.>. Similarly, we posit that stars with matching stellar parameters should also share identical intrinsic spectral profiles. By employing the star-pair method on XP spectra, we can deduce intrinsic spectra and extinction curves. For each reddened target star, the intrinsic spectrum is derived from its control pairs, also referred to as comparison pairs, which have similar atmospheric parameters and very low extinction. The extinction curve is then calculated by subtracting the intrinsic spectrum from the observed spectrum. The target sample, i.e., the total sample, contains 5,056,929 common sources with XP spectra from Gaia and LAMOST. While the control sample stars are carefully selected based on the following criteria: * The vertical distance from Galactic disk |Z| > 300 pc and the ecliptic latitude | elat | > 10. This ensures that the control sample remains unaffected by Galactic and potential zodiacal dust. It is noteworthy that the Galactic dust disk's scale height is approximately 103 pc <cit.>. * The signal-to-noise ratio (SNR) of the LAMOST spectra is required to be greater than 20 to guarantee the accuracy of the derived stellar atmosphere parameters. * The spectrum's systematic error correction must be reliable <cit.>. * ≤ 0.01mag; if < -0.8, < 0.01-(+0.8)/100 mag; if > 6300 K, < 0.01+(-6200)/(4×10^4) mag; if < 5500 K, < 0.01-(-5500)/(3×10^5) mag. We first adopted a stringent limit for the reddening, then relaxed the restriction in the low-temperature, high-temperature, and metal-poor zones, to obtain a sufficient number of control samples. At the high temperature end, the extinction limit is at a maximum of 0.1 mag. Finally, we selected 67,650 (1.3% of the total) control stars that uniformly cover various stellar parameters. To mitigate random errors, we averaged the spectral flux within 542–558 nm, designating the mean value as the flux at 550 nm, denoted as f_550. 
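As a concrete illustration of the normalization just described, the snippet below averages the flux in the 542-558 nm window to obtain f_550 and forms the relative flux f_λ/f_550. The wavelength grid and toy SED are placeholders, not the actual XP sampling.

```python
import numpy as np

def relative_flux(wavelength_nm, flux):
    """Normalize a spectrum by f_550, the mean flux in the 542-558 nm window."""
    band = (wavelength_nm >= 542.0) & (wavelength_nm <= 558.0)
    f550 = flux[band].mean()
    return flux / f550

# Example with a toy spectrum on a 2 nm grid (placeholder SED shape):
wl = np.arange(336.0, 1021.0, 2.0)
flux = np.exp(-(wl - 700.0) ** 2 / (2 * 200.0 ** 2))
rel = relative_flux(wl, flux)
```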
Subsequently, for all observed spectra, we calculated the relative flux, f_λ/f_550, and dereddened the control sample therein. Initially, we employed the extinction curve from <cit.> for the extinction correction. Once we acquired the median extinction curve based on the XP spectrum, it superseded the initial extinction curve for further correction. This process is iterated until the results are stabilized. For each star within the target sample, control pairs are determined from the entire control sample through a "pairing" process. A star is considered a suitable control pair if the differences in its parameters relative to those of the target star fulfill the following criteria: Δ < 30 + [ 0.005 × ( 6000 - ) ]^2 K, Δ < 0.5 dex, and Δ < 0.3 dex. Note that the threshold for temperature difference is minimized at 6000 K and is gradually adjusted to be more lenient in regions where the control sample is less dense. The maximum Δ is about 340 K at the highest temperature, 9500 K. The typical number of control stars assigned to each target star is approximately 800, and we set the minimum required number as ten. To derive the intrinsic spectrum of the target star, we modeled the relationship between the relative flux and the stellar parameters of the local control pairs for each wavelength. This was accomplished using the function f_λ/f_550 = a × + b × + c × + d, where f_λ represents the dereddened flux at wavelength λ, and a, b, c, and d are parameters to be determined. With these parameters established for each wavelength, the relative flux for any given star can be calculated by inputting its stellar parameters. Given that the control sample is a subset of the target sample, the intrinsic fluxes estimated by the star-pair method are accessible, in addition to the intrinsic fluxes directly dereddened using the extinction curve. We then calculate the differences between these two sets of intrinsic fluxes at 400, 500, 650, 750, and 900 nm for the control sample, iteratively excluding 3σ outliers. In this study, we use monochromatic normalization to mitigate the bandwidth effects in the extinction measurements. Consequently, we adopt 440 nm and 550 nm as the approximations for the Johnson B and V bands, respectively <cit.>. To reduce the significant observational uncertainty in XP spectra,we average the extinction over a 16 nm range centered at both 440 nm and 550 nm, thereby determining the monochromatic extinction. The normalized extinction at wavelength λ, relative to that at 550 nm can be calculated by A_λ - A_55 = -2.5 logF^'_obs(λ)/F^'_intrinsic(λ), where A_λ and A_55 is the extinction at wavelength λ and 550 nm, F^'(λ), defined as F(λ)/F(55), represents the flux at λ relative to the flux at 550 nm, the subscripts “obs" and “intrinsic" respectively denote the observed and intrinsic SED, respectively. We define the normalized extinction curve, in alignment with , as: k(λ-55) ≡A_λ - A_55/A_44 - A_55 and the definition of the ratio of total-to-selective extinction R_55 as R_55≡A_55/A_44 - A_55 = A_55/E(44-55). Here, k(λ-55) and R_55 serve as analogs to the more commonly used extinction normalizations, k(λ-V) and , respectively. Figure <ref> illustrates the calibrated observed spectra, the intrinsic spectra, and the extinction curves for a giant star and a main-sequence star. The observable fluctuations, or "wiggles", present in the observed spectra and the extinction curves are primarily the artifacts in the XP spectra due to long-range noise correlations <cit.>. 
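The normalization defined above can be written compactly in code. The sketch below computes A_λ − A_55 from observed and intrinsic relative SEDs (both already divided by their 550 nm flux) and normalizes by the 16 nm band average around 440 nm to obtain k(λ−55); the array layout is an assumption for illustration.

```python
import numpy as np

def band_mean(wl, y, center, half_width=8.0):
    """Mean of y over a window of +/- half_width nm around center (16 nm total)."""
    m = (wl >= center - half_width) & (wl <= center + half_width)
    return y[m].mean()

def normalized_extinction(wl, f_obs_rel, f_int_rel):
    """k(lambda-55) from observed and intrinsic relative SEDs F(lambda)/F(550).

    A_lambda - A_55 = -2.5 log10(F'_obs / F'_intrinsic)
    k(lambda-55)    = (A_lambda - A_55) / (A_44 - A_55)
    """
    dA = -2.5 * np.log10(f_obs_rel / f_int_rel)   # A_lambda - A_55
    e4455 = band_mean(wl, dA, 440.0)              # A_44 - A_55, 16 nm band average
    return dA / e4455
```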
The intrinsic spectrum for each star is estimated using multiple local control pairs, allowing their wiggles to counterbalance each other. Finally, we obtained the extinction curves for 99.5% of the target sample. § RESULT AND DISCUSSION §.§ The Extinction curve We selected a high-quality sample of 366,189 stars to derive the median extinction curve by the following selection criteria: (1) [Calculated using the color excess and reddening coefficient of color G_ BP-G_ RP in <cit.>. The same applies hereinafter.] > 0.15 mag; (2) LAMOST SNR > 30; (3) G > 12 mag; (4) Reliable systematic error correction <cit.>. Afterward, we removed the 3σ outliers of k(λ-55) at each wavelength and calculated the median k(λ-55), as displayed in Figure <ref>. Note that the small wiggles at both the blue and red of the extinction curve are artifacts in the XP spectra. And the spike around 641 nm lies close to the boundary between the BP and RP bands at 635 nm, suggesting that it could be artificial. In Figure <ref>, we demonstrate the mean extinction curve with =3.07 from ZG24, and the model curve with =3.1 of , <cit.>, and <cit.> for comparison. Overall, the consistency of the extinction curves is good. The differences between the ZG24 and XP extinction curves are less than one-percent, while the differences for and are within 5%. The extinction curve differs by up to 10% in the 540-760 nm range and below 440 nm. More detailed comparison will discuss in Section <ref>. To validate our extinction curve, we assessed its correlation with several stellar or ISM parameters, including the G band magnitude, , , , and . We divided the high-quality sample into four subsets on each parameter[For the G magnitude, we included stars of 10 < G < 12.] and calculated their respective median extinction curves. Subsequently, we analyzed the residuals between these extinction curves and the extinction curve of the entire high-quality sample. The results are shown in Figure <ref>. The curves in the top panel agree well, except for the red line which represents stars with 10<G<12, for which new systematic errors were introduced during the calibration <cit.>. Consequently, stars of 10<G<12 has been excluded from the high-quality sample. The curves exhibit little correlation with both and , suggesting that these two critical stellar parameters in the star-pair method do not introduce significant systematic errors in our results. A slight correlation with is observed. This could be attributed to the distinct spatial distributions of the subsets. Variations in dust properties, associated with different spatial positions, might alter the shape of the extinction curve <cit.>. A secondary reason might be that not all XP systematic errors related to have been fully corrected, particularly in the high extinction end. The data are from measurements labeled as “reliable" in <cit.>, which employs forward modeling along with a series of photometric data spanning from ultraviolet to infrared. Hence, the correlation between and the extinction curves is expected. §.§ Comparison with photometric reddening Observed color excess ratios provide an independent method for testing extinction curves. Therefore, we use precise reddening measurements for 5 million stars from <cit.> to further examine the extinction curve. To integrate photometric and spectroscopic data, this study assumes two ultra-narrow passbands of width 16 nm to calculate the magnitudes at 440 and 550 nm, respectively. 
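A minimal sketch of the per-wavelength 3σ clipping and median combination used to build the average curve from the individual k(λ−55) curves; the input array layout (stars × wavelengths) and the number of clipping iterations are assumptions.

```python
import numpy as np

def clipped_median_curve(k_curves, nsigma=3.0, n_iter=5):
    """Median k(lambda-55) per wavelength after iterative sigma clipping.

    k_curves: array of shape (n_stars, n_wavelengths).
    """
    k = np.array(k_curves, dtype=float)
    for _ in range(n_iter):
        med = np.nanmedian(k, axis=0)
        std = np.nanstd(k, axis=0)
        k = np.where(np.abs(k - med) > nsigma * std, np.nan, k)
    return np.nanmedian(k, axis=0)
```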
We used the star-pair algorithm to derive the color excess for colors (440-550) and (550-Ks). Consequently, we can represent the empirical reddening for each color as CERs in the form of k(λ-55). On the other hand, we adopt a forward modeling approach to calculate simulated CERs <cit.>, using the median XP extinction curve, the BOSZ synthetic spectral database <cit.>, and the response function for each filter <cit.>. Figure <ref> compares the observed and simulated k(λ-55) of the high-quality sample. The dispersion of observed CERs arises mainly from random errors in the magnitudes of the 440 and 550 nm bands, which depend on the signal-to-noise ratio of the XP spectra. The dispersion in the simulated CERs merely reflects variations in SED. The strong agreement between the two sets of CERs confirms the reliability of our results, particularly considering that they share the same dependence on SED. §.§ Intermediate-scale structures It is noteworthy that, besides the wiggles at both ends, there are some distinct real structures in the extinction curve. We suspect that the bumps at 487, 540, 637, and 769 nm, respectively, are the so-called intermediate-scale structures (ISS). ISSs are usually broader than diffuse interstellar bands (DIBs), but narrower than the commonly observed broadband variability, as reported in <cit.>. Using extinction curves of 72 early-type stars from , <cit.> identified three optical ISSs at 437, 487, and 630 nm, respectively. Our study also observed bumps near 487 and 630 nm. However, the central wavelengths for the feature at 630 nm are shifted to 637 nm, possibly due to the influence of the junction of the BP and RP spectra. The feature at 487 nm is very distinct and can be seen from all sources except . Whereas the 437 nm ISS is located near the blue end where the wiggles are strong, thus affects its profile. The newly found candidates for ISS at 540 and 769 nm are evident in this study and in ZG24, both using XP spectra. ZG24 employed a forward-modeling machine learning approach to obtain the intrinsic spectra and extinction curves for sources common to both XP and LAMOST DR7. Compared to the three known ISSs, these two new features exhibit narrower widths and weaker intensities. As a result, they were not prominent in previous literature extinction curves, though still can be observed. For example, a noticeable bump at 769 nm (1.30 μ m^-1) and a slight bump within a trough at 540 nm (1.85 μ m^-1) are visible in Figure 2 from <cit.> and the second panel of Figure 5 from . The extinction models of and did not exhibit these structures, since they both employed low order polynomial to fit the continuum. Such fitting techniques smoothed out the finer details in the extinction curve, potentially obscuring subtle structures. In this study, there are three ISSs that can be clearly detected, namely at 487, 540, and 769 nm. As depicted in Figure <ref>, the intensity of these three ISSs does not vary with the physical parameters of stars, further supporting their reality. It is noteworthy that they all have a negative correlation with . This work, in contrast to the samples used in previous studies, employed more stars with lower extinction, resulting in these two ISSs being detected with greater intensity. Among these three ISSs, only the one at 769 nm shows a significant correlation with , whereas the features at 487 and 540 nm do not. <cit.> found that the intensities of the three ISSs located at 437, 487, and 630 nm are independent of . 
Conversely, an analysis of the same data observed a correlation with this ratio. However, no such dependence was reported for the ISSs at 540 and 769 nm. These conflicting results indicate that the correlation of these ISSs with the total-to-selective extinction ratio requires further study, which is beyond the scope of this paper. Future research will explore the potential connections of ISSs with interstellar dust, DIBs, and gas. §.§ Extension to the infrared Using the CERs described in Section <ref>, we can integrate the photometric data, especially the infrared bands from 2MASS and WISE, with our extinction curve. First, we calculate the effective wavelength of each passband using the following function: λ_eff = ∫λ^2 T(λ) F(λ) dλ / ∫λ T(λ) F(λ) dλ, where T(λ) is the filter transmission and F(λ) is the BOSZ synthetic spectrum for the corresponding stellar parameters of each star, reddened using the XP extinction curve at the star's specific extinction. Figure <ref> displays the effective wavelengths and median CERs for each band of the high-quality sample. Consistent with the previous comparisons, the two types of extinction data match very well. To mitigate the wiggles at both ends of the XP median extinction curve, we then smoothed the extinction curve for λ < 440 nm and λ > 1000 nm. Subsequently, we performed a power-law fit to the smoothed extinction curve at wavelengths between 1000 nm and 1020 nm, as well as to the k(λ-55) values in the J, H, Ks, W1, and W2 bands. The power-law function is described by: k(x-55) = b · x^α - R_55, where x is the wavenumber in μm^-1, b is a free parameter, α is the power-law index, and −R_55 is the intercept at x=0. Our fit yields b = 0.994 ± 0.007, α = 1.935 ± 0.037, and R_55 = 2.730 ± 0.007. By adopting α = 0.910 and β = 0.064 from ZG24 for Eq. <ref>, we convert R_55 into a conventional total-to-selective extinction ratio of 3.073 ± 0.009. By combining the smoothed median extinction curve with the power-law function for the infrared region, we produce an extended extinction curve spanning from 0.336 to 4.6 μm. This extended extinction curve is suitable for extinction corrections across various stellar spectra in the Milky Way. §.§ Comparison with literature To compare with the curve from ZG24 and the other literature models, we first need to convert them all into the form of k(λ-55). Some of these curves do not require conversion, whereas the others are converted using the following formulae: k(λ-55) = α k(λ-V) + β and R_55 = α R_V − β, where R_V is the conventional total-to-selective extinction ratio, α=0.990, and β=0.049 (Eqs. 16 and 17 of <cit.>). We compared them with our median extinction curve as illustrated in Figure <ref>. In the visible range (400-700 nm), the literature curves generally align well with ours. The differences for two of the models remain within 5%, while the remaining model shows up to a 10% difference in the 540-760 nm range. Using similar data, the average extinction curve of ZG24 displays high consistency with our findings, including the ISSs mentioned above; the overall residuals are less than one percent. The relatively larger discrepancies within the 400-500 nm and 625-645 nm ranges are probably attributable to the systematic errors in the XP spectra within these bands <cit.>, which have been addressed in our corrections. This mutual validation further attests to the validity of the results from both studies. Except for ZG24, all models at a total-to-selective extinction ratio of 3.1 exhibit significant discrepancies with the photometric k(λ-55) in the infrared region at wavelengths longer than 950 nm. This finding aligns with that of <cit.>, who reported a power-law index α of 2.070 ± 0.030.
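The infrared power-law fit described above can be reproduced schematically as follows. The band wavelengths are only approximate, the fitted data points are placeholders generated on the reported curve, and the final step assumes the linear relation R_55 = aR_V − β implied by the quoted ZG24 coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

def powerlaw_k(x, b, alpha, R55):
    """k(x-55) = b * x**alpha - R55, with x the wavenumber in 1/micron."""
    return b * x**alpha - R55

# Approximate wavenumbers (1/micron) of the fitted points: the 1000-1020 nm end of
# the smoothed curve plus the J, H, Ks, W1, W2 photometric points.
x = 1.0 / np.array([1.01, 1.235, 1.662, 2.159, 3.35, 4.6])
k = powerlaw_k(x, 0.994, 1.935, 2.730)       # placeholder values on the reported curve

(b, alpha, R55), _ = curve_fit(powerlaw_k, x, k, p0=(1.0, 2.0, 3.0))

# Convert R_55 to the conventional ratio, assuming R_55 = a * R_V - beta (ZG24 values).
a, beta = 0.910, 0.064
R_V = (R55 + beta) / a
print(f"alpha = {alpha:.3f}, R_55 = {R55:.3f}, R_V = {R_V:.3f}")
```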
The infrared extinction data used in this paper have been verified to be consistent with their color excess ratios <cit.>. For reference, the infrared spectrum from 0.8 to 4.0 µm used in is sourced from <cit.>, with an average α of 1.7. A possible explanation of this discrepancy is the selection effect of the sample, meaning that the power-law index α of the infrared extinction curve varies with the line of sight <cit.>. § SUMMARY We have directly measured the extinction curves using XP spectra of approximately five million common sources from DR3 and LAMOST DR7. We derived the average extinction curve for the Milky Way from about 370 thousand high-quality spectra. The wavelength range of this curve includes two segments: spectroscopic coverage from 336 to 1020 nm, and an extension to 4.6 μm through power-law fitting to the infrared photometric data. Validation against photometric data and existing extinction models confirms the reliability of our extinction curve. The median extinction curve reveals several optical structures with resolution down to approximately 10 nm. The ISS features at 487 and 630 nm are confirmed, with new features identified at 540 and 769 nm. For the high-quality sample, whose is greater than 0.15, the average R_55 is 2.730 ± 0.007, corresponding to = 3.073 ± 0.009. The near-infrared power-law index α is 1.935 ± 0.037. This direct measurement of extinction curves, utilizing the largest spectral dataset to date, provides a powerful tool for studying dust properties and correcting extinction in stellar spectra across the Milky Way. This work is supported by the National Key Basic R&D Program of China via 2019YFA0405500 and the National Natural Science Foundation of China through the projects NSFC 12222301, 12173007, and 12173034. We acknowledge the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-A08 and CMS-CSST-2021-A09. This work has made use of data products from the LAMOST, , 2MASS, and WISE. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope; LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
http://arxiv.org/abs/2407.12353v1
20240717071402
Non-equilibrium phase coexistence in boundary-driven diffusive systems
[ "Shin-ichi Sasa", "Naoko Nakagawa" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
S.-i. Sasa Department of Physics, Kyoto University, Kyoto 606-8502, Japan sasa@scphys.kyoto-u.ac.jp N. Nakagawa Department of Physics, Ibaraki University, Mito 310-8512, Japan naoko.nakagawa.phys@vc.ibaraki.ac.jp Non-equilibrium phase coexistence in boundary-driven diffusive systems Shin-ichi Sasa Naoko Nakagawa July 22, 2024 ====================================================================== § ABSTRACT Liquid-gas phase coexistence in a boundary-driven diffusive system is studied by analyzing fluctuating hydrodynamics of a density field defined on a one-dimensional lattice with a space interval Λ. When an interface width ℓ is much larger than Λ, the discrete model becomes the standard fluctuating hydrodynamics, where the phase coexistence condition is given by the local equilibrium thermodynamics. In contrast, when ℓ < Λ, the most probable density profile is determined by a new variational principle, where the chemical potential at the interface is found to deviate from the equilibrium coexistence chemical potential. This means that metastable states at equilibrium stably appear near the interface as the influence of the particle current. The variational function derived in the theoretical analysis is also found to be equivalent to the variational function formulated in an extended framework of thermodynamics called global thermodynamics. Finally, the validity of the theoretical result is confirmed by numerical simulations. § INTRODUCTION A rich variety of phenomena exhibit non-equilibrium phase coexistence, such as boiling heat transfer, pattern formation in crystal growth, and motility-induced phase separation <cit.>. Although many such impressive phenomena are dynamic and complex, a non-trivial and surprising phenomenon has been predicted in calm and simple phase coexistence out of equilibrium. One example is the quantitative prediction that, in liquid-gas coexistence under heat conduction, the temperature of the liquid-gas interface is lower than the equilibrium coexistence temperature for the pressure <cit.>, where the equilibrium phase coexistence occurs at the first-order phase transition point far from the critical point. This phenomenon means that metastable states at equilibrium are stabilized by a steady current even in the linear response regime. The prediction was presented in an extended framework of thermodynamics that we call global thermodynamics. This framework was first proposed as a natural extension of the minimum principle of free energy with the key concept of global temperature <cit.>. Applying the framework to a van-der Waals fluid revealed that the temperature of the liquid-gas interface is different from the first-order transition temperature at equilibrium. Then, the formulation was carefully arranged so that quantitative predictions could be made for real materials <cit.>. The equivalence among different ensembles was discussed, and finally, the maximum entropy principle was formulated for enthalpy-conserving heat conduction systems <cit.>. The entropy defined in the formulation is found to possess a non-additive term in addition to the space integral of the local entropy density field. This formulation enables us to apply global thermodynamics to various systems. There have been no experimental reports on the predictions of global thermodynamics. 
Nevertheless, it is worth mentioning that numerical simulation of the Hamiltonian Potts model in heat conduction shows results consistent with the quantitative prediction of global thermodynamics for large enough systems <cit.>. The singular nature of the phase coexistence has also been discussed by analyzing a mesoscopic model describing the order parameter dynamics in heat conduction <cit.>. However, this analysis involves some phenomenological assumptions in calculating the singular part. Furthermore, the model is too complicated to extract the microscopic mechanism of the deviation in interface temperature from the equilibrium phase coexistence temperature. On the basis of this background, we study a simple system that exhibits non-equilibrium phase coexistence. We consider a system in which the number density field is a single dynamical variable and the density field is directly driven by a boundary condition of the chemical potential, where the temperature is given as a constant parameter of the system. The stochastic time evolution of the density field is described in terms of a discrete form of fluctuating hydrodynamics with a space interval Λ. When the width of an interface in phase coexistence ℓ is much larger than Λ, the model is equivalent to the standard fluctuating hydrodynamics <cit.>. Then, local fluctuation properties of thermodynamic quantities are described by local equilibrium distribution <cit.>. In contrast, when ℓ≪Λ, the local distribution may be out of equilibrium <cit.>. For this case, we derive the variational function that determines the most probable density profile. We then find that the chemical potential at the interface of the most probable profile deviates from the equilibrium coexistence value. This means that metastable states at equilibrium stably appear near the interface of the driven system. The formula describing the deviation takes the same form as those phenomenologically predicted by global thermodynamics. Indeed, we can formulate extended thermodynamic relations for this system by using the variational function, which is equivalent to global thermodynamics. We also confirm the validity of the theoretical calculation by numerical simulations. The rest of this paper is organized as follows. In Sec. <ref>, we introduce a model we study in this paper. We then review phase coexistence conditions for equilibrium systems, and summarizes basic issues for non-equilibrium phase coexistence. In Sec. <ref>, we derive a variational function by analyzing the Zubarev-McLennan representation of the stationary distribution. In Sec. <ref>, we rewrite the variational equation as the form giving the chemical potential at the interface. In Sec. <ref>, we formulate extended thermodynamic relations based on the variational function and discuss the relationship with global thermodynamics. In Sec. <ref>, we show results of numerical simulations. Section <ref> provides some concluding remarks. § SETUP §.§ Model We consider a collection of stochastic and diffusive particles in a closed tube which are driven by an external battery at one surface x=0. See Fig. <ref> for the illustration of the setup. We describe the system by an averaged particle density ρ(x) defined in a one-dimensional region [0,L]. More precisely, we define ρ(x) ≡1/A∫ dy dz ρ(x,y,z) for the three-dimensional particle density ρ(x,y,z), where the area of the cross section of the tube is denoted by A. We then assume a standard continuum description of fluctuating hydrodynamics of ρ(x). 
The density field ρ(x,t) satisfies the continuity equation ∂_t ρ+ ∂_x j=0. Based on the mean field picture in the cross section, we assume the current as j(x,t) = -σ(ρ(x))[∂_x ρ(x)-ϕδ(x)] +√(2σ(ρ(x))T/A)·ξ(x,t). T is the temperature, σ(ρ) is a conductivity as a smooth function of ρ, and ϕ represents the voltage of a battery located at x=0. ξ is Gaussian-white noise satisfying ⟨ξ||=⟩0 and ⟨ξ|(x,t) ξ(x',t') |=⟩δ(x-x')δ(t-t'). The symbol “·" in front of ξ in (<ref>) represents the Stratonovich product in space and the Ito product in time. Here, because the variance of the surface average Ξ(x,t) ≡∫ dy dz ξ(x,y,z,t)/A of the three-dimensional Gaussian-white noise ξ(x,y,z,t) with unit variance is 1/A, we set Ξ(x,t)=ξ(x,t)/√(A), which determines the A dependence of (<ref>). The free energy functional of the density profile ρ=(ρ(x))_0 ≤ x ≤ L is expressed as (ρ)=∫_0^L dx [ f(ρ(x))+ κ/2 (∂_x ρ)^2 ]. As a specific example, we consider the case that f(ρ)= -1/2(ρ-1.5)^2+1/4(ρ-1.5)^4 by introducing dimensionless length and energy in this form. Note that our argument below is independent of the specific form if f(ρ) contains two local minima. The length unit is assumed to be the order of particle distances. κ characterizes the interface free energy, which is relevant when ∂_x ρ is large. In particular, when 0.5 < ρ̅< 2.5 in noiseless equilibrium systems with T=0 and ϕ=0, phase coexistence occurs with two interfaces. Then, √(κ) determines the interface width in the phase coexistence. Now, we notice that there is a cutoff length Λ of the continuum description. Because the noise is assumed to be white in space, Λ should be larger than the microscopic length, which is set to the order of unity. On the other hand, it is obvious that Λ should be much smaller than the system size L. In many cases, the calculation result of fluctuating hydrodynamics is independent of the cutoff Λ even in the limit Λ→ 0, while there is a case where a singular cutoff dependence is observed <cit.>. Here, let us recall that the width of interfaces in phase coexistence is estimated as a microscopic length. Thus, this may be smaller than the cutoff length of fluctuating hydrodynamics. Such a case cannot be studied by the continuum model. We thus need to propose and analyze a discrete model in which the microscopic cutoff Λ is explicitly introduced. With this background, we consider a sequence of N-boxes in a one-dimensional ring. Let ρ_i be the density of particles at the i-th box, where 1 ≤ i ≤ N. Mathematically, ρ_i is defined on the i-th site in the one-dimensional lattice { i | 1 ≤ i ≤ N, i ∈ℤ} with the periodic boundary condition ρ_0 ≡ρ_N and ρ_N+1=ρ_1. The horizontal size of the box is denoted by Λ and the cross section area of the box is A. The system size L is given by L=Λ N. The free energy functional given in (<ref>) is then replaced with (ρ)=Λ∑_i=1^N [ f(ρ_i)+ κ/2Λ^2 (ρ_i+1-ρ_i)^2 ] for ρ=(ρ_i)_1 ≤ i ≤ N. The density ρ_i satisfies the continuity equation ρ_it+ j_i-j_i-1/Λ=0 with j_0=j_N. The current j_i is defined on the i-th bond connecting the i-th site and the (i+1)-th site. Using the generalized chemical potential μ̃_i defined by μ̃_i ≡1/Λ Fρ_i, we replace the current given in (<ref>) by j_i(t)= - σ(ρ_i^ m)/Λ (μ̃_i+1- μ̃_i - ϕδ_i,N) + √(2 σ(ρ_i^ m) T/A Λ)·ξ_i(t), where we set ρ^ m_i= (ρ_i+ρ_i+1)/2 to satisfy the detailed balance condition. ξ_i(t) is Gaussian-white noise satisfying ⟨ξ|_i |=⟩0 and ⟨ξ|_i(t) ξ_j(t') |=⟩δ_ijδ(t-t'). 
The Λ dependence of the noise intensity in (<ref>) is understood from the replacement of δ(x-x') with δ_ij/Λ. Explicitly, μ̃_i is written as μ̃_i = μ(ρ_i)- κ/Λ^2(ρ_i+1+ρ_i-1-2ρ_i) with μ(ρ_i)= ∂ f(ρ_i)/∂ρ_i. The total number of particles ∑_i=1^N ρ_i =ρ̅N is conserved in the time evolution. The average density ρ̅ is a parameter of the system. The steady state of this system is characterized by five parameters (, , ϕ, ρ̅, N), where ≡κ/Λ^2, ≡T/A. That is, systems with the same values of (, , ϕ, ρ̅, N) exhibit the same steady state. In the argument below, the Λ dependence appear only through dependence. When ≫ 1 and N →∞, the system behavior of (<ref>) and (<ref>) is understood by analyzing (<ref>) and (<ref>) because (<ref>) and (<ref>) correspond to an accurate approximation of (<ref>) and (<ref>). In contrast, the system behavior for the case < 1 cannot be described by (<ref>) and (<ref>). For such cases, we have to analyze the discrete model with focusing on the limiting case that ≪ 1. See Fig. <ref> for the illustration of two limiting cases. Finally, we note that A ≫ 1 because A is the square of a macroscopic length. We thus consider the weak noise limit → 0 for the steady state realized in the limit t →∞. §.§ Review of equilibrium phase coexistence We review phase coexistence states for the equilibrium system with ϕ=0. The stationary distribution of ρ is given by P_(ρ;ρ̅)=1/Zexp[ -1/ F(ρ) ] δ( ∑_i=1^Nρ_i-ρ̅N ). See Appendix <ref> for the derivation. The most probable profile in the weak noise limit → 0 is determined as the density profile that minimizes F(ρ) defined by (<ref>). The variational equation is obtained as μ̃_i=μ̃_i+1, which means that μ̃_i is a constant independent of i. We study two limiting cases ≫ 1 and ≪ 1. First, for the case ≫ 1, we can derive the phase coexistence condition from the analysis of the continuum limit of (<ref>) with N →∞, which is given in (<ref>). Importantly, the solution of the variational equation is unique under the boundary condition that ∂_x ρ(0)=∂_xρ(L)=0 with ρ(0) > ρ(L). As shown in Appendix <ref>, phase coexistence occurs when ρ̅ satisfies ≤ρ̅≤, where and are determined by μ()= μ(), p()= p() with the thermodynamic pressure p(ρ) defined by p(ρ)≡ρμ(ρ)-f(ρ). Hereafter, the equilibrium value of the coexistence chemical potential is denoted by μ_c≡μ()= μ(). We assume < without loss of generality. Then, and represent the densities of the liquid and gas in the phase coexistence, respectively. Furthermore, once and are obtained, the fraction of the liquid region X^ eq is uniquely determined from Λ∑_i ρ_i=ρ̅L, which is expressed by + (1-)=ρ̅. See Fig. <ref>(c) for the profile. For the other case ≪ 1, which we mainly study in this paper, the derivation method in the continuum limit cannot be applied. However, even in this case, the variational principle determines the most probable profile. The variational equation (<ref>) is rewritten as μ_i=μ_i+1 in the limit → 0 so that the chemical potential is uniform. In this case, there is a one-parameter family of solutions ρ_i;X^ϕ=0 characterized by an interface position X. We here attempt to write the solution ρ_i;X^ϕ=0 explicitly. First, let μ_X^ϕ=0 denote the uniform value of the chemical potential for the solution. Referring to Fig. <ref>(a), we set and as those satisfying μ()=μ()=μ_X^ϕ=0 with <. We can then express the solution as ρ_i;X^ϕ=0=χ(i ∈ [1, NX])+χ(i ∈ [NX,N]) with a half-integer NX and 0≤ X ≤ 1, where χ( P ) represents the characteristic function that takes 1 if P is true and 0 otherwise. 
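For reference, the equilibrium coexistence conditions quoted above (equal chemical potential and equal thermodynamic pressure) can be solved numerically for the specific free-energy density used here; for this symmetric double well the sketch below recovers ρ_G = 0.5, ρ_L = 2.5, and μ_c = 0. The solver choice and starting guess are our own.

```python
import numpy as np
from scipy.optimize import fsolve

def f(rho):
    """Free-energy density f(rho) = -(rho-1.5)^2/2 + (rho-1.5)^4/4."""
    u = rho - 1.5
    return -0.5 * u**2 + 0.25 * u**4

def mu(rho):
    """Chemical potential mu(rho) = df/drho."""
    u = rho - 1.5
    return -u + u**3

def p(rho):
    """Thermodynamic pressure p(rho) = rho*mu(rho) - f(rho)."""
    return rho * mu(rho) - f(rho)

def coexistence():
    """Solve mu(rho_G) = mu(rho_L) and p(rho_G) = p(rho_L)."""
    eqs = lambda z: [mu(z[0]) - mu(z[1]), p(z[0]) - p(z[1])]
    rho_g, rho_l = fsolve(eqs, x0=[0.6, 2.4])
    return rho_g, rho_l, mu(rho_g)

rho_g, rho_l, mu_c = coexistence()
print(rho_g, rho_l, mu_c)   # -> 0.5, 2.5, 0.0 for this symmetric double well
```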
Then, for a given X, the condition X + (1-X) =ρ̅ determines the value of μ_X^ϕ=0. Note that the value of X is not determined from the variational equation (<ref>) with → 0. Physically, the solutions ρ^ϕ=0_X form a family of metastable states characterized by X. The most probable value of X, which is denoted by X_*, is derived from the minimum free energy principle formulated as follows. We first define the variational function (X;ρ̅)≡(ρ_X^ϕ=0). Then, the minimum free energy principle means (X_*;ρ̅) =min_X (X;ρ̅), where, in the present case, X_* satisfies . (X;ρ̅)X|_X=X_*=0. Substituting the expression (ρ_X^ϕ=0)=LXf()+L(1-X)f() into (<ref>), we obtain f()-f()=μ_X_*^ϕ=0 (-) using (<ref>) and (<ref>). Comparing (<ref>) and (<ref>) with (<ref>), we find μ_c=μ_X_*^ϕ=0, =, =. Moreover, X_* is equal to X^ eq. The result means that the equilibrium states for the two limiting cases, ≫ 1 and ≪ 1, are equivalent. §.§ Preliminaries for non-equilibrium phase coexistence We concentrate on the case that ρ̅ satisfies < ρ̅< where liquid and gas coexist. When the voltage of the battery ϕ is positive, the stationary distribution is not written as the canonical form (<ref>). Therefore, we do not have a general variational principle for determining the most probable profile for ϕ >0. Nevertheless, we divide the problem for determining the most probable profile into two steps. As the first step, we consider stationary solutions of the deterministic equations without noise. If the stationary solution is unique, this is the most probable profile in the weak noise limit. In contrast, if stationary solutions form a one parameter family, we proceed to the second step, where we will formulate a variational principle for selecting the most probable profile among the stationary solutions as we have examined in (<ref>) for the equilibrium system. In this section, we focus on the first step. We consider stationary solutions of the deterministic equation given in (<ref>) and (<ref>) with =0. Let J be the steady particle current induced by the battery. Setting j_i(t)=J for all i in (<ref>), we obtain the conduction equation 1/Λ(μ̃_i+1- μ̃_i + ϕδ_i,N) = -J/σ(ρ_i^ m) . The uniformity of J yields J = ϕ( Λ∑_i=1^N 1/σ(ρ_i^ m))^-1. When ≫ 1 and N →∞, we can analyze the continuum limit of (<ref>). As shown in Appendix <ref>, we find that the solution of the deterministic equation is unique and that the chemical potential and pressure are continuous at the interface in the space x̂=x/L with L →∞. We can then conclude that the chemical potential at the interface is . See Appendix <ref> for the derivation of this result. In contrast, when ≪ 1 and N →∞, there is a family of stationary solutions of the conduction equation (<ref>) with → 0. In the equilibrium system, they correspond to metastable states characterized by (<ref>), as shown in Fig. <ref>(b), and the metastable states were candidates for the most probable profile. We thus expect that a family of stationary solutions ρ_i;X^ϕ for ϕ>0 correspond to metastable states among which the most probable profile is selected. In the remainder of this section, we express the metastable states explicitly as a preliminary for the analysis in the next section. We concentrate on small ϕ and ignore the contribution of O(ϕ^2). The solution ρ_i;X^ϕ should be given as a perturbation of the equilibrium solution ρ_i;X^ϕ=0 given in (<ref>). We thus assume that |ρ_i;X^ϕ-| < |ρ_i;X^ϕ-| for 1 ≤ i < NX, and |ρ_i;X^ϕ-| < |ρ_i;X^ϕ-| for NX < i ≤ N. 
Letting σ^L = σ(), σ^G = σ(), the conduction equation (<ref>) with → 0 results in μ(ρ_i;X^ϕ)= - 1/σ^LJLi-1/N+μ_1:X^ϕ +O(ϕ^2) , (1≤ i≤ NX), μ(ρ_i;X^ϕ)= -1/σ^GJLi-N/N+μ_N;X^ϕ +O(ϕ^2), (NX< i≤ N), where μ_1;X^ϕ=μ_N;X^ϕ+ϕ . From the direct calculation, we also obtain the difference between the chemical potentials of two adjacent sites over the interface as μ(ρ_X-1/(2N);X^ϕ)-μ(ρ_X+1/(2N);X^ϕ)=O(ϕ/N). Finally, the condition 1/N∑_i=1^N ρ_i;X^ϕ = ρ̅ uniquely determines μ_1;X^ϕ and ρ_1;X^ϕ for a given X. We can easily confirm that lim_ϕ→ 0ρ_i;X^ϕ is equivalent to (<ref>). §.§ Singular continuum description We introduce a continuum description using a real variable x̂=x/L defined on the interval [0,1] by taking the limit N →∞. The mathematical formalize of this limit is beyond the scope of the present paper. However, to simplify the calculation in the subsequent sections, we naively introduce the continuum description based on expected behaviors of ρ_i(t) for large N. For a discrete variable ρ=(ρ_i(t))_i=1^N at time t, we first define ρ_N(x̂,t̂) as the piece-wise linear function obtained by connecting two consecutive points (i/N, ρ_i(t)) and ((i+1)/N, ρ_i+1(t)) for 0 ≤ i ≤ N in the (x̂, ρ) space, where we set t̂=t/L^2 for later convenience. For sufficiently large N, we expect that there exists an almost continuous function ρ(x̂, t̂) such that |ρ(x̂, t̂)-ρ_N(x̂, t̂)| =O(1/N). The chemical potential μ(x̂,t̂) in the continuum description is defined from μ_i(t) by the same procedure and it is expected that μ(x̂,t̂)=μ(ρ(x̂,t̂)). The definition of j(x̂,t̂) is slightly different from ρ(x̂,t) and μ(x̂,t), because the current j_i(t) is defined on the i-th bond connecting the i site and i+1 site as seen in (<ref>) and (<ref>). Concretely, we define j(x̂,t̂) by using the piece-wise linear function obtained by connecting points (i/N+1/2N, j_i) in the (x̂, j) space. Then, the continuum limit of (<ref>) and (<ref>) with → 0 is expressed by ∂_t̂ρ(x̂, t̂)+ ∂_x̂ j(x̂, t̂) =0, and j(x̂, t̂)=-σ(ρ(x̂,t̂)) [μ(x̂,t̂)x̂-ϕδ(x̂)] +√(2σ(ρ(x̂,t̂))/L)·ξ̂(x̂,t̂), where ξ̂(x̂,t̂) satisfies ⟨ ̂|ξ(x̂,t̂) ξ(x̂',t̂') |=⟩δ(x̂-x̂')δ(t̂-t̂'). Apparently, (<ref>) and (<ref>), which we call the singular continuum description, take the same form as a singular case of the standard continuum description (<ref>) and (<ref>) with κ→ 0 by setting x̂ =x/L and t̂ = t/L^2. We here notice the order of limits in both the descriptions. In the singular continuum description, we first take κ/Λ^2 → 0, and then N →∞ with L fixed. In the standard continuum description, we take κ/Λ^2 →∞ and N →∞ with L fixed. Then, as a singular case of the standard continuum description, we consider the limit κ→ 0. The behavior of the two descriptions are rather different, but we do not discuss the difference anymore. In the argument below, we will consider only the singular continuum description. The stationary solution of the deterministic equation ρ_i;X^ϕ and the corresponding chemical potential μ(ρ_i;X^ϕ) are expressed as ρ_X^ϕ(x̂) and μ_X^ϕ(x̂) in the singular continuum description. Note that ρ_X^ϕ(x̂) is discontinuous at x=X, while μ_X^ϕ(x̂) is continuous at x=X as is seen from (<ref>). The chemical potential at the interface x=X is denoted by μ_X^ I≡μ_X^ϕ(X). As shown in Fig. <ref>, ρ_X^ϕ(x̂) is a piece-wise continuous function and that μ_X^ϕ(x̂) is a piece-wise smooth function with singular points at x̂=0 and x̂=X. 
Furthermore, we rewrite (<ref>) and (<ref>) as μ_X^ϕ(x̂)= -1/σ^LJLx̂+μ_X^ϕ(0) +O(ϕ^2) for 0 ≤x̂ < X, and μ_X^ϕ(x̂)= -1/σ^GJL(x̂-1)+μ_X^ϕ(1) +O(ϕ^2) for X < x̂≤ 1. We also have μ_X^ϕ(0)=μ_X^ϕ(1)+ϕ . Here, from (<ref>), we obtain JL= ϕ( X/+1-X/)^-1. Then, the density field ρ_X^ϕ(x̂) for 0 ≤x̂ < X is determined from (<ref>) with μ_X^ϕ(x̂)=μ(ρ_X^ϕ(x̂)) and |ρ_X^ϕ(x̂)-| < |ρ_X^ϕ(x̂)-|. Similarly, the density field ρ_X^ϕ(x̂) for X < x̂≤ 1 is determined from (<ref>) with μ_X^ϕ(x̂)=μ(ρ_X^ϕ(x̂)) and |ρ_X^ϕ(x̂)-| < |ρ_X^ϕ(x̂)-|. Note that ρ_X^ϕ(x̂) and ∂_x μ_X^ϕ(x̂) are discontinuous at x̂=0 and x̂=X. For given X and system parameters (ϕ, ρ̅), μ_X^ϕ(x̂) and ρ_X^ϕ(x̂) are uniquely determined from the condition ∫_0^1 dx̂ ρ_X^ϕ(x̂) =ρ̅. § VARIATIONAL FUNCTION To simplify the notation, we use x and t for x̂ and t̂ in this and next sections. In the previous section, we determined the candidates of steady density profile ρ_X^ϕ(x) characterized by X in the weak noise limit → 0. To determine the most probable density profile among them for given system parameters (ρ̅,ϕ), in Sec. <ref>, we derive a variational function using the Zubarev-McLennan representation of the steady state. The variational function includes a time integral of the current at x=0. After confirming some basic issues and assumptions in Sec. <ref>, we calculate the time integral of the current in Sec. <ref>. The result is presented in Sec. <ref>. §.§ Stationary distribution We consider the stationary distribution P_ß(ρ;ρ̅, ϕ) of density field ρ. When ϕ=0, the stationary distribution is given by (<ref>). However, the stationary distribution for the system with ϕ > 0 is not generally obtained. Nevertheless, in the linear response regime out of equilibrium, there is a useful expression called the Zubarev-McLennan representation P_ß(ρ;ρ̅, ϕ) = P_(ρ;ρ̅) exp[ - ϕ⟨Q||^⟩_ρ +O(ϕ^2) /] with Q=∫_0^∞ dt  j_N(t), where j_N(t) is a fluctuating current at the N-th bond, which is defined in (<ref>). The N-th bond is the only bond on which the driving ϕ is imposed. ⟨ ||^⟩_ρ denotes the conditioned expectation for the equilibrium path ensemble provided that the initial density profile is given by the specified ρ as the argument of the stationary distribution <cit.>. See Appendix <ref> for the derivation of (<ref>). For the general expression (<ref>) with (<ref>), we take the limit → 0 and consider the singular continuum description introduced in the previous section. From (<ref>) and (<ref>), we have P_ß(ρ;ρ̅, ϕ) = exp[ - 1/ ( F(ρ)+ ϕ⟨Q||^⟩_ρ +O(ϕ^2) + const ) ] δ( ∫_0^1 dx ρ(x)-ρ̅) with Q=∫_0^∞ dt  j(0,t). Note that j_N(t) becomes j(0,t)=j(1,t) in the singular continuum description. The most probable density profile is identified as ρ_X_*^ϕ(x) that minimizes -log P_ß(ρ_X^ϕ;ρ̅,ϕ). This is the variational principle to determine X_*. From (<ref>), we obtain the variational function as (X;ρ̅,ϕ)=(ρ_X^ϕ)+ ϕ⟨Q||_⟩ρ_X^ϕ^ +O(ϕ^2). We then find X_* as the special value of X that minimizes the variational function (X;ρ̅, ϕ) with (ρ̅,ϕ) fixed. That is, X_* is determined as (X_*;ρ̅,ϕ) =min_X (X;ρ̅,ϕ) . We thus need to calculate ⟨Q||_⟩ρ_X^ϕ^. §.§ Preliminaries for the calculation ⟨Q||_⟩ρ_X^ϕ^ To calculate ⟨Q||_⟩ρ_X^ϕ^ eq, we have to know the typical time evolution of ρ(x,t) starting from ρ_X^ϕ(x) at t=0 under the equilibrium condition ϕ=0 in the weak noise limit → 0, where → 0 is taken after t →∞ is considered. Here, from (<ref>), we find that only ϕ-independent terms of ⟨Q||_⟩ρ_X^ϕ^ are necessary for the calculation because ϕ-dependent terms are absorbed into O(ϕ^2). 
We then notice expansions μ_X^ϕ(x)=μ_X^ϕ=0(x) + O(ϕ), ρ_X^ϕ(x)=ρ_X^ϕ=0(x) + O(ϕ). See Fig. <ref> for illustration of ρ_X^ϕ(x) and ρ_X^ϕ=0(x). See also the sentence involving (<ref>) for μ_X^ϕ=0. Using these relations, we find ⟨Q||_⟩ρ_X^ϕ^ eq= ⟨Q||_⟩ρ_X^ϕ=0^ +O(ϕ) . That is, we study the typical time evolution of the density field starting from ρ_X^ϕ=0(x). Since we study the case → 0, ρ_X^ϕ=0(x) represents a metastable profile that does not evolve in time without noises. Nevertheless, weak noise slowly drives the metastable profile ρ_X^ϕ=0(x) to the equilibrium state characterized by (<ref>). One may conjecture that the most probable time evolution from ρ_X^ϕ=0(x) in the weak noise limit is described by the relaxation process to the equilibrium profile ρ_X_*^ϕ=0(x) in Fig. <ref>(c) from the metastable profile in Fig. <ref>(b). However, this is not correct because of the space translational symmetry of equilibrium systems. Note that metastable profiles for a given width X of the liquid region form a one-parameter family of profiles obtained by any space translation of ρ_X^ϕ=0(x), and similarly equilibrium profiles also form a one-parameter family. Thus, stochastic dynamics in this neutral direction are equally probable, which are represented by Brownian motion of the profile with the width of the liquid region fixed. In other words, most probable process is not uniquely determined in the weak noise limit. We have to consider a collection of trajectories that are equally probable and dominantly contribute to ⟨Q||_⟩ρ_X^ϕ=0^. We call this collection the highly probable path ensemble to distinguish it with the most probable process. Here, we concretely describe the highly probable path ensemble starting from the initial condition ρ_X^ϕ=0(x). We assume that each trajectory in the highly probable path ensemble satisfies the following two conditions. First, the liquid or gas region is not separated into smaller pieces of liquid or gas at any time t. That is, the number of interfaces in the system is always two. Second, the time evolution occurs along the metastable states by the influence of weak noise. This means that the interface positions correspond to the slowest mode of the time evolution and the densities of the liquid and gas regions are determined from the condition of the metastable profile. We then consider the time evolution ρ(x,t) as follows. Let (t) and (t) denote the liquid region and gas region at any time t. Recalling the relation (<ref>) with (<ref>) and (<ref>), we express ρ(x,t) in the highly probable path ensemble as ρ(x,t)= ρ^ L(t) χ(x ∈(t)) + ρ^ G(t) χ(x ∈(t)), where ρ^ L(t) and ρ^ G(t) are determined from μ(ρ^ L(t))=μ(ρ^ G(t)) and (t) |(t)|+ (t) |(t)|=ρ̅. It should be noted that we adopt the singular continuum description introduced in the previous section. Using this form of the time evolution, we estimate (<ref>). Because ρ(x,t) in (<ref>) is described by (t) and (t), we explicitly express them in terms of the interface positions (t) and (t) at time t, where they satisfy ρ((t)+,t) - ρ((t)-,t) >0, ρ((t)+,t) - ρ((t)-,t) < 0 for small positive . See Fig. <ref>(a). Then, the liquid region (t) and gas region (t) are expressed as (t)=[(t),(t)] and (t)=[0,1] \(t) if (t) < (t), or (t)=[(t),(t)] and (t)=[0,1] \(t) if (t) > (t). Now, we consider the time evolution of interface positions, (t) and (t), starting from (0)=0 and (0)=X. However, because 0 ≤(t) ≤ 1 and 0 ≤(t) ≤ 1, (t) and (t) are not continuous functions of t. 
This would lead to a complicated calculation of the accumulated current defined by (<ref>). To describe the interface motion using continuous functions, we introduce generalized coordinates (t) ∈ℝ and (t) ∈ℝ such that displacements of the interface from the initial time 0 to the time t are given by (t)-(0) and (t)-(0). That is, (t) and (t) describe the positions of the left and right interfaces of the liquid region in a generalized coordinate space ℝ. The interface positions (t) and (t) in the space [0,1] are then obtained as (t)=(t)-⌊(t) ⌋, (t)=(t)-⌊(t) ⌋, where ⌊ ⌋ represents the floor function. The width of the liquid and gas regions are then written as |(t)|=(t)-(t), |(t)|=1-|(t)|=(t)-(t)+1 irrespective of the sign of (t)-(t). We also introduce the center of the liquid region in the generalized coordinate space as (t)=(t)+(t)/2. Using the width |(t)| and the center (t) of the liquid region, we have (t)=(t)+|(t)|/2, (t)=(t)-|(t)|/2. We note that, in the weak noise limit, |(t)| → X^ eq as t →∞, while (t) shows unbounded-free Brownian motion because of the translation symmetry for the case ϕ=0. This fact simplifies the calculation of the accumulated current defined by (<ref>). At the end of this subsection, we discuss the difference between the most probable process for the case ≫ 1 and the highly probable path ensemble for the case ≪ 1. In the former case, ρ_X^ϕ=0 evolves to an equilibrium configuration in the deterministic system, which is in contrast with the latter case where we study. To obtain the accumulated current Q for the former case, we have to analyze the time-dependent solution of the deterministic equation, which is out of the present paper. §.§ Expression of j(0,t) In this subsection, we calculate j(x,t) in the singular continuum description based on (<ref>) and (<ref>). In the argument below, (t), (t), (t), and (t) are simply denoted by , , , and if their t-dependencies are clearly guessed. For a given density profile (ρ(x,t))_x=0^1 at time t, we define ψ(x,t)≡∫_0^x dx' (ρ(x',t)-ρ̅) for 0 ≤ x ≤ 1. Because ψ(0,t)=ψ(1,t)=0, ψ(x,t) can be extended to a periodic function in x. That is, we define ψ(x,t)≡ψ(x-⌊ x ⌋,t) for any -∞ < x <∞. See Fig. <ref>(b) for the illustration. The time derivative of (<ref>) leads to ∂_t ψ(x,t)=-j(x,t)+j(0,t) for any x ∈ [0,1]. We here integrate (<ref>) over the liquid region and divide by . We repeat the same operation for the gas region. Summing up the two results, we have a relation 1/∫_dx ∂_t ψ + 1/∫_dx ∂_t ψ =- 1/∫_dx j(x,t) - 1/∫_dx j(x,t) +( ||/+||/)  j(0,t) . Recall the formula for j(x,t) given in (<ref>). Because ∫ dx ξ(x,t)=0 and μ(x,t) takes a constant value in the bulk regions, the first and second terms on the right side of (<ref>) turn out to be zero. We thus obtain the expression for j(0,t) as j(0,t)= [ ||/+||/]^-1[ 1/∫_dx ∂_t ψ + 1/∫_dx ∂_t ψ] . We remark that j(0,t) is stochastic as specified by (<ref>). In the right side of (<ref>), (t), (t), and the space integrals of ∂_tψ(x,t) are affected by the Brownian motion of the interfaces. The current j(0,t) formulated in (<ref>) contains the time derivative of ψ in the space integrals. We now transform (<ref>) into a formula by letting the time derivative be outside the integral. In the transformation procedure, we need to pay attention to the time-dependent ranges and of the integrals. As the result, the current j(0,t) is expressed as j(0,t)= Φ(t)-Φ_0(t), where Φ(t) ≡d/dt{[ ||/+||/]^-1[ 1/∫_dx ψ + 1/∫_dx ψ] } , and Φ_0(t) is determined from (<ref>), (<ref>), and (<ref>). 
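Before evaluating Φ_0(t) below, the bookkeeping just introduced—the periodic potential ψ(x,t) and the map from the generalized interface coordinates back to [0,1]—can be illustrated with a few lines of code. The numbers are arbitrary and purely illustrative.

```python
import numpy as np

# Illustration of psi(x,t) and of the generalized interface coordinates.
rho_G, rho_L, rho_bar = 0.5, 2.5, 1.5

XL, XR = 1.27, 1.77                                   # generalized coordinates in R
xl, xr = XL - np.floor(XL), XR - np.floor(XR)         # interface positions in [0,1]
width_L, width_G = XR - XL, 1.0 - (XR - XL)           # widths of the liquid and gas regions
center = 0.5 * (XL + XR)                              # center of the liquid region

x = np.linspace(0.0, 1.0, 2001)
in_liq = ((x - xl) % 1.0) < ((xr - xl) % 1.0)         # liquid on [xl, xr] modulo 1
rho = np.where(in_liq, rho_L, rho_G)

# psi(x) = int_0^x (rho - rho_bar) dx', computed with the trapezoidal rule
psi = np.concatenate(([0.0],
      np.cumsum(0.5 * (rho[1:] + rho[:-1] - 2.0 * rho_bar) * np.diff(x))))
print(psi[0], psi[-1])  # psi(0)=0 by construction; psi(1)~0 since the profile has mean rho_bar
```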
Concretely, we perform the time-derivative of (<ref>). In the time derivative of a function of Brownian motion X(t), we note the chain rule d f(X(t))/dt= f'(X(t))∘ dX/dt, where the symbol ∘ represents the Stratonovich product. We then have Φ_0(t)= ( 1/-1/) [ |(t)|/+|(t)|/]^-1 |(t)|((t)-ρ̅) ∘d/dt . See Appendix <ref> for the derivation of Φ_0(t). Due to the translational invariance for the case ϕ=0, (t) shows the free Brownian motion which is independent of |(t)|, |(t)|, and (t). Therefore, taking the path ensemble average over noise realization, we have ⟨Φ|_0(t)| ⟩= ( 1/-1/) ⟨[ |(t)|/+|(t)|/]^-1 |(t)|((t)-ρ̅)||⟨%s|⟩d/dt|⟩ =0 Combining this with (<ref>), we obtain ⟨j|(0,t)|^⟩ eq_ρ^ϕ=0_X = ⟨Φ|(t) |^⟩ eq_ρ^ϕ=0_X . §.§ Result of ⟨Q||^⟩_ρ_X^ϕ=0 Let us define q(τ)≡∫_0^τ dt Φ(t). From (<ref>), we have ⟨Q||^⟩_ρ_X^ϕ=0= lim_τ→∞⟨q|(τ) |^⟩_ρ_X^ϕ=0. Substituting (<ref>) into (<ref>) with noting (0)=0 and (0)=X, we obtain q(τ) = [ |(τ)|/+|(τ)|/]^-1[ 1/∫_(τ) dx ψ + 1/∫_(τ) dx ψ] - [ X/+1-X/]^-1[ 1/∫_0^X dx ψ + 1/∫_X^1 dx ψ] . From the piece-wise linear nature of ψ(x,t), the integrals are calculated by the trapezoidal rule as ∫_dx ψ(x,t) = 1/2 || (ψ((t),t)+ψ((t),t)), ∫_dx ψ(x,t) = 1/2 || (ψ((t),t)+ψ((t),t)), where ψ((t)+1)=ψ((t)) is applied. Using these relations, we obtain 1/∫_dx ψ + 1/∫_dx ψ = 1/2( ||/+||/) (ψ((t),t)+ψ((t),t)) . Substituting this into (<ref>), we have q(τ)= 1/2(ψ((τ),τ)+ψ((τ),τ)) -1/2(ψ(0,0)+ψ(X,0)). Here, ψ(0,0)=0, ψ(X,0)=(-ρ̅)X, and ψ((τ),τ)=ψ((τ),τ), ψ((τ),τ)=ψ((τ),τ). We thus obtain lim_τ→∞⟨q|(τ)|^⟩_ρ_X^ϕ=0→ *= 1/2lim_τ→∞(⟨ψ((τ),τ)||+⟩⟨ψ((τ),τ)||⟩) -1/2 (-ρ̅) X. Note that the positions of the interfaces, and , are uniformly distributed in the interval x∈[0,1] as τ→∞. We thus conclude that ⟨Q||^⟩_ρ_X^ϕ=0 =C -1/2 (-ρ̅) X, where C is a constant independent of X. Using the relation ρ̅=X+(1-X), which comes from (<ref>), we can also express (<ref>) as ⟨Q||^⟩_ρ_X^ϕ=0 =C -1/2 (-) X(1-X). Substituting (<ref>) into (<ref>), we have (X;ρ̅,ϕ)=∫_0^1 dx  f(ρ_X^ϕ(x)) -ϕ/2(-)X(1-X)+ ϕ C . We note that ρ_X^ϕ(x) is uniquely determined for given (X,ϕ,ρ̅) by μ(ρ_X^ϕ(x))=μ_X^ϕ(x) with (<ref>) and (<ref>) and that and are functions of (X,ρ̅). Thus, (X;ρ̅,ϕ) is the variational function for determining the most probable interface position X_* for a given (ρ̅,ϕ). The most probable density profile is expressed as ρ_X_*^ϕ(x). § VARIATIONAL EQUATION In this section, using the variational function (<ref>), we determine the most probable value of X. This is regarded as an extension of the argument for determining the equilibrium profile in the paragraph containing (<ref>). Thus, similarly to (<ref>), we analyze the variational equation . (X;ρ̅,ϕ)X|_X=X_*=0 with (<ref>). The difference from (<ref>) is the O(ϕ) terms in (<ref>). As shown in Fig. <ref>, one of the interfaces of ρ_X^ϕ(x) is located at x=0 for any X, both the liquid and gas density profiles are sloped, and the chemical potential profile μ_X^ϕ(x) is piece-wise linear. Then, as did for the equilibrium system, we first derive the chemical potential at the interface μ_X_*^ defined by (<ref>) instead of directly calculating X_*. Once we have μ_X_*^, we obtain the most probable profiles μ_X_*^ϕ(x) and ρ_X_*^ϕ(x), and the value of X_*. The determination of μ_X_*^ is also physically important because if μ_X_*^≠, metastable states at equilibrium stably appear in the non-equilibrium phase coexistence. §.§ Preliminaries for the calculation In the argument below, we ignore the contribution of O(ϕ^2). 
We first define ≡1/X∫_0^X dx ρ^ϕ_X(x), ≡1/1-X∫_X^1 dx ρ^ϕ_X(x), and ≡1/X∫_0^X dx μ^ϕ_X(x), ≡1/1-X∫_X^1 dx μ^ϕ_X(x). Because the density profile ρ_X^ϕ(x) and the chemical potential profile μ_X^ϕ(x) are linear in the respective regions, [0,X] and [X,1], we obtain f( )=1/X∫_0^X dx f(ρ^ϕ_X(x)), f( )=1/1-X∫_X^1 dx f(ρ^ϕ_X(x)), and =μ(), =μ(). The pressures in the liquid and gas regions are characterized by ≡ p() and ≡ p(), which are expressed by =-f(), =-f(). The first term on the right side of (<ref>) is written as ∫_0^1 dx  f(ρ_X^ϕ(x))=X f( ) +(1-X) f( )+O(ϕ^2), and the variational function (<ref>) as (X;ρ̅, ϕ)= X f( )+(1-X) f() -ϕ/2(-)X(1-X)+ϕ C. Since =+O(ϕ) and =+O(ϕ), we can rewrite (<ref>) as (X;ρ̅, ϕ)= X f( )+(1-X) f() -ϕ/2(-)X(1-X)+ϕ C. We emphasize that (<ref>) is explicitly expressed as a function of X. Noting that the chemical potential profile is piece-wise linear as shown in Fig. <ref>, we estimate = μ_X^ϕ(0)+ μ_X^/2, = μ_X^ϕ(1)+ μ_X^/2. From (<ref>), ϕ=μ_X^ϕ(1)-μ_X^ϕ(0) is rewritten as - =ϕ/2. Furthermore, using (<ref>) and (<ref>), we can express and in terms of μ_X^ as =μ_X^ +JL/X/2, =μ_X^ -JL/1-X/2. §.§ Steady state We consider the variational equation (X;ρ̅,ϕ)X =0. Substituting (<ref>) into (<ref>), we have f()-f()-ϕ/2(-)(1-2X) + d/dXX+d/dX(1-X) -ϕ/2(d/dX-d/dX)X(1-X) = 0. Using (<ref>), we rewrite the second line as [X+(1-X)][Xd/dX+(1-X)d/dX]. Here, taking the derivative of ρ̅=X +(1-X) in X, we obtain Xd/dX+(1-X)d/dx=-(-). We substitute this into (<ref>) and combine it with (<ref>). Then, the variational equation (<ref>) becomes f()-f()-(-)[(1-X)+X]=0. The solution of (<ref>) provides the most probable value X_* of the interface position. Therefore, we express f()-f()-(-)[(1-X_*)+X]=0. Now, we rewrite (<ref>) as a different form using the chemical potential at the interface position X_*. We subtract the equilibrium version of (<ref>), i.e., f()-f()-(-)=0, from (<ref>). Noting f()-f()=(-)+O(ϕ^2), we obtain (-){-[(1-X_*)+X_*]}=0. Substituting (<ref>) into this equation and using ≠, we have μ_X_*^= - JL X_*(1-X_*)/2( 1/ - 1/). Because X_*=+O(ϕ), we can rewrite it as μ_X_*^= - JL (1-)/2( 1/ - 1/). Furthermore, combining (<ref>) with (<ref>), we finally obtain μ_X_*^ = +ϕ/2(-)(1-)/+ (1-). Recalling that is uniquely determined by ρ̅ from (<ref>), we conclude that μ_X_*^ is expressed in terms of the system parameters. Thus, the chemical potential at the interface deviates from linearly with the voltage ϕ. This means that metastable states at equilibrium stably appear around the interface. The relation (<ref>) is the most important achievement of our theory. Next, from (<ref>) and (<ref>), we obtain = +ϕ/2, = -(1-)ϕ/2, which also gives μ_X_*^ϕ(0) and μ_X_*^ϕ(1) using (<ref>). The result yields μ_X_*^ϕ(x). Using μ_X_*^ϕ(x)=μ(ρ_X_*^ϕ(x)), we have ρ_X_*^ϕ(x). Finally, from X_* +(1-X_*) =ρ̅, we can express X_*- in terms of system parameters. In this manner, all thermodynamic quantities are determined by the variational principle. As one example, we discuss the pressure in the steady state. Using (<ref>), we can express (<ref>) as -=ϕρ̅/2. Recalling that the local pressure is given by p(x)=p(ρ(x)), we define the pressures at the left and right sides of the interface as p_-≡lim_x→ X_*^-p(ρ(x)), p_+≡lim_x→ X_*^+p(ρ(x)). Then, using ρ^ L/G_X_* (μ^ L/G_X_*-μ^ I_X_*) =p^ L/G_X_*- p_-/+ , we can derive p_-=p_ c+(μ_X_*^-), p_+=p_ c+(μ_X_*^-), where p_ c is the equilibrium coexistence pressure. This result indicates that the pressure is not continuous at the interface. 
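The interface chemical potential derived above can be checked against a direct numerical minimization of the variational function: build ρ_X^ϕ(x) from the piecewise-linear chemical potential of the singular continuum description, impose the mean-density constraint, and minimize over X. The sketch below does this for σ(ρ)=ρ and the quartic f used in the numerical section; the grids, brackets and the value ϕ=0.01 are illustrative, and the construction is accurate only to first order in ϕ.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

rho_G, rho_L, mu_c, rho_bar, phi = 0.5, 2.5, 0.0, 1.5, 0.01
sG, sL = rho_G, rho_L                                  # sigma(rho) = rho

def f(rho):
    u = rho - 1.5
    return -0.5 * u**2 + 0.25 * u**4

def mu(rho):
    return (rho - 0.5) * (rho - 1.5) * (rho - 2.5)

# tabulated inversion of mu on the liquid and gas branches (monotonic on these ranges)
r_liq = np.linspace(2.078, 3.2, 4000); m_liq = mu(r_liq)
r_gas = np.linspace(-0.5, 0.922, 4000); m_gas = mu(r_gas)

def profile(X, mu1):
    # mu_X^phi(x) = -(JL/sL) x + mu0 on [0,X) and -(JL/sG)(x-1) + mu1 on (X,1],
    # with mu0 = mu1 + phi and JL = phi / (X/sL + (1-X)/sG)
    JL = phi / (X / sL + (1.0 - X) / sG)
    mu0 = mu1 + phi
    xl = np.linspace(0.0, X, 800); xg = np.linspace(X, 1.0, 800)
    rl = np.interp(-JL / sL * xl + mu0, m_liq, r_liq)
    rg = np.interp(-JL / sG * (xg - 1.0) + mu1, m_gas, r_gas)
    return xl, rl, xg, rg, JL, mu0

def B(X):                                              # variational function (constant dropped)
    def constraint(m):
        xl, rl, xg, rg, _, _ = profile(X, m)
        return np.trapz(rl, xl) + np.trapz(rg, xg) - rho_bar
    mu1 = brentq(constraint, -0.3, 0.3)                # fix mu1 by the mean-density condition
    xl, rl, xg, rg, JL, mu0 = profile(X, mu1)
    bulk = np.trapz(f(rl), xl) + np.trapz(f(rg), xg)
    return bulk - 0.5 * phi * (rho_L - rho_G) * X * (1.0 - X), JL, mu0

X_star = minimize_scalar(lambda X: B(X)[0], bounds=(0.44, 0.56),
                         method="bounded", options={"xatol": 1e-7}).x
_, JL, mu0 = B(X_star)
mu_I = -JL / sL * X_star + mu0                         # chemical potential at x = X_*
print(X_star, mu_I, mu_c + phi / 6.0)                  # mu_I should lie close to mu_c + phi/6
```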
It should be noted that this discontinuity occurs only in the singular continuum description for ≪ 1 but never occur in the continuum description for ≫ 1 as shown in Appendix <ref>. § GLOBAL THERMODYNAMICS In this section, we introduce an extended framework of thermodynamics based on the variational function (<ref>). We set M ≡ρ̅L, where MA represents the total number of particles in the tube. In equilibrium thermodynamics, the free energy function F_(L,M) is determined as F_(L,M)= F_(X_*;ρ̅) using the variational function F_(X;ρ̅) given by (<ref>). We then have the fundamental relation of thermodynamics dF_= -p dL+μ dM, where p and μ are the equilibrium values of pressure and chemical potential for a given (L,M). We now attempt to extend the relations (<ref>) and (<ref>) to non-equilibrium systems. First, following the relation (<ref>), we define the thermodynamic function F_ß(L,M,ϕ) as F_ß(L,M,ϕ)= F_ß(X_*;ρ̅,ϕ). Using the expression of the variational function (<ref>), we explicitly express (<ref>) as F_ß(L, M, ϕ) =L X_*f() +L (1-X_*)f() -ϕ/2 L(-) X_*(1-X_*) . The question here is to derive an extended form of (<ref>) for F_ß(L, M, ϕ). Then, using (<ref>) and (<ref>), we rewrite (<ref>) as F_ß(L, M, ϕ) =- L X_* - L(1-X_*)+ (X_* +(1-X) ) M . Defining the average pressure and chemical potential as p̅ = X_* + (1-X_*) , μ̅= X_* +(1-X_*) , we further rewrite F_ß as a suggestive form F_ß(L, M, ϕ) =- p̅ L + μ̅M, where p̅ and μ̅ are functions of (L,M,ϕ). Now, taking the derivative of F_ß in L, we have F_ß(L,M,ϕ)L= -p̅ - p̅ (L,M,ϕ)L L + μ̅(L,M, ϕ)L M . Here, using (<ref>), (<ref>), (<ref>), and (<ref>), we can confirm p̅ (L,M,ϕ)L = ρμ̅(L,M, ϕ)L. Substituting this result into (<ref>), we obtain F_ß(L,M,ϕ)L= -p̅. By repeating the similar calculation, we also have F_ß(L,M,ϕ)M= μ̅. Finally, we define a new variable Ψ by Ψ≡ - F_ß(L,M,ϕ)ϕ. From (<ref>), we have Ψ(L,M,ϕ) = 1/2 L(-) X_*(1-X_*). These three relations (<ref>), (<ref>), and (<ref>) are summarized as an extended form of the fundamental relation of thermodynamics dF_ß = -p̅ dL +μ̅d M -Ψ dϕ. Therefore, F_ß(L, M, ϕ) is regarded as a natural extension of the equilibrium free energy F_(L,M) defined in (<ref>). As seen from this analysis, the extension of the variational function from F_(X; ρ̅) to F_ß(X; ρ̅, ϕ) is closely related to the extension of the thermodynamic function from F_(L,M) to F_ß(L,M,ϕ). Without analyzing specific stochastic models, one can construct such an extended thermodynamic framework relying on the consistency, uniqueness, and predictability. This phenomenological argument, which is called global thermodynamics, was developed for heat conduction systems exhibiting phase coexistence <cit.>. Furthermore, global thermodynamics was applied to the order-disorder transition in heat conduction, and the prediction by global thermodynamics was confirmed by numerical simulations <cit.>. Similarly, in the present setup, starting from (<ref>) and (<ref>) without the explicit form of (<ref>), we can determine the variational function (<ref>). See <cit.> for the argument. Therefore, all the prediction made by global thermodynamics for the present setup are the same as the theoretical result for the stochastic model we study. In our research history, the variational function (<ref>) was first derived using global thermodynamics, and after that it was re-derived by analyzing the stochastic model. 
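To leading order in ϕ, the extended thermodynamic quantities introduced in this section can be evaluated directly from the system parameters; the plug-in sketch below assembles them using the relations above, with σ(ρ)=ρ, the quartic f of the text, X_*≈X^eq, and illustrative values of (L, M, ϕ).

```python
import numpy as np

# Plug-in evaluation of the extended thermodynamic quantities defined above,
# to first order in phi.  Assumes sigma(rho) = rho and the quartic f of the text;
# the values of L, M and phi below are illustrative.

rho_G, rho_L, mu_c = 0.5, 2.5, 0.0
sG, sL = rho_G, rho_L

def f(rho):
    u = rho - 1.5
    return -0.5 * u**2 + 0.25 * u**4

p_c = rho_L * mu_c - f(rho_L)                       # equilibrium coexistence pressure

def global_quantities(L, M, phi):
    X = (M / L - rho_G) / (rho_L - rho_G)           # X_* = X_eq + O(phi)
    JL = phi / (X / sL + (1.0 - X) / sG)
    mu_I = mu_c + 0.5 * phi * (sL - sG) * X * (1.0 - X) / (sG * X + sL * (1.0 - X))
    mu_bL = mu_I + 0.5 * JL * X / sL                # mean chemical potentials of the two regions
    mu_bG = mu_I - 0.5 * JL * (1.0 - X) / sG        # (their difference is phi/2)
    mu_bar = X * mu_bL + (1.0 - X) * mu_bG
    p_bar = (X * (p_c + rho_L * (mu_bL - mu_c))     # p(rho) expanded to first order in phi
             + (1.0 - X) * (p_c + rho_G * (mu_bG - mu_c)))
    Psi = 0.5 * L * (rho_L - rho_G) * X * (1.0 - X)
    F_ss = -p_bar * L + mu_bar * M
    return F_ss, p_bar, mu_bar, Psi

print(global_quantities(L=100.0, M=150.0, phi=0.01))
```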
§ NUMERICAL SIMULATION In this section, we perform numerical simulations of the discrete model and compare numerical results with the theoretical predictions presented in the previous sections. More explicitly, the time evolution of (ρ_i)_i=1^N is defined by (<ref>) accompanied with the current (j_i)_i=1^N defined by (<ref>). To obtain (j_i)_i=1^N using (<ref>), we determine (μ̃_i)_i=1^N by (<ref>) with μ(ρ_i)=(ρ_i-0.5)(ρ_i-1.5)(ρ_i-2.5) from (<ref>). We adopt a simple form of the conductivity σ(ρ)=ρ, where we have introduced a dimensionless time in this expression. For this specific model, we have =2.5, =0.5, and =0 from (<ref>), and we thus obtain =2.5 and =0.5 from (<ref>). Recalling that the independent parameters to be specified for numerical determination of the steady state are (, , ϕ, ρ̅, N) as discussed around (<ref>), we study the dependence of the steady state with fixing the other parameter values as (, ϕ, ρ̅, N)=(0.002, 0.05, 1.5, 64). Since A and Λ are contained in the renormalized quantities and , we do not need to specify the values of A and Λ, while the total volume and the total number of particles are given by LA and ρ̅LA with L=Λ N. To numerically solve (<ref>) and (<ref>), we adopt the Heun method with a time step dt=0.01Λ^2, where it should be noted that the time step dt is always coupled with Λ^2 for the time-discretized form of (<ref>) and (<ref>). In Fig. <ref>, we show the steady state for =0.5 and =1.5, where the density profile ρ_i and the chemical potential profile μ̃_i are plotted for i/N. We remark that the system reaches the steady state without dependence on initial conditions, (ρ_i)_i=1^N at t=0. Note that μ̃_i==0 for all i for the equilibrium system with ϕ=0, which is shown as the dotted line in Fig. <ref>(b). It is observed that μ̃ near the interface is close to =0 for the case =1.5, while it clearly deviates from =0 for the case =0.5. That is, the metastable gas stably appears at the left-side of the interface for the case =0.5. Here, we determine the chemical potential at the interface, which is denoted by μ^I, more quantitatively from the numerical data (ρ_i)_i=1^N and (μ̃_i)_i=1^N. In principle, we first determine the interface position X^I from the data (ρ_i)_i=1^N, and then read the value of the chemical potential at the interface position from the data (μ̃_i)_i=1^N. In practice, we use the linear interpolation of the data sets to systematically estimate μ^I for several parameters. That is, we define a piece-wise linear function ρ(x) for 0 ≤ x ≤ 1 by connecting two consecutive points (i/N, ρ_i) and ((i+1)/N, ρ_i+1) for 0 ≤ i ≤ N in the graph of (i/N, ρ_i)_i=1^N. We then define the interface position X^I as ρ(X^I)=1.5, where (+)/2=1.5. Similarly, we define μ̃(x) from (μ̃_i)_i=1^N. Using this X^I and μ(x), we obtain μ^I =μ̃(X^I). More explicitly, μ^I is determined as follows. First, we find i_* satisfying ρ_i_*-1 > 1.5 and ρ_i_* <1.5. From the construction of ρ(x), we obtain X^I= 1.5-ρ_i_*/ρ_i_*-1-ρ_i_*i_*-1/N + ρ_i_*-1-1.5/ρ_i_*-1-ρ_i_*i_*/N. We then have μ^I= 1.5-ρ_i_*/ρ_i_*-1-ρ_i_*μ̃_i_*-1 +ρ_i_*-1-1.5/ρ_i_*-1-ρ_i_*μ̃_i_* . Using this formula, we have μ^I=7.7 × 10^-3 for the data of =0.5, and μ^I=2.1 × 10^-4 for the data of =1.5. In Fig. <ref>, μ^I obtained by (<ref>) are plotted for several values of . Now, we compare the numerical results with the theoretical predictions. We developed the theory of the steady state in the weak noise limit ≪ 1 and the macroscopic limit N ≫ 1, with particularly focusing on the two regimes ≫ 1 and ≪ 1. 
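The linear-interpolation read-out of X^I and μ^I described above is straightforward to implement; a short sketch follows, in which the synthetic profile at the end is only a placeholder for actual simulation data.

```python
import numpy as np

# Interface read-out by linear interpolation, following the formulas above.
# rho and mut stand for the simulation data (rho_i) and (tilde-mu_i).

def interface_readout(rho, mut, level=1.5):
    N = rho.size
    hits = np.where((rho[:-1] > level) & (rho[1:] < level))[0]
    k = hits[0] + 1                                   # 0-based index of the site i_* in the text
    a = (level - rho[k]) / (rho[k - 1] - rho[k])
    b = (rho[k - 1] - level) / (rho[k - 1] - rho[k])  # a + b = 1
    X_I = a * k / N + b * (k + 1) / N                 # sites i_*-1 and i_* sit at k/N and (k+1)/N
    mu_I = a * mut[k - 1] + b * mut[k]
    return X_I, mu_I

# placeholder data: a smooth liquid-gas step and a constant chemical potential
N = 64
x = (np.arange(N) + 1.0) / N
rho = 1.5 - np.tanh((x - 0.52) / 0.05)
mut = 0.01 * np.ones(N)
print(interface_readout(rho, mut))
```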
When ≫ 1, the chemical potential at the interface is =0, as shown in Appendix <ref>. When ≪ 1, we have the formula (<ref>), where =0, =2.5, and =1.5 were already determined for the model in the first paragraph of this section. in the right-side of (<ref>) is determined as =1/2 using (<ref>) with ρ̅=1.5, =2.5, =0.5. By substituting these values into (<ref>), we obtain μ_X_*^=ϕ/6+O(ϕ^2). The dotted lines in Fig. <ref> represent the theoretical predictions μ^/ϕ=1/6 for ≪ 1 and μ^/ϕ=0 for ≫ 1. These are consistent with the numerical result in Fig. <ref>. It is quite interesting to elucidate the dependence of μ^I quantitatively. In particular, one may conjecture a phase transition at some value of in the limit N →∞. To investigate the validity of this naive conjecture, we have to numerically study the asymptotic behavior for N →∞, → 0 and ϕ→ 0 in more detail. From the theoretical viewpoint, we need to develop a calculation method for thermodynamic properties of the system with finite . § CONCLUDING REMARKS We have derived the variational function determining the steady state for a boundary-driven diffusive system with ≪ 1. The result is consistent with global thermodynamics, which is an extended framework of thermodynamics. Before ending the paper, we present a few remarks. The first remark is on the boundary conditions. It is natural to study a system with different boundary conditions leading to the same most probable profile. As more familiar boundary conditions, one considers the case that chemical potentials at boundaries are fixed. However, as far as we attempted, we could not evaluate the Zubarev-McLennan representation for this case. To study the boundary condition dependence of the system is a future problem. Second, in general, fluctuating hydrodynamics is regarded as a mesoscopic model obtained by coarse-graining microscopic dynamics. Thus, it is a significant problem to find relationship between microscopic dynamics and the discrete fluctuating dynamics. As the first step of such studies, parameter values of the model should be determined from the observation of microscopic dynamics. In particular, it seems highly challenging to identify the value of from microscopic models. Third, as a generalization of the present model, one may consider a discrete fluctuating hydrodynamics describing liquid-gas phase coexistence in heat conduction systems. One can numerically study the model by changing . It is interesting to observe the deviation of the interface temperature from the equilibrium coexistence temperature. Furthermore, following the theoretical method presented in this paper, we may develop a theory for calculating the deviation. We conjecture that the deviation formula is equivalent to that predicted by global thermodynamics. The most important future work is an experimental observation of non-equilibrium phase coexistence in which metastable states are stable as the influence of a non-equilibrium current. As shown in this paper, the phenomenon is expected to occur in systems described by a discrete fluctuating hydrodynamics. However, it is not obvious whether experimental systems are described by a discrete fluctuating hydrodynamics. It would be interesting to clarify an experimental condition for realizing a discrete fluctuating hydrodynamics. § ACKNOWLEDGMENTS The authors thank F. Kagawa, K. Saito, and Y. Yamamura for useful discussions about experiments on non-equilibrium phase coexistence; M. Kobayashi, S. Yukawa, and A. 
Yoshida for discussions on numerical simulations of non-equilibrium phase coexistence; and K. Hiura, M. Itami, H. Nakano and Y. Nakayama for discussions on fluctuating hydrodynamics. We also thank B. Derrida, C. Maes, H. Spohn, and H. Tasaki for suggesting us to study boundary-driven diffusive systems. This work was supported by JSPS KAKENHI Grant Number JP22H01144 and JP23K22415. Data Availability: Data sharing not applicable to this article as no datasets were generated or analysed during the current study. Conflict of Interest: The authors have no financial or proprietary interests in any material discussed in this article. seccntformat#1 nameuse@seccnt@prefix@#1 nameusethe#1 nameuse@seccnt@postfix@#1 nameuse@seccnt@afterskip@#1 seccnt@prefix@sectionAppendix seccnt@postfix@section: seccnt@afterskip@section seccnt@afterskip@subsection § DERIVATION OF (<REF>) In this section, we derive the the stationary distribution of ρ for the equilibrium system with ϕ=0. In general, to analyze statistical properties of ρ(t) obeying (<ref>) with (<ref>), it is convenient to use a new variable ψ=(ψ_i)_i=1^N for ρ. The introduction of the new variable is another purpose of this section. See Appendix <ref> for the analysis using ψ. We first define ψ_i(t) by ψ_i(t)t= -j_i(t) with ψ_i(0)=Λ∑_j=1^i ρ_j(0) at t=0. Substituting (<ref>) into (<ref>) and integrating it in time, we have ρ_i(t)= ψ_i(t)-ψ_i-1(t)/Λ for any t. Substituting (<ref>) into F(ρ) given by (<ref>), we can define F(ψ) from F(ρ). Taking the derivative of F(ψ) in ψ_i, we obtain Fψ_i = Fρ_i- Fρ_i+1 = μ̃_i- μ̃_i+1. Using (<ref>) and (<ref>), we rewrite (<ref>) as d ψ_i/dt= σ(ρ_i^ m)/Λ( - Fψ_i- ϕδ_i,N) + √(2 σ(ρ_i^ m) /Λ)·ξ_i with ρ_i^ m= ψ_i+1-ψ_i-1/2 Λ. Because σ(ρ_i^ m) is independent of ψ_i, the multiplication of σ(ρ_i^ m) and ξ_i is uniquely determined independently of the multiplication rule. From (<ref>), we obtain the stationary distribution of ψ for the equilibrium system with ϕ=0 as P_(ψ)=1/Z'exp[ -1/ F(ψ) ], where Z' is the normalization constant. This gives the stationary distribution of ρ as (<ref>). § ANALYSIS OF THE CONTINUUM MODEL In this section, we analyze the continuum model (<ref>) and (<ref>) which corresponds to the discrete model (<ref>) and (<ref>) in the limit →∞ and N →∞ with L and κ fixed. Explicitly, we derive the phase coexistence condition (<ref>) for the equilibrium system with ϕ=0 and the chemical potential at the interface for the case ϕ >0. §.§ Equilibrium phase coexistence Stationary solutions of (<ref>) and (<ref>) with ϕ=0 and T=0 satisfy ∂_x ρ(x) =0. Using (<ref>), (<ref>) is explicitly written as f'(ρ)-κ∂_x^2 ρ = , where is a constant in x. Furthermore, multiplying (∂_xρ) with (<ref>) and integrating in x, we obtain f(ρ)-κ/2 (∂_xρ)^2 -ρ= , where is also a constant in x. We first consider necessary conditions under which there is a stationary and spatially inhomogeneous solution, which we call the phase coexistence solution, because it connects two stationary and spatially homogeneous solutions that represent a liquid phase and a gas phase, respectively. To make the argument clear, we take the limit L →∞ and we assume ρ(0) > ρ(∞) for the phase coexistence solution. Because the phase coexistence solution approaches to the stationary and spatially homogeneous solutions ρ(0) and ρ(∞) as x → 0 and x →∞, we have ∂_x ρ(0)=0, ∂_x^2 ρ(0)=0, ∂_x ρ(∞)=0, ∂_x^2 ρ(∞)=0. 
Therefore, (<ref>) and (<ref>) lead to necessary conditions as μ(ρ(0))=μ(ρ(∞))=, and f(ρ(0)) -ρ(0)= f(ρ(∞)) -ρ(∞)=, which is further rewritten as = f(ρ(0))-f(ρ(∞))/ρ(0)-ρ(∞). For the function f(ρ) with two local minima, the conditions (<ref>) and (<ref>) represents the common tangent line at the special values ρ= and ρ=. We set > without loss of generality. We thus identify ρ(0)= and ρ(∞)=, and the values of the constants and are also determined. Note that (<ref>) and (<ref>) are regarded as the conditions giving , , and by the form μ()=μ()=, p()=p()=. Now, suppose that ρ̅ satisfies < ρ̅<, where and are determined by (<ref>). By setting ρ(0)= and ρ(∞)=, we solve (<ref>) with the determined value of . We here notice that (<ref>) is interpreted as Newton's equation describing the motion of a point particle under a potential field, where ρ and x correspond to position and time, respectively. κ is interpreted as the mass, and the potential function V(ρ) is given by V(ρ) ≡ρ -f(ρ). (<ref>) represents the energy conservation for the equation of motion. From (<ref>), we find that and are local maximal points with the same potential value. Therefore, the phase coexistence solution ρ(x) with ρ(0)= and ρ(∞)= is given by the heteroclinic orbit connecting the two maximal points with the same potential value. This result corresponds to the statement involving (<ref>) in the main text. §.§ Non-equilibrium phase coexistence Stationary solutions of (<ref>) and (<ref>) with ϕ > 0 and T=0 satisfy ∂_x ρ(x) = - J/σ(ρ(x)) , which corresponds to (<ref>) in the continuum limit →∞ and N →∞, where J is a constant given by J=( ∫^L_0 dx 1/σ( ρ(x)))^-1, which corresponds to (<ref>) in the continuum limit →∞ and N →∞. By analyzing (<ref>), we determine the value of the chemical potential at the interface. First, to uniquely identify the interface position, we introduce a scaled coordinate x̂ =x/L. Taking the limit L →∞, we find that the interface width in the scaled coordinate space becomes zero. Thus, the interface position X in the x̂ space is given by the discontinuous point of ρ(x̂) in the limit L →∞. We then define the chemical potential at the interface by μ(x̂=X). To determine the value of μ(x̂ =X), we consider the generalized chemical potential μ̃(x) given by μ̃(x) = μ(ρ(x))-κ∂_x^2 ρ, which corresponds to (<ref>) in the limit →∞ and N →∞. By integrating (<ref>) in the range [x_1,x_2], we have μ̃(x_2)- μ̃(x_1) = - J ∫_x_1^x_2 dx 1/σ(ρ(x)) for any x_1 and x_2. Even though σ(ρ(x̂)) is discontinuous at x̂=X, the integration in the right-side of (<ref>) gives a continuous function in x_2 and x_1. Thus, μ̃(x̂) is a continuous function in the limit L →∞. Here, the key idea for the determination of μ(x̂=X) is the introduction of the generalized pressure p̃ satisfying ρ∂_x μ̃= ∂_x p̃ . We can explicitly derive p̃ from (<ref>) and (<ref>) as p̃=p(ρ)-κρ∂_x^2 ρ +κ/2(∂_x ρ)^2, which was first obtained by van der Waals <cit.>. By integrating (<ref>) in the range [x_1,x_2], we have p̃(x_2)- p̃(x_1) = - J ∫_x_1^x_2 dx ρ(x)/σ(ρ(x)), for any x_1 and x_2. We find from (<ref>) that p̃(x̂) is a continuous function in the limit L →∞. Note that p̃(x)= for the equilibrium system, where is the constant given in (<ref>). Now, using the continuity of μ̃(x̂) and p̃(x̂) at x̂=X, we can determine the values of ρ(x̂=X-ϵ) and ρ(x̂=X+ϵ) for small ϵ >0 in the limit L →∞. Let ρ_- and ρ_+ be ρ(x̂=X-ϵ) and ρ(x̂=X-ϵ) for ϵ→ 0+ after taking the limit L →∞. The continuity of μ̃(x̂) and p̃(x̂) at x̂=X leads to μ(ρ_-)= μ(ρ_+), p(ρ_-)= p(ρ_+). 
Recalling (<ref>), we obtain ρ_-=, ρ_+= . Therefore, the chemical potential at the interface is equal to . The result is mentioned in the third paragraph of Sec. <ref> in the main text. Finally, we present a remark on the singular continuum description for the case → 0 in Sec. <ref>. The chemical potential μ(x̂) is continuous at the interface position x̂=X where ρ(x̂) is discontinuous. In this case, however, p(x̂) is discontinuous at x̂=X, as shown by (<ref>). That is, (<ref>) does not hold at the interface. This is the most essential difference between the two cases ≫ 1 and ≪ 1. § ZUBAREV-MCLENNAN REPRESENTATION In this section, we derive the Zubarev-McLennan representation (<ref>) in Sec. <ref>. We study stochastic processes of ψ=(ψ_i)_1≤ i ≤ N defined by (<ref>). The time evolution of ψ is described by (<ref>). Let ψ̂=(ψ_t)_t=0^τ be a trajectory in the time interval [0,τ]. The path probability density P_(ψ̂) in the system with ϕ >0 starting from a density profile sampled from an equilibrium distribution P_(ψ_0) is expressed as P_(ψ̂) = P_(ψ_0) × const×exp( - 1/4∫_0^τ dt ∑_i=1^N Λ/σ(ρ_i^ m)[ ψ_it+σ(ρ_i^ m)/Λ( ψ_i+ϕδ_i,N) ]^2 ) . For the time-reversed trajectory ψ̂^† =(ψ_τ-t)_t=0^τ of ψ̂, we have P_(ψ̂^†) = P_(ψ_τ) × const×exp( - 1/4 ∫_0^τ dt ∑_i=1^N Λ/σ(ρ_i^ m)[ -ψ_it+σ(ρ_i^ m)/Λ( ψ_i+ϕδ_i,N) ]^2 ) . The ratio of the two yields P_ (ψ̂)/ P_ (ψ̂^†) =exp[ -1/ϕ∫_0^τ dt ψ_N(t)t], where (<ref>) has been substituted into P_(ψ_0) and P_(ψ_τ). To simplify the notation, we introduce the accumulated current Q_τ(ψ̂)≡∫_0^τ dt  j_N(t) . Using (<ref>) and (<ref>), we rewrite (<ref>) as P_ (ψ̂)/ P_ (ψ̂^†) =exp[ ϕ Q_τ/]. The distribution of ψ at time t is expressed as P_τ(ψ) = ∫ Dψ̂  P_ (ψ̂) δ(ψ_τ-ψ) = ∫ Dψ̂^†exp[ ϕ Q_τ(ψ̂)/] P_ (ψ̂^†) δ(ψ_τ-ψ), where we have used Dψ̂= Dψ̂^† and (<ref>). When the path integration variable is transformed, the right side of (<ref>) is rewritten as ∫ Dψ̂ exp[ϕ Q_τ(ψ̂^†)/] P_ (ψ̂) δ(ψ_0-ψ), = ∫ Dψ̂ exp[-ϕ Q_τ(ψ̂)/] P_ (ψ̂) δ(ψ_0-ψ), where we have used Q_τ(ψ̂^†)=-Q_τ(ψ̂). We thus have the relation P_τ(ψ)= P_(ψ) ⟨exp|[-ϕ Q_τ/] |_⟩ψ. Taking the limit τ→∞, we have P_ß(ψ)= P_(ψ) ⟨exp|[-ϕ Q/] |_⟩ψ with Q= ∫_0^∞ dt  j_N(t). In the limit → 0, we estimate ⟨exp|[-ϕ Q/] |_⟩ψ≃exp[-ϕ⟨Q||_⟩ψ/]. We then expand ⟨Q|_∞|_⟩ψ in ϕ, we obtain P_ß(ψ)= P_(ψ) exp[-ϕ⟨Q||_⟩ψ^+ O(ϕ^2) /] . This is the Zubarev-McLennan representation of the steady state distribution. By using (<ref>), we obtain the stationary distribution of ρ as the form (<ref>). § DERIVATION OF (<REF>) In (<ref>), we consider the decomposition of j(0,t) into Φ(t) and Φ_0(t). In this section, we calculate Φ_0(t). §.§ Preliminaries for the calculation We first note that (<ref>) and (<ref>) yield d/dt=d/dt+1/2d||/dt, d/dt=d/dt-1/2d||/dt, d|| /dt=-d|| /dt . From the chain rule of the derivative, we have d/dt[||/+||/]^-1 =-[||/+||/]^-2∘[1/d||/dt+1/d||/dt] =-(1/-1/)[||/+||/]^-2∘d||/dt, and d/dt[1/∫_dx ψ+1/∫_dx ψ] =d/dt[1/∫_(t)^(t)dx ψ+1/∫_(t)^(t)+1dx ψ] =1/[ψ((t),t) ∘d/dt - ψ((t),t) ∘d/dt] +1/[ψ((t),t) ∘d/dt - ψ((t),t) ∘d/dt] +1/∫_dx ∂_t ψ + 1/∫_dx ∂_t ψ =(1/-1/) [ (ψ((t),t) -ψ((t),t) )∘d/dt +1/2(ψ((t),t) +ψ((t),t) )∘d||/dt] +1/∫_dx ∂_t ψ + 1/∫_dx ∂_t ψ . §.§ Relation between Φ(t) and j(0,t) Noting the two derivatives (<ref>) and (<ref>), we define Φ_1 ≡ -( 1/-1/) [ ||/+||/]^-2[ 1/∫_(t)dx ψ + 1/∫_(t)dx ψ] ∘d||/dt, Φ_2 ≡( 1/-1/) [ ||/+||/]^-1 ×[ (ψ((t),t) -ψ((t),t) )∘d/dt +1/2(ψ((t),t) +ψ((t),t) )∘d||/dt. ] Then, the definition of Φ(t) in (<ref>) leads to Φ(t)=Φ_1(t)+Φ_2(t)+j(0,t) by using (<ref>). 
Here, from the piece-wise linear nature of ψ(x,t), we have 1/∫_(t)dx ψ + 1/∫_(t)dx ψ =1/2[ ||/+||/] (ψ((t),t) +ψ((t),t) ). See also (<ref>) for the same equation. Using this relation, we obtain Φ_1 +Φ_2= ( 1/-1/) [ |(t)|/+|(t)|/]^-1(ψ((t),t)-ψ((t),t)) ∘d/dt . Finally, we note that ψ((t),t)-ψ((t),t) =∫_dx ∂_xψ, =|(t)|((t)-ρ̅). Substituting this into (<ref>), we obtain (<ref>) where Φ_0≡Φ_1+Φ_2. 99 boiling J. R. Thome, Boiling in microchannels: a review of experiment and theory, Int. J. Heat and Fluid flow 25, 128-139 (2004). crystal E. Ben-Jacob and P. Garik, The formation of patterns in nonequilibrium growth, Nature 343, 523-530 (1990). Cannell G. Ahlers, L. I. Berge, and D. S. Cannell, Thermal convection in the presence of a first-order phase change, Phys. Rev. Lett. 70, 2399 (1993). Yoshida A. Yoshida, N. Nakagawa, and S.-i. Sasa, Heat-induced liquid hovering in liquid-gas coexistence under gravity, ArXiv:2310.058171. Zhong J.-Q. Zhong, D. Funfschilling, and G. Ahlers, Enhanced heat transport by turbulent two-phase Rayleigh-Benard convection, Phys. Rev. Lett. 102, 124501 (2009). Ahlers S. Weiss and G. Ahlers, Nematic-isotropic phase transition in turbulent thermal convection, J. Fluid Mech. 737, 308-328 (2013). Urban P. Urbana, D. Schmoranzerb, P. Hanzelkaa, K. R. Sreenivasanc, and L. Skrbekb, Anomalous heat transport and condensation in convection of cryogenic helium, Proc. Nat. Acad. Sci. 110, 8036-8039 (2013). mips M. E. Cates and J. Tailleur, Motility-Induced Phase Separation, Annual Review of Condensed Matter Physics 6, 219-244 (2015). Global-PRL N. Nakagawa and S.-i. Sasa, Liquid-gas transitions in steady heat conduction, Phys. Rev. Lett. 119, 260602 (2017). Global-JSP N. Nakagawa and S.-i. Sasa, Global thermodynamics for heat conduction states, J. Stat. Phys. 177, 825-888 (2019). Global-PRR N. Nakagawa and S.-i. Sasa, Unique extension of the maximum entropy principle to phase coexistence in heat conduction, Phys. Rev. Res. 4, 033155 (2022). KNS M. Kobayashi, N. Nakagawa, and S.-i. Sasa, Control of metastable states by heat flux in the Hamiltonian Potts model, Phys. Rev. Lett. 130, 247102 (2023). Global-PRE S.-i. Sasa, N. Nakagawa, M. Itami, and Y. Nakayama, Stochastic order parameter dynamics for phase coexistence in heat conduction, Phys. Rev. E 103, 062129 (2021). Schmitz R. Schmitz, Fluctuations in nonequilibrium fluids, Physics Reports 171, 1-58 (1988). Eyink G. L. Eyink, Dissipation and large thermodynamic fluctuations, J. Stat. Phys. 61, 533-572 (1990). Spohn H. Spohn, Large scale dynamics of interacting particles (Springer-Verlag, Berlin Heidelberg, 1991). Bertini-RMP L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio, and C. Landim, Macroscopic fluctuation theory, Rev. Mod. Phys. 87, 593 (2015). Kuramoto Y. Kuramoto, Effects of diffusion on the fluctuations in open chemical systems, Prog. Theor. Phys. 52, 711 (1974). Yana Y. Yanagisawa and S.-i. Sasa, Phase coexistence in a weakly stochastic reaction-diffusion system, arXiv:2403.19198 Kado Y. Kado and S.-i. Sasa, Microscopic singularity of an entropic force in stochastic order parameter dynamics, Phys. Rev. Lett. 132, 057101 (2024). Zubarev D. N. Zubarev, Nonequilibrium Statistical Thermodynamics, (Consultants Bureau, New York, 1974). McLennan J. A. McLennan, Phys. Fluids 3, 493 (1960); Introduction to Non-equilibrium Statistical Mechanics (Prentice-Hall, 1988). Crooks G. E. Crooks, Path-ensemble averages in systems driven far from equilibrium, Phys. Rev. E 61, 2361 (2000). fh-zubarev S.-i. 
Sasa, A perturbation theory for large deviation functionals in fluctuating hydrodynamics, J. Phys. A: Math. Theor. 41, 045006 (2008). KN T. S. Komatsu and N. Nakagawa, Expression for the stationary distribution in nonequilibrium steady states, Phys. Rev. Lett. 100, 030601 (2008). Maes-rep C. Maes and K. Netočný, Rigorous meaning of McLennan ensembles, J. Math. Phys. 51, 015219 (2010). vanPress J. S. Rowlinson, Translation of J. D. van der Waals' “The thermodynamik theory of capillarity under the hypothesis of a continuous variation of density”, J. Stat. Phys. 20 197 (1979).
High-energy tunable ultraviolet pulses generated by optical leaky wave in filamentation

Litong Xu and Tingting Xi (ttxi@ucas.ac.cn)
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China
(arXiv:2407.13173, July 2024)

Abstract: Ultraviolet pulses could open up new opportunities for the study of strong-field physics and ultrafast science. However, existing methods for generating ultraviolet pulses face difficulties in fulfilling the twofold requirements of high energy and wavelength tunability simultaneously. Here, we theoretically demonstrate the generation of high-energy, wavelength-tunable ultraviolet pulses in preformed air-plasma channels via leaky wave emission. The output ultraviolet pulse has a tunable wavelength ranging from 250 nm to 430 nm and an energy level up to sub-mJ. An octave-spanning ultraviolet supercontinuum with a flatness better than 3 dB can also be obtained via longitudinally modulated dispersion. Such a high-energy tunable ultraviolet light source may provide promising opportunities for the characterization of ultrafast phenomena such as molecular breakup, and also serve as an important driving source for the generation of high-energy attosecond pulses.

§ INTRODUCTION Ultraviolet (UV) pulses have gained increasing popularity in the last few decades due to their versatile applications in biology <cit.>, material fabrication <cit.> and spectroscopy <cit.>. With the rapid development of laser technology, UV pulses on femtosecond or even attosecond time scales have been generated and widely applied <cit.>. Within the entire UV range, pulses with wavelengths from 200 nm to 400 nm remain in particularly high demand because they can be resonantly absorbed by most substances and can serve as unique light sources to study the ultrafast dynamics of atoms and molecules, for instance, measuring the absolute transition frequency of atoms <cit.>, revealing the structure and dynamics of biomolecules <cit.>, and characterizing molecular breakup <cit.>. Moreover, when a UV pulse with high energy and tunable wavelength is used as a single driving source or as one of the two-color fields, it enables the generation of high-order harmonics with high conversion efficiency <cit.>, which is beneficial for applications that require high photon flux, such as attosecond spectroscopy <cit.> and extreme-UV lithography <cit.>. At present, UV pulses in the spectral region of 200 nm to 400 nm are mainly generated from low-order harmonics <cit.> and four-wave mixing <cit.>. Nevertheless, the wavelength of the generated UV pulse is often limited by the available wavelength of the pump pulse. To overcome this problem, UV pulses with continuously tunable wavelength can be generated from the resonant dispersive wave in gas-filled hollow-core fibers by varying the gas pressure <cit.>. In this case, the energy of the UV pulses has reached a few tens of μJ, but it is difficult to enhance further due to the limited length and diameter of the fiber and the available energy of the few-cycle pump pulses <cit.>. Although removing the fiber confinement could lift this restriction on energy enhancement, it also removes the negative dispersion provided by the fiber waveguide. Correspondingly, UV pulses cannot be obtained, because the propagation of a near-infrared femtosecond laser pulse in uniform gas results in supercontinuum emission with a cut-off wavelength of ∼400 nm.
In this article, we propose to generate wavelength-tunable UV pulses from the supercontinuum emission of the near-infrared femtosecond pulse by introducing a preformed air-plasma channel with the adjusted negative dispersion into air. The output UV pulse has a tunable wavelength ranging from 250 nm to 430 nm and an energy level up to sub-mJ, suggesting that this approach holds great potential in energy scaling of the tunable UV pulses. Besides, an octave-spanning UV supercontinuum with a flatness better than 3 dB can be obtained via the longitudinally modulated dispersion, which should be of particular interest to ultrafast spectroscopy <cit.> and attosecond physics <cit.>. § CONCEPTION Our basic idea is illustrated in Fig. <ref>. During filamentation in air, spectral broadening induced by self-phase modulation can be viewed as a cascaded process. The spectral energy is transferred from the central frequency to high frequency components, and then higher. This process could lead to supercontinuum generation with blue-side cut-off wavelength ∼ 400 nm, which is the common situation for air filamentation. The spectrum in UV region cannot be obtained because air is normally dispersive, and the high-frequency components move far behind the pulse peak gradually, with their intensity too low to participate in further frequency shifting. To overcome this limitation, we introduce a preformed uniform plasma channel, which can introduce negative dispersion into air. Based on this negative dispersion, the high-frequency components can be constrained near the peak of the pulse. They can undergo further frequency shift to the UV region. As shown in Fig. <ref>a, the cascaded frequency shift combined with the negative dispersion could result in the super-broad spectrum, which covers the UV region. To obtain an isolated UV spectral hump, we need a frequency boundary ω_L to terminate the cascaded process. For the components with frequency higher than ω_L, they fall behind the primary pulse and will not undergo further frequency shift due to the relatively low intensity. Therefore, the spectral energy is deposited in this region, as shown in Fig. <ref>. We term this process temporal leaky wave emission, because it is similar to the spatial leaky wave in the waveguide <cit.>. We will show the frequency boundary ω_L of the cascaded broadening, which corresponds to the peak frequency of the leaky UV spectrum, could be tuned by adjusting the density of the preformed plasma channel. To calculate ω_L, we need to find the spectral components which fall behind the primary pulse. Here we use a preformed air-plasma channel with plasma density ρ_0 to provide tunable dispersion condition, and the dispersion relation is given by k(ω)=n_a(ω)ω/c-ρ_0ω/2cρ_c, where n_a(ω) is the refractive index of air, and ρ_c=ϵ_0m_eω^2/e^2 is the frequency dependent critical plasma density that accounts for plasma dispersion. For the velocity of the primary pulse, we use the group velocity of the pulse peak with central frequency ω_0 and consider the self-steepening effect. It is written as v_peak=1/k'(ω_0)+n_2I_p/c, where n_2 is the nonlinear refractive index coefficient and I_p is the peak intensity of the pulse. The group velocity of spectral components falling behind the primary pulse is given by v_g(ω)<v_peak, and the critical frequency ω_L satisfies v_g(ω_L)=v_peak. From the above analysis, we can see that the peak frequency of the UV spectrum ω_L could be tuned by adjusting the plasma density ρ_0. 
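The tuning predicted by this analysis can be sketched numerically by solving v_g(ω_L)=v_peak with the dispersion relation above. In the illustrative script below, the air index n_a(λ) is taken from a standard (Peck–Reeder-type) dispersion formula for dry air — an assumption of the sketch, as the exact dispersion data used in the simulations may differ slightly — while n₂ and I_p take the values quoted above, and dk/dω is evaluated by finite differences.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.constants import c, e, epsilon_0, m_e

# Sketch of the tuning curve lambda_L(rho_0) implied by the relations above.
# Assumption: n_a(lambda) from a standard dispersion formula for dry air;
# n2 and I_p take the values quoted in the text.

def n_air(lam_um):
    s2 = 1.0 / lam_um**2
    return 1.0 + 0.05792105 / (238.0185 - s2) + 0.00167917 / (57.362 - s2)

def k_total(omega, rho0):                              # k(omega) including the plasma term
    lam_um = 2.0 * np.pi * c / omega * 1e6
    wp2 = rho0 * e**2 / (epsilon_0 * m_e)              # plasma frequency squared
    return n_air(lam_um) * omega / c - wp2 / (2.0 * c * omega)

def kprime(omega, rho0, h=1e9):                        # dk/domega by central differences
    return (k_total(omega + h, rho0) - k_total(omega - h, rho0)) / (2.0 * h)

lam0 = 800e-9
omega0 = 2.0 * np.pi * c / lam0
n2_Ip = 0.96e-19 * 1.0e14                              # n2 [cm^2/W] x I_p [W/cm^2], dimensionless

def lambda_L(rho0_cm3):
    rho0 = rho0_cm3 * 1e6                              # cm^-3 -> m^-3
    inv_v_peak = kprime(omega0, rho0) + n2_Ip / c      # 1/v_peak, including self-steepening
    g = lambda lam: kprime(2.0 * np.pi * c / lam, rho0) - inv_v_peak
    return brentq(g, 240e-9, 700e-9)                   # blue-side crossing of v_g = v_peak

for rho0 in [0.5e17, 1.0e17, 1.5e17, 2.0e17]:
    print(rho0, lambda_L(rho0) * 1e9)                  # nm; roughly 320 nm near 1.5e17 cm^-3
```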
To verify that the dispersion induced by the preformed low-density plasma channel can indeed support the generation of UV pulses via temporal leaky-wave emission, we calculate the group velocity of different spectral components for the plasma density ρ_0=1.5× 10^17 cm^-3, n_2=0.96× 10^-19 cm^2/W and I_p=100 TW/cm^2, as shown in Fig. <ref>c. For the blueshift components with frequency smaller than ω_L, their group velocities are larger than the pulse peak. Although they are generated at the pulse trailing edge by self-phase modulation, they can be confined within the primary pulse. For spectral components with frequency larger than ω_L, their group velocities are lower than the pulse peak. Therefore, this part of the spectrum will lag behind the primary pulse. This evolution of group velocity as a function of the frequency supports our scheme of the temporal leaky-wave emission. For the plasma density ρ_0=1.5× 10^17 cm^-3, the corresponding wavelength of the leaky-wave emission is 320 nm, located in the UV region. To confirm our model, we numerically solve the UPPE (see Methods) to study the propagation of femtosecond laser pulses in air-plasma channels. The input 800 nm pulse has a duration of 30 fs, a beam waist of 2 mm and a peak power of P_in=5P_cr. It is focused by a lens with f=1 m. § DISCUSSION OF RESULTS §.§ Tunable UV pulse Firstly, we investigate the pulse dynamics when the density of preformed air-plasma channel ρ_0=1.5× 10^17 cm^-3. Filamentation can be divided into three stages. At z=1 m, we can see supercontinuum generation (Fig. <ref>a) as well as pulse compression (Fig. <ref>b). Pulse-splitting also occurs at this stage (Fig. <ref>b), and it can be seen that the supercontinuum is mainly contributed by the rear sub-pulse. Since the supercontinuum has reached the critical wavelength of leaky wave, at z=1.1 m there arises a high-frequency lobe in the spectrogram (see Methods), which corresponds to the newly generated sub-pulse in Fig. <ref>b and the spectral hump at 283 nm in Fig. <ref>c. Then the leaky wave is continuously generated and lagged behind, which makes the time span and the spectral intensity of leaky wave increase. As the spectral components between ω_0 and ω_L are consumed , we can see the formation of isolated spectral hump. The simulation results confirm our scheme, except for a slight difference in the peak wavelength of the UV hump. The reason for this difference will be discussed below. For different preformed plasma densities ρ_0, the output spectra are shown in Fig. <ref>a. When ρ_0<0.3× 10^17 cm^-3, the spectrum is broadened without isolated UV spectral hump. As the density of the pre-plasma increases, the stronger negative dispersion leads to pulse self-compression, resulting in a higher peak intensity and a shorter pulse duration, as shown in the Fig. <ref>b. This gives rise to a much steeper intensity slope ∂_t I, thus a much broader spectrum, which covers the critical wavelength of leaky wave λ_L=2π c/ω_L and an isolated UV spectral hump is formed. The central wavelength of the UV spectral hump decreases as ρ_0 increases, resulting in a wavelength tuning range between 250 nm and 430 nm. The simulation results accord well with the theoretically predicted value of λ_L from Eq. (3), denoted by the red dashed line in Fig. <ref>(a). 
The deviation between our model and simulation mainly comes from the post-frequency-shift process of leaky wave: the spectral component with wavelength of λ_L generated near the pulse peak undergoes further frequency shift as it moves toward the pulse tail. The lower bound of wavelength tuning in our scheme is about 250 nm, which is limited by two factors. For one thing, the conversion efficiency decreases with the output wavelength. For another, the clamping intensity limits the spectral width of the supercontinuum, while to generate leaky wave, the supercontinuum has to cover the critical wavelength. The above restriction may be broken by using a pump beam with shorter central wavelength. §.§ Energy scaling As an important criteria of UV light source, the energy of UV pulse is typically a few μ J using photonic crystal fibers and hollow-core capillary fibers. The fiber system, though providing a simple and flexible platform to generate tunable UV pulse source, also puts an intrinsic limitation on the available energy. Our air-plasma channel scheme could be a promising candidate for obtaining unprecedented high energy UV pulse, without the concern of damage to the media. To explore the energy scaling rules of our scheme, we first investigate the influence of input pulse peak power on the output spectra. We fix ρ_0 at 1.5× 10^17 cm^-3 and gradually increase the input power from P_cr to 9P_cr. The output spectra evolution is shown in Fig <ref>a. Leaky wave emission with a wavelength of 280 nm is generated when the input peak power P_in≈ 1.5P_cr. When increasing the input power, the leaky wavelength first becomes longer (about 10 nm), because the minimum pulse width is slightly shorter, and the post-frequency-shift effect mentioned before is weaker. For P_in> 5P_cr, the minimum pulse width as well as the leaky wavelength is almost constant, which ensures the stability of our scheme. As the incident power increases, the UV spectral hump becomes more intense with an increased energy and a slightly decreased conversion efficiency. For P_in= 5P_cr and 9P_cr, the UV spectrum has a central wavelength of 283 nm and 285 nm, an energy of 13.3 μ J and 22.4 μ J, and a conversion efficiency of 0.88% and 0.83%, respectively. We also calculate the energy and conversion efficiency of the UV spectrum with different central wavelength by adjusting pre-plasma density. As expected, the conversion efficiency is lower for shorter wavelength due to the cascaded feature of spectral broadening, as shown in Fig. <ref>b. For wavelength shorter than 280 nm, the conversion efficiency is lower than 1%. However, since the conversion efficiency only decreases slightly when the input peak power is increased from 5P_cr to 9P_cr (less than 0.5%), energy scaling of UV pulses could be directly achieved by increasing the input power, as indicated by Fig. <ref>c. The output energy of UV pulses could be more than 40 μ J for wavelength longer than 300 nm, which is several times the value in literature <cit.>. If further increasing the input power, the influence of multiple filaments should be considered. Here we propose that a microlens array (MLA) may be used to obtain regularly distributed filaments, as demonstrated in Fig. <ref>d. Since the supercontinuum emitted from each single filament can be coherently combined <cit.>, using a MLA to organize the filaments generated by high-energy pulse could be a promising solution to mJ level UV pulse. 
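The quoted energies can be cross-checked with a quick back-of-the-envelope estimate. Assuming a Gaussian temporal profile (so that pulse energy ≈ 1.064 · P_peak · τ_FWHM, an assumption of this sketch, not of the authors) and taking P_cr = 10 GW and τ = 30 fs from the Methods, the efficiencies above reproduce the reported UV energies to within about 10%; the same estimate for the 20 P_cr multi-filament case, using the ~6% efficiency of the proof-of-principle MLA simulation below, lands in the sub-mJ range.

"""Consistency check of the quoted UV pulse energies (Gaussian temporal profile assumed)."""
import numpy as np

P_cr, tau = 10e9, 30e-15                       # critical power (W) and FWHM duration (s)
gauss = np.sqrt(np.pi / (4 * np.log(2)))       # energy = gauss * P_peak * tau  (~1.064)

cases = [(5, 0.0088, "13.3 uJ"), (9, 0.0083, "22.4 uJ"), (20, 0.06, "sub-mJ (MLA)")]
for mult, eta, quoted in cases:
    E_in = mult * P_cr * tau * gauss           # input pulse energy in joules
    print(f"{mult:>2d} P_cr: E_in = {E_in * 1e3:.2f} mJ, "
          f"eta = {eta:.2%} -> UV ~ {eta * E_in * 1e6:.0f} uJ (quoted: {quoted})")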
As a proof-of-principle simulation, we use an incident pulse with peak power P_in= 20P_cr, focused by a MLA (the focal length and size of each square lenslet are 1 m and 2 mm, respectively). Four filaments are formed (Fig. <ref>e), and the output UV pulse energy reaches sub-mJ level, as shown in Fig. <ref>f. Note that when using MLA, the conversion efficiency is nearly doubled, about 6% for a 350 nm pulse. This is related to the interaction of the energy background of filaments: the energy that is defocused by one filament can be reused by other filaments. It can also be seen that it is necessary to provide a uniform plasma channel with a centimeter-diameter for the generation of high-energy UV pulses. Such a plasma channel has been reported to be generated by filamentation of high-energy picosecond laser pulses <cit.>. §.§ Superflat UV supercontinuum From Fig. <ref>a we know wavelength tuning of UV pulse can be achieved by changing the plasma density. To generate a flat UV spectrum, we could use a pre-plasma with a density gradient along the propagation direction. We design a plasma gradient shown in Fig. <ref>a, where the density of pre-plasma remains constant at 2× 10^17 cm^-3 initially, then decreases linearly from z=1.4 m to z=1.8 m, and finally remains the density of 0.4× 10^17 cm^-3. For an input laser power P_in= 9P_cr, the spectra at different positions are given in Fig. <ref>b-e. At z=1.4 m, a UV spectral hump is formed near 250 nm, which is consistent with the leaky wavelength for ρ_0=2× 10^17 cm^-3 shown in Fig. <ref>a. As the density of the pre-plasma decreases, the corresponding leaky wavelength decreases and the spectral gap between 250 nm and 500 nm is gradually filled. As a result, at z=2 m we obtain a supercontinuum that spans from 250 nm to 1100 nm, more than two octaves. Moreover, we find that the supercontinuum in the UV region is superflat, with very close spectral power. The UV supercontinuum is detailed in the inset of Fig. <ref>a, where the spectral power fluctuates less than 3 dB between 260 nm and 520 nm. The flatness of the UV supercontinuum ensures consistent measurement quality throughout the spectrum of interest, which can be particularly useful to increase the signal-to-noise ratio in ultrafast spectroscopy <cit.>. The ultrabroad and superflat UV spectrum is also highly beneficial for generating UV few-cycle pulses, serving as important driving sources for high-order harmonic generation <cit.>. § CONCLUSION In summary, we have proposed the leaky wave emission model that enables the generation of wavelength-tunable ultraviolet pulses. The model was verified by using a pre-formed plasma channel to modulate the dispersion condition of air in simulation. Compared to the resonant dispersive wave scheme in hollow-core fibers, our approach removes the limitation of fiber system, thus showing great potential in energy scaling. The output UV pulse has a tunable wavelength range of 250 nm to 430 nm and an energy level up to sub-mJ. When applying a longitudinally modulated plasma density, we could obtain an octave-spanning ultraviolet supercontinuum with a flatness better than 3 dB, which could greatly facilitate the applications in ultrafast spectroscopy. This study could provide an essential light source for ultrafast science and important driving source for attosecond physics. 
§ METHODS UPPE model The unidirectional pulse propagation equation (UPPE) that governs the forward-propagating electric field envelope Ê(k_x,k_y,ω,z)=ℱ[E(x,y,t,z)] in the pulse local frame (t→ t-z/v_g) is written as <cit.>: ∂_zÊ=i(K_z-k_0-ω-ω_0/v_g)Ê+iω^2/2ϵ_0c^2K_zℱ[P_NL+i/ωJ]. Here K_z=√(k(ω)^2-k_x^2-k_y^2), k(ω)=n(ω)ω/c is the dispersion relation of the medium, k_0=k(ω_0) and v_g^-1=k'(ω_0). The second term on the right side denotes the Fourier transform of time-dependent nonlinear response. The nonlinear polarization includes an instantaneous Kerr response and a delayed Raman contribution with equal proportion <cit.>: P_NL=ϵ_0n_0n_2[I+∫_-∞^tℛ(t-t')I(t')dt']E, where the nonlinear refractive index coefficient n_2=0.96× 10^-19 cm^2/W, yielding a critical power for self-focusing P_cr=10 GW <cit.>. The specific form of ℛ(t-t') that denotes the Raman contribution can be found in Ref. <cit.>. The current term is composed of plasma current J_p=ie^2ρ E/m_eω and ionization loss J_loss=2W(|E|)U_iρ_n/E. The electron density ρ is calculated using the corrected Perelomov-Popov-Terent’ev formula <cit.>: ∂_t ρ=W(|E|)(ρ_n-ρ_0-ρ), where W(|E|) is the PPT ionization rate, U_i is the ionization potential of oxygen and ρ_n is the neutral species density. We assume the preformed plasma channel is uniformly distributed with transverse scale much larger than the spot size, then the effect of initial plasma density ρ_0 can be coupled into the dispersion relation: n(ω)=n_a(ω)-ρ_0/2ρ_c. n_a(ω) is the refractive index of air <cit.>, and ρ_c=ϵ_0m_eω^2/e^2 is the frequency dependent critical plasma density that accounts for plasma dispersion. Spectrogram representation The spectrogram of the electric field is calculated as follows: P(τ, ω)=|∫_-∞^∞ e^-i ω t E(t) h(t-τ) d t|^2, where the spectrogram P(τ,ω) is the function of time delay τ and angular frequency ω. We choose a Gaussian function with 5 fs FWHM as the gate function. The spectrogram is calculated respectively for each (x,y) data point, and then summed up. § ACKNOWLEDGEMENTS The authors acknowledge the supports of the National Natural Science Foundation of China (NSFC) (11874056, 12074228) and Fundamental Research Funds for the Central Universities. § DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request. § AUTHOR CONTRIBUTIONS L.X. and T.X. discussed and conceived the idea. L.X. performed the theoretical analysis and simulations. L.X. and T.X. analyzed the data and prepared the manuscript. § COMPETING INTERESTS The authors declare no competing interests. 26 #1ISBN #1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1<https://doi.org/#1>et al.#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1<><#>1#1#1#1#1#1#1#1#1#1#1#1#1#1PreBibitemsHook [Matsumoto et al.2019]matsumoto2019 Matsumoto, S., Cavadini, S., Bunker, R.D., Grand, R.S., Potenza, A., Rabl, J., Yamamoto, J., Schenk, A.D., Schübeler, D., Iwai, S., Sugasawa, K., Kurumizaka, H., Thomä, N.H.: DNA damage detection in nucleosomes involves DNA register shifting. Nature 571(7763), 79–84 (2019) 10.1038/s41586-019-1259-3 [Lombardo et al.2019]lombardo2019 Lombardo, D., Shah, P., Sarangan, A.: Single step fabrication of nano scale optical devices using binary contact mask deep UV interference lithography. 
Optics Express 27(16), 22917 (2019) 10.1364/OE.27.022917 [Mao et al.2021]mao2021 Mao, Y., Zhao, D., Yan, S., Zhang, H., Li, J., Han, K., Xu, X., Guo, C., Yang, L., Zhang, C., Huang, K., Chen, Y.: A vacuum ultraviolet laser with a submicrometer spot for spatially resolved photoemission spectroscopy. Light: Science & Applications 10(1), 22 (2021) 10.1038/s41377-021-00463-3 [Xie et al.2024]xie2024 Xie, X., Hung, Y., Deng, Y., Cavalieri, A.L., Baltuška, A., Johnson, S.L.: Generation of millijoule-level sub-5 fs violet laser pulses. High Power Laser Science and Engineering 12, 16 (2024) 10.1017/hpl.2023.100 [Géneaux et al.2016]geneaux2016 Géneaux, R., Camper, A., Auguste, T., Gobert, O., Caillat, J., Taïeb, R., Ruchon, T.: Synthesis and characterization of attosecond light vortices in the extreme ultraviolet. Nature Communications 7(1), 12583 (2016) 10.1038/ncomms12583 [Ozawa and Kobayashi2013]ozawa2013 Ozawa, A., Kobayashi, Y.: Vuv frequency-comb spectroscopy of atomic xenon. Physical Review A 87(2), 022507 (2013) 10.1103/PhysRevA.87.022507 [Borrego-Varillas et al.2019]borrego-varillas2019 Borrego-Varillas, R., Nenov, A., Ganzer, L., Oriana, A., Manzoni, C., Tolomelli, A., Rivalta, I., Mukamel, S., Garavelli, M., Cerullo, G.: Two-dimensional UV spectroscopy: A new insight into the structure and dynamics of biomolecules. Chemical Science 10(43), 9907–9921 (2019) 10.1039/C9SC03871J [Yue and Madsen2015]yue2015 Yue, L., Madsen, L.B.: Characterization of Molecular Breakup by Very Intense Femtosecond XUV Laser Pulses. Physical Review Letters 115(3), 033001 (2015) 10.1103/PhysRevLett.115.033001 [Marceau et al.2017]marceau2017 Marceau, C., Hammond, T.J., Naumov, A.Y., Corkum, P.B., Villeneuve, D.M.: Wavelength scaling of high harmonic generation for 267 nm, 400 nm and 800 nm driving laser pulses. Journal of Physics Communications 1(1), 015009 (2017) 10.1088/2399-6528/aa74f6 [Midorikawa2022]midorikawa2022 Midorikawa, K.: Progress on table-top isolated attosecond light sources. Nature Photonics 16(4), 267–278 (2022) 10.1038/s41566-022-00961-9 [Krausz and Ivanov2009]krausz2009 Krausz, F., Ivanov, M.: Attosecond physics. Reviews of Modern Physics 81(1), 163–234 (2009) 10.1103/RevModPhys.81.163 [Tseng et al.2023]tseng2023 Tseng, L.-T., Karadan, P., Kazazis, D., Constantinou, P.C., Stock, T.J.Z., Curson, N.J., Schofield, S.R., Muntwiler, M., Aeppli, G., Ekinci, Y.: Resistless EUV lithography: Photon-induced oxide patterning on silicon. Science Advances 9(16), 5997 (2023) 10.1126/sciadv.adf5997 [Zhou et al.2014]zhou2014 Zhou, H., Li, W., Shi, L., Wang, D., Ding, L., Zeng, H.: Efficient generation of vacuum and extreme ultraviolet pulses. Laser Physics Letters 11(2), 025402 (2014) 10.1088/1612-2011/11/2/025402 [Lekosiotis et al.2020]lekosiotis2020 Lekosiotis, A., Belli, F., Brahms, C., Travers, J.C.: Generation of broadband circularly polarized deep-ultraviolet pulses in hollow capillary fibers. Optics Letters 45(20), 5648 (2020) 10.1364/OL.400362 [Saleh and Biancalana2011]saleh2011 Saleh, M.F., Biancalana, F.: Understanding the dynamics of photoionization-induced nonlinear effects and solitons in gas-filled hollow-core photonic crystal fibers. Physical Review A 84(6), 063838 (2011) 10.1103/PhysRevA.84.063838 [Mak et al.2013]mak2013 Mak, K.F., Travers, J.C., Hölzer, P., Joly, N.Y., Russell, P.S.J.: Tunable vacuum-UV to visible ultrafast pulse source based on gas-filled Kagome-PCF. 
Optics Express 21(9), 10942 (2013) 10.1364/OE.21.010942 [Travers et al.2019]travers2019 Travers, J.C., Grigorova, T.F., Brahms, C., Belli, F.: High-energy pulse self-compression and ultraviolet generation through soliton dynamics in hollow capillary fibres. Nature Photonics 13(8), 547–554 (2019) 10.1038/s41566-019-0416-4 [Hong et al.2023]hong2023 Hong, L., Yang, H., Liu, L., Li, M., Liu, Y., Chen, B., Yu, H., Ju, W., Li, Z.-Y.: Intense and Superflat White Laser with 700-nm 3-dB Bandwidth and 1-mJ Pulse Energy Enabling Single-Shot Subpicosecond Pulse Laser Spectroscopy. Research 6, 0210 (2023) 10.34133/research.0210 [Monticone and Alu2015]monticone2015 Monticone, F., Alu, A.: Leaky-Wave Theory, Techniques, and Applications: From Microwaves to Visible Frequencies. Proceedings of the IEEE 103(5), 793–821 (2015) 10.1109/JPROC.2015.2399419 [Qian et al.2021]qian2021b Qian, J., Wang, P., Peng, Y., Li, Y., Shao, B., Su, H., Lv, X., Wang, D., Leng, Y., Li, R.: Pulse combination and compression in hollow-core fiber for few-cycle intense mid-infrared laser generation. Photonics Research 9(4), 477 (2021) 10.1364/PRJ.415794 [Tochitsky et al.2019]tochitsky2019 Tochitsky, S., Welch, E., Polyanskiy, M., Pogorelsky, I., Panagiotopoulos, P., Kolesik, M., Wright, E.M., Koch, S.W., Moloney, J.V., Pigeon, J., Joshi, C.: Megafilament in air formed by self-guided terawatt long-wavelength infrared laser. Nature Photonics 13(1), 41–46 (2019) 10.1038/s41566-018-0315-0 [Couairon et al.2011]couairon2011 Couairon, A., Brambilla, E., Corti, T., Majus, D., de J. Ramírez-Góngora, O., Kolesik, M.: Practitioner's guide to laser pulse propagation models and simulation: Numerical implementation and practical usage of modern pulse propagation models. The European Physical Journal Special Topics 199(1), 5–76 (2011) 10.1140/epjst/e2011-01503-3 [Kumar and Mishra2009]kumar2009 Kumar, A., Mishra, V.: Single-cycle pulse propagation in a cubic medium with delayed Raman response. Physical Review A 79(6), 063807 (2009) 10.1103/PhysRevA.79.063807 [Liu and Chin2005]liu2005a Liu, W., Chin, S.L.: Direct measurement of the critical power of femtosecond Ti:sapphire laser pulse in air. Optics Express 13(15), 5750 (2005) 10.1364/OPEX.13.005750 [Bergé et al.2007]berge2007 Bergé, L., Skupin, S., Nuter, R., Kasparian, J., Wolf, J.-P.: Ultrashort filaments of light in weakly ionized, optically transparent media. Reports on Progress in Physics 70(10), 1633–1713 (2007) 10.1088/0034-4885/70/10/R03 [Zhang et al.2008]zhang2008 Zhang, J., Lu, Z.H., Wang, L.J.: Precision refractive index measurements of air, N_2, O_2, Ar, and CO_2 with a frequency comb. Applied Optics 47(17), 3143 (2008) 10.1364/AO.47.003143
http://arxiv.org/abs/2407.12632v1
20240717150035
CerberusDet: Unified Multi-Task Object Detection
[ "Irina Tolstykh", "Mikhail Chernyshov", "Maksim Kuprashevich" ]
cs.CV
[ "cs.CV", "I.2.0; I.4.0; I.4.9" ]
: Unified Multi-Task Object Detection t]c@2emc Irina Tolstykh Mikhail Chernyshov irinakr4snova@gmail.com imachernyshov@gmail.com Maksim Kuprashevich mvkuprashevich@gmail.com Layer Team, R&D Department, SaluteDevices Received 04 December 2023 / Accepted 15 July 2024 ============================================================================================================================================================================================================================ empty § ABSTRACT Object detection is a core task in computer vision. Over the years, the development of numerous models has significantly enhanced performance. However, these conventional models are usually limited by the data on which they were trained and by the category logic they define. With the recent rise of Language-Visual Models, new methods have emerged that are not restricted to these fixed categories. Despite their flexibility, such Open Vocabulary detection models still fall short in accuracy compared to traditional models with fixed classes. At the same time, more accurate data-specific models face challenges when there is a need to extend classes or merge different datasets for training. The latter often cannot be combined due to different logics or conflicting class definitions, making it difficult to improve a model without compromising its performance. In this paper, we introduce , a framework with a multi-headed model designed for handling multiple object detection tasks. Proposed model is built on the YOLO architecture and efficiently shares visual features from both backbone and neck components, while maintaining separate task heads. This approach allows to perform very efficiently while still delivering optimal results. We evaluated the model on the PASCAL VOC dataset and additional categories from the Objects365 dataset to demonstrate its abilities. achieved results comparable to state-of-the-art data-specific models with 36% less inference time. The more tasks are trained together, the more efficient the proposed model becomes compared to running individual models sequentially. The training and inference code, as well as the model, are available as open-source. [https://github.com/ai-forever/CerberusDet] § INTRODUCTION Adding new categories to an existing real-time application that uses Object Detection (OD) involves several significant challenges. A key issue is that object categories annotated in one dataset might be unannotated in another, even if the objects themselves appear in images from the latter. Additionally, merging different datasets often may be impossible because of differing annotation logic and incomplete class overlaps. At the same time, such applications require efficient pipelines, which limits the usage of separate data-specific models. The goal of this work is to build a unified model trained on multiple datasets that does not degrade in accuracy compared to individually trained models, while utilizing less computational budget. We present the framework for training a single detection neural network on multiple datasets simultaneously. We also demonstrate an approach to identify the optimal model architecture, as not all tasks can be trained together. A notable challenge lies in determining which parameters to share across which tasks. Suboptimal grouping of tasks may cause negative transfer <cit.>, the problem of sharing information between unrelated tasks. 
Additionally, with computational resource constraints, proposed approach allows us to choose the architecture that fits the requirements. To evaluate the proposed model, we conduct experiments with open data and obtain results comparable to separated data-specific state-of-the-art models, but with one unified neural network. The presented architecture is based on YOLO<cit.>, but the method can be easily adapted to many other OD architectures." An alternative approach to extending the detector model with new categories is the use of Open-Vocabulary Object Detectors (OVDs)<cit.>, which have recently gained popularity. However, OVDs often lack the accuracy of data-specific detectors, require a lot of training data, and are prone to overfitting to base classes <cit.>. We prioritize high accuracy over the flexibility of OVDs. The proposed architecture allows us to add new classes as needed while preserving the accuracy of previously learned ones, making our approach more suitable for the required needs. Notably, this approach has been deployed and validated in our production environment, demonstrating its robustness and reliability in practical applications. The key contributions of our paper are as follows: * We provide a study of various methods for multi-dataset and multi-task detection, exploring different parameter-sharing strategies and training procedures. * We present experimental results using open datasets, offering insights into the effectiveness of various approaches. * We introduce a novel framework that can be tailored to different computational requirements and tasks, featuring a multi-branch object detection model named . * We publicly release the training and inference code, along with the trained model, to encourage further research and development in this area. § RELATED WORKS Object Detection There are many different detection models. Two-stage detectors, such as Faster R-CNN <cit.> and Cascade R-CNN <cit.> first generate region proposals, which are then refined and classified. Single-shot convolutional detectors like YOLO <cit.>, SSD <cit.> or EfficientDet <cit.> skips the region proposal stage and produce final localization and labels prediction at once. Recently popularized detection transformers like DAB-DETR <cit.> or CO-DETR <cit.> uses a transformer encoder-decoder architecture to predict all objects at once. We built model based on the implementation of the YOLO architecture by Ultralytics <cit.>, as YOLOv5/YOLOv8 models are fast and achieve SOTA results on various tasks. YOLOv5 uses anchors for the detection head which is composed of convolutional layers for multi-scale features. YOLOv8 is an anchor-free model with a decoupled head to process separately objectness, classification, and bounding box regression tasks based on multi-scale features. Multi-Task Learning(MTL) aims to improve both efficiency and prediction accuracy for individual tasks over separately trained models. The two most commonly used ways to perform MTL are hard or soft parameter sharing of hidden layers <cit.>. Authors of <cit.> apply the first one to share most of the parameters between all tasks and to find a representation that captures all of the tasks. Soft parameter sharing is utilized in works <cit.>, where individual tasks possess their own parameter sets interconnected either through information sharing or by requiring parameter similarity. 
Most multi-task models in the computer vision domain focus on addressing different CV tasks, such as classification and semantic segmentation <cit.>; segmentation, depth and surface normals <cit.>. UberNet <cit.> learns 7 computer vision problems under a single architecture. GrokNet <cit.> learns unified representation to solve several image retrieval and a large number of classification tasks. In this paper, we address to MTL to tackle various detection tasks. To design the optimal multi-task network architecture the authors of <cit.> apply different strategies based on understanding task relationships. In this work we use representation similarity analysis (RSA) <cit.> method to estimate the task affinity similar to <cit.>. Different optimization techniques were proposed in <cit.> for MTL systems, which aim to reduce conflicts between tasks by adjusting the direction of task gradients. In this paper, we employ a gradient averaging method, but any other optimization method can be utilized as well for training the proposed model. Multi-Dataset Object Detection aims to leverage multiple datasets to train one visual recognition model to detect objects from different label spaces. Some works <cit.> build specific modules to adapt feature representations related to different domains. Others <cit.> train one model with unified multi-dataset label spaces. To create a detector with an unified label space across all datasets, the authors of <cit.> automatically learn mappings between the common label space and dataset-specific labels during training. The current paper focuses on a model with shared parameters but dataset-specific outputs. The authors of <cit.> train a detection model with pseudo ground truth for each dataset generated by task-specific models to merge label spaces, while our framework does not require annotations from different datasets to be combined. ScaleDet <cit.> also unifies label space from multiple datasets by utilizing text CLIP <cit.> embeddings. Open-Vocabulary Object Detection (OVD) models aim to recognize objects of categories not present at training time. Novel categories are described by text inputs and the detectors are try to establish a semantic connection between object regions and object labels chosen from a possibly very large vocabulary <cit.>. The association of objects and labels typically is done through large pre-trained vision-language matching methods like CLIP <cit.>. OVD models may be used for expanding the label set of a detector model if pretrained models have knowledge of the target data domain, aligning textual embedding space with visual features during training <cit.>. The authors of <cit.> train an open-vocabulary detector based on CLIP text and image embeddings to detect 1,203 categories from the LVIS dataset. They initially train the detector on base categories and then expand it to cover all rare categories in the dataset. § MODEL §.§ Method In this paper, we propose the model that allows multiple detection tasks to be learned in a shared model. Each detection task is a separate task, which employs its own dataset with the unique set of labels. The model is built upon the YOLO <cit.> architecture. It optimizes computational resources by sharing all backbone parameters across tasks, while each task retains its own unique set of parameters for the head. The neck layers can either be shared or specific to the task. An example of YOLOv8-based architecture for three tasks is illustrated in Figure <ref>. 
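To make the parameter-sharing layout concrete, the following is a minimal PyTorch sketch of a multi-branch detector with a fully shared backbone and per-task necks and heads. It illustrates the structure only and is not the released CerberusDet implementation: the real model reuses YOLOv8 backbone, neck and head modules, whereas the blocks below are reduced to tiny convolutional stacks with made-up shapes and class counts.

import torch
import torch.nn as nn

class MultiTaskDetector(nn.Module):
    """Toy multi-branch detector: one shared backbone, a neck and a head per task."""
    def __init__(self, task_num_classes, share_neck=False):
        super().__init__()
        self.backbone = nn.Sequential(                       # shared by all tasks
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
        )
        def make_neck():
            return nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.SiLU())
        shared_neck = make_neck()                            # reused if share_neck=True
        self.necks = nn.ModuleDict(
            {t: (shared_neck if share_neck else make_neck()) for t in task_num_classes}
        )
        self.heads = nn.ModuleDict(                          # always task-specific
            {t: nn.Conv2d(64, nc + 4, 1) for t, nc in task_num_classes.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.necks[task](self.backbone(x)))

model = MultiTaskDetector({"voc": 20, "objects365_animals": 19})
out = model(torch.randn(2, 3, 640, 640), task="voc")         # shape (2, 24, 160, 160)

Setting share_neck=True registers the same neck instance under every task key, so its parameters are updated by all tasks, which mirrors the shared/task-specific choice discussed next.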
With the standard YOLOv8x architecture and 640 input image resolution, the model backbone consists of 184 layers and 30M parameters. The neck has 6 shareable modules with 134 layers and 28M parameters. Each head consists of 54 layers and 8M parameters. By sharing the backbone across multiple tasks, our training approach achieves significant computational budget economy compared to the sequential inference of separate models for each task. Figure <ref> illustrates the inference speed of , which is based on the YOLOv8x architecture. The figure compares the inference times for two scenarios: one where all neck parameters are task-dependent, and another where these parameters are shared across tasks. The results highlight the computational efficiency gained through parameter sharing. §.§ Parameters sharing We decided to employ the hard parameter sharing technique for multi-task learning, given its demonstrated efficiency and its ability to enhance per-task prediction quality by leveraging information across tasks during training <cit.>. Hard parameters sharing allows us to have sets of parameters that are shared across tasks, and sets of parameters that are task-specific. Based on YOLO architecture we have sets of sharable parameters at the module level. E.g. YOLOv8x have 6 parameterized neck modules, so each task may share each of them with another task. To decide what modules to share across which tasks we employ the Representation Similarity Analysis <cit.> method to estimate task similarity at each neck module that can be shared or task-specific. Then for each possible architecture variant we calculate an RSA-based similarity score (𝑟𝑠𝑎 𝑠𝑐𝑜𝑟𝑒) and 𝑐𝑜𝑚𝑝𝑢𝑡𝑎𝑡𝑖𝑜𝑛𝑎𝑙 𝑠𝑐𝑜𝑟𝑒. The first one shows the potential performance of an architecture and the second one evaluates its computational efficiency. Within the available computational budget, we select the architecture with the best 𝑟𝑠𝑎 𝑠𝑐𝑜𝑟𝑒. Let the architecture contain l shareable modules and we have N tasks, the algorithm for selecting the architecture looks as follows: - Select a small representative subset of images from the test set of each task. - Using task-specific models, extract features for the selected images from each module. - Based on the extracted features, calculate Duality Diagram Similarity (DDS) <cit.> - computing pairwise (dis)similarity for each pair of selected images. Each element of the matrix is the value of (1 - Pearson’s correlation). - Using the Centered Kernel Alignment (CKA) <cit.> method on the DDS matrices, compute representation dissimilarity matrices (RDMs) - an NxN matrix for each module. Each element of the matrix indicates the similarity coefficient between two tasks. - For each possible architecture, using values from the RDM matrices, compute the 𝑟𝑠𝑎 𝑠𝑐𝑜𝑟𝑒. It is calculated as the sum of the task dissimilarity scores at every location in the shareable model layers. It is defined as 𝑟𝑠𝑎 𝑠𝑐𝑜𝑟𝑒 = ∑_m=1^l S_m, where S_m (equation <ref>) is found by averaging the maximum distance between the dissimilarity scores of the shared tasks in the module l. - For each possible architecture calculate 𝑐𝑜𝑚𝑝𝑢𝑡𝑎𝑡𝑖𝑜𝑛𝑎𝑙 𝑠𝑐𝑜𝑟𝑒 using formula <ref>. - We select the architecture with the best combination of 𝑟𝑠𝑎 𝑠𝑐𝑜𝑟𝑒 and 𝑐𝑜𝑚𝑝𝑢𝑡𝑎𝑡𝑖𝑜𝑛𝑎𝑙 𝑠𝑐𝑜𝑟𝑒 (the lower is the better), or we choose the architecture with the lowest 𝑟𝑠𝑎 𝑠𝑐𝑜𝑟𝑒 within the set constraint on 𝑐𝑜𝑚𝑝𝑢𝑡𝑎𝑡𝑖𝑜𝑛𝑎𝑙 𝑠𝑐𝑜𝑟𝑒. S_m = 1/|{Τ_i, …, Τ_k}| * ∑_j=i^kmax{ RDM(j, i), …, RDM(j, k) } where {Τ_i, …, Τ_k} - shared tasks at module l. 
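Before the computational score is introduced, the rsa score itself can be sketched in a few lines. In the snippet below the RDM values and the encoding of a candidate architecture (for every shareable module, the groups of tasks that share one copy of it) are illustrative choices, and treating singleton groups, i.e. task-specific copies, as contributing no dissimilarity is an assumption of this sketch.

"""Sketch of the RSA-based architecture score: sum over shareable modules of the
mean, over tasks sharing a module, of each task's maximum dissimilarity to the
other tasks in its group (lower is better)."""
import numpy as np

def rsa_score(rdms, sharing):
    """rdms: one (N_tasks x N_tasks) dissimilarity matrix per shareable module.
    sharing: per module, the groups of task indices sharing one copy of it."""
    score = 0.0
    for rdm, groups in zip(rdms, sharing):
        for group in groups:
            if len(group) < 2:                  # task-specific copy: no dissimilarity added
                continue
            sub = rdm[np.ix_(group, group)]
            score += np.mean(sub.max(axis=1))   # mean over tasks of the max distance
    return score

# 3 tasks, 2 shareable modules; dissimilarities chosen purely for illustration
rdms = [
    np.array([[0.0, 0.2, 0.6], [0.2, 0.0, 0.5], [0.6, 0.5, 0.0]]),
    np.array([[0.0, 0.1, 0.3], [0.1, 0.0, 0.4], [0.3, 0.4, 0.0]]),
]
arch_all_shared = [[[0, 1, 2]], [[0, 1, 2]]]
arch_partial = [[[0, 1], [2]], [[0], [1], [2]]]      # tasks 0 and 1 share module 1 only
print("all shared:", rsa_score(rdms, arch_all_shared))
print("partial sharing:", rsa_score(rdms, arch_partial))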
𝑐𝑜𝑚𝑝𝑢𝑡𝑎𝑡𝑖𝑜𝑛𝑎𝑙 𝑠𝑐𝑜𝑟𝑒 = 𝑖𝑛𝑓𝑒𝑟𝑒𝑛𝑐𝑒_𝑡𝑖𝑚𝑒/(N * 𝑠𝑖𝑛𝑔𝑙𝑒_𝑖𝑛𝑓𝑒𝑟𝑒𝑛𝑐𝑒_𝑡𝑖𝑚𝑒) To evaluate the chosen approach, we selected 4 architectures with different RSA scores and computational scores, trained models and compared the average metric values. Figure <ref> demonstrates that model accuracy increases along with RSA score decreases and computational complexity increases. To calculate the computational score, the V100 GPU was used and the batch size was equal to 1. §.§ Training procedure Let's consider a set of tasks {Τ_1, …, Τ_𝑛}. Different combinations of these tasks may share a set of model parameters. Let θ_shared = {θ_i..k, …θ_j..m} be the sets of shared parameters between different groups of tasks {i, …,k}, …, {j, …,m}. Algorithm <ref> represents the end-to-end learning process of the proposed model. During training, we iterate through tasks, sample mini-batches from the corresponding dataset, calculate the loss and gradients for the parameters related to the current task. Next, we average the gradients for shared parameters of each task group and update their values according to equation <ref>. θ_{i,…,k} = θ_{i,…,k} - ( * 1/|{i,…,k}| * ∑_j ∈{i,…,k}∂ L_j∂θ_{i,…,k}) where {i,…,k} represents the group of tasks with shared parameters θ_{i,…,k}, is the learning rate, L_j is the loss for task j. The speed and effectiveness of joint training are strongly influenced by the loss functions of individual tasks. Since these loss functions can have different natures and scales, it is essential to weigh them correctly. To find the optimal weights for the loss functions, as well as other training hyperparameters, we employ the hyperparameter evolution method. During the training process, we discovered that the model's performance significantly suffers if the samples within each batch are not balanced carefully and thoroughly. To address this, we implemented a strategy to ensure that all classes are adequately represented in each iteration according to their frequency in the dataset. §.§ The impact of training settings The described in the previous two sections techniques were used in a series of experiments with our proprietary data. Table <ref> presents the results of the impact of each technique. We use proprietary data in these experiments, as they exhibit a sufficient level of inter-task consistency to ensure the clarity of the experiments. Models were trained for 3 tasks, where the baseline being an architecture where all parameters of the model, except for the heads, were shared among tasks. The dataset of the first task comprises 22 categories with 27,146 images in the training set and 3,017 images in the validation set. The dataset of the second task consists of 18 categories with 22,365 images in the training set and 681 images in the validation set. The dataset of the third task comprises 16 categories with 17,012 images in the training set and 3,830 images in the validation set. To compare the influence of architecture search method on the result, we also trained the model, where all neck parameters are task-specific. Then, we compare the accuracy improvement of the discovered architecture relative to it. All models were built on top of YOLOv5x with an input image resolution of 640x640, measurements were made with fp16 precision on the V100 GPU. § OPEN-SOURCE DATASETS EXPERIMENTS Within this section, we outline 's experimental setup, results, and training configuration. Furthermore, we conduct a comparative analysis between and standalone YOLOv8 models using public datasets. 
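Before turning to that comparison, the shared-parameter update from the training procedure above can be made concrete. The snippet below is a minimal, self-contained sketch rather than the released CerberusDet training code: the detector is replaced by a shared linear trunk with per-task linear heads, the detection losses by a placeholder MSE loss, and the loss weights are arbitrary, so that only the gradient-averaging bookkeeping of the update rule is illustrated.

import torch
import torch.nn as nn

tasks = ["voc", "objects365_animals"]
shared = nn.Linear(16, 16)                              # stand-in for backbone/neck
heads = nn.ModuleDict({t: nn.Linear(16, 4) for t in tasks})
params = list(shared.parameters()) + list(heads.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)
loss_weights = {"voc": 1.0, "objects365_animals": 0.8}  # illustrative per-task weights

def train_step(batches):
    """batches: dict task -> (inputs, targets). Shared parameters receive the mean
    of the per-task gradients; task-specific parameters keep their own gradient."""
    optimizer.zero_grad()
    # number of tasks contributing a gradient to every parameter
    counts = {p: len(tasks) for p in shared.parameters()}
    counts.update({p: 1 for h in heads.values() for p in h.parameters()})
    for task, (x, y) in batches.items():
        loss = loss_weights[task] * nn.functional.mse_loss(heads[task](shared(x)), y)
        loss.backward()                     # autograd sums gradients across tasks
    for p, n in counts.items():
        if p.grad is not None:
            p.grad /= n                     # turn the summed gradient into the mean
    optimizer.step()

batches = {t: (torch.randn(8, 16), torch.randn(8, 4)) for t in tasks}
train_step(batches)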
This comparison also involves the previous best-performing model, Cascade Eff-B7 NAS-FPN <cit.> trained with the PASCAL VOC dataset. Our analysis includes the presentation of metrics such as mean average precision (mAP): mAP@0.5 and mAP@0.5:0.95, alongside measurements of the models inference speeds. §.§ Datasets We conducted experiments on two publicly available datasets: PASCAL VOC and Objects365. Objects365 was filtered to include only images containing specific animal objects not present in the PASCAL VOC dataset. This resulted in two datasets, both balanced in terms of class distribution and image count. In both cases, standard datasets with predefined splits are used. For Objects365, these include the train and val sets, while for PASCAL VOC, they encompass train/val/test for the year 2007 and train/val for the year 2012. To filter Objects365, all non-empty images with annotations containing at least one object belonging to the animal class were used. Empty images without annotations and objects from other classes were filtered out. A detailed list of classes and the filtering code are available in the project repository. The PASCAL VOC dataset comprises 20 classes with 16,551 images in the training set and 4,952 images in the validation set. For training, we combined the trainval sets of PASCAL VOC 2007 and PASCAL VOC 2012 and validated on the test set of PASCAL VOC 2007. The Objects365 dataset consists of 19 classes with 14,295 images in the training set and 5,413 images in the validation set. §.§ Model Training We trained on the aforementioned datasets concurrently using hyperparameters inherited from YOLOv8, which were fine-tuned via hyperparameter search. leverages the YOLOv8x as its backbone architecture. The vanilla YOLOv8 head is used for each detection task, all neck parameters are task-specific. Additionally, we utilized transfer learning by initializing 's backbone and neck with a pre-trained model on COCO by the authors of YOLOv8, while the heads parameters were initialized randomly. The training was completed in 27 hours using 1 Nvidia V100 GPU with a batch size of 32. The training was done with mixed precision, SGD optimizer and synchronized batch normalization. The input resolution was set to 640x640 pixels. §.§ Experimental Results Experiments yielded promising results in terms of mAP scores (see Table <ref>), comparable to state-of-the-art separated data-specific model. demonstrated superior performance, achieving higher mAP@0.5 scores compared to standalone YOLOv8 models for both datasets. Importantly, confidently surpasses the previous best-performing model on the PASCAL VOC dataset by a significant margin in terms of the mAP0.5 metric. Additionally, even though performs inference for two tasks, it remains efficient, completing inference in 7.2 milliseconds on a single NVIDIA V100 GPU with float16 numerical precision. This is faster than inference using separate YOLOv8 models, which take 11.2 milliseconds – equivalent to 5.6 milliseconds per model. The results demonstrate effectiveness across diverse datasets, surpassing standalone YOLOv8 models in terms of mAP@0.5 metric and computational efficiency. § CONCLUSIONS In this work, we introduced , a scalable and adaptive framework for multi-task object detection. The proposed method achieves comparable to separated data-specific state-of-the-art models while utilizing approximately 36% less computational budget in case of training for two tasks. 
The more tasks are trained together, the more efficient the proposed model becomes compared to running individual models sequentially. The challenge of handling separate and conflicting datasets without requiring unified annotation was addressed, offering significant value for future research and real-world applications. The proposed approach, based on YOLO, allows efficient sharing of visual features across different tasks. Hard parameter sharing and Representation Similarity Analysis (RSA) were employed to optimize task-specific performance while maintaining high computational efficiency. Extensive experiments were conducted on both proprietary production-scale data and open-source datasets (PASCAL VOC and Objects365). The findings highlighted the model's superior performance and versatility. Furthermore, CerberusDet is designed to be easily expandable, supporting additional tasks beyond object detection, such as attribute recognition and embedding calculation. While the experiments utilized the YOLOv8 model, the method can be adapted to any object detection architecture. To support the research community and practitioners, the framework, including the algorithm implementations, all necessary code, and the model trained on open-source data, has been made publicly available. These resources aim to facilitate further research and practical applications, providing a foundation for future improvements and innovations. § LIMITATIONS The training process outlined in this paper is highly sensitive to optimization hyperparameters such as learning rate, loss weights, momentum, and weight decay. We therefore recommend conducting a hyperparameter search to achieve the best training results; the necessary scripts are provided with the code. Additionally, in certain cases it may be beneficial to consider multi-gradient descent algorithms other than gradient averaging, notable examples being MGDA <cit.> and Aligned-MTL <cit.>. When training a model on multiple tasks, it is crucial to identify which tasks can share more parameters and which require fewer shared parameters. To address this, we provide an implementation of the RSA algorithm, which, while requiring models trained for the individual tasks, is still faster than iteratively testing all possible architecture variants with different parameter-sharing schemes. However, if training such models is not feasible, our approach can still be applied with a parameter-sharing scheme in which all neck layers are task-dependent. While this may not be the optimal configuration, it remains more efficient than sequentially running single models.
http://arxiv.org/abs/2407.13408v1
20240718112852
DISCOVER: A Data-driven Interactive System for Comprehensive Observation, Visualization, and ExploRation of Human Behaviour
[ "Dominik Schiller", "Tobias Hallmen", "Daksitha Withanage Don", "Elisabeth André", "Tobias Baur" ]
cs.HC
[ "cs.HC", "cs.AI", "J.4" ]
dominik.schiller@uni-a.de 1234-5678-9012 University of Augsburg Universitätsstraße 6a Augsburg Bavaria Germany 86159 tobias.hallmen@uni-a.de 0009-0005-6450-5694 University of Augsburg Universitätsstraße 6a Augsburg Bavaria Germany 86159 daksitha.withanage.don@uni-a.de 1234-5678-9012 University of Augsburg Universitätsstraße 6a Augsburg Bavaria Germany 86159 elisabeth.andre@uni-a.de 1234-5678-9012 University of Augsburg Universitätsstraße 6a Augsburg Bavaria Germany 86159 tobias.baur@uni-a.de 0000-0002-2797-605X University of Augsburg Universitätsstraße 6a Augsburg Bavaria Germany 86159 § ABSTRACT Understanding human behavior is a fundamental goal of social sciences, yet its analysis presents significant challenges. Conventional methodologies employed for the study of behavior, characterized by labor-intensive data collection processes and intricate analyses, frequently hinder comprehensive exploration due to their time and resource demands. In response to these challenges, computational models have proven to be promising tools that help researchers analyze large amounts of data by automatically identifying important behavioral indicators, such as social signals. However, the widespread adoption of such state-of-the-art computational models is impeded by their inherent complexity and the substantial computational resources necessary to run them, thereby constraining accessibility for researchers without technical expertise and adequate equipment. To address these barriers, we introduce – a modular and flexible, yet user-friendly software framework specifically developed to streamline computational-driven data exploration for human behavior analysis. Our primary objective is to democratize access to advanced computational methodologies, thereby enabling researchers across disciplines to engage in detailed behavioral analysis without the need for extensive technical proficiency. In this paper, we demonstrate the capabilities of using four exemplary data exploration workflows that build on each other: Interactive Semantic Content Exploration, Visual Inspection, Aided Annotation, and Multimodal Scene Search. By illustrating these workflows, we aim to emphasize the versatility and accessibility of as a comprehensive framework and propose a set of blueprints that can serve as a general starting point for exploratory data analysis. < g r a p h i c s > Overview of the Architecture Overview of the Nova-Server Architecture DISCOVER: A Data-driven Interactive System for Comprehensive Observation, Visualization, and ExploRation of Human Behaviour Tobias Baur July 22, 2024 =========================================================================================================================== empty § INTRODUCTION Understanding human behavior is a core pursuit in social sciences, crucial for unraveling the complexities of human interactions and societal dynamics. However, analyzing human behavior comes with its fair share of challenges, particularly with traditional methods that involve laborious data collection and intricate analyses, often limiting thorough exploration due to resource demands. To address these challenges, computational models have emerged as promising alternatives, offering automated identification of key behavioral cues like social signals. Yet, their widespread adoption faces hurdles, mainly due to their complexity and the hefty computational resources they require, making them less accessible to researchers without technical expertise or adequate equipment. 
In light of these issues, we introduce [https://github.com/hcmlab/discover], a flexible open-source software framework tailored to streamline computational-driven data exploration for human behavior analysis. Our primary goal is to democratize access to advanced computational tools, empowering researchers from diverse backgrounds to conduct in-depth behavioral analysis without extensive technical know-how. Throughout this paper, we showcase 's versatility through four illustrative data exploration workflows. First, we describe the interactive semantic content exploration of transcriptions, achieved through direct integration of large language models. Second, we illustrate how the integrated social signal processing capabilities of can aid researchers in the visual exploration of data. Third, we show how can be used to aid the annotation of meaningful behavioral cues. Lastly, we showcase our framework's capabilities for multimodal scene search, enabling users to automate the identification of key scenes within datasets. Although all suggested workflows can be applied individually and adapted to user-specific needs, we suggest to consider them as blueprints for starting data exploration with . § RELATED WORK Previous research in the field of human-computer interaction has suggested methods for displaying multimodal characteristics within particular conversational contexts to analyze human behavior. Over time, a range of annotation tools concentrating on affective computing and social cues has emerged, offering assistance to users in their efforts. Prominent examples include ELAN <cit.>, ANVIL <cit.>, and EXMARALDA <cit.>. These tools offer layer-based tiers to insert time-anchored labeled segments, that we call discrete annotations. Continuous annotations, on the other hand, allow an observer to track the content of an observed stimulus over time based on a continuous scale. One of the first tools that allowed labelers to trace emotional content in real-time on two dimensions (activation and evaluation) was FEELTRACE <cit.>. Its descendant GTRACE (general trace) <cit.> allows the user to define their own dimensions and scales. More recent tools to accomplish continuous descriptions are CARMA (continuous affect rating and media annotation) <cit.> and DARMA (dual axis rating and media annotation) <cit.>. Recently, these tools have evolved to include automatic calculation of behavioral cues and social signals, eliminating the need for the manual annotation of data. For example Emodash <cit.> has been designed to enhance tutors’ retrospective understanding of learners’ emotions, based on facial expressions, within a video-conferencing learning setting. REsCUE <cit.> helps to aid coaching practitioners to detected unconscious behavior of their clients. To this end, REsCUE uses an unsupervised anomaly detection algorithm to cluster. MultiSense <cit.> can assess the affective state of a person by inferring various indicators from audio-visual input signals. The tool focuses on the application within the mental health domain to assess indicators of psychological distress such as depression or post-traumatic stress disorder. MeetingCoach <cit.> is an AI-driven feedback dashboard designed to enhance the effectiveness and inclusively of video-conferencing meetings by providing personalized insights into meeting dynamics, such as engagement summaries of participants or speaking time distribution. 
MACH <cit.> social skills training, particularly focusing on job interview preparation, that analyses non-verbal behavior of an interviewee. The AffectToolbox <cit.> provides a software system aimed at aiding researchers in developing affect-sensitive studies and prototypes in affective computing. It provides accessible functions for analyzing users' affective states via a graphical user interface, including a variety of emotion recognition models for different modalities as well as a unified multimodal assessment. The ConAn Tool <cit.> has been developed with a focus on group conversation analysis. To this end, it automatically analyzes the gaze behavior, body movement, speaker activity, and facial expressions of participants using a single 360°camera. All these visualization-based methods were developed with specific goals and target groups in mind. As a result, the choice of features to be displayed is usually tailored to this specific use case. Therefore these solutions suffer from a lack of customizability that prevents users from adapting the features to their individual needs. The Social Signal Interpretation Framework(SSI) by <cit.> presents an alternative approach, by implementing a modular, multimodal signal processing pipeline, facilitating both online and offline recognition tasks. The plug-in system within SSI allows users to develop custom modules and integrate them into the processing pipeline. Similar to SSI the Opensense <cit.> platform has been designed to facilitate real-time acquisition and recognition of social signals through multiple modalities. It also follows a modular pipeline design and builds on Microsoft's Platform for Situated Intelligence <cit.>, which enables the processing of human behavioral signals and supports various sensor devices and machine learning tools. <cit.> developed the MultiSensorPipeline, a lightweight and adaptable framework for creating multimodal-multisensor interfaces using real-time sensor data. While the framework is conceptually similar to SSI and Opensense it focuses on a concise set of concepts and functionalities that ease the creation and execution of complex processing pipelines. The process of implementing specific modules is left to the user, making the tool better suited for developing prototypes of custom multimodal-multisensor processing pipelines than for using standard modules to analyze social signals. In contrast to the pure visual exploration of data, Providence implements a scene search approach to investigate social behavior in multimodal data. Hereby, a human analyst can formulate queries containing non-verbal (e.g. nodding, facial expressions), linguistic (e.g. sentiment), or para-linguistic (e.g. speech speed or volume) to search for specific scenes in a human conversation. Providence automatically extracts the respective features required by a query and searches for scenes fulfilling the specified conditions in the data. Although the approaches presented have their respective advantages and disadvantages, there are two common shortcomings: Efficient use of computing resources and a compromise between flexibility and complexity. Except for Providence, all the tools introduced are solely intended for operation on local machines. This leads to inefficient use in multi-user scenarios, in which users either have to take turns on a local machine or each user requires his own workstation that is capable to run such tools. 
Considering hardware demands and energy consumption of state-of-the-art machine learning models, this issue extends beyond financial concerns to ecological ones. Further, it becomes evident that off-the-shelf solutions have constraints in their applicability, whereas more adaptable approaches necessitate greater technical proficiency, thereby posing a barrier to entry. § FRAMEWORK The architecture of follows a modular design, which facilitates flexibility, scalability, and ease of maintenance. At its core, the framework comprises several key components, each serving a distinct purpose in enabling laymen to perform data exploration tasks efficiently. An overview of the system architecture is shown in Figure <ref>. Below, we provide a textual skeleton outlining the main components and their functionalities. §.§ User Interface Each element in the framework utilizes APIs to communicate over the network, granting users with programming skills significant flexibility by allowing access via scripts, irrespective of the programming language employed. In order to make more user-friendly for individuals without technical expertise, we've incorporated its API into the open-source tool NOVA <cit.>, to serve as the graphical user interface. NOVA aims to enhance the standard annotation process with the latest developments from contemporary research fields such as Cooperative Machine Learning and eXplainable Artificial Intelligence by giving annotators easy access to automated model training and prediction functionalities, as well as sophisticated explanation algorithms via its user interface. The NOVA user interface has been designed with a special focus on the annotation of long and continuous recordings involving multiple modalities and subjects. A screenshot of a loaded recording session is shown in Figure <ref>. On the top, several media tracks are visualized and ready for playback. Note that the number of tracks that can be displayed at the same time is not limited and various types of signals (video, audio, facial features, skeleton, depth images, etc.) are supported. In the lower part, we see multiple annotation tracks of different types (discrete, continuous, and transcriptions) describing the visualized content. NOVA provides several functions to process the annotations created by multiple human or machine annotators. For instance, statistical measures such as Cronbach's α, Pearson's correlation coefficient, Spearman's correlation coefficient or Cohen's κ can be applied to identify inter-rater agreement. §.§ Annotation Database To support a collaborative annotation process, maintains a database back-end, which allows users to load and save annotations from and to a MongoDB[<https://www.mongodb.com/>] database running on a central server. This gives annotators the possibility to immediately commit changes and follow the annotation progress of others. Besides human annotators, a database may also be visited by one or more “machine users”. Just like a human operator, they can create and access annotations. Hence, the database also functions as a mediator between human and machine. provides instruments to create and populate a database from scratch. At any time new annotators, schemes, and additional sessions can be added. §.§ Media File Storage uses a data storage component that follows the structure of the open-source cloud hosting framework Nextcloud[<https://nextcloud.com/de/>]. 
Data can be hosted on a local drive and be shared on demand with people who have been granted access to the respective Database. Both the NOVA user interface, as well as the processing backend can access these files for visualization and processing respectively. §.§ Processing Server The processing server acts as the computational engine powering data analysis tasks within our framework. It implements a lightweight web server that interacts with the annotation and data storage components to extract meaningful information from the data, that can subsequently be visualized in NOVA. The server-based architecture of provides key advantages concerning flexibility, accessibility, and resource efficiency. First of all, exposing a REST API makes the server not only accessible from the NOVA user interface but also via scripts. This results in a low entry barrier for not coding affine research, while maintaining the ability to include the processing functionality in custom scripts. Second, the server-based approach enables multiple users to process data in a central place to ensure optimal usage of computational resources. The standardized input and output formats of the processing modules also enable the sharing and further usage of results. §.§ Assistant Recently large language models (LLM), like Chat-GPT[<https://chatgpt.com/>] or LLama <cit.> have demonstrated remarkable capabilities for textual analysis tasks like text summarization <cit.>, sentiment analysis <cit.> or argument mining <cit.>. To incorporate such capabilities into , we supplement the processor component with an assistant. Fundamentally the assistant is a lightweight web server that aims to integrate numerous LLMs with the rest of the infrastructure. To this end, the assistant is exposing a unified API that reroutes requests to either an external service provider (e.g. OpenAI), a self-hosted large language model (e.g. via Ollama [<https://ollama.com/>]. Through seamless integration with the UI, users can interact with AI assistants via a chat interface to analyze textual data, such as dialogue transcripts. Depending on the performance of a model for specific tasks and privacy requirements a user can switch between available services dynamically during the exploration process. § MODULES relies on exchangeable modules to infer behavioral indicators from recorded data. Each of these modules can be understood as a configurable building block, consisting of predefined inputs and outputs with module-specific options. is fully extendable with custom modules. However, to keep the entrance barrier low and provide value for a non-coding affine target group we also provide a number of ready-to-use processing modules. The following section provides an overview of the currently integrated modules. §.§ Face Bounding Box Detector The automatic derivation of behavioral indicators from a human face usually requires the localization of the facial area in an image or video. To this end, we rely on the BlazeFace model proposed by <cit.> et al. The BlazeFace model is a lightweight face detection model that has been developed to run on mobile devices and thus requires only a minimum of computational resources to achieve super-realtime performance. Landmarks and Meshes For the further processing of the localized image, it's it is a common procedure to align facial images based on localized landmarks <cit.>. 
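As a rough illustration of this detection-plus-landmark step, the snippet below runs the MediaPipe Python distributions of BlazeFace and the face mesh model on a single image. It is a stand-in for, not a copy of, DISCOVER's own module implementation, and the input path is a placeholder. The alignment transformations themselves are described next.

"""Face detection (BlazeFace) and dense landmarks (face mesh) via MediaPipe."""
import cv2
import mediapipe as mp

image = cv2.imread("frame.jpg")                       # placeholder input frame
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)          # MediaPipe expects RGB

with mp.solutions.face_detection.FaceDetection(
        model_selection=0, min_detection_confidence=0.5) as detector:
    result = detector.process(rgb)
    if result.detections:                             # relative (0..1) bounding box
        box = result.detections[0].location_data.relative_bounding_box
        print("face box:", box.xmin, box.ymin, box.width, box.height)

with mp.solutions.face_mesh.FaceMesh(
        static_image_mode=True, max_num_faces=1, refine_landmarks=True) as mesh:
    result = mesh.process(rgb)
    if result.multi_face_landmarks:                   # dense normalised (x, y, z) points
        first = result.multi_face_landmarks[0].landmark[0]
        print("first landmark:", first.x, first.y, first.z)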
Facial image alignment involves geometric transformations like translation, rotation, and scaling to convert the input face image into a standardized form. To this end, we employ the face mesh model by <cit.>, which infers an approximate 3D mesh representation of a human face from a single camera. Action Units Facial action units originate from an anatomical examination of the face and can be categorized according to the Facial Action Coding System (FACS) outlined by <cit.>. For automatic action unit detection and intensity estimation, we integrated the LibreFace <cit.> framework, which achieves state-of-the-art performance in both tasks while improving inference times over other methods. Facial Expression Facial expression analysis involves automatically detecting subtle movements in facial muscles and identifying typical facial displays. The recognition of these expressions yields valuable insights into users' social and emotional states. To facilitate robust facial expression prediction, we integrated multiple models into : EmoNet <cit.>, relevance-based data masking <cit.>, and LibreFace <cit.>. §.§ Voice When analyzing human speech to gain behavioral insights, one needs to distinguish between the verbal and vocal components of speech. Verbal refers to communication that is expressed in words or language. It involves the use of language to convey ideas, thoughts, or information. Vocal, on the other hand, refers to the sounds produced by the voice or the act of speaking. In the context of communication, "vocal" can refer to the tone, pitch, volume, and other qualities of speech. §.§.§ Verbal A prerequisite to the analysis of the verbal content of spoken language is the conversion from speech to text (STT). STT systems have been an active area of research for decades. For the implementation of our STT module, we rely on WhisperX <cit.>, an adaptation of the Whisper model <cit.> that provides improved timestamp accuracy, support for longer audio sequences, and faster transcription performance. Speaker Diarization The available datasets are typically recorded with a focus on manual human analysis rather than automatic processing. A common example of this is the recording of a single audio signal for several speakers. While it is a mostly trivial task for a human listener to distinguish between the voices of speakers and map the content of the spoken language to the respective person, this information gets lost during the STT process. To account for this loss of information, implements a speaker diarization module, which maps segments of a common dialogue transcript to individual speakers. To this end, we rely on Pyannote <cit.> to cluster voiced segments in the audio signal. We then use an oracle approach to assign those clusters to individual speakers, by providing reference speaking turns within the audio signal. Sentiment Analysis Sentiment analysis is the process of computationally determining the emotional tone behind a piece of text, whether it is positive, negative, or neutral. It can be a valuable tool for analyzing human behavior, as it can provide a deeper understanding of individuals' emotions and opinions. To enable the automatic prediction of sentiment, currently integrates two approaches: a multilingual model <cit.> and a German-language-specific model <cit.>. §.§.§ Vocal Speech emotion recognition (SER) refers to the task of automatically detecting and interpreting emotions conveyed through speech.
It involves analyzing various acoustic features, such as pitch, intensity, and rhythm, to infer the underlying emotional state of the speaker. integrates a pre-trained model proposed in <cit.> to automatically detect valence, arousal, and dominance values from a human voice. §.§ Multimodal Feature Extraction Besides the above-mentioned modules, which directly provide insights about important indicators for human behavior to an analyst, also implements modality-specific feature extraction modules that can be used to train custom detection models. Video For the video modality, we use the DINOv2 pretrained vision transformer models <cit.> to extract features. Those models are pretrained in a self-supervised manner on a large dataset of 142 million images. As a result, DINOv2 models have demonstrated robust performance beyond training data, delivering usable general-purpose features without the need for fine-tuning. Audio For the audio modality, we rely on a pretrained w2v-BERT 2.0 encoder <cit.>. Similar to the DINOv2 models, this model was trained unsupervised on a large dataset of 4.5 million hours of audio, and demonstrates excellent performance for a variety of downstream tasks like speech-to-text or expressive speech-to-speech translation. However, it is recommended to fine-tune the w2v-BERT 2.0 model before using it for a downstream task. Since this might not be feasible for users without technical expertise, we also integrated the openSMILE library <cit.>, which extracts various handcrafted feature sets for the audio domain. Specifically, the GeMAPS feature set <cit.> has been developed for general voice research and affective computing tasks and provides a good starting point for any speech-related classification task. Text When it comes to extracting features from text representations, the language of the text is a necessary consideration. Since it is a key aspect of our framework to be employable in versatile scenarios across multiple languages, integrates a multilingual textual feature extraction using the XLM-RoBERTa (XLM-R) model by <cit.>. This model consists of a transformer-based architecture, trained on vast amounts of multilingual data crawled from the internet. In their experiments, the authors analyzed the capabilities of XLM-R for several tasks, including named entity recognition, cross-lingual question answering, paraphrasing, and sentiment analysis. The reported results indicate that the model performs close to or even better than comparable monolingual models for languages where vast training resources are available. Furthermore, the model showed substantial improvements over other state-of-the-art models on low-resource languages across all tasks. § INTERACTIVE DATA EXPLORATION §.§ Semantic Content Exploration In the following section, we demonstrate the capabilities of using four exemplary workflows to examine unseen data and gather new insights. While each of our showcases builds upon the results of the previous one, every workflow can also be carried out independently of the others. The complete iterative data exploration pipeline is depicted in Figure <ref>. We demonstrate how our workflow operates by utilizing recordings of interactions between teachers and parents. These videos are captured to evaluate the communication abilities of aspiring teachers in consultative situations and offer them constructive feedback. Within this context, the teacher is a trainee, while the role of the parent is portrayed by an actor.
The subject of the discussion revolves around the child of the parent, who is facing challenges in school. Since the original discussion is in German, a translated version can be found in Appendix A1. The following is an example workflow of the use case for data analysis from an analyst's point of view. As a first step towards finding indicators of communicative quality in the recorded interactions, a user wants to get familiar with the data and the task. To facilitate those tasks, the assistant enables the interactive exploration of the semantic content of the dialogue. To begin using the assistant, a user first utilizes the WhisperX processing module to create a temporally aligned transcript for each speaker from the recorded audio signal (see Appendix A1). After loading the transcription data into NOVA, the user clicks on the Assistant tab and a chat window opens that establishes a connection with the assistant (see Figure <ref>). By checking the "Context-Aware" checkbox at the bottom of NOVA, the transcript that has been loaded into NOVA will automatically be sent to the assistant along with each message. That way, a user can directly ask the assistant any questions regarding the semantic content of the dialogue. First, the user requests a summary of the interaction. The assistant replies with a concise summary of the transcript, providing the user with information about the general setting and topic of the dialogue, the roles of the interlocutors, and the course of the conversation. As the user has gained those insights about the data, the next step is to gather more information on relevant behavioral aspects to look out for. To this end, the user asks the assistant directly about important criteria to assess the quality of communication in parent-teacher conferences. The assistant answers by providing ten indicators for the assessment of dialogue quality, like Positive outcome or Collaborative approach. Each point is accompanied by a short description to clarify the meaning. From the previous summary, it is already clear that the teacher and parent are working together and that the outcome of the dialogue is positive, as both parties agree on how they want to proceed in the future to support the child's learning. The user then continues to ask the assistant to evaluate the transcript concerning the now-identified indicators to get deeper insights beyond the summary. In return, the assistant provides further information about indicators like Empathy and Active Listening, based on the full transcript. Finally, the user also wants to know about non-verbal indicators for communication quality, to which the assistant again provides a list of key points to look out for, like Facial Expressions / Smiles, Gaze, or Vocal Inflections and Tone. Appendix A2 shows the complete dialogue between the user and the assistant. §.§ Aided Annotation The processing modules described in Section <ref> are a core aspect of . However, they largely depend on the availability of pre-trained models for processing. Depending on the use case, there might be no fitting model available for the task that the user is looking for. To alleviate this problem, provides support for feature extraction that can be used to train custom models directly from within the NOVA user interface. To this end, NOVA implements a cooperative machine learning workflow that consists of the following five steps: Initially, the model undergoes training (Step 1) and is then utilized to predict unseen data (Step 2).
Following this, an active learning module determines which portions of the prediction necessitate manual review by human annotators (Step 3). Subsequently, those labels are reviewed by a human and corrected if necessary (Step 4). Finally, the initial model is retrained using the updated labeled data (Step 5). This iterative process continues until all data is annotated. By actively integrating human expertise into the learning process, we enable interactive guidance and enhancement of automatic predictions. Following the suggestions of the Assistant, the user trains a new model that is able to detect smiles, based on the extracted face mesh data. Before continuing with the next step, the model is used to detect all instances of smiles for the teacher as well as the parent. §.§ Visual Inspection During this stage, the user visually examines the session by navigating through sessions and their timeline. The objective at this stage is to identify patterns and formulate hypotheses regarding their manifestation, aiming to extract further insights. To confirm or question these hypotheses, users can select a processing module (see Section <ref>) directly from the NOVA user interface and start an extraction job on the processing server. Once the processing is done, the results can be directly visualized in the UI. This iterative process can be repeated as often as necessary for different modules. Since the results of the previous modules are stored either in the annotation database or on the media storage, the user can reuse them at any time, without recomputing them. To provide a clearer illustration of this process, we pick up on the previous example. Building upon the results of the semantic content exploration described in Section <ref>, the next steps are analyzing the non-verbal cues from the audio and the video signal and finding additional semantic indicators to assess the quality of communication based on the transcript. To continue the exploration, the user predicts the facial expressions of both the teacher and the parent using the EmoNet module. Upon loading the model's predictions into NOVA, as depicted in Figure <ref>, it becomes apparent that the teacher's facial expressions predominantly oscillate between "happy" and "surprised," whereas the parent's expressions tend to skew towards anger or sadness, although the anger label may not always be accurate, as the parent might simply be displaying a serious expression. In addition, the user now loads the smile annotation generated in the previous step (see Section <ref>). Smiling occurs notably more often at the beginning and end of the conversation for both roles, with the teacher exhibiting smiles more frequently throughout the conversation. As the increased number of smiles at the beginning and end can likely be attributed to formal politeness, the user first focuses on the middle sections of the conversation. Visually identifying mirroring behavior, where the smile of one interlocutor is mirrored by the other, provides especially interesting insights. For example, there is a notable scene in which the parent's smile mirrors the teacher's smile, a moment in which the parent receives new and helpful information, indicating active participation in the dialogue. This observation underscores the significance of integrating visual and verbal cues to capture the dynamics of communication and emotional expression within the interaction.
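Such a mirroring observation can also be checked programmatically once the smile annotations are available as time-stamped segments. The snippet below is only a rough sketch of this idea and is not part of the framework's API: the segment format, the variable names, and the 2-second response window are assumptions made for illustration. The Scene Search functionality described in the next subsection formalizes exactly this kind of rule-based pattern retrieval.

# Hypothetical sketch: flag "mirrored smile" episodes from two discrete
# annotation tracks. The (start, end) tuples in seconds and the 2-second
# response window are illustrative assumptions, not the framework's
# actual annotation schema.

def mirrored_smiles(teacher_smiles, parent_smiles, max_lag=2.0):
    """Return pairs of segments where the parent starts smiling within
    max_lag seconds after the onset of a teacher smile."""
    episodes = []
    for t_start, t_end in teacher_smiles:
        for p_start, p_end in parent_smiles:
            if 0.0 <= p_start - t_start <= max_lag:
                episodes.append(((t_start, t_end), (p_start, p_end)))
    return episodes

# Toy example with hand-made segments (seconds):
teacher = [(12.0, 14.5), (80.2, 81.0), (300.5, 304.0)]
parent = [(13.1, 15.0), (150.0, 151.2), (301.9, 303.5)]

for t_seg, p_seg in mirrored_smiles(teacher, parent):
    print(f"teacher smile {t_seg} mirrored by parent smile {p_seg}")

The same logic could be expressed as a sliding window over frame-wise labels; the segment-based variant is shown here only because it keeps the rule compact.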
§.§ Scene Search As evidenced, the exploration of conversational scenes through the analysis of participants' multimodal behavior presents considerable promise for investigating social dynamics. The process of visual inspection provides an effective method of identifying the constellation of social cues inherent in a specific scene. However, it is of limited use when it comes to retrieving all instances of such scenes in a recording session. First of all, it is easy to overlook certain scenes when scrolling through the timeline of longer recording sessions. Second, the analysis of social cue constellations that require precise assessment of either timings (e.g., the immediate facial expression in response to an event) or values (e.g., the degree to which an action unit is activated) cannot be efficiently performed this way. Lastly, it is easy to see how the reliable identification of patterns that consist of the interplay of multiple cues can quickly become overwhelming for a human analyst. To take all these aspects into account, provides an API that allows the user to define a set of rules that are used to automatically identify scenes based on previously defined conditions. Building upon the results of the visual inspection workflow (see Section <ref>), a user can now follow a blueprint script to download the smile detection annotations for both roles from the database, move a sliding window over the annotations, and create a new annotation based on a predefined mirroring condition (see Figure <ref>). § DISCUSSION Feedback To obtain initial feedback, we presented and the proposed workflow to an AI communication researcher who is involved in the interview and feedback process for the prospective teachers in the analyzed video recordings. In his comments, he stated that the potential speed-up of data exploration through the proposed workflow is beneficial for the social science community: "Basically you are making videos searchable. You can search for specific events as you search for a word in a transcript. This functionality greatly improves the analysis of large volumes of data." Further, he mentioned that he considered to be especially useful for inductive analytic strategies, where a researcher reads through the data and allows new concepts to emerge: "I can see how this would be useful for grounded theory approaches. In this process, researchers explore data iteratively without any previous assumptions about the findings to come up with new hypotheses and theories." On the other hand, he also mentioned how skeptical he is about using the tool to aid deductive strategies, where a researcher applies an existing theory to new data. He stated that wrong or missing predictions of the automatically inferred cues could have a crucial impact on the results: "When I analyzed the transcripts generated with Whisper, I was looking for fillers and backchannels. It took me a while to realize that most of those short sequences are filtered out during the transcription process. I needed to add them again manually to be able to use the transcription." Lastly, he also suggested how could be deployed as an interactive feedback tool in the future: "This visualization could also be interesting for the students when providing feedback about their communication skills. It might be interesting to even let them explore their own conversations interactively."
Limitations and Prospects While boasts a modular architecture, it is worth noting that certain modules may not yield perfect results, potentially introducing inaccuracies or limitations, especially in scenarios requiring high precision. Additionally, the absence of certain modules within could restrict the scope and comprehensiveness of analysis, given the varied nature of human behavior examination. On the other hand, the server-based architecture and modular structure of present notable advantages, facilitating the seamless exchange of processing components and enabling users to easily incorporate new modules or tailor modules to their specific needs. This adaptability empowers researchers to customize the tool according to their preferences and requirements. Furthermore, excels in fostering collaboration among researchers by facilitating the sharing of processed results and computing resources, thereby promoting efficient collaboration and nurturing a sense of community within the research domain. This collaborative aspect enhances the potential for collective insights and discoveries. Moreover, introduces innovative workflow components such as the assistant and cooperative machine learning functionalities, which enhance the efficiency and effectiveness of the analysis process, ultimately yielding more robust and insightful results. Despite its weaknesses, including potential module imperfections and missing functionalities, 's strengths in its modular architecture, collaborative features, and unique workflow components position it as a promising tool for democratizing access to advanced computational methodologies in human behavior analysis. § CONCLUSION We introduced , an open-source tool designed for human behavior analysis utilizing state-of-the-art machine learning models for feature extraction. Our framework offers a user-friendly interface through NOVA, streamlining the process and diminishing the necessity for laborious video and audio annotation. can be used to extract various helpful indicators for analyzing human behavior, such as transcriptions, facial expressions, or emotions. Furthermore, we presented a prototypical workflow for the exploratory analysis of human behavior in new data. In the future, we plan to further enhance the predictive capabilities of by integrating new processing modules and improving the workflow.
http://arxiv.org/abs/2407.12081v1
20240716180001
Modular Invariant Starobinsky Inflation and the Species Scale
[ "Gonzalo F. Casas", "Luis E. Ibáñez" ]
hep-th
[ "hep-th", "astro-ph.CO", "gr-qc", "hep-ph" ]
http://arxiv.org/abs/2407.12133v1
20240716193553
Spin-orbital-lattice entanglement in the ideal j=1/2 compound K$_2$IrCl$_6$
[ "P. Warzanowski", "M. Magnaterra", "Ch. J. Sahle", "M. Moretti Sala", "P. Becker", "L. Bohatý", "I. Císařová", "G. Monaco", "T. Lorenz", "P. H. M. van Loosdrecht", "J. van den Brink", "M. Grüninger" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
Institute of Physics II, University of Cologne, 50937 Cologne, Germany ESRF, The European Synchrotron, 71 Avenue des Martyrs, CS40220, 38043 Grenoble Cedex 9, France Dipartimento di Fisica, Politecnico di Milano, I-20133 Milano, Italy Sect. Crystallography, Institute of Geology and Mineralogy, University of Cologne, 50674 Cologne, Germany Department of Inorganic Chemistry, Charles University in Prague, 128 43 Prague 2, Czech Republic Dipartimento di Fisica e Astronomia "Galileo Galilei", Università di Padova, I-35121 Padova, Italy Institute of Physics II, University of Cologne, 50937 Cologne, Germany Institute for Theoretical Solid State Physics, IFW Dresden, 01069 Dresden, Germany Institute for Theoretical Physics and Würzburg-Dresden Cluster of Excellence ct.qmat, Technische Universität Dresden, 01069 Dresden, Germany Institute of Physics II, University of Cologne, 50937 Cologne, Germany § ABSTRACT Mott insulators with spin-orbit entangled j = 1/2 moments host intriguing magnetic properties. The j = 1/2 wave function requires cubic symmetry, while a noncubic crystal field mixes j = 1/2 and 3/2 character. Spectroscopic studies of 5d^5 iridates typically claim noncubic symmetry, e.g., based on a splitting of the excited j = 3/2 quartet. A sizable splitting is particularly puzzling in antifluorite-type K_2IrCl_6, a frustrated fcc quantum magnet with global cubic symmetry. It raises the fundamental question about the stability of j = 1/2 moments against magneto-elastic coupling. Combining resonant inelastic x-ray scattering with optical spectroscopy, we demonstrate that the multi-peak line shape in K_2IrCl_6 reflects a vibronic character of the j = 3/2 states rather than a noncubic crystal field. The quasimolecular crystal structure with well separated IrCl_6 octahedra explains the existence of well-defined sidebands that are usually smeared out in solids. Our results highlight the spin-orbital-lattice entangled character of cubic K_2IrCl_6 with ideal j = 1/2 moments. Spin-orbital-lattice entanglement in the ideal j = 1/2 compound K_2IrCl_6 (M. Grüninger, July 16, 2024) § INTRODUCTION The entanglement of spin and orbital degrees of freedom via strong spin-orbit coupling leads to a multitude of novel quantum magnetic phases in 4d and 5d transition-metal compounds <cit.>. Particular interest has focused on the physics of j = 1/2 moments in 4d^5 Ru^3+ and 5d^5 Ir^4+ compounds. Despite strong spin-orbit coupling, these allow for isotropic Heisenberg exchange in 180^∘ corner-sharing bonding geometry but also yield Ising exchange in 90^∘ edge-sharing configuration <cit.>, opening the door for the realization of Kitaev spin liquids with bond-directional exchange <cit.> on tricoordinated lattices. Such local j = 1/2 moments are formed by, e.g., t_2g^5 Ir^4+ ions in octahedral configuration. Resonant inelastic x-ray scattering (RIXS) at the Ir L_3 edge is a sensitive tool to test the j = 1/2 character, probing excitations to the j = 3/2 quartet, i.e., the spin-orbit exciton. A noncubic crystal-field contribution Δ_ CF lifts the degeneracy of the quartet, see Fig. <ref>(c), giving rise to an admixture of j = 3/2 character to the j = 1/2 ground state wavefunction <cit.>. In Ir^4+ materials, RIXS typically detects such deviations from pure j = 1/2 character with crystal-field splittings of roughly 0.1 eV <cit.>.
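As a rough numerical illustration of this level scheme, the single-site model λ 𝐒·𝐋 + Δ_ CF L_z^2 introduced below can be diagonalized for an effective L = 1, S = 1/2 hole. The short sketch that follows is not taken from the paper; the parameter values are placeholders of the right order of magnitude.

# Minimal numerical sketch (not from the paper): diagonalize
# H = lam * L.S + delta * Lz^2 for an effective L = 1, S = 1/2 hole.
# The values of lam and delta are illustrative placeholders only.
import numpy as np

def angular_momentum(j):
    # Return (Jx, Jy, Jz) in the |j, m> basis, hbar = 1, m from +j to -j.
    m = np.arange(j, -j - 1, -1)
    jz = np.diag(m)
    jp = np.zeros((len(m), len(m)))
    for i in range(1, len(m)):
        jp[i - 1, i] = np.sqrt(j * (j + 1) - m[i] * (m[i] + 1))
    jm = jp.T
    return (jp + jm) / 2, (jp - jm) / 2j, jz

Lx, Ly, Lz = angular_momentum(1)      # effective L = 1 for the t2g manifold
Sx, Sy, Sz = angular_momentum(0.5)    # S = 1/2

lam, delta = 0.434, 0.062             # eV, order-of-magnitude placeholders
H = lam * (np.kron(Sx, Lx) + np.kron(Sy, Ly) + np.kron(Sz, Lz)) \
    + delta * np.kron(np.eye(2), Lz @ Lz)

energies = np.linalg.eigvalsh(H)
print(np.round(energies - energies.min(), 3))   # excitation energies in eV
# With delta = 0 the quartet sits at 1.5*lam above the doublet;
# a small delta splits the quartet by roughly (2/3)*delta.

For Δ_ CF = 0 the quartet lies 1.5λ above the j = 1/2 doublet, while a small Δ_ CF splits the quartet by roughly 2Δ_ CF/3, the relation that enters the analysis of the RIXS data below.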
Surprisingly, a sizable splitting of the j = 3/2 states has even been reported for compounds that are found to exhibit cubic symmetry in x-ray diffraction such as the double perovskite Ba_2CeIrO_6 and the antifluorite-type halides K_2IrX_6 (X = Cl, Br) <cit.>. The halides show such splitting in Ir L-edge RIXS, Raman spectroscopy, and infrared absorption <cit.>. All three compounds host the local moments on an fcc lattice, giving rise to highly frustrated quantum magnetism where a spin-liquid phase emerges in the phase diagram based on the geometric frustration of antiferromagnetic nearest-neighbor Heisenberg exchange augmented by next-nearest-neighbor exchange <cit.>. Remarkably, finite antiferromagnetic Kitaev exchange in this case reduces the magnetic frustration and stabilizes long-range magnetic order <cit.>. However, the pronounced frustration on the fcc lattice can be lifted via magneto-elastic coupling, where even small lattice distortions may cause strong variation of nearest-neighbor exchange couplings <cit.>. Such small distortions may be reconciled with an apparent global cubic structure in diffraction experiments if distortions are essentially local and exhibit a very short correlation length. This raises the fundamental question on the stability of cubic j = 1/2 moments in frustrated quantum magnets. Recently, Iwahara and Furukawa <cit.> suggested an alternative scenario for K_2IrCl_6 in which the two-peak structure of the spin-orbit exciton observed in RIXS is attributed to the interplay of spin-orbit coupling and electron-phonon coupling that gives rise to vibronic sidebands, i.e., spin-orbital-lattice entangled states with a mixed vibrational-electronic character. This scenario of a dynamic Jahn-Teller effect in the j = 3/2 excited state <cit.> does not require a local breaking of cubic symmetry in the ground state and hence reconciles the spectroscopic data with the cubic symmetry reported in diffraction <cit.> and in electron spin resonance, where an isotropic g factor is found <cit.>. However, this explanation raises the puzzling question why distinct vibronic sidebands thus far have not been reported in the compelling series of experimental L-edge RIXS studies of other 5d^5 Ir^4+ oxides and halides <cit.>. Based on the excellent energy resolution of optical spectroscopy, vibronic sidebands of the spin-orbit exciton have been observed in optical data of the 4d^5 j = 1/2 Kitaev material α-RuCl_3 <cit.>. Furthermore, a dressing of the spin-orbit exciton with phonon sidebands has been claimed in oxygen K-edge RIXS on the related Kitaev material α-Li_2IrO_3 <cit.>. Compared to L-edge RIXS, however, RIXS at the O K edge is much more sensitive to vibrational features due to the very different character of the intermediate state in the scattering process and the much longer core hole life time. This has recently been demonstrated for 5d^1 Ba_2NaOsO_6 <cit.> and is further exemplified by the observation of an entire ladder of strong phononic peaks in α-Li_2IrO_3 at the O K edge <cit.> and the absence of any phononic features in L-edge RIXS of the same compound <cit.>. Beyond iridates, the observation of vibronic sidebands of electronic excitations by means of transition-metal L-edge RIXS has been claimed in 3d^9 Ca_2Y_2Cu_5O_10 <cit.>, 4d^4 K_2RuCl_6 <cit.>, and 5d^1 A_2MgReO_6 (A = Ca, Sr, Ba) <cit.> but the observed features are broad and individual sidebands are not or hardly resolved. This is the typical situation in solids. 
While molecules like O_2 exhibit distinct vibronic sidebands of electronic excitations <cit.>, in solids the existence of many different phonon modes and their dispersion smear out the sideband structure, most often turning the line shape into a broad hump even in optical data <cit.>. Remarkably, individual vibronic sidebands have been resolved in optical data of quasimolecular crystals such as K_3NiO_2 with isolated NiO_2 units <cit.>, and the same characteristic features have been detected recently in optical data of the sister compounds K_2ReCl_6 and K_2OsCl_6 <cit.>. Here, we join forces of RIXS and optical spectroscopy to thoroughly study the spin-orbit exciton in K_2IrCl_6. Phenomenologically, the two-peak structure seen in RIXS can be explained by either a noncubic crystal field or a vibronic picture. In contrast, the excellent energy resolution of the optical data allows us to resolve a multi-peak structure that highlights the vibronic origin, respecting cubic symmetry. These excitations are particularly well defined in K_2IrCl_6 due to its quasimolecular crystal structure with spatially isolated IrCl_6 octahedra. The competition of spin-orbit coupling and electron-phonon coupling gives rise to a dynamic Jahn-Teller effect in the j = 3/2 excited state, hybridizing the spin-orbit exciton with vibrational states <cit.>. Empirically, the overall line shape of these spin-orbital-lattice entangled excitations can be described by the Franck-Condon approximation, where the eigenstates are product states of electronic and vibrational states. However, the spin-orbital-lattice entangled nature is evident in the parameters, in particular in the peak splitting and its temperature dependence. We further demonstrate that the contribution of elementary phonon excitations in Ir L_3-edge RIXS is negligible in K_2IrCl_6. In contrast, the vibronic "phonon" sidebands of the spin-orbit exciton contribute in a direct RIXS process and are resonantly enhanced due to the electronic part of the wave function. Finally, we observe the double spin-orbit exciton at 1.3 eV in σ_1(ω) which also shows vibronic sidebands. Our results firmly establish the spin-orbital-lattice entangled nature of j = 3/2 excited states in cubic K_2IrCl_6 with ideal j = 1/2 moments. § EXPERIMENTAL High-quality single crystals were grown from a solution of commercially available K_2IrCl_6 in ≈ 5.2 molar HCl by controlled evaporation of the solvent at 293 K. Within typical growth periods of two weeks, crystals of dimensions up to 1×1×2 mm^3 were obtained, see Fig. <ref>(b). In single-crystal x-ray diffraction measurements performed at ten different temperatures from 120 to 290 K, we find the cubic space group Fm3̅m (Nr. 225) and lattice constants a = 9.7458(2) Å (290 K) and 9.6867(3) Å (120 K), see Appendix A. Our x-ray diffraction results agree very well with earlier data measured at 80 and 200 K on samples of the same batch <cit.>. A thorough analysis of the x-ray diffraction data supports the high sample quality with no indication of a significant amount of vacancies, see also Ref. <cit.>. Measurements of the magnetic susceptibility χ and the specific heat C_p (see Fig. <ref>) yield results that agree with previous reports <cit.>, e.g., concerning the Néel temperature T_N = 3.1 K, the Weiss temperature θ_ W = , the sizable frustration parameter f = |θ_ W|/T_N ≈ 14, and the effective magnetic moment μ_ eff = 1.69 μ_ B. 
The latter is in good agreement with the value 2√(j(j+1)) μ_ B ≈ 1.73 μ_ B expected for an ideal j = 1/2 ground state in a cubic environment. RIXS spectra were measured at the Ir L_3 edge at beamline ID20 of the European Synchrotron Radiation Facility. In order to resonantly enhance the spin-orbit exciton, we tuned the energy of the incident photons to 11.214 keV where an energy resolution of 25 meV was achieved <cit.>. RIXS data were collected at 10, 100, 200, and 300 K on a (111) surface with (001) and (110) lying in the horizontal scattering plane. The incident x-ray photons were π polarized. The RIXS spectra at different temperatures have been normalized by the spectral weight of the spin-orbit exciton. The transferred momentum 𝐪 is given in reciprocal lattice units. Furthermore, we study the linear optical response of K_2IrCl_6 in the range from 0.1 to 6 eV, i.e., from the infrared up to the UV. Using a Woollam VASE ellipsometer from 1 to 6 eV, we address the optical response above the Mott gap at 300 K. For cubic symmetry, such ellipsometric measurements directly yield the optical conductivity σ_1(ω) <cit.>. Due to inversion symmetry on the Ir site, the spin-orbit exciton in optics corresponds to a parity-forbidden excitation. However, it acquires finite spectral weight in a phonon-assisted process. For frequencies below the Mott gap, measurements of the infrared transmittance T(ω) are ideally suited to study such weak absorption features in transparent single crystals of appropriate thickness <cit.>, as recently demonstrated on the sister compounds K_2OsCl_6 and K_2ReCl_6 <cit.>. For K_2IrCl_6, we studied single-crystalline samples with a thickness d = 380(8) , 95(3) , and 46(3) . Infrared transmittance data were measured from 0.15 to 1.85 eV with an energy resolution of 8 cm^-1 (≈ 1 meV) using a Bruker IFS 66 v/S Fourier-transform infrared spectrometer equipped with a continuous-flow ^4He cryostat. § SPIN-ORBIT EXCITON IN RIXS The hallmark excitation of a local j = 1/2 ground state is the spin-orbit exciton, i.e., the excitation to the j = 3/2 quartet that is expected at 1.5 λ in cubic symmetry <cit.>, where λ is the spin-orbit coupling constant, see sketch in Fig. <ref>(c). RIXS data of K_2IrCl_6, measured up to 1.3 eV with transferred momentum 𝐪 = (8 8 5), show the spin-orbit exciton around 0.63 eV, see Fig. <ref>(a). The absence of further inelastic features in this energy range demonstrates the pure Ir^4+ valence of the compound, see inset of Fig. <ref>(a). The narrow line width is typical for local, weakly interacting j = 1/2 moments <cit.>, while broader, dispersive features are observed in compounds with larger exchange interactions such as Sr_2IrO_4 and Sr_3Ir_2O_7 <cit.>. In agreement with previous RIXS data on K_2IrCl_6 measured with 35 meV resolution <cit.>, we observe a two-peak structure of the spin-orbit exciton. The main peak at 0.63 eV features a weak shoulder that is about 0.05 eV higher in energy, see Fig. <ref>. The relative intensity of this shoulder increases with increasing temperature, see Fig. <ref>(a), in agreement with previous results for 10 and 300 K <cit.>. Similar two-peak structures were reported in L-edge RIXS on the sister compounds K_2IrBr_6 and (NH_4)_2IrCl_6, also in their cubic phases <cit.>. Furthermore, a splitting of the spin-orbit exciton in K_2IrCl_6 has been observed in Raman scattering <cit.> and in infrared absorption at room temperature <cit.>. 
In fact, such a splitting of the spin-orbit exciton is a common feature in RIXS studies on Ir^4+ compounds, see, e.g., Refs. <cit.>. Typically, the splitting is attributed to a noncubic crystal field contribution Δ_CF that lifts the degeneracy of the j = 3/2 quartet. For a single site and assuming a tetragonal distortion, the physics is described by <cit.> ℋ_single=λ𝐒·𝐋+Δ_CFL^2_z where L_z is the component of the angular moment L along the tetragonal axis. A finite Δ_CF≪λ lifts the degeneracy of the j = 3/2 quartet, resulting in an experimental peak splitting Δ_ exp = 2/3Δ_ CF. Following this scenario, we empirically fit the RIXS spectra with a sum of two Voigt profiles. At 10 K, the fit yields peak energies of 0.635 and 0.676 eV, see Appendix B. Solving Eq. (<ref>), we find λ = 434(1) meV and two possible values Δ_CF = 62 meV and for elongation and compression of the octahedra, respectively. However, a finite value of the noncubic crystal field splitting Δ_ CF≠ 0 is in conflict with the globally cubic structure observed in x-ray and neutron diffraction experiments <cit.> as well as with the isotropic g factor found in an ESR study <cit.>. There are two scenarios to resolve this apparent discrepancy. (i) Cubic symmetry is broken locally in the initial state of the excitation process. (ii) A vibronic character of the Jahn-Teller active j = 3/2 excited states <cit.> yields sidebands while the cubic symmetry of the ground state is preserved. In scenario (i), local deviations from cubic symmetry may be caused by either static defects or a strong magneto-elastic coupling that triggers distortions <cit.>. The latter can be reconciled with global cubic symmetry if the correlation length is small. Based on the short time scale of the RIXS process, also the thermal occupation of low-lying phonon modes effectively may break the cubic symmetry on the Ir site in the initial state <cit.>. In fact, x-ray studies reported comparably large atomic displacement parameters (ADPs) in the K_2PtCl_6-type antifluorite halides A_2MX_6 in general and also in K_2IrCl_6 <cit.>. Using a cubic crystal structure in the analysis of the x-ray data, such large ADPs in general may reflect local disorder or dynamical effects such as the thermal population of low-energy phonon modes. Based on a thorough analysis of the x-ray diffraction data, Bertin et al. <cit.> conclude that there is no indication for local disorder in K_2IrCl_6. Note that neighboring MX_6 or IrCl_6 octahedra are not connected in A_2MX_6 or K_2IrCl_6, see Fig. <ref>(a), i.e., they do not share a common ligand. This causes a rotational instability and low-energy librational modes <cit.>. For the halide ions, the largest ADPs indeed are found perpendicular to the Ir-Cl bonds <cit.>, as expected for a predominantly rigid rotation of the octahedra in contrast to a local, noncubic distortion away from octahedral symmetry. To scrutinize this scenario of a dynamical origin for our samples, we studied the temperature dependence of the ADPs U_ij of the Ir, K, and Cl ions in the range from 120 to 290 K via single-crystal x-ray diffraction measurements, see Appendix A. The results are plotted in Fig. <ref> and agree very well with previous data reported for 80 and 200 K that were collected on samples from the same batch <cit.>. For the Cl ions, U_11 and U_22 refer to the displacement along and perpendicular to an Ir-Cl bond, respectively. We find U_22 (Cl)≫ U_11 (Cl), in agreement with Refs. <cit.>. 
Using tanφ = √(U_22( Cl))/d, we estimate the largest rotation angle φ ≈ 5^∘ at 300 K. The temperature dependence strongly supports a predominantly dynamical origin of the large U_ij. Above 120 K, we find a linear behavior for all U_ij that extrapolates to small values at low temperature, in agreement with data reported by Khan et al. <cit.>. The data do not provide any evidence for a static contribution of local distortions or defects. A more precise quantitative determination of a possible small static contribution present already at very low temperatures requires to consider zero point fluctuations <cit.> and a possible temperature dependence of the phonon energies. This is beyond the scope of our study. In comparison, the cubic double perovskite Ba_2CeIrO_6 exhibits a much smaller temperature dependence of the ADPs, with U_22(O) = 0.022 Å^2 at 100 K and 0.024 Å^2 at 300 K <cit.>. This suggests static disorder that may be explained by the presence of a few percent of Ce-Ir site disorder <cit.>. In K_2IrCl_6, the pronounced temperature dependence of the U_ij is in striking contrast to the much smaller temperature dependence of the peak splitting in RIXS. We conclude that the sizable atomic displacement parameters at elevated temperature have a dynamical origin. In particular, U_22(Cl) predominantly can be attributed to rigid rotations of the octahedra. Thermal population of low-energy rotations of the octahedra preserves cubic symmetry on average. For spectroscopy, however, it may break the cubic symmetry on the Ir site if the time scale of the electronic excitation is short enough. Nevertheless, only a small effect is expected for a rigid rotation of the octahedra. More precisely, Khan et al. <cit.> predict a dynamical rotation angle of about 5^∘ in the cubic phase of the sister compound K_2IrBr_6 above 170 K, similar to the value that we find for K_2IrCl_6, as discussed above. This, however, is expected to cause a noncubic crystal-field splitting of less than 10 meV at high temperature <cit.>, which is by far not large enough to explain the splitting observed in RIXS. In particular, a thermal population of low-energy octahedral rotations cannot explain the presence of high-energy sidebands already at 10 K. A survey of many compounds of the AMX_6 antifluorite family reveals a clear correlation between the Goldschmidt tolerance factor t and the structural transition temperature T_s from the high-temperature cubic phase to a phase with lower symmetry <cit.>. The Goldschmidt tolerance factor t is based on the atomic radii and usually provides a criterion for rotational phase transition in perovskites. For the antifluorite halides, Bertin et al. <cit.> find a critical value of t ≈ 1 below which a rotational phase transition occurs, where T_s increases linearly with t decreasing below 1. For K_2IrCl_6, the tolerance factor amounts to 1.0019 <cit.>, in agreement with the observation of cubic symmetry down to at least 0.3 K <cit.>. Local distortions may also result from strong magneto-elastic coupling that has been proposed based on very similar results for the double perovskite Ba_2CeIrO_6 <cit.>, another compound with j = 1/2 moments forming an fcc lattice. Also Ba_2CeIrO_6 shows global cubic symmetry in x-ray diffraction measurements while RIXS reveals a splitting of the spin-orbit exciton of about 0.1 eV <cit.>, roughly a factor two larger than in K_2IrCl_6. Exchange couplings on the fcc lattice are highly frustrated, and this frustration may be lifted by small distortions <cit.>. 
Such distortions may escape detection in diffraction experiments if they are either very small or show a short correlation length. According to Iwahara and Furukawa <cit.>, an interpretation of the two-peak line shape of the spin-orbit exciton of K_2IrCl_6 in terms of a static crystal-field splitting requires a displacement of the Cl ligands of δ r = 0.007 Å. Such distortions would cause deviations from the cubic structure, but corresponding Bragg peaks have not been found <cit.>. Using a cubic structure in the analysis of diffraction data, deviations from cubic symmetry can also be detected via anomalously large atomic displacement parameters. However, effects as small as (δ r)^2 < 10^-4 Å^2 cannot be detected in the ADPs. Altogether, the structural data do not provide any evidence for deviations from cubic symmetry. On the contrary, a thorough analysis of the x-ray diffraction data (see also Ref. <cit.>) and the temperature dependence of the ADPs rather support cubic symmetry. Small distortions nevertheless cannot be excluded. Scenario (ii) explains the two-peak structure of the RIXS spectra in terms of a vibronic character of the spin-orbit exciton, as recently proposed for K_2IrCl_6 <cit.>. Vibronic excitations emerge from the coupling between electronic and vibrational excitations <cit.>. An electronic excitation such as the spin-orbit exciton may change the charge-density distribution such that the lattice is not in its corresponding ground state, causing a series of phonon sidebands of a given electronic excitation. A basic example is given by the phonon satellite lines observed in photoemission on H_2 molecules <cit.>. Suddenly removing an electron changes the equilibrium distance and leaves the molecule in a vibrationally excited state, i.e., the molecule rings. In K_2IrCl_6, this vibronic scenario does not break cubic symmetry in the j = 1/2 ground state. With the simple two-peak structure of the RIXS data, an unambiguous distinction between the two scenarios is challenging. To resolve the origin of the splitting in K_2IrCl_6, we study the spin-orbit exciton with optical spectroscopy, making use of the excellent energy resolution. We will show that a vibronic character is the key to understand the peculiar spectra of K_2IrCl_6 in RIXS and optics. The vibronic sidebands reflect the hybridization of spin-orbit exciton and vibrational states, i.e., spin-orbital-lattice entanglement. § SPIN-ORBIT EXCITON IN OPTICAL DATA §.§ Phonon-assisted character The optical conductivity σ_1(ω) in the energy range of the spin-orbit exciton is plotted in Fig. <ref>(b). The spin-orbit exciton is an intra-t_2g excitation. In σ_1(ω), such on-site d-d excitations are parity forbidden due to the presence of inversion symmetry on the Ir site. Finite spectral weight arises in a phonon-assisted process via the simultaneous excitation of an odd-parity phonon <cit.>, which explains the complex line shape. Such weakly dipole-active excitations can be studied very well in the transparency window above the phonons and below the Mott gap, as reported recently for K_2ReCl_6 and K_2OsCl_6 <cit.> as well as for the spin-orbit exciton in α-RuCl_3 <cit.>. In K_2IrCl_6 at 10 K, we observe the onset of excitations across the Mott gap around 1.7 eV, see inset of Fig. <ref>(b). 
At the peak of the spin-orbit exciton, σ_1(ω) reaches values of about 5 (Ωcm)^-1, which is roughly two orders of magnitude smaller than for the directly dipole-allowed intersite excitations |d_i^5 d_j^5⟩→ |d_i^4 d_j^6⟩ across the Mott gap, see Appendix C. At 10 K, the phonon-assisted character is evident from the shift of the peak energy in σ_1(ω) compared to RIXS and from the increase of the spectral weight with increasing temperature <cit.>. Despite the different excitation mechanisms, the overall line shape of the spin-orbit exciton is very similar in RIXS and σ_1(ω) at 10 K. This is highlighted in Fig. <ref>, where both data sets have been shifted by the peak energy to roughly compensate for the phonon shift. The RIXS data have been measured with an energy resolution of δ E = 25 meV and show a strong main peak and smaller intensity at higher energy. Qualitatively, the optical data show a very similar behavior but resolve an additional fine structure. The corresponding subbands are due to a superposition of phonon-assisted processes for different symmetry-breaking phonon modes with phonon energies in the range of roughly 10 to 40 meV, as discussed below. This superposition enhances the overall line width despite the much better energy resolution. The assignment of the subbands provides the key to understand the character of the weaker high-energy features. To this end, we need to have a close look at the peak assignment in the optical data. §.§ Assignment of phonon-assisted excitations At 10 K, the dominant phonon-assisted peaks in σ_1(ω) occur at E_0 + E_odd, where E_0 = 635 meV is the energy of the bare electronic spin-orbit exciton while E_odd denotes the energy of a symmetry-breaking phonon mode. The cubic crystal structure of K_2IrCl_6 hosts four odd-symmetry optical phonon modes that have been observed at 10, 18, 23, and 41 meV in infrared spectroscopy and/or inelastic neutron scattering <cit.>. Given the quasimolecular crystal structure with separate IrCl_6 octahedra, the latter three of these modes can be viewed as the three odd-symmetry normal modes of a single IrCl_6 octahedron <cit.> while the fourth one is a lattice phonon mode. The main peaks in σ_1(ω) are well described by considering E_odd = 6, 18, 27, and 40 meV, see peaks A–D in Fig. <ref>(a). Taking into account that the symmetry-breaking modes do not have to be at the Γ point, the energies show good agreement with the values reported above, lending strong support to our peak assignment. With increasing temperature, the spectral weight of a phonon-assisted process at E_0 + E_ odd increases with 1+n(T), where n(T) denotes the phonon occupation number of the odd-symmetry mode <cit.>. At finite temperatures, also phonon annihilation processes occurring at E_0 - E_ odd acquire spectral weight in σ_1(ω) that increases proportional to n(T). The phonon-assisted character of the spin-orbit exciton in σ_1(ω) hence explains the different temperature dependence observed in RIXS and optics, in particular the overall increase of the spectral weight in σ_1(ω) with increasing temperature as well as the pronounced enhancement of absorption below E_0. In the chloride, the energy E_ odd of the symmetry-breaking phonons is limited to about 40 meV. The phonon-assisted character in σ_1(ω) hence explains the dominant spectral weight and the subbands in the range of roughly 0.64 to 0.68 eV. 
This does not yet reveal the nature of the weak features at higher energies, e.g., at 0.71, 0.74, and 0.77 eV, reaching energies as high as about E_0 + 0.1 eV. In general, a phonon-assisted character of the spin-orbit exciton is expected in σ_1(ω) as long as there is inversion symmetry on the Ir site. This phonon-assisted picture successfully has been used to describe the equivalent peaks of IrCl_6 impurities in different AMX_6-type host compounds such as cubic Cs_2ZrCl_6 or K_2SnCl_6 <cit.>. The data of these IrCl_6 impurities reveal the existence of vibronic sidebands. Remarkably, both the observed energies and the overall structure are very similar to our observations in single crystalline K_2IrCl_6. § VIBRONIC EXCITATIONS §.§ Vibronic sidebands and the Franck-Condon principle A vibronic character emerges from the coupling of electronic excitations to vibrational degrees of freedom. An on-site intra-t_2g excitation may give rise to a change of the charge distribution such that the lattice is not in its ground state anymore. This yields a series of phonon sidebands of the electronic excitation <cit.> and has been claimed to describe the RIXS data of K_2IrCl_6 <cit.>. In fact, vibronic sidebands of on-site intra-t_2g excitations are a common feature in the optical conductivity of the AMX_6 family, e.g., in the sister compounds K_2ReCl_6 and K_2OsCl_6 <cit.>. In RIXS measurements on the Os and Re halides, however, the energy resolution was not sufficient to resolve the vibronic character <cit.>. To put things into perspective, we consider the 5d^3 compound K_2ReCl_6 that shows five different intra-t_2g excitations <cit.>. Optical data for one of them are depicted in Fig. <ref>(b). In σ_1(ω) of K_2ReCl_6, the subbands are very sharp and well resolved. The stronger peaks A–D reveal the subbands of the four odd phonon modes at E_0+E_ odd, demonstrating the phonon-assisted character <cit.>. On top, weak vibronic sidebands are resolved at E_0+E_ odd+ m E_ p, where m E_ p with integer m denotes the energies of a phonon progression according to the Franck-Condon principle, see Fig. <ref>. For the data of K_2ReCl_6 in Fig. <ref>(b), the electronic excited state is a Γ_7 doublet <cit.>. An interpretation of the high-energy sidebands in terms of crystal-field splitting is hence not applicable. These features unambiguously are of vibronic character. Remarkably, a very similar sideband structure has been observed for all five intra-t_2g excitations in K_2ReCl_6 <cit.>. This common motif also applies to the data of K_2IrCl_6 in Fig. <ref>(a). The Franck-Condon principle offers a simplified, analytic, and intuitive description of the vibronic line shape. It assumes that the timescale of electronic excitations is negligible compared to phonon timescales. For an optical absorption process, this corresponds to an instantaneous, "vertical" transition from the electronic ground state to the excited state, see Fig. <ref>(a), where "vertical" implies no change on the horizontal axis that denotes a generalized displacement coordinate. For illustration and simplicity, we assume that there is only one odd-symmetry mode with energy E_ odd that is relevant for the optical conductivity but not for RIXS. Furthermore, we assume that the electronic excited state is a Kramers doublet and that there is one dominant phonon mode with energy E_ p that governs the progression of vibronic sidebands. 
Neglecting the entanglement between electronic and lattice degrees of freedom, the vibronic sidebands are shifted by m E_ p in the Franck-Condon approximation, i.e., they yield an evenly spaced comb of peaks at E_m = E_0 + E_ odd + m E_ p in σ_1(ω). At T = 0, the line shape for optical absorption is described by <cit.> I(E) = I_0 ∑_m = 0^∞e^-S S^m/m! δ(E_0+E_ odd+mE_ p-E) with σ_1(E) = I(E)· E, where I_0 is proportional to the squared dipole matrix element of the (phonon-assisted) electronic transition, e^-S S^m/m! is the Franck-Condon factor, and S is the Huang-Rhys factor, which is equivalent to the average number of phonons that are present in the excited state. It is a measure of the electron-phonon coupling constant g and governs the line shape. A large value of S creates an envelope with a more Gaussian-like intensity distribution while a small S yields an asymmetric envelope, see Fig. <ref>(b). At finite temperatures, one has to consider the thermal occupation of the sideband phonon modes, n_p = 1/exp(E_ p/k_BT)-1. The sum hence has to include negative values of m <cit.>, I(E) = I_0 ∑_m = -∞^∞(n_p+1/n_p)^m/2 J_m(2S√( n_p(n_p+1))) × e^-S(2n_p+1) δ(E_0+E_ odd+mE_ p-E), where J_m is the modified Bessel function of m^ th order. For comparison with experiment, we replace the δ function in Eq. (<ref>) with a Gaussian profile. This yields excellent agreement with σ_1(ω) of 5d^3 K_2ReCl_6, see Fig. <ref>(b). The experimental spectrum can be described by a superposition of such Franck-Condon progressions for all four odd-symmetry phonon modes. In K_2ReCl_6, most of the spectral weight is in the peaks of order m = 0 with only a small contribution of m = 1 and basically negligible spectral weight for m = 2. This intensity distribution corresponds to a small Huang-Rhys factor S = 0.14. This reflects the spin-forbidden character of the excitation in K_2ReCl_6 that is observed at about 5J_H, where J_H denotes Hund's coupling. The excitation roughly can be viewed as a flip of the spin of one electron in the t_2g^3 configuration, which causes only a small change of the charge distribution and corresponds to a small S. The same approach also yields an excellent description of σ_1(ω) of K_2IrCl_6, see Fig. <ref>(a). The blue shading highlights the Franck-Condon sidebands of peak B with a larger but still small value of S ≈ 0.7. The larger S yields detectable spectral weight of the peaks of order m = 2 and even 3. Thereby, the vibronic model naturally explains the large number of peaks and the existence of weak features at energies as high as 0.74 and 0.77 eV, more than 0.1 eV above the bare electronic excitation energy E_0 = 635 meV. Thus far we discussed the Franck-Condon approximation for an optical excitation. We claim that Eq. (<ref>) can also be applied to vibronic sidebands of an electronic excitation studied in direct RIXS, which is supported by the overall agreement of the line shapes shown in Fig. <ref>. Note that the case is different for elementary phonons and multi-phonons. These are excited in an indirect RIXS process that can be approximated by using two Franck-Condon factors <cit.>, see Sec. <ref>. For K_2IrCl_6, Eq. (<ref>) provides an excellent empirical description of our RIXS spectra, see Fig. <ref>. At 10 K, the fit yields S = 0.30, E_ p = 44 meV, and the bare electronic energy E_0 = 633 meV. With an energy resolution of 25 meV, the latter agrees very well with E_0 = 635 meV found in σ_1(ω). 
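For illustration, this empirical line shape can be evaluated numerically. The following sketch is not part of the analysis in the text: it replaces the δ functions by Gaussians of arbitrary width, sets the odd-phonon offset E_ odd to zero as appropriate for the RIXS case, uses the 10 K fit parameters quoted above, and takes scipy's modified Bessel function iv in place of J_m.

# Minimal sketch (illustration only): finite-temperature Franck-Condon
# envelope with the delta functions replaced by Gaussians. The Gaussian
# width sigma and the energy grid are arbitrary choices, not fit results.
import numpy as np
from scipy.special import iv   # modified Bessel function of the first kind

def fc_envelope(E, E0, Ep, S, T, sigma=0.012, kB=8.617e-5, m_max=8):
    n = 1.0 / (np.exp(Ep / (kB * T)) - 1.0) if T > 0 else 1e-12
    I = np.zeros_like(E)
    for m in range(-m_max, m_max + 1):
        w = ((n + 1) / n) ** (m / 2) * iv(m, 2 * S * np.sqrt(n * (n + 1))) \
            * np.exp(-S * (2 * n + 1))
        I += w * np.exp(-(E - E0 - m * Ep) ** 2 / (2 * sigma ** 2))
    return I

E = np.linspace(0.5, 0.85, 2000)
for T in (10, 300):
    I = fc_envelope(E, E0=0.633, Ep=0.044, S=0.30, T=T)
    main = I[np.argmin(np.abs(E - 0.633))]
    side = I[np.argmin(np.abs(E - 0.633 - 0.044))]
    print(f"T = {T:3d} K: sideband-to-main-peak ratio = {side / main:.2f}")

The printed sideband-to-main-peak ratio increases between 10 K and 300 K, qualitatively reproducing the growth of the high-energy shoulder seen in the RIXS data.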
In the next paragraph, we discuss E_p to collect evidence for the spin-orbital-lattice entangled character. §.§ Beyond the Franck-Condon approximation The Franck-Condon principle is valuable for an intuitive explanation of the overall line shape. For a microscopic description of K_2IrCl_6, Iwahara and Furukawa <cit.> treated electron-phonon coupling g and spin-orbit coupling λ on the same footing. They find that the spin-orbit exciton with its vibronic sidebands is of spin-orbital-lattice entangled character. Strictly speaking, the simple picture of the simultaneous excitation of a spin-orbit exciton and one or several phonons, causing a comb of equidistant peaks, is only applicable for g→ 0, in which case the intensity of the sidebands vanishes. For finite g, the excitation spectrum is more complex. Iwahara and Furukawa <cit.> predict a larger number of peaks but many of them are small and hard to resolve experimentally. Furthermore, hybridization shifts the energies such that the sidebands are not equidistant anymore. In K_2IrCl_6, however, we can only determine two peak energies in RIXS, and the complex line shape of the phonon-assisted excitations in σ_1(ω) does not allow us to clearly identify deviations from a comb-like behavior. The hybridized character may also give rise to differences in line shape between RIXS and σ_1(ω), e.g., due to the presence of the symmetry-breaking phonon mode in the case of the optical data. However, we are lacking a theoretical prediction for such differences. From an experimental point of view, the hybridized character at this stage can be detected in the value of the peak splitting, which is captured by E_ p in Eq. (<ref>). At 10 K, the fits yield E_ p = 40 meV in σ_1(ω) and 44 meV in RIXS, see Figs. <ref>(a) and <ref>. In general, the even electronic excitations may couple to phonons of a_1g, e_g, and t_2g symmetry <cit.>, where the latter usually is neglected. The elementary phonon modes of a_1g and e_g symmetry have been observed in Raman scattering at 44 meV and 37 meV, respectively <cit.>. In the Franck-Condon approximation, the experimental splitting thus appears to suggest a predominant coupling to the a_1g mode, the breathing mode of the IrCl_6 octahedron. This applies if, e.g., both the electronic ground state and the excited state show cubic symmetry, which is the case for ideal j = 1/2 and 3/2 states. For the doublet excited state of K_2ReCl_6 addressed in Fig. <ref>(b), the phonon sidebands with E_ p = 43 meV have been attributed to the a_1g mode <cit.>. However, the situation is different in K_2IrCl_6 due to the degeneracy of the excited j = 3/2 quartet that is lifted by coupling to the Jahn-Teller active e_g phonon mode <cit.>. In this case, the energy splitting between the lowest two strong RIXS peaks roughly is given by E_ p· (1+g^2/8). Note that g is dimensionless in the definition of Ref. <cit.>. With E_ p = 37 meV for the e_g mode and g = 1.2, Iwahara and Furukawa find a splitting of 45 meV between the two strongest peaks in the calculated RIXS response, in excellent agreement with the experimental data at 10 K. The fact that the splitting depends on g reflects the dynamic Jahn-Teller effect, lifting the degeneracy of the j = 3/2 quartet. This is in particular relevant for the temperature dependence of the splitting, as discussed in the next paragraph. Strictly speaking, the splitting also depends on λ due to a finite admixture of j = 1/2 character via the pseudo-Jahn-Teller effect <cit.>. 
This, however, is small and can be neglected. §.§ Temperature dependence The convincing description of the temperature dependence of the RIXS data is a particular asset of the vibronic scenario. Using g = 1.2, Iwahara and Furukawa <cit.> showed that a vibronic model with spin-orbital-lattice entangled states describes the temperature dependence observed in RIXS. First, we address the qualitative behavior of the experimental data, see Figs. <ref>(a) and <ref>. This can be understood within the Franck-Condon picture, see Eq. (<ref>) and the black and orange lines in Fig. <ref>(b). The thermal occupation of phonon modes gives rise to a redistribution of the intensity to high energies as well as to the range below the main peak, where the latter can be attributed to the emergence of a subband at E_0-E_ p. As at 10 K, a quantitative analysis reveals fingerprints of the entangled character. With increasing temperature, the fit yields an increase of the peak splitting E_ p and of S and a modest red shift of E_0, see Fig. <ref>. Remarkably, the behavior of all three observations is in line with an increase of the electron-phonon coupling constant g. An increase of g enhances S and, in a spin-orbital-lattice entangled scenario <cit.>, reduces the energy E_0 of the main peak via the dynamic Jahn-Teller effect while increasing the splitting E_ p· (1+g^2/8) of the dominant RIXS peaks, as discussed above. In fact, the fit yields a splitting as large as 51 meV at 300 K. This is hard to reconcile with the phonon energies of the chloride in a Franck-Condon scenario but can be attributed to the g dependence of the splitting in a spin-orbital-lattice entangled picture. A further contribution to the 1 % reduction of E_0 may originate from thermal expansion of the lattice and a corresponding increase of the Ir-Cl distance. This modifies the cubic crystal field 10 Dq and thereby the effective value of λ. For the cubic phase of K_2IrBr_6, the peak splitting has been reported by Khan et al. <cit.> to increase from about 50 meV at 170 K to 70 meV at 300 K, while Reig-i-Plessis et al. find a smaller value of 47 meV at 300 K <cit.>. This discrepancy may originate from the line shape of the spin-orbit exciton in K_2IrBr_6 which rather shows a subtle shoulder instead of a clear splitting. The temperature dependence of σ_1(ω) predominantly reflects the phonon-assisted character, which causes a pronounced increase of the spectral weight. The many subbands visible at 10 K are smeared out at elevated temperatures, see Fig. <ref>(b). Therefore we refrain from fitting σ_1(ω) at high temperature. §.§ Phonons and phonon sidebands The observation of vibronic "phonon" sidebands is particularly interesting given the fact that the intensity of elementary phonon excitations is negligible in L-edge RIXS on 5d^5 Ir^4+ j = 1/2 Mott insulators <cit.>. For K_2IrCl_6, this is highlighted in Fig. <ref>, which focuses on the RIXS response around zero loss. The data have been measured with a scattering angle 2θ close to 90^∘, which yields an elastic line of moderate strength. The symmetric behavior around zero loss is emphasized by a fit with a Voigt profile. The data are very well described by considering only the elastic line and do not provide any evidence for a sizable inelastic contribution below 0.1 eV. Even at 300 K, where the intensity of low-energy modes is enhanced by the Bose factor, the contribution of phonons is negligible. 
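As a rough guide to the thermal factors entering these arguments, the occupation n_p defined above can be evaluated for the fitted progression energy E_p ≈ 44 meV; this is our own estimate and only sets the scale.
\begin{aligned}
n_p = \frac{1}{e^{E_p/k_B T}-1}: \qquad
T = 10\,\mathrm{K}:&\; E_p/k_B T \approx 51,\; n_p \approx 0,\\
T = 300\,\mathrm{K}:&\; E_p/k_B T \approx 1.7,\; n_p \approx 0.22.
\end{aligned}
This is consistent with the absence of sidebands below the main peak at 10 K and with the pronounced thermal redistribution of intensity, including the Bose enhancement of low-energy modes, at 300 K.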
In L-edge RIXS on Mott-insulating 5d^5 iridates, spin and orbital excitations are boosted in a direct RIXS process <cit.>. The x-ray absorption step from the ground state to the intermediate state is followed by the x-ray emission step from the intermediate state to the final state. In general, phonons are excited with a much smaller cross section in indirect RIXS, i.e., via the dynamics in the intermediate state. In RIXS a phonon is created if the lattice distorts in the intermediate state to screen the core hole <cit.>. A simplified description of phonon excitations can again be achieved in a Franck-Condon picture. However, with the coupling taking place in the intermediate state, one has to use two Franck-Condon factors, one for the absorption step, and the second one for x-ray emission <cit.>. For L-edge RIXS with an incident energy as large as 11.214 keV, the absence of any phonon signatures can be rationalized via the Ir 2p^5 t_2g^6 intermediate state, which is well screened and shows a short life time of only a few femtoseconds. In contrast, phonon contributions can be observed in RIXS on Ir oxides at the O K edge due to the very different intermediate state <cit.>. Previously, this phonon approach with two Franck-Condon factors has also been applied to vibronic sidebands of on-site d-d excitations studied in RIXS at the O K edge, e.g., in Li_2CuO_2 and α-Li_2IrO_3 <cit.>, as well as at the Cu L edge in Ca_2Y_2Cu_5O_10 <cit.>. In our data of K_2IrCl_6, measured at the Ir L_3 edge, indirect RIXS processes are negligible. The spin-orbit exciton and its vibronic sidebands are resonantly enhanced in a direct RIXS process due to the electronic contribution to the wave function. Empirically, the data are described by Eq. (<ref>), where the squared matrix element of the electronic transition in direct RIXS yields I_0. As in optical absorption, the electronic excitation leaves the system in a vibrationally excited state, see Fig. <ref>. In this sense, the coupling to the lattice occurs in the final state, in contrast to the case of elementary phonons discussed above. § DOUBLE SPIN-ORBIT EXCITON The excitation of double spin-orbit excitons in the optical conductivity around 1.3 eV provides a further example for a vibronic excitation in K_2IrCl_6, see Fig. <ref>. The feature is very weak, the spectral weight being roughly a factor 50 smaller than for the single spin-orbit exciton, see inset of Fig. <ref>(b). At 10 K, the energy of the first peak E_2SO = 1286 meV = 2·643 meV is very close to twice the energy E_0 = 635 meV for the single spin-orbit exciton. This peak at 1.3 eV can hence be assigned to the simultaneous excitation of two spin-orbit excitons on neighboring sites. Double and even triple spin-orbit excitons previously have been observed in σ_1(ω) of the 4d^5 j = 1/2 compound α-RuCl_3 <cit.>. Similar overtones or double intra-t_2g excitations have also been reported in orbitally ordered YVO_3 as well as in K_2ReCl_6 and K_2OsCl_6 <cit.>. The spectral weight of this feature is insensitive to temperature at least up to 200 K. At still higher temperature, it drowns in the onset of excitations across the Mott gap which shifts to lower energies with increasing temperature. The temperature-independent spectral weight suggests that this feature is not phonon-assisted but rather directly infrared allowed, even though with a very small spectral weight. This agrees with results on K_2ReCl_6 and K_2OsCl_6 <cit.>. 
A finite dipole moment may arise if the total excited state breaks inversion symmetry on the bond. This is, e.g., the case for a spin-orbit exciton with j_z = 3/2 on one site but j_z = 1/2 on the neighboring site. At 10 K, the splitting between the main peak at E_2SO = 1286 meV and the first sideband amounts to 46 meV. The splitting between the second peak and the weak third peak is comparable but the large width prevents a more precise determination. The value of 46 meV is very similar to the vibronic splitting of the spin-orbit exciton, see Fig. <ref>, as well as to the vibronic splitting observed for the overtones in K_2ReCl_6 <cit.>. The line shape, the experimental peak splitting, and the analogy with the sister compound K_2ReCl_6 provide strong evidence for a vibronic character of the double spin-orbit exciton. § DISCUSSION AND CONCLUSION Spin-orbit entangled j = 1/2 moments are Kramers doublets and as such not Jahn-Teller active. In frustrated quantum magnets, magneto-elastic coupling nevertheless may give rise to distortions that break cubic symmetry and lift magnetic frustration in j = 1/2 compounds <cit.>. A possibly pronounced magneto-elastic coupling would play an essential role for the low-energy physics of j = 1/2 quantum magnets. We combined RIXS, optical spectroscopy, and single-crystal x-ray diffraction to study the possible role of magneto-elastic coupling in antifluorite-type K_2IrCl_6 with j = 1/2 moments on an fcc lattice with highly frustrated exchange couplings. The global cubic structure of K_2IrCl_6 is well established <cit.>. The sizable atomic displacement parameters reported for K_2IrCl_6 at elevated temperatures <cit.> exhibit a linear temperature dependence, underlining their dynamical origin based on rigid, low-energy rotations of the IrCl_6 octahedra. Even though the diffraction data do not provide any indication for deviations from cubic symmetry, the presence of local distortions cannot fully be excluded. Small distortions are a challenge for diffraction experiments and often are detected more easily in spectroscopy based on, e.g., a change of selection rules. The two-peak structure of the spin-orbit exciton observed in L-edge RIXS around 0.63 eV can be attributed to either a noncubic crystal-field splitting or to a vibronic character with phonon sidebands. Optical spectroscopy provides complementary information based on the excellent energy resolution and the different excitation process with different selection rules. We observe a multitude of phonon-assisted peaks in σ_1(ω) that reach as high as 0.74 and 0.77 eV. These features require an explanation in terms of vibronic sidebands, in agreement with previous results on IrCl_6 impurities in host crystals <cit.>, with the vibronic character of the double spin-orbit exciton that we find around 1.3 eV, and with optical results on related intra-t_2g excitations in the sister compounds K_2ReCl_6 and K_2OsCl_6 <cit.>. Furthermore, the vibronic picture is able to describe the temperature dependence observed in RIXS <cit.>. The success of the vibronic scenario suggests that magneto-elastic coupling is not decisive for the stability of the j = 1/2 moments in K_2IrCl_6. Altogether, we conclude that K_2IrCl_6 hosts cubic j = 1/2 moments and spin-orbital-lattice entangled j = 3/2 excited states. This spin-orbital-lattice entangled nature of the spin-orbit exciton is caused by electron-phonon coupling that yields a hybridization of the Jahn-Teller active j = 3/2 quartet with vibrational states <cit.>. 
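To make the role of g in this conclusion explicit, the approximate splitting expression quoted above can be evaluated numerically; this back-of-the-envelope check is ours, and the full vibronic calculation of Iwahara and Furukawa remains necessary for quantitative statements.
\begin{aligned}
E_p\,(1+g^2/8) &= 37\,\mathrm{meV}\times(1+1.2^2/8) \approx 43.7\,\mathrm{meV} &&(10\,\mathrm{K}),\\
37\,\mathrm{meV}\times(1+g^2/8) &\approx 51\,\mathrm{meV} \;\Rightarrow\; g \approx 1.7 &&(300\,\mathrm{K}).
\end{aligned}
Within this simple estimate, the 10 K splitting is reproduced with the reported g = 1.2, while the larger splitting at 300 K would require a noticeably larger effective coupling, in line with the temperature dependence discussed above.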
The Franck-Condon approximation offers an analytic description of the vibronic line shape. It assumes a negligible timescale for electronic excitations, approximating the eigenstates as product states of electronic and vibrational states. Empirically, the Franck-Condon approach describes the data of RIXS and optics well and in an intuitive way. Equivalent to the analysis of optical data, a single Franck-Condon factor is appropriate to describe vibronic excitations studied in direct RIXS. This differs from the case of elementary phonons that contribute in an indirect RIXS process, and we have shown that this contribution is negligible in our L-edge RIXS data of K_2IrCl_6. This intuitively highlights the particular character of the vibronic "phonon" sidebands of the spin-orbit exciton, for which resonance enhancement in direct RIXS applies to the electronic part of the wave function. A more detailed quantitative description of the experimental data reveals the limitations of the Franck-Condon scenario, in which the splitting is equal to the phonon energy. In particular, the large splitting of 44 meV observed at 10 K and even 51 meV at 300 K is hard to reconcile with an elementary phonon energy in K_2IrCl_6. In contrast, a theory describing the microscopic coupling of the spin-orbit entangled states to the e_g phonon mode with energy E_ p yields E_ p· (1+g^2/8) for the splitting of the first two strong RIXS peaks <cit.>. We thus conclude that the large splitting and its temperature dependence are fingerprints of the spin-orbital-lattice entangled character beyond the Franck-Condon approximation. Concerning L-edge RIXS, the well resolved vibronic sideband in K_2IrCl_6 stands out in transition-metal compounds in general as well as compared to other 5d^5 iridates. The splittings larger than 0.1 eV observed in Ir^4+ oxides <cit.> are caused by a noncubic crystal field. Vibronic sidebands of electronic excitations have been claimed in Ca_2Y_2Cu_5O_10 <cit.>, K_2RuCl_6 <cit.>, as well as in 5d^1 compounds based on the asymmetric line shape of the excitation from j = 3/2 to 1/2 <cit.>, but individual peaks have not been resolved. Typically, the subbands of vibronic excitations are not resolved in solids, even in optical data. However, the vibronic features are particularly sharp in σ_1(ω) of K_2ReCl_6 <cit.> and still very well resolved in K_2IrCl_6, even in RIXS. This can be attributed to the quasimolecular crystal structure with well separated MCl_6 octahedra. The antifluorite-type A_2MX_6 compounds thus offer an ideal platform to further investigate the role of vibronic effects in RIXS. § APPENDIX §.§ Crystal structure determination The x-ray diffraction experiments for a crystal of K_2IrCl_6 with dimensions of 0.072 × 0.059 × 0.049 mm^3 were performed on a Bruker D8 VENTURE Kappa Duo PHOTONIII diffractometer with a IμS micro-focus sealed tube (Mo K_α radiation, 0.71073 Å). Using an Oxford Cryostream Cooler800, we collected data between 290 and 120 K. The crystal structure was solved by direct methods (SHELXT) <cit.> and refined by full matrix least squares based on F^2 (SHELXL2019) <cit.>. Multi-scan absorption correction was applied. The anisotropic atomic displacement parameters U_ij are given in Table I. The data were obtained by first cooling down in steps from 290 to 120 K, while a second set of data points was measured upon heating back to 290 K. 
The x-ray crystallographic data have been deposited at the Inorganic Crystal Structure Database via the joint CCDC/FIZ Karlsruhe deposition service <cit.> with the deposition numbers CCDC/CSD 2367387 to 2367405 and can be obtained free of charge from https://www.ccdc.cam.ac.uk/structures. §.§ Two-peak fit of RIXS Phenomenologically, the apparent two-peak structure of the RIXS data can also be described by employing two oscillators with a Voigt line shape, see Fig. <ref>. The Voigt profile corresponds to a convolution of Gaussian and Lorentzian peaks. For the Gaussian part we use the instrumental width determined from the elastic line. These fits yield the energy E_0 of the dominant peak and the peak splitting Δ_ exp of 40 to 50 meV. Compared to the vibronic fit shown in Fig. <ref>, we find the same result for E_0 while the peak splitting Δ_ exp is slightly smaller than the phonon progression energy E_p. The agreement between the two-peak Voigt fits and the experimental data does not achieve the same quality as for the vibronic fits shown in Fig. <ref>. This applies in particular to the low-frequency side below 0.6 eV and reflects the extended tails of the Lorentzian contribution. §.§ Optical conductivity above the Mott gap We address the optical conductivity at higher energies to put the phonon-assisted features into context. Figure <ref> shows σ_1(ω) above the Mott gap as determined by ellipsometry. Above about 4 eV, we observe charge-transfer excitations between Ir and Cl sites. In the energy range from 2 to 3.5 eV we find Mott-Hubbard excitations across the Mott gap, i.e., intersite excitations |d_i^5 d_j^5⟩→ |d_i^4 d_j^6⟩ between the Ir sites i and j. Compared to on-site d-d excitations such as the spin-orbit exciton, the value of σ_1(ω) for these directly dipole-allowed excitations is about two orders of magnitude larger. Accordingly, the strong intersite excitations cover the on-site crystal-field excitations from t_2g to e_g states. These are peaking in the energy range from 2.5 to 3.5 eV in RIXS on K_2IrCl_6 <cit.> but are not visible in the optical data. Considering the additional energy cost of the on-site Coulomb repulsion U for intersite excitations across the Mott gap, the two lowest peaks at about 2.4 and 2.9 eV have to be attributed to t_2g states. Their energies can be rationalized in terms of local multiplets. This approach is corroborated by the observation that the spectral weight of such intersite excitations in many transition-metal compounds reflects spin and orbital correlations between nearest neighbors <cit.>. In this picture, the lowest |d_i^4 d_j^6⟩ excited states exhibit a t_2g^6 ^1A_1 multiplet on site j while site i hosts a t_2g^4 multiplet with either ^3T_1 or ^1T_2/^1E symmetry. In a t_2g-only picture, the corresponding excitation energies E(t_2g^4)+E(t_2g^6)-2E(t_2g^5) amount to U-3J_H and U-J_H <cit.>. The energy difference of 2 J_H is reduced to about 1.5 J_H upon taking into account the interaction with e_g orbitals <cit.>. The observed energy difference of 0.5 eV thus agrees with the expectation J_H ≈ 0.3-0.4 eV <cit.>. Finally, we want to point out the close relation to intersite excitations in d^1 compounds such as Mott insulating YTiO_3 <cit.>. There, the energy of the |d_i^0 d_j^2⟩ excited states is governed by two-electron t_2g^2 multiplets that show the same symmetry as the two-hole t_2g^4 states discussed above for K_2IrCl_6. 
Remarkably, the lowest intersite excitation in YTiO_3, involving the ^3T_1 multiplet, exhibits a two-peak structure that has been explained in terms of a Mott-Hubbard exciton <cit.> for which the energy is reduced by the Coulomb interaction of an electron-hole pair on nearest-neighbor sites as well as by magnetic and orbital correlations. The term Mott-Hubbard exciton is employed not only for truly bound excitons below the Mott gap but also for a nearly-bound resonance within the continuum, as observed in YTiO_3. Mott-Hubbard excitons have been discussed also for the 5d^5 iridates Na_2IrO_3 and Sr_2IrO_4 <cit.>. In this light, one may speculate that the pronounced shoulder at 2 eV of the peak at 2.5 eV in σ_1(ω) of K_2IrCl_6 also reflects nearest-neighbor Coulomb interactions. We thank K. Hopfer and H. Schwab for experimental support and the European Synchrotron Radiation Facility for providing beam time at ID20 and technical support. Furthermore, we acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through Project No. 277146847 – CRC 1238 (projects A02, B02, B03) as well as from the European Union – Next Generation EU - “PNRR - M4C2, investimento 1.1 - Fondo PRIN 2022” - “Superlattices of relativistic oxides” (ID 2022L28H97, CUP D53D23002260006). 99 WitczakKrempa14 W. Witczak-Krempa, G. Chen, Y. B. Kim, and L. Balents, Correlated Quantum Phenomena in the Strong Spin-Orbit Regime, Annu. Rev. Condens. Matter Phys. 5, 57 (2014). Rau16 J. G. Rau, E. Kin-Ho Lee, and H.-Y. Kee, Spin-Orbit Physics Giving Rise to Novel Phases in Correlated Systems: Iridates and Related Materials, Annu. Rev. Condens. Matter Phys. 7, 195 (2016). Schaffer16 R. Schaffer, E. Kin-Ho Lee, B.-J. Yang, and Y. B. Kim, Recent progress on correlated electron systems with strong spin-orbit coupling, Rep. Prog. Phys. 79, 094504 (2016). Takagi19 H. Takagi, T. Takayama, G. Jackeli, G. Khaliullin, and S. E. Nagler, Concept and realization of Kitaev quantum spin liquids, Nat. Rev. Phys. 1, 264 (2019). Streltsov20 S. V. Streltsov and D. I. Khomskii, Jahn-Teller Effect and Spin-Orbit Coupling: Friends or Foes?, Phys. Rev. X 10, 031043 (2020). Takayama21 T. Takayama, J. Chaloupka, A. Smerald, G. Khaliullin, and H. Takagi, Spin–Orbit-Entangled Electronic Phases in 4d and 5d Transition-Metal Compounds, J. Phys. Soc. Jpn. 90, 062001 (2021). Khomskii21 D. I. Khomskii and S. V. Streltsov, Orbital Effects in Solids: Basics, Recent Progress, and Opportunities, Chem. Rev. 121, 2992 (2021). Jackeli09 G. Jackeli and G. Khaliullin, Mott Insulators in the Strong Spin-Orbit Coupling Limit: From Heisenberg to a Quantum Compass and Kitaev Models, Phys. Rev. Lett. 102, 017205 (2009). Chun15 S.H. Chun, J.-W. Kim, J. Kim, H. Zheng, C.C. Stoumpos, C.D. Malliakas, J.F. Mitchell, K. Mehlawat, Y. Singh, Y. Choi, T. Gog, A. Al-Zein, M. Moretti Sala, M. Krisch, J. Chaloupka, G. Jackeli, G. Khaliullin, and B.J. Kim, Direct evidence for dominant bond-directional interactions in a honeycomb lattice iridate Na_2IrO_3, Nat. Phys. 11, 462 (2015). Magnaterra23 M. Magnaterra, K. Hopfer, Ch. J. Sahle, M. Moretti Sala, G. Monaco, J. Attig, C. Hickey, I.-M. Pietsch, F. Breitner, P. Gegenwart, M. H. Upton, Jungho Kim, S. Trebst, P. H. M. van Loosdrecht, J. van den Brink, and M. Grüninger, RIXS observation of bond-directional nearest-neighbor excitations in the Kitaev material Na_2IrO_3, arXiv:2301.08340 Moretti14CEF M. Moretti Sala, S. Boseggia, D. F. McMorrow, and G. 
Monaco, Resonant X-Ray Scattering and the j_ eff = 1/2 Electronic Ground State in Iridate Perovskites, Phys. Rev. Lett. 112, 026403 (2014). Liu12 X. Liu, V. M. Katukuri, L. Hozoi, W.-G. Yin, M. P. M. Dean, M. H. Upton, J. Kim, D. Casa, A. Said, T. Gog, T. F. Qi, G. Cao, A. M. Tsvelik, J. van den Brink, and J. P. Hill, Testing the Validity of the Strong Spin-Orbit-Coupling Limit for Octahedrally Coordinated Iridate Compounds in a Model System Sr_3CuIrO_6, Phys. Rev. Lett. 109, 157401 (2012). Gretarsson13 H. Gretarsson J. P. Clancy, X. Liu, J. P. Hill, E. Bozin, Y. Singh, S. Manni, P. Gegenwart, J. Kim, A. H. Said, D. Casa, T. Gog, M. H. Upton, H.-S. Kim, J. Yu, V. M. Katukuri, L. Hozoi, J. van den Brink, and Y.-J. Kim, Crystal-Field Splitting and Correlation Effect on the Electronic Structure of A_2IrO_3, Phys. Rev. Lett. 110, 076402 (2013). Rossi17 M. Rossi, M. Retegan, C. Giacobbe, R. Fumagalli, A. Efimenko, T. Kulka, K. Wohlfeld, A. I. Gubanov, and M. Moretti Sala, Possibility to realize spin-orbit-induced correlated physics in iridium fluorides, Phys. Rev. B 95, 235161 (2017). Aczel19 A. A. Aczel, J. P. Clancy, Q. Chen, H. D. Zhou, D. Reigi- Plessis, G. J. MacDougall, J. P. C. Ruff, M. H. Upton, Z. Islam, T. J. Williams, S. Calder, and J.-Q. Yan, Revisiting the Kitaev material candidacy of Ir^4+ double perovskite iridates, Phys. Rev. B 99, 134417 (2019). Revelli19 A. Revelli, C. C. Loo, D. Kiese, P. Becker, T. Fröhlich, T. Lorenz, M. Moretti Sala, G. Monaco, F. L. Buessen, J. Attig, M. Hermanns, S. V. Streltsov, D. I. Khomskii, J. van den Brink, M. Braden, P. H. M. van Loosdrecht, S. Trebst, A. Paramekanti, and M. Grüninger, Spin-orbit entangled j = 1/2 moments in Ba_2CeIrO_6: A frustrated fcc quantum magnet, Phys. Rev. B 100, 085139 (2019). Ruiz21 A. Ruiz, N. P. Breznay, M. Li, I. Rousochatzakis, A. Allen, I. Zinda, V. Nagarajan, G. Lopez, Z. Islam, M. H. Upton, J. Kim, A. H. Said, X.-R. Huang, T. Gog, D. Casa, R. J. Birgeneau, J. D. Koralek, J. G. Analytis, N. B. Perkins, and A. Frano, Magnon-spinon dichotomy in the Kitaev hyperhoneycomb β-Li_2IrO_3, Phys. Rev. B 103, 184404 (2021). delaTorre21 A. de la Torre, B. Zager, F. Bahrami, M. DiScala, J. R. Chamorro, M. H. Upton, G. Fabbris, D. Haskel, D. Casa, T. M. McQueen, F. Tafti, and K. W. Plumb, Enhanced hybridization in the electronic ground state of the intercalated honeycomb iridate Ag_3LiIr_2O_6, Phys. Rev. B 104, L100416 (2021). Jin22 W. Jin, S. H. Chun, J. Kim, D. Casa, J. P. C. Ruff, C. J. Won, K. D. Lee, N. Hur, and Y.-J. Kim, Magnetic excitations in the double-perovskite iridates La_2MIrO_6 (M = Co, Ni, and Zn) mediated by 3d-5d hybridization, Phys. Rev. B 105, 054419 (2022). Magnaterra23Ti M. Magnaterra, M. Moretti Sala, G. Monaco, P. Becker, M. Hermanns, P. Warzanowski, T. Lorenz, D. I. Khomskii, P. H. M. van Loosdrecht, J. van den Brink, and M. Grüninger, RIXS interferometry and the role of disorder in the quantum magnet Ba_3Ti_3-xIr_xO_9, Phys. Rev. Research 5, 013167 (2023). delaTorre23 A. de la Torre, B. Zager, F. Bahrami, M. H. Upton, J. Kim, G. Fabbris, G.-H. Lee, W. Yang, D. Haskel, F. Tafti, and K. W. Plumb, Momentum-independent magnetic excitation continuum in the honeycomb iridate H_3LiIr_2O_6, Nat. Commun. 14, 5018 (2023). Kim12 Jungho Kim, D. Casa, M. H. Upton, T. Gog, Young-June Kim, J. F. Mitchell, M. van Veenendaal, M. Daghofer, J. van den Brink, G. Khaliullin, and B. J. 
Kim, Magnetic Excitation Spectra of Sr_2IrO_4 Probed by Resonant Inelastic X-Ray Scattering: Establishing Links to Cuprate Superconductors, Phys.Rev. Lett. 108, 177003 (2012). Kim14 J. Kim, M. Daghofer, A.H. Said, T. Gog, J. van den Brink, G. Khaliullin, and B.J. Kim, Excitonic quasiparticles in a spin–orbit Mott insulator, Nat. Commun. 5, 4453 (2014). Kim12327 Jungho Kim, A. H. Said, D. Casa, M. H. Upton, T. Gog, M. Daghofer, G. Jackeli, J. van den Brink, G. Khaliullin, and B. J. Kim, Large Spin-Wave Energy Gap in the Bilayer Iridate Sr_3Ir_2O_7: Evidence for Enhanced Dipolar Interactions Near the Mott Metal-Insulator Transition, Phys. Rev. Lett. 109, 157402 (2012). Moretti15 M. Moretti Sala, V. Schnells, S. Boseggia, L. Simonelli, A. Al-Zein, J. G. Vale, L. Paolasini, E. C. Hunter, R. S. Perry, D. Prabhakaran, A. T. Boothroyd, M. Krisch, G. Monaco, H. M. Ronnow, D. F. McMorrow, and F. Mila, Evidence of quantum dimer excitations in Sr_3Ir_2O_7, Phys. Rev. B 92, 024405 (2015). Lu18 X. Lu, P. Olalde-Velasco, Y. Huang, V. Bisogni, J. Pelliciari, S. Fatale, M. Dantz, J. G. Vale, E. C. Hunter, J. Chang, V. N. Strocov, R. S. Perry, M. Grioni, D. F. McMorrow, H. M. Ronnow, and T. Schmitt, Dispersive magnetic and electronic excitations in iridate perovskites probed by oxygen K-edge resonant inelastic x-ray scattering, Phys. Rev. B 97, 041102(R) (2018). Bertin24 A. Bertin, L. Kiefer, P. Becker, L Bohatý, and M. Braden, Rotational phase transitions in antifluorite-type osmate and iridate compounds, J. Phys.: Condens. Matter 36, 245402 (2024). Khan19 N. Khan, D. Prishchenko, Y. Skourski, V. G. Mazurenko,and A. A. Tsirlin, Cubic symmetry and magnetic frustration on the fcc spin lattice in K_2IrCl_6, Phys. Rev. B 99, 144425 (2019). Reig20 D. Reig-i-Plessis, T. A. Johnson, K. Lu, Q. Chen, J. P. C. Ruff, M. H. Upton, T. J. Williams, S. Calder, H. D. Zhou, J. P. Clancy, A. A. Aczel, and G. J. MacDougall, Structural, electronic, and magnetic properties of nearly ideal J_eff = 1/2 iridium halides, Phys. Rev. Mater. 4, 124407 (2020). Khan21 N. Khan, D.Prishchenko, M. H. Upton, V. G. Mazurenko, and A. A. Tsirlin, Towards cubic symmetry for Ir^4+: Structure and magnetism of the antifluorite K_2IrBr_6, Phys. Rev. B 103, 125158 (2021). Lee22 S. Lee, B. H. Kim, M.-J. Seong, and K.-Y. Choi, Noncubic local distortions and spin-orbit excitons in K_2IrCl_6, Phys. Rev. B 105, 184433 (2022). Meggle23 F. Meggle, M. Mikuta, M. Saule, V. Hermann, N. Khan, A. A. Tsirlin, and C. A. Kuntscher Optical signatures of the J_eff = 1/2 state in Ir^4+ halides, Phys. Rev. B 107, 235142 (2023). Bhaskaran21 L. Bhaskaran, A. N. Ponomaryov, J. Wosnitza, N. Khan, A. A. Tsirlin, M. E. Zhitomirsky, and S. A. Zvyagin, Antiferromagnetic resonance in the cubic iridium hexahalides (NH_4)_2IrCl_6 and K_2IrCl_6, Phys. Rev. B 104, 184404 (2021). Balla20 P. Balla, Y. Iqbal, and K. Penc, Degenerate manifolds, helimagnets, and multi-Q chiral phases in the classical Heisenberg antiferromagnet on the face-centered-cubic lattice, Phys. Rev. Res. 2, 043278 (2020). Iwahara23 N.Iwahara and W. Furukawa, Vibronic effect on resonant inelastic x-ray scattering in cubic iridium hexahalides, Phys. Rev. B 108, 075136 (2023). Plotnikova16 E. M. Plotnikova, M. Daghofer, J. van den Brink, and K. Wohlfeld, Jahn-Teller Effect in Systems with Strong On-Site Spin-Orbit Coupling, Phys. Rev. Lett. 116, 106401 (2016). Moretti14 M. Moretti Sala, K. Ohgushi, A. Al-Zein, Y. Hirata, G. Monaco, and M. 
Krisch, CaIrO_3: A Spin-Orbit Mott Insulator Beyond the j_ eff=1/2 Ground State, Phys. Rev. Lett. 112, 176402 (2014). Warzanowski20 P. Warzanowski, N. Borgwardt, K. Hopfer, J. Attig, T. C. Koethe, P. Becker, V. Tsurkan, A. Loidl, M. Hermanns, P. H. M. van Loosdrecht, and M. Grüninger, Multiple spin-orbit excitons and the electronic structure of α-RuCl_3, Phys. Rev. Res. 2, 042007(R) (2020). Lee21 J.-H. Lee, Y. Choi, S.-H. Do, B. H. Kim, M.-J. Seong, and K.-Y. Choi, Multiple spin-orbit excitons in α-RuCl_3 from bulk to atomically thin layers, npj Quantum Mater. 6, 43 (2021). Vale19 J. G. Vale, C. D. Dashwood, E. Paris, L. S. I. Veiga, M. Garcia-Fernandez, A. Nag, A. Walters, K.-J. Zhou, I.-M. Pietsch, A. Jesche, P. Gegenwart, R. Coldea, T. Schmitt, and D. F. McMorrow, High-resolution resonant inelastic x-ray scattering study of the electron-phonon coupling in honeycomb α-Li_2IrO_3, Phys. Rev. B 100, 224303 (2019). Agrestini24 S. Agrestini, F. Borgatti, P. Florio, J. Frassineti, D. Fiore Mosca, Q. Faure, B. Detlefs, C. J. Sahle, S. Francoual, J. Choi, M. Garcia-Fernandez, K. -J. Zhou, V. F. Mitrovic, P. M. Woodward, G. Ghiringhelli, C. Franchini, F. Boscherini, S. Sanna, and M. Moretti Sala, The origin of magnetism in a supposedly nonmagnetic osmium oxide, arXiv:2401.12035, Phys. Rev. Lett. (in press, 2024). Revelli20 A. Revelli, M. Moretti Sala, G. Monaco, C. Hickey, P. Becker, F. Freund, A. Jesche, P. Gegenwart, T. Eschmann, F. L. Buessen, S. Trebst, P. H. M. van Loosdrecht, J. van den Brink, and M. Grüninger, Fingerprints of Kitaev physics in the magnetic excitations of honeycomb iridates, Phys. Rev. Research 2, 043094 (2020). Lee14 J. J. Lee, B. Moritz, W. S. Lee, M. Yi, C. J. Jia, A. P. Sorini, K. Kudo, Y. Koike, K. J. Zhou, C. Monney, V. Strocov, L. Patthey, T. Schmitt, T. P. Devereaux, and Z. X. Shen, Charge-orbital-lattice coupling effects in the dd excitation profile of one-dimensional cuprates, Phys. Rev. B 89, 041104(R) (2014). Iwahara23Ru N.Iwahara and S. Shikano, Vibronic excitations in resonant inelastic x-ray scattering spectra of K_2RuCl_6, Phys. Rev. Res. 5, 023051 (2023). Takahashi21 H. Takahashi, H. Suzuki, J. Bertinshaw, S. Bette, C. Mühle, J. Nuss, R. Dinnebier, A. Yaresko, G. Khaliullin, H. Gretarsson, T. Takayama, H. Takagi, and B. Keimer, Nonmagnetic J = 0 State and Spin-Orbit Excitations in K_2RuCl_6, Phys. Rev. Lett. 127, 227201 (2021). Frontini24 F. I. Frontini, G. H. J. Johnstone, N. Iwahara, P. Bhattacharyya, N. A. Bogdanov, L. Hozoi, M. H. Upton, D. M. Casa, D. Hirai, and Y.-J. Kim, Spin-orbit-lattice entangled state in A_2MgReO_6 (A = Ca, Sr, Ba) revealed by resonant inelastic x-ray scattering, arXiv:2311.01621, Phys. Rev. Lett. (in press, 2024). Hennies10 F. Hennies, A. Pietzsch, M. Berglund, A. Föhlisch, Th. Schmitt, V. Strocov, H. O. Karlsson, J. Andersson, and J.-E. Rubensson, Resonant Inelastic Scattering Spectra of Free Molecules with Vibrational Resolution, Phys. Rev. Lett. 104, 193002 (2010). Henderson89 B. Henderson and G. F. Imbusch, Optical spectroscopy of inorganic solids (Oxford, 1989). Figgis B. N. Figgis and M. A. Hitchman, Ligand Field Theory and its Applications (Wiley, 1999). Ballhausen C. F. Ballhausen, Introduction to Ligand Field Theory (McGraw-Hill, New York, 1962). Rueckamp05 R. Rückamp, E. Benckiser, M. W. Haverkort, H. Roth, T. Lorenz, A. Freimuth, L. Jongen, A. Möller, G. Meyer, P. Reutler, B. Büchner, A. Revcolevschi, S.-W. Cheong, C. Sekar, G. Krabbes, and M. 
Grüninger, Optical study of orbital excitationsin transition-metal oxides, New J. Phys. 7, 144 (2005). Warzanowski24 P. Warzanowski, M. Magnaterra, G. Schlicht, Q. Faure, Ch. J. Sahle, P. Becker, L. Bohatý, M. Moretti Sala, G. Monaco, M. Hermanns, P.H.M. van Loosdrecht, and M. Grüninger, Spin-orbit coupling in a half-filled t_2g shell: the case of 5d^3 K_2ReCl_6, Phys. Rev. B 109, 155149 (2024). Warzanowski23 P. Warzanowski, M. Magnaterra, P. Stein, G. Schlicht, Q. Faure, Ch. J. Sahle, T. Lorenz, P. Becker, L. Bohatý, M. Moretti Sala, G. Monaco, P. H. M. van Loosdrecht, and M. Grüninger, Electronic excitations in 5d^4 J = 0 Os^4+ halides studied by resonant inelastic x-ray scattering and optical spectroscopy, Phys. Rev. B 108, 125120 (2023). Cooke59 A. H. Cooke, R. Lazenby, F. R. McKim, J. Owen, and W. P. Wolf, Exchange interactions in antiferromagnetic salts of iridium II. Magnetic susceptibility measurements, Proc. R. Soc. London Ser. A 250, 97 (1959). Moretti13 M. Moretti Sala, C. Henriquet, L. Simonelli, R. Verbeni, and G. Monaco, High energy-resolution set-up for Ir L_3 edge RIXS experiments, J. Electron Spectrosc. Relat. Phenom. 188, 150 (2013). Moretti18 M. Moretti Sala, K. Martel, C. Henriquet, A. Al Zein, L. Simonelli, Ch.J. Sahle, H. Gonzalez, M.-C. Lagier, C. Ponchut, S. Huotari, R. Verbeni, M. Krisch, and G. Monaco, A high-energy-resolution resonant inelastic X-ray scattering spectrometer at ID20 of the European Synchrotron Radiation Facility, J. Synchrotron Rad. 25, 580 (2018). Azzam87 R. M. A. Azzam and N. M. Bashara, Ellipsometry and Polarized Light (Elsevier, New York, 1987). Lynn78 J. W. Lynn, H. H. Patterson, G. Shirane, and R. G. Wheeler, Soft rotary mode and structural phase transition in K_2ReCl_6, Solid State Commun. 27, 859 (1978). Mintz79 D. Mintz, R. L. Armstrong, B. M. Powell, and W. J. L. Buyers, Soft rotary mode in the antifluorite crystal K_2OsCl_6, Phys. Rev. B 19, 448, (1979). Oleary70 G. P. O'Leary and R. G. Wheeler, Phase Transitions and Soft Librational Modes in Cubic Crystals, Phys. Rev. B 1, 4409 (1970). Schweiss94 P. Schweiss, W. Reichardt, M. Braden, G. Collin, G. Heger, H. Claus, and A. Erb, Static and dynamic displacements in RBa_2Cu_3O_7-δ (R=Y, Ho; δ = 0.05, 0.5): A neutron-diffraction study on single crystals, Phys. Rev. B 49, 1387 (1994). Sawatzky89 G.A. Sawatzky, Testing Fermi-liquid models, Nature 342, 480 (1989). Hitchman79 M. A. Hitchman and P. J. Cassidy, Polarized Crystal Spectrum of Bis(methylphenethylammonium) Tetrachlorocuprate(II): Analysis of the Energies, Vibrational Fine Structure, and Temperature Dependence of the “d-d” Transitions of the Planar CuCl_4^2- Ion, Inorg. Chem. 18, 1745 (1979). Adams63 D.M. Adams and H.A. Gebbie, Absorption spectra of some inorganic complex halides by far infra-red interferometry, Spectrochim. Acta 19, 925 (1963). Bottger72 G.L. Bottger and A.E. Salwin, The vibrational spectra of alkali salts of hexahaloiridates, Spectrochim. Acta 28A, 925 (1972). Parker98 S. F. Parker and J. B. Forsyth. K_2MCl_6 (M = Pt, Ir), location of the silent modes and forcefields, J. Chem. Soc., Faraday Trans. 94, 1111 (1998). Keiderling75 T.A. Keiderling, P.J. Stephens, S.B. Piepho, J.L. Slater, P.N. Schatz, Infrared absorption and magnetic circular dichroism of Cs_2ZrCl_6:Ir^4+, Chem. Phys. 11, 343 (1975). Yoo86 R. K. Yoo and T. A. Keiderling, Intraconfigurational absorption spectroscopy of IrCl_6^2- and IrBr_6^2- in A_2MX_6-type host crystals, Chem. Phys. 108, 317 (1986). Pross74 L. Pross, K. Rössler, and H. J. 
Schenk, Optical studies on Hexahalorhenates - I Low temperature absorption spectra of K_2[ReCl_6] single crystals, J. inorg. nucl. Chem. 36, 317 (1974). Yoo87 R.K. Yoo, S.C. Lee, B.A. Kozikowski and T.A. Keiderling, Intraconfigurational absorption spectroscopy of ReCl^2-_6 in various A_2MCl_6 host crystals, Chem. Phys. 117, 237 (1987). Bettinelli88 M. Bettinelli and C. D. Flint, Magnon sidebands and cooperative absorptions in K_2ReCl_6 and Cs_2ReCl_6, J. Phys. C: Solid State Phys. 21, 5499 (1988). Kozikowski83 B. A. Kozikowski and T. A. Keiderling, Intraconfigurational absorption spectroscopy of Os^4+ ion in K_2SnCl_6 and K_2OsCl_6 crystals, J. Phys. Chem. 87, 4630 (1983). Huang50 K. Huang and A. Rhys, Theory of light absorption and non-radiative transitions in F-centres, Proc. R. Soc. Lond. A 204, 406 (1950). Ament11EPL L. J. P. Ament, M. van Veenendaal, and J. van den Brink, Determining the electron–phonon coupling strength from Resonant Inelastic X-ray Scattering at transition metal L-edges, EPL 95, 27008 (2011). Geondzhian20 A. Geondzhian and K. Gilmore, Generalization of the Franck-Condon model for phonon excitations by resonant inelastic x-ray scattering, Phys. Rev. B 101, 214307 (2020). Gilmore23 K. Gilmore, Quantifying vibronic coupling with resonant inelastic X-ray scattering, Phys. Chem. Chem. Phys. 25, 217 (2023). Braicovich20 L. Braicovich, M. Rossi, R. Fumagalli, Y. Peng, Y. Wang, R. Arpaia, D. Betto, G.M. De Luca, D. Di Castro, K. Kummer, M. Moretti Sala, M. Pagetti, G. Balestrino, N.B. Brookes, M. Salluzzo, S. Johnston, J. van den Brink, and G. Ghiringhelli, Determining the electron-phonon coupling in superconducting cuprates by resonant inelastic x-ray scattering: Methods and results on Nd_1+xBa_2-xCu_3O_7-δ, Phys. Rev. Research 2, 023231 (2020). Black75 A. M. Black and C. D. Flint, Jahn-Teller effect in the Γ_8(^2T_2g, t_2g^3) state of ReBr_6^2-, J. Chem. Soc., Faraday Trans. 2, 71, 1871 (1975). Liu19 H. Liu and G. Khaliullin, Pseudo-Jahn-Teller Effect and Magnetoelastic Coupling in Spin-Orbit Mott Insulators, Phys. Rev. Lett. 122, 057203 (2019). Ament11 L.J.P. Ament, M. van Veenendaal, T.P. Devereaux, J.P. Hill, and J. van den Brink, Resonant inelastic x-ray scattering studies of elementary excitations, Rev. Mod. Phys. 83, 705 (2011). Devereaux16 T. P. Devereaux, A. M. Shvaika, K. Wu, K. Wohlfeld, C. J. Jia, Y. Wang, B. Moritz, L. Chaix, W.-S. Lee, Z.-X. Shen, G. Ghiringhelli, and L. Braicovich, Directly Characterizing the Relative Strength and Momentum Dependence of Electron-Phonon Coupling Using Resonant Inelastic X-Ray Scattering, Phys. Rev. X 6, 041019 (2016). Johnston15 S. Johnston, C. Monney, V. Bisogni, K.-J. Zhou, R. Kraus, G. Behr, V.N. Strocov, J. Málek, S.-L. Drechsler, J. Geck, Th. Schmitt, and J. van den Brink, Electron-lattice interactions strongly renormalize the charge-transfer energy in the spin-chain cuprate Li_2CuO_2, Nat. Commun. 7, 10563 (2015). Benckiser08 E. Benckiser, R. Rückamp, T. Möller, T. Taetz, A. Möller, A. A. Nugroho, T. T. M. Palstra, G. S. Uhrig, and M. Grüninger, Collective orbital excitations in orbitally ordered YVO_3 and HoVO_3, New J. Phys. 10, 053027 (2008). Sheldrick15A G.M. Sheldrick, SHELXT – Integrated space-group and crystal-structure determination, Acta Cryst. A 71, 3 (2015). Sheldrick15C G.M. Sheldrick, Crystal structure refinement with SHELXL, Acta Cryst. C 71, 3 (2015). Database Joint CCDC/FIZ Karlsruhe deposition service, www.ccdc.cam.ac.uk/discover/news/2018-07-new-joint-services/ Khaliullin04 G. Khaliullin, P. 
Horsch, and A. M. Oleś, Theory of optical spectral weights in Mott insulators with orbital degrees of freedom, Phys. Rev. B 70, 195103 (2004). Oles05 A. M. Oleś, G. Khaliullin, P. Horsch, and L. F. Feiner, Fingerprints of spin-orbital physics in cubic Mott insulators: Magnetic exchange interactions and optical spectral weights, Phys. Rev. B 72, 214431 (2005). Fang03 Z. Fang, N. Nagaosa, and K. Terakura, Anisotropic optical conductivities due to spin and orbital ordering in LaVO_3 and YVO_3: First-principles studies, Phys. Rev. B 67, 035101 (2003). Kovaleva04 N. N. Kovaleva, A. V. Boris, C. Bernhard, A. Kulakov, A. Pimenov, A. M. Balbashov, G. Khaliullin, and B. Keimer, Spin-Controlled Mott-Hubbard Bands in LaMnO_3 Probed by Optical Ellipsometry, Phys. Rev. Lett. 93, 147204 (2004). Lee05 J. S. Lee, M. W. Kim, and T. W. Noh, Optical excitations of transition-metal oxides under the orbital multiplicity effects, New J. Phys. 7, 147 (2005). Goessling08 A. Gössling, R. Schmitz, H. Roth, M.W. Haverkort, T. Lorenz, J.A. Mydosh, E. Müller-Hartmann, and M. Grüninger, Mott-Hubbard exciton in the optical conductivity of YTiO_3 and SmTiO_3, Phys. Rev. B 78, 075122 (2008). Reul12 J. Reul, A. A. Nugroho, T. T. M. Palstra, and M. Grüninger, Probing orbital fluctuations in RVO_3 (R = Y, Gd, or Ce) by ellipsometry, Phys. Rev. B 86, 125128 (2012). Vergara22 I. Vergara, M. Magnaterra, P. Warzanowski, J. Attig, S. Kunkemöller, D.I. Khomskii, M. Braden, M. Hermanns, and M. Grüninger, Spin-orbit coupling and crystal-field splitting in Ti-doped Ca_2RuO_4 studied by ellipsometry, Phys. Rev. B 106, 085103 (2022). Zhang17 G. Zhang and E. Pavarini, Mott transition, spin-orbit effects, and magnetism in Ca_2RuO_4, Phys. Rev. B 95, 075145 (2017). KKM14 B. H. Kim, G. Khaliullin, and B. I. Min, Electronic excitations in the edge-shared relativistic Mott insulator: Na_2IrO_3, Phys. Rev. B 89, 081109(R) (2014). Winter17 S. M. Winter, A. A. Tsirlin, M. Daghofer, J. van den Brink, Y. Singh, P. Gegenwart, and R. Valentí, , Models and materials for generalized Kitaev magnetism, J. Phys.: Condens. Matter 29, 493002 (2017). Alpichshev15 Z. Alpichshev, F. Mahmood, G. Cao, and N. Gedik, Confinement-Deconfinement Transition as an Indication of Spin-Liquid-Type Behavior in Na_2IrO_3, Phys. Rev. Lett. 114, 017203 (2015). Mehio23 O. Mehio, X. Li, H. Ning, Z. Lenarčič, Y. Han, M. Buchhold, Z. Porter, N. J. Laurita, S. D. Wilson, and D. Hsieh, A Hubbard exciton fluid in a photo-doped antiferromagnetic Mott insulator, Nat. Phys. 19, 1876 (2023).
http://arxiv.org/abs/2407.13287v1
20240718084055
On the Logical and Algebraic Aspects of Reasoning with Formal Contexts
[ "Prosenjit Howlader", "Churn-Jung Liau" ]
math.LO
[ "math.LO", "03Bxx 03B45" ]
http://arxiv.org/abs/2407.13383v1
20240718104041
NeuroPlug: Plugging Side-Channel Leaks in NPUs using Space Filling Curves
[ "Nivedita Shrivastava", "Smruti R. Sarangi" ]
cs.CR
[ "cs.CR" ]
NeuroPlug: Plugging Side-Channel Leaks in NPUs using Space Filling Curves Nivedita Shrivastava, Dept. of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, India, nivedita.shrivastava@ee.iitd.ac.in Smruti R. Sarangi, Dept. of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, India, srsarangi@cse.iitd.ac.in July 22, 2024 § ABSTRACT Securing deep neural networks (DNNs) from side-channel attacks is an important problem today, given the substantial investment of time and resources in acquiring the raw data and training complex models. All published countermeasures (CMs) add noise N to a signal X (a parameter of interest such as the net memory traffic that is leaked). The adversary observes X+N; we shall show that it is easy to filter this noise out using targeted measurements, statistical analyses and different kinds of reasonably-assumed side information. We present a novel CM that is immune to these attack methodologies mainly because we use a different formulation, CX+N. We introduce a multiplicative variable C that naturally arises from feature map compression; it plays a key role in obfuscating the parameters of interest. Our approach is based on mapping all the computations to a 1-D space filling curve and then performing a sequence of tiling, compression and binning-based obfuscation operations. We follow up by proposing a theoretical framework based on Mellin transforms that allows us to accurately quantify the size of the search space as a function of the noise we add and the side information that an adversary possesses. The security guarantees provided by NeuroPlug are validated using a battery of statistical and information theory-based tests. We also demonstrate a substantial performance enhancement of 15% compared to the closest competing work. § INTRODUCTION Deep neural network (DNN) models embody a significant amount of intellectual property (IP). Creating such models requires a massive investment in terms of financial resources and human effort: data collection via mechanical turks, model design and rigorous training <cit.>. Attackers thus have a very strong incentive to steal the model parameters associated with these DNN models, such as the nature/number of layers, details of weights, etc. <cit.>. Such security breaches can potentially compromise the privacy, security and integrity of large autonomous systems. For example, an attacker can make a small, human-invisible change to lane markings or traffic lights that is sufficiently powerful to confuse lane detection and traffic light sensing models, respectively <cit.>.
Possessing even partial knowledge about the inner workings of a DNN model significantly enhances the feasibility of orchestrating successful attacks – adversarial attacks <cit.>, membership inference attacks <cit.> and bit-flip attacks <cit.>. For the last 5 years, attackers and defenders have been locked in an arms race to design more potent attacks and countermeasures (CMs), respectively. The attacker relies on a memory/timing based side channel that leaks a signal X, which carries a lot of information about the aforementioned model parameters. X could be the number of off-chip bytes transferred (volume), computational time, the number of bytes read between a write→read to the same memory address (read-write distance), the number of times a byte is read (count), or any combination thereof. Existing countermeasures (CMs) present Y=X+N to the attacker, where N is the added noise. State-of-the-art attacks <cit.> look at a subset of these four variables – count, distance, time and volume (CDTV) – and try to reduce the search space of X by collecting multiple measurements and using specially crafted inputs. On the other hand, CMs use a combination of shuffling data, adding dummy accesses and on-chip buffering to hide the noise parameter. Shortcomings of Current CMs The most relevant, state-of-the-art CMs are shown in Table <ref> (details in Section <ref>). Given that X is a constant for a given layer as the model parameters will remain the same, repeated measurements with possibly different inputs and possibly across multiple devices shall yield the distribution of N. If the mean of N is 0 or the minimum is 0, then the noise gets effectively filtered out (statistical (SS) attack)(<cit.>). Assume that this is not the case, then any measurable statistical metric is a non-zero constant N_c, which is typically a hardwired value <cit.>. Any such hardwired value can be leaked by a malicious insider <cit.>. It also violates the Kerckhoff's principle <cit.>, which clearly states that security cannot be derived from hiding aspects of the design. Such Kerckhoff (KK) attacks can yield all the constants that characterize the parameters of the noise distribution and this leads to an estimate of X. Now, assume that despite statistical and Kerckhoff attacks, our search space is still large. We can then glean some knowledge from sister neural networks (belonging to the same genre) and try to make educated guesses. For instance, layer sizes are mostly multiples of squares(see<cit.>. Such numbers are formally known as non-square-free (NSQF) numbers <cit.>. With other reasonable assumptions regarding layer skewness, our search space can reduce drastically. Let us refer to this as a side-information (SI) attack. Using these three attacks – SS, KK and SI – we were able to break all the CMs mentioned in Table <ref>. Breaking these approaches required a maximum of 60,000 runs. If the real fabricated NPUs were available (7nm technology), we estimate the process to take a maximum of 8-10 hours. Using layer-based obfuscation approaches (dynamically fuse/split layers) <cit.> did not prove to be very useful other than enlarging the search space moderately, primarily because it does not fundamentally obfuscate the CDTV metrics. Our Insights We introduce a new degree of freedom by augmenting the X+N formulation with a multiplicative random variable C. The new formulation becomes an affine transform of the form CX+N. 
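The filtering argument can be made concrete with a minimal simulation sketch; the code below is ours (not from the paper), the parameter values are arbitrary, and it only illustrates why order statistics recover X under the X+N formulation but not under CX+N.
[caption=Sketch: order statistics versus an affine observable,captionpos=b,label=lst:ss-sketch]
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#define RUNS 100000
#define NOISE_MAX 512   /* additive noise N drawn from [0, NOISE_MAX) */

int main(void) {
    srand(42);
    long X = 22400;                 /* hypothetical true per-layer signal */
    long min_add = LONG_MAX, min_aff = LONG_MAX;
    for (int i = 0; i < RUNS; i++) {
        long N = rand() % NOISE_MAX;
        double C = 0.5 + 0.5 * ((double)rand() / RAND_MAX);  /* compression ratio in (0.5, 1] */
        long y_add = X + N;                  /* what an X+N countermeasure exposes */
        long y_aff = (long)(C * X) + N;      /* the affine formulation CX+N */
        if (y_add < min_add) min_add = y_add;
        if (y_aff < min_aff) min_aff = y_aff;
    }
    /* With min(N) = 0, the minimum over runs converges to X in the additive
       case, but only to (min C)*X in the affine case, which is consistent
       with a whole family of (C, X) pairs. */
    printf("min(X+N)  = %ld  (true X = %ld)\n", min_add, X);
    printf("min(CX+N) = %ld  (does not pin down X)\n", min_aff);
    return 0;
}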
We shall show in Section <ref> that for our proposed design , this formulation increases the search space by roughly 10^70 times after filtering out as much of the noise as we can (using the SS, KK and SI attacks). This is the bedrock of our design. Our Solution: This factor C naturally arises from data compression in our design and can further be augmented by adding custom noise. Insofar as the adversary is concerned, all the CDTV metrics are affected to different extents because of compression – the effect is equivalent to reading and writing CX bytes. To enable data compression as an obfuscatory tool, we need to represent a computation in a DNN differently. This choice segues into the novel notion of using 1-D space filling curves (SFCs) <cit.> to map a complex 3-D space of DNN computations to an equivalent 1-D space. This 1-D curve conceptually snakes through all the computation layers of a DNN (possibly with skip connections) and allows us to create contiguous chunks of bytes (tiles), compress them and bin them. A tile can be split across bins and we can also leave space within a bin empty (contribution to the N in CX+N). Now, C is tied to the plaintext. Many may argue that given that the plaintext is known, it is possible to estimate C, at least in the first few layers. In the later layers due to both aleatoric (inherent) and epistemic (lack of knowledge of model parameters) uncertainty, estimating C is difficult <cit.>. Hence, in the first few layers, we need to enhance the uncertainty in the noise N and also add some random text in the data to obfuscate C. Plaintext-aided encryption is known to work well with such safeguards <cit.>. Now, to not run afoul of the SS and KK attacks, we make these dummy text snippets and all the parameters that determine the nature of the noise distribution N a part of the key (the encrypted DNN model in this case). This is allowed. This ensures that even if the key is compromised, the effects will be limited to systems that use the key. The key can always be changed (refer to the paper on secure leasing <cit.>). Figure <ref> shows the outline of our scheme. The SFC and the affine transform (CX+N) are the central themes, which lead to an architecture based on tiling, binning, compression and noise addition. Regardless of the side-information available, we show that introducing the extra degree of freedom (C) has a dramatic effect on the search space, whose size is computed using our novel Mellin transform <cit.> based framework. Given that we can modulate C by adding random noise, the size of the search space can be increased (at the cost of performance). This may be required in the future if the model owner desires greater levels of protection or the HW becomes faster. To the best of our knowledge, such a systematic and mathematically grounded approach of search space estimation is novel. Note that the size of the search space yielded by our analysis is a function of the side information possessed by the adversary. The adversary is limited to only memory and timing-based side channels. The specific list of contributions in this paper is as follows: 1 A comprehensive experimental study of the information leaked regarding input/output feature dimensions by sister NNs. 2 A thorough analysis of the limitations in existing state-of-the-art security solutions in the light of recent threats – SS, KK and SI attacks. 3 A formal theoretical framework to quantify the “smart” search space in the system given a bound on the information leaked to the adversary. 
4 A novel approach to enhance the security of NN accelerators hiding the leaked signatures using an SFC-based execution pattern. 5 A detailed theoretical and performance analysis of , which shows a 15% performance improvement over the closest competing work (DNNCloak). 6 Two case studies of state-of-the-art attacks on sparse and dense accelerators (resp.), a rigorous evaluation based on computing the size of the search space for a given amount of added noise (and side information), a performance-security trade-off analysis, and thorough statistical and information theory-based tests. <ref> provides the background and outlines the threat model. <ref> provides the motivation. <ref> provides a theoretical description of the proposed scheme. <ref> presents the proposed design, <ref> and <ref> present the performance and security analyses, respectively. <ref> presents the related work. We finally conclude in <ref>. § BACKGROUND AND THREAT MODEL §.§ Convolution Neural Networks (CNNs) Convolution serves as the core component of a DNN. To process a layer, we sequentially traverse the input and output feature maps (referred to and , respectively), and apply the convolution operation. The K , each of size (P × Q) are generated by convolving a filter of size (R × S) with C . Each 's size is H × W. For the sake of explanation, we assume that the and have the same dimensions (P=H; Q=W) (refer to Table <ref>). Due to the limited on-chip storage, the elements in an , an , and a filter are often grouped into tiles <cit.> as shown in Listing <ref>. [caption=A basic convolution operation,captionpos=b,label=lst:conv] // Off-chip for(k_o=0; k_o<K; k_o+=T_k) // K: #output fmaps for(c_o=0; c_o<C; c_o+=T_c) // C: #input fmaps for(h_o=0; h_o<H ; h_o+=T_h) // H: #rows in a fmap for(w_o=0; w_o<W ; w_o+=T_w) // W: #cols in a fmap // On-chip for(k=k_o; k<k_o+T_k ; k++) // Tile level processing for(c=c_o; c<c_o+T_c ; c++) for(h=h_o; h<h_o+T_h ; h++) for(w=w_o; w<w_o+T_w ; w++) for(r=0; r<R ; r++) // R: #rows in a filter for(s=0; s<S ; s++) // S: #cols in a filter ofmap[k][h][w]+=ifmap[c][h+r][w+s] * weights[k][c][r][s]; §.§ System Design and Threat Model Our threat model reflects a scenario where a DNN model is performing inference tasks locally on an NPU using a pre-trained model. We present the high-level system design and threat model in Figure <ref> (similar to previous attacks and CMs <cit.>). We assume that the system consists of a central processing unit (CPU), NPU and caches. The CPU sends commands to the NPU through a secure channel. Like prior work, all transfer of data between the CPU and NPU takes place via main memory. The data in the main memory is encrypted and tamper-proof (using <cit.>). We assume the existence of a trusted execution environment (TEE) like Intel SGX <cit.>, which is used to download the secure model and get the decryption key via a trusted third-party attestation based mechanism (refer to <cit.>). It can also periodically renew the model and the key using secure leasing <cit.>. The NPU, CPU, caches and the TEE are all within the trusted computing base (standard design decision:<cit.>). Potential adversaries might include the operating system, the hypervisor, a malevolent application, or somebody having physical access to the main memory or memory bus. Attacker's Aim The primary objective of an adversary is to reverse-engineer the architecture of the DNN model in order to facilitate subsequent attacks, including adversarial attacks and membership inference. 
She can also aim to retrain a comparable model with enhanced or similar accuracy. Attacker's Capabilities and Limitations We assume a strong attacker, who can target the system memory and memory interface. She can craft specialized inputs and feed them to the accelerator. The adversary possesses the ability to read the memory addresses that are sent as plaintext on the memory bus and identify the type of memory access instruction (read or write). She can also monitor the transfer of encrypted data to/from DRAM memory. She however cannot tamper with the addresses or the encrypted data, or mount replay attacks (this can be ensured via Securator <cit.>). She can also use a high-resolution timer to track the timing between memory accesses. Then, she can mount SS, KK and SI attacks. This means that a model can be run any number of times with the same or different inputs and it can be run on as many devices as needed (to thwart CMs that use PUFs<cit.>). The attacker has full access to all hardwired parameters and the design of the NPU. Finally, the attacker has access to all contemporary DNN models and has learnt whatever it could from them. This information can be leveraged to mount SI attacks (we will quantify this in Section <ref>). We do not consider power, thermal and EM-based attacks. Cache based attacks are not relevant here because the CPU's cache hierarchy is not used. §.§ State-of-the-art Attacks and Countermeasures (CMs) An attack aims to find the parameters for each CNN layer: size, width, height, number of (input channels) and (output channels). Reading a fully-computed signifies the termination of the preceding layer that produced it. This helps the adversary identify layer boundaries. There is a RAW dependency here (across layers) that is characterized by the write-to-read distance (bytes accessed between a write and the corresponding read). This is indicative of the layer size. Next, note that the same is used to generate different using a plurality of filters. This means that the same datum is accessed several times (non-zero count): this leaks information regarding the number of . Finally, note that every single byte of an is read by a layer – the total data volume (volume) that is read is indicative of the number of . In fact, using these insights it is possible to setup a system of equations and solve them to find all the layer parameters (see<cit.>). Since the filters are read-only and never modified, the attacker can easily distinguish between filter memory accesses and /accesses. This leaks the size of the filter. In addition to tracking off-chip traffic, an adversary can monitor the execution time of different modules <cit.> to establish a correlation between the dimensions of the model and the timing <cit.>. A recent attack HuffDuff <cit.> proposes a method to craft specialized inputs such that the total data volume of non-zero values leaks the filter sizes. They use a series of inputs that have one 1 and the rest 0s. Countermeasures - Recent works <cit.> have proposed a set of obfuscation strategies to conceal these leaked signatures. They propose to shuffle accesses, widen the convolution layer by padding zeros, increase the dimensions of the filters, augment the input with dummy data/zeros, buffer a part of the or fuse/split layers. All of these methods increase the model search space but fail to hide important signals such as the access count or read-after-write dependence information (also pointed out in HuffDuff <cit.>). 
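The "system of equations" style of inference sketched above can be illustrated with a toy enumeration; the relations in the comments are idealized assumptions of ours (no tiling, no on-chip reuse) and the observed value is hypothetical, but they convey how CDTV observations constrain the layer dimensions.
[caption=Sketch: enumerating layer dimensions consistent with observed write traffic,captionpos=b,label=lst:cdtv-sketch]
#include <stdio.h>

/* Idealized relations an attacker might assume (b = bytes per element):
     filter read volume  ~ K*C*R*S*b
     ifmap  read volume  ~ C*H*W*b times a reuse factor
     ofmap  write volume ~ K*P*Q*b
     per-byte read count ~ number of ofmaps K
   Real attacks fit such equations jointly; here we only enumerate the
   (K, P=Q) pairs consistent with one hypothetical write-volume observation. */
int main(void) {
    long V_out = 802816, b = 1;      /* hypothetical observed ofmap write bytes */
    for (long K = 1; K <= 1024; K++)
        for (long P = 1; P * P * K * b <= V_out; P++)
            if (P * P * K * b == V_out)
                printf("candidate: K = %ld, P = Q = %ld\n", K, P);
    return 0;
}
Even this single constraint leaves many candidates; it is the combination with counts, read-write distances and timing that collapses the search space, which is precisely the information the above countermeasures fail to obfuscate.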
§ MOTIVATION §.§.§ 1^st Generation Statistical Attack (SS) Attackers can always collect multiple measurements across various runs, inputs and devices, and perform sophisticated analyses. As long as doing this is practically feasible, it will continue to be a method of choice because these attacks are easy to mount. For instance, when adding dummy writes (redundancy based noise <cit.>) in a system, it is necessary to read the dummy data, and then write something back. Otherwise, it is obvious that the original data is ineffectual. Figure <ref> shows an example of a CM <cit.> that had this vulnerability and a method to detect and eliminate such writes. On similar lines, shuffling the addresses <cit.> will also prove to be ineffective, as they also preserve the average read-write (RW) distance <cit.>. Can we stop statistical attacks? The adversary will estimate the pattern in the leaked data or parameters of the noise distribution by analyzing multiple samples collected over space and time. §.§.§ 2^nd Generation Kerckhoff Attack (KK) It is possible that even after collecting multiple samples, an adversary cannot filter out the noise or estimate parameters of the distribution such as the value of the non-zero mean or the minimum. The 2^nd generation Kerckhoff attacks are built over the 1^st generation attacks to leak such hardwired parameters. It is possible for a malicious insider to leak them in accordance with the Kerckhoff's principle. Some popular approaches <cit.> obfuscate the layer size by adding dummy accesses (redundancy in space), while DNNCloak <cit.> removes the accesses by buffering the data on-chip (removal). Let us assume that the size of the layer is X and the noise is N. To counter this, an adversary first conducts a statistical attack by collecting a multitude of data samples. They will converge to the mean as per the law of large numbers (shown in Figure <ref>). In this case, the noise added is 22400, which is a hardwired constant and can be leaked. Even if it is tied to the plaintext or the hardware (via PUFs <cit.>), taking multiple samples will ultimately produce the mean and the minimum (the most useful parameters). Do not violate the Kerckhoff principle Any hardwired secret parameter or constant may be leaked by a malicious insider. §.§.§ 3^rd Generation Residual Side-channel Attack (SI) To circumvent the KK attack, we can make many of the hardwired constants a part of the key. The advantage of this method is that a malicious insider cannot leak them, and the key can be changed as frequently as we want (see  <cit.>). An adversary always has some idea of the nature of the neural network that is being used based on what she knows from prior work and the nature of the problem that is being solved – it would be unwise to assume otherwise. This information can be weaponized. Consider the example shown in Figure <ref>. It shows an execution snippet from DNNCloak (Che et al. <cit.>). Here, a layer is split into sublayers and dummy accesses are added to the sublayers. The idea is to read the output of the previous sublayer into the current sublayer and write the output of both the sublayers simultaneously (current and previous). This is done with the hope that dummy writes cannot be filtered out because they are read and used in the next sublayer – artifical RAW dependencies are being created as everything that is being written is being read. The adversary can check the contents of the addresses that are getting updated. 
As shown in the figure, it is easy to spot that the values of some addresses that do not get updated. This will still be the case even if the data as well as the addresses are encrypted with the same key. Based on an analysis of access patterns in state-of-the-art DNNs, any adversary will know that the occurrence of this phenomenon is highly unlikely and thus they are dummy writes in all likelihood. Even if we shuffle the addresses, this pattern can still be discerned for individual blocks of addresses. To reduce the search space, we can use another trick. We analyzed all the contemporary neural networks. By and large, each layer is of the form M^2× N, where M is a large integer and N is comparatively smaller for the first few layers. Such numbers are known as NSQF (non square-free) numbers. If we only consider NSQF numbers <cit.> with a set of constraints derived from layers' dimensions in contemporary DNNs, the search space can reduce considerably. There is always a residual side-channel An adversary can collect side-information from contemporary DNNs and use it to estimate the parameters of the target model and reduce the search space. § THEORETICAL ANALYSIS Consider a random variable Y, which corresponds to the number of tiles that the attacker observes for a given layer. This is related to the actual number of tiles X, the compression ratio C and the amount of noise N as Y = CX + N. Given that C is a compression ratio, the following constraint holds: 0 < C ≤ 1. The aim is to arrive at X from Y. §.§ The Additive Noise: N Any adversary would like to reduce the degrees of freedom and keep C constant such that the effect of the noise N (dummy data) can be eliminated. This can be ensured by making minor changes to the input such that the neural network's layers effectively filter them out. For example, we can add speckled noise that does not correspond to any identifiable feature <cit.>. Multiple measurements will yield CX + N_min. Here, N_min is the minimum value that N can take. If N_min is known (as per the Kerckhoff's principle), then CX is exposed. If an adversary can estimate C (based on prior studies), the system's security is compromised. We can substitute N_min with any other statistical metric like the mean or variance – it does not make a difference. As we discovered in our experimental analyses, tying N to the model input or PUFs does not significantly help because it is possible to collect a distribution of N anyway by running experiments on multiple devices or using different model inputs<cit.>. For the sake of explanation, let us use N_min as the metric of interest. To make a significant difference, it should not be derived from the model input, it cannot be zero and it definitely cannot be known. If it arises from a distribution, then we shall continue to have the same problem. The only “Kerckhoff safe” approach is to make the parameters that govern N's distribution like N_min a part of the key (encrypted model in our case). Even if N_min is a part of the key, we are still allowing the attacker to guess it with a finite number of choices. Hence, it is a good idea to create a distribution on top of it just to increase the computational difficulty. Let us thus modify our equation as Y = CX + α + N'. In this case, α is a constant (should be a part of the key, proxy for N_min) and N' is a distribution whose support is from 0 till R, where R along with the parameters of the distribution should also be a part of the key. 
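For illustration, the following sketch shows one way a model creator could realize the additive term N = α + N' for an off-chip write. It is our own simplified example, not the paper's implementation: the key layout, the identifiers, and the use of a uniform N' on [0, R] (as a placeholder distribution) are all assumptions.

#include <stdint.h>

/* alpha, the support bound R and the PRNG seed are assumed to travel
 * inside the encrypted model key (hypothetical layout). */
typedef struct { uint64_t alpha; uint64_t R; uint64_t prng_state; } noise_key_t;

/* xorshift64 as a stand-in for a keyed PRNG; the seed must be non-zero. */
static uint64_t next_rand(uint64_t *s)
{
    uint64_t x = *s;
    x ^= x << 13; x ^= x >> 7; x ^= x << 17;
    return *s = x;
}

/* Number of dummy bytes to add to the next off-chip write:
 * N = alpha + N', with N' drawn (here uniformly) from [0, R]. */
uint64_t sample_additive_noise(noise_key_t *k)
{
    uint64_t nprime = (k->R > 0) ? next_rand(&k->prng_state) % (k->R + 1) : 0;
    return k->alpha + nprime;
}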
Let us again give the adversary the benefit of doubt, and assume that she can isolate measurements where N'=0. In this case, Y = CX + α. §.§ The Multiplicative Noise: C Let us assess the difficulty of guessing C. Assume that the adversary knows some distribution of C (f(C)) for a given layer (learnt from other DNNs). Let us give the adversary the benefit of doubt and assume that she knows when a layer begins and ends. Having some knowledge of f(C) gives a good starting point to search for X. The problem is finding the actual compression ratio β. In this case, Y = β X + α. Unlike the noise N, statistical and Kerckhoff attacks are hard to mount on the compression ratio mainly because of two reasons. The first is that it cannot be a hardwired constant and thus hardware designers will not know what the compression ratio for a given layer will be. In the case of the additive noise, a standard approach is to conduct measurements using a diversity of inputs <cit.> and find the mean or minimum value of X+N. With multiplicative noise of the form CX, an analogous metric is the geometric mean. It is not very helpful mainly because either C does not change with a perturbed input or it does not change in a way that is known to the attacker. The latter is because of aleatoric uncertainty, which comes from randomness in the data, and epistemic uncertainty, which arises from limited knowledge about the model. Because of these uncertainties, the only extra information that the attacker can have is a prior distribution f(C) (learnt from other models that may be similar). This is side information and its quality depends on how well the model under attack is similar to known models in the literature. Note that C can always be modulated by the model creator by adding random dummy data in a layer. The amount of dummy data to be added can be a part of the key. §.§ Predicted Distribution of X The actual distribution of X is not known to the adversary. It however knows the following formula without knowing the values of α and β. X = (Y-α) ×1/β Y is the value that the adversary observes, which in our design will be the DRAM traffic volume for compressed and then encrypted data. We are basically looking at a product of two random variables here. Let the random variable U denote Y- α and V denote 1/β. We have: X = UV If we take the Mellin transform <cit.> of both sides, we get. ℳ(X) = ℳ(UV) = ℳ(U) ×ℳ(V) A Mellin transform is defined as: g(s)= ℳ(f(x)) = ∫_0^∞ x^s-1f(x)dx The inverse transform is: f(x) = ℳ^-1(g(s)) = 1/2π i∫_c-i ∞^c + i ∞ x^-sg(s)ds Given a Y, an adversary can thus get a distribution of X using an estimate of the distributions of α and β (f(C)), and then by using Mellin transforms. Let us represent the adversary's predicted distribution by the pdf h(X). §.§.§ Difficulty of Verification Given a distribution of X, the most optimal way of making guesses is to first choose the most probable value (supremum), then the second most probable value, and so on (see Massey <cit.> for the proof). Let us thus define the rank function that takes the adversary's predicted distribution h(X) and the real value of X (X_r) as input. It sorts the values of X in descending order of their probabilities and returns the rank of X_r. For example, if X_r is the mode of the distribution, its rank is 1. Hence, we have a way of quantifying the search space. If there are N_i choices in layer i, then the total number of choices is Π_i N_i. Note that layers' dimensions/parameters in this approach cannot be independently verified. 
This creates an all-or-nothing scenario, requiring the verification of a predicted combination of dimensions for the entire neural network in one comprehensive step. This is done by training and verifying the accuracy of the predicted network. §.§.§ Imperfect Knowledge of f(X) The Mellin transform's computation involves approximating the integral using methods like the Riemann sum<cit.>. However, its quadratic complexity imposes limitations on its practical utility for extensive computations <cit.>. Sena et al. <cit.> proposes an alternative that uses the FFT to compute the Mellin transform. This is achieved by performing spline interpolation, exponential resampling, exponential multiplication followed by FFT computation. The adversary uses this approach to estimate the distribution h(X). Next, she performs an interpolation based on the piecewise cubic hermite polynomial (PCHIP)<cit.>, to modify h(X) such that the probability of all non-NSQF numbers is zero. Every NSQF number accumulates the probabilities of all the non-NSQF numbers in its neighborhood (half the distance on either side to the nearest NSQF number). This is our smart search space (probability distribution: h'(X)). Rank Estimation We ran this experiment for estimating the rank of X (layer volume) for the first five layers of Vgg16 (see Figure <ref>), where the true value of (X_r) for each layer is 150528, 802816, 401428, 802816, and 200704, respectively. In order to estimate the side-information available to the adversary regarding the compression ratio, we studied the compression ratios for nine cognate open-source DNN models – AlexNet <cit.>, Vgg16 <cit.>, Vgg19 <cit.>, SqueezeNet <cit.>, ShuffleNet <cit.>, ResNet <cit.>, MobileNet <cit.>, Googlenet <cit.> and Inception Net <cit.>. Based on the observed range of the compression ratios (1.5×– 40×) (also observed in <cit.>) across different DNN models (also pruned and quantized), we generate different distributions for U and V. We vary α between [100, 224 × 224 × 128 (layer size)]. Next, we compute h'(X) using our Mellin transform based approach. We observe that the uniform distribution has the highest rank of X_r followed by the geometric distribution. § DESIGN OF NEUROPLUG §.§ Overview The high-level design of is shown in Figure <ref>. The host CPU securely delivers instructions to the NPU via a PCIe link to execute a layer of the DNN. Each layer is divided into a set of tiles (basic unit of computation). Subsequently, the NPU starts tile-wise computation of convolutions and other linear/nonlinear operations. We map the complex multi-dimensional computation space of a DNN's layer to a 1-D space using a SFC (space-filling curve). The SFC conceptually moves through all the tiles in a layer – first along the channels, then along the rows (see Listing <ref>). In the SFC, we compress a sequence of contiguous tiles and fit them in a bin, where all the bins have the same size. Tiles can be split across bins and we can also leave space empty within a bin. A Bin Table is stored at the beginning of a bin that stores the starting address of each compressed tile in a bin. The bin is the atomic unit of storage in our system, whereas a tile is the atomic unit of computation. A bin is read and written in one go. Furthermore, we ensure that it takes exactly the same amount of time to process each bin such that timing side-channels are plugged. 
We place a bound κ on the maximum number of tiles that can be stored within a bin – the processing time of a bin corresponds to processing all κ tiles (included in the calculation of performance overheads). The compute engine is a traditional systolic PE array; each PE has local storage in the form of register files and local buffers and a tile decompression unit. As the first layer is the most exposed, we enhance its security by initially packing dummy data within the tiles before performing compression – its location is stored in the Bin Table. A dedicated compression engine <cit.> compresses the tiles, packs them in bins, leaves some space empty (as per the noise distribution) and writes them in a sequential order to the main memory (as an SFC for the next layer). This ensures that writes are also at the granularity of bins and follow the same semantics. SFCs provide a convenient abstraction for representing computations within a DNN along with providing performance benefits in the form of enhanced spatial locality. §.§ Deep Look at SFC-based Computation Let us analyze the computation patterns with SFCs (refer to Table <ref> for the terminology). In a simplistic avatar of our design, we assume that all the and weights fit on chip (will be relaxed later). The is stored as a simple 1-D SFC as shown in Figure <ref>(a). Each cuboid represents a deep tile – same tile across all the input channels. We start with the first deep tile (iterate across all the channels) and then move to the next deep tile in the same row and so on. Once we finish processing a row, we move to the next row (first column), so on and so forth. The weights are also stored as an SFC. A single 1-D SFC contains CK kernels – we first read all the weights to generate 1, then 2, so on and so forth (refer to Figure <ref>(b)). The have exactly the same format as – this needs to be the case because they are inputs to the next layer. Refer to Figure <ref>(c) that shows the order in which we write. Table <ref> outlines the key design principles using SFCs. Let us now make our design more realistic. §.§.§ Case I: Weights do not fit but input maps do Consider the case where the weights do not fit in the NPU memory buffers but the inputs do. In this case, we partition the weights on the basis of a partitioning of . This is akin to partitioning the set of K into a sequence of contiguous subsets of k_1, k_2, … k_n where ∑_i=1^n k_i = K. In practice, this is not visible to the attacker because we read the filters for the first k_1 and compute them fully, then do the same for the next k_2, up till k_n. The are stored one after the other – exactly in the same format as they would have been stored had all the weights fit in memory. The timing of filter reads could be a side-channel, however, we obscure this by ensuring that each bin takes a fixed amount of time to execute and the inter-bin and inter-filter-bin read time is the same. §.§.§ Case II: Input maps do not fit but weights do We split an bin-wise such that each set of bins fits within the NPU (row-major order). Then the deep tiles in each set of bins are processed and the corresponding output tiles/bins are written to memory. This is indistinguishable from the case in which all inputs and weights fit in on-chip memory. §.§.§ Case III: Both inputs and weights do not fit We partition an 's SFC bin-wise such that each partition fits within the NPU. Then we proceed on the lines of Case I (single pass through all the weights). 
We need to subsequently read the next set of bins and re-read all the filters once again. This is where a side-channel gets exposed. The frequency at which the same filter weights are read correlates directly with the total volume of input data. A straightforward solution is to unroll the weights. For instance, consider partitioning the weights into two sets: 𝒲_1 and 𝒲_2. We create an unrolled SFC of the form 𝒲_1 𝒲_2 𝒲_1 𝒲_2 …. To save space, we can perform limited unrolling, where if we need to read the weights τ times, we set the unrolling factor to η (model-specific secret). In extreme cases, we can have a situation where a deep tile does not fit in the NPU's on-chip memory; this case can be solved by writing to multiple small SFCs and reading them in an interleaved fashion in the next layer. Note that in all cases, the partitioning factor is a random variable that is generated in the same fashion as N §.§ Dealing with Halo Pixels Halo pixels <cit.> are the additional pixels that are required to compute the convolution results in a tile (beyond its boundaries). In Figure <ref>, the halo pixels will be towards the north and west. Let the processing begin at the Northwest corner. We then proceed along the same row (towards East). Each tile in the northernmost row get its halo pixels from its western neighbor that has already been read. These pixels can be stored in a small on-chip buffer for immediate use. The number of halo pixels is quite small as compared to the sizes of the tiles. We proceed till we reach the last element of the row (eastern most). We then start at the westernmost column of the next row, and proceed eastward. In this case, the halo pixels need to come from the western neighbor and northern neighbor. The western neighbor (tile) was just read, however the northern neighbor (tile) was a part of the previous row. The first option is to store all the halo pixels in an on-chip buffer. In this case, when we are processing a row, we need to store the halo pixels for the southern neighbors in an on-chip buffer. However, if there are space overflows, then we can write them as an SFC, and read them back in the same order when we are processing the next row. §.§ Adding Custom Noise The uncertainty in the number of tiles in each bin is integral to maintaining the overall security of the design as it obfuscates the total data volume and the RW distance. In CX+N, C naturally arises from compression and it can further be augmented with adding random dummy data <cit.>. The additive noise N= α + N' can be added by keeping N bytes empty within a bin. For computing N', we sample it from a heteroskedastic noise distribution, which makes the adversary's job even harder. This distribution is characterized by its non-deterministic variance. This makes any kind of regression-based analysis hard. We performed the White <cit.> and Breusch-Pagan <cit.> tests using different combinations of distributions and estimated that a strong heteroskedastic noise distribution can be generated using a combination of the uniform and Gaussian distributions – both have very efficient hardware implementations <cit.>. We use the uniform distribution(choice supported by Figure <ref>) to set the variance of the Gaussian distribution. §.§ Layer Splitting and Fusion In our architecture, performing layer splitting <cit.> and fusion are easy. In layer splitting, we partition an and assign each part to a sub-layer. In this case, the input SFC will be read once but the weight SFC may be read multiple times. 
We need to adopt the solution proposed in Case III. For layer fusion, we can split the ifmap if there is an on-chip space issue. Then for each set of deep tiles, we need to read the filters for two consecutive layers one after the other (Figure <ref>(d) shows how the filters are fused). For handling issues concerning limited on-chip storage, we can adopt the solutions for Cases II and III.
§.§ Pooling, ReLU and Skip Connections
The pooling and ReLU operations can be performed along with the convolution layer. Note that the tile size will change if we are pooling. If we are replacing a pixel with the maximum value in a window of 4 pixels, then the tile size will reduce by a factor of 4. In case the tile size reduces to a very small value (say 1 × 1), we assume enough buffer space to read a large part of the input, recreate tiles, compress and place them into bins. Our design can easily handle skip connections as well – it simply requires re-reading the SFC of a previous layer. This will convey to the adversary that a skip connection exists (a limitation of our design); however, because of layer-boundary obfuscation, finding the source of the skip connection will be hard.
§ PERFORMANCE EVALUATION
§.§ Setup and Benchmarks
Our experimental setup integrates a state-of-the-art DNN modeling tool (Timeloop <cit.>) and a cycle-accurate DRAM simulator (Ramulator). Both simulators have been validated against real-world HW and are widely used in the literature. Timeloop <cit.> takes the details of the workload, the target NPU's configuration (see Table <ref>), and the data reuse strategy as its inputs and generates performance statistics and traces. We developed an in-house trace generator tool that extracts the memory access patterns of tiles from Timeloop's T-traces <cit.> and then compresses and randomizes the tiles in accordance with our compression and binning strategies. Then, the tool generates a new sequence of memory accesses corresponding to bins. Ramulator takes traces from our trace generator tool as its input and generates the performance statistics.
Workloads We consider a set of dense as well as sparse NN workloads (same as <cit.>). We generated pruned NNs using the layer-adaptive magnitude-based pruning technique <cit.> with a fairly good prediction accuracy, as shown in Table <ref>.
Table (Workloads): prediction accuracy of the evaluated models.
Pretrained: ResNet18 <cit.> 89.1%, Vgg16 <cit.> 91.32%, AlexNet <cit.> 81%
Pruned: ResNet18 <cit.> 92.1%, Vgg16 <cit.> 90%, AlexNet <cit.> 80%
Note that NeuroPlug will work for all types of convolution-based DNNs (such as GANs and Transformers) since other basic layers such as the fully-connected layer can be easily expressed as a convolution <cit.>. We are not showing the results for Transformers and LLMs due to a paucity of space. Their sizes are huge and mounting a physical attack to estimate the model parameters is very hard (massive search space). The main targets of such attacks are smaller DNNs. We use an NVIDIA Tesla T4 GPU with CUDA 12.1 to train and prune the NN models using the CIFAR-10 dataset. We achieved an average accuracy of 90% for Vgg16, 80% for AlexNet and 92.1% for ResNet18 – same as the original authors. During our analysis, we observed that many other popular CNN models demonstrated a similar behavior (not discussed due to paucity of space). Configurations We select an unsecure system as the baseline configuration. The bin size was set to 60 kB. The TCB can store three bins at a time.
We observed that the tile size varied with the layers and was roughly between 10 kB - 95 kB. We assume that an adversary analyzes the compression ratio (β) using other sister DNNs and crafts bespoke inputs; she observes that the values are normally distributed in the range 1.5-40× <cit.> (also seen in our experiments). The Compr design represents a two-level compression (pruning <cit.> followed by Huffman compression) similar to the Deep Compression Algorithm <cit.>. The other configurations are – DNNCloak (DNNCloak)<cit.>, Layer deepening (NeurObf_1) <cit.>, Layer widening (NeurObf_2) <cit.>, and Kernel widening (NeurObf_3) <cit.>. In , for the given set of models, the upper limit of the heteroskedastic noise distribution α was set to 8000 (in bytes). §.§ Performance Analysis The performance results are shown in Figure <ref>. The performance is proportional to the reciprocal of the total #simulation cycles. The results are normalized relative to the baseline setup. We find that Compr is 2.52 × faster than the baseline. This is due to the fact that compression leads to a reduction in memory traffic by a factor of 2.52 × as shown in Figure <ref>. There is a good correlation between performance and DRAM traffic, which is aligned with the findings reported in prior studies (TNPU <cit.>, Securator <cit.>, GuardNN <cit.>). We see that on an average, the rest of the proposals <cit.> are 1.8× worse than in terms of performance. Note that DNNCloak performs only weight compression and incorporates an on-chip buffer to store the , thereby reducing the memory traffic. But, it also includes dummy accesses and reads/writes the previous sublayer, which lead to an increase in the memory traffic, resulting in DNNCloak being 15.48 % slower than . On the contrary, employs compression to reduce all the off-chip communication, which allows us to afford more dummy accesses. §.§ Sensitivity Analysis Figure  <ref> shows the variation in the smart search space with the noise level (α) for . The smart search space for ResNet is of the order of 10^196, for Vgg16 is of the order of 10^70 and for AlexNet, it is of the order of 10^26 (small network, deliberately chosen). Figure <ref> shows the trade-off between the search space size and the performance overheads introduced by modulating C (adding random values). §.§ Hardware Overheads incorporates additional HW, which can be added to conventional NPUs. The code for all the additional components was written in Verilog, and the designs were fully synthesized, placed, and routed using the Cadence Genus tool for a 28 nm ASIC technology node at 400 MHz frequency. We present the results in Table <ref>. Our design is generic and can work with any NPU (that works at the tile level) subject to making minimal changes. We show the results for an NPU that is based on the Eyeriss-2 accelerator<cit.> (its area is 5.5mm^2). We also implemented the design on a Virtex-7 FPGA board (xc7vx330t-3fg1157). The number of LUTs, registers and DRAMs required to implement the additional logic are 84K, 77K and 8 (resp.) at a frequency of 360 MHz. § SECURITY EVALUATION We perform a thorough security analysis using multiple DNNs. We primarily report the results for the Vgg16 model, which is more vulnerable to attacks because of its lack of skip connections, good accuracy, and relatively small model size that reduce the search space. The results for other NNs are similar or better (not presented due to space constraints). 
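As a bridge to the search-space numbers reported below, the following rough sketch (entirely our own illustration) enumerates a per-layer candidate count N_i from an observed traffic volume Y, assumed prior intervals for the additive term and the compression ratio, and the NSQF restriction discussed earlier; the overall search space is then the product of the N_i over the layers. All bounds, identifiers and the example numbers are hypothetical.

#include <stdio.h>

/* Is x an NSQF (non-square-free) number, i.e. divisible by a square > 1? */
static int is_nsqf(long x)
{
    for (long d = 2; d * d <= x; d++)
        if (x % (d * d) == 0) return 1;
    return 0;
}

/* Count the candidate layer volumes X consistent with an observed traffic
 * volume Y, given prior intervals [a_lo, a_hi] for the additive term and
 * [b_lo, b_hi] (0 < b <= 1) for the compression ratio in Y = bX + a.
 * The returned count plays the role of N_i. */
long layer_candidates(double Y, double a_lo, double a_hi,
                      double b_lo, double b_hi, long x_max)
{
    long x_lo = (long)((Y - a_hi) / b_hi);   /* smallest feasible X */
    long x_hi = (long)((Y - a_lo) / b_lo);   /* largest feasible X  */
    long count = 0;
    if (x_lo < 1) x_lo = 1;
    if (x_hi > x_max) x_hi = x_max;
    for (long x = x_lo; x <= x_hi; x++)
        if (is_nsqf(x)) count++;
    return count;
}

int main(void)
{
    /* hypothetical numbers, for illustration only */
    printf("N_i = %ld\n",
           layer_candidates(50000.0, 100.0, 5000.0, 0.2, 0.8, 1L << 20));
    return 0;
}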
§.§ Search Space Sensitivity We estimate the variation in the smart search space size as shown in Figure <ref> with different noise levels. We vary the level of the noise α in the presence and absence of compression – Y = β X + α. Here β is the ratio of the size of the compressed data and the size of the original data. We observe that with the same level of noise(α), the search space of Y = β X + α is of the order 10^70, while for Y = X + α, it is of the order 10^26 (significantly smaller). We also mount an attack in the presence of and report the accuracy of the top nine models in CIFAR-10 that can be estimated by the adversary in Figure <ref>. These accuracies are similar to the accuracy of a randomly generated model (10-20 %), where no leakage hints are available. Next, we perform two distinct case studies to examine the security of our scheme against two highly potent recent attacks: HuffDuff <cit.> and Reverse Engg <cit.>. There is no known countermeasure for HuffDuff. A total of 2048 unique memory traffic traces were collected using various inputs (as in the HuffDuff model) in order to perform the security analysis for the Vgg16 model. Note that the conclusions for other DNNs, such as ResNet18 and AlexNet, are identical (not presented due to space constraints). §.§ Case Study 1 – Defense from HuffDuff We analyze the first five most vulnerable layers of the Vgg16 model across three distinct configurations – scheme without any CM, our proposed scheme () as the CM, and a hypothetical secure system random that provides complete security guarantees, ensuring that any data observed by an adversary is entirely random and unrelated to the architecture of the models. Other competing CMs are not considered since they cannot stop this attack. We employ a combination of information-theory and statistics-based tests to assess the security of . This approach is favored in the cryptography community as it avoids reliance on a single metric. For example, the classical correlation coefficient could be zero for random variables x and x^2, however, other security metrics reveal their underlying connection. Information Theoretic Tests - We use FI (Fisher Information) and MI (Mutual Information), which are extensively used metrics to estimate the information content of leaked signatures and their association with secret data/keys <cit.>. These metrics can never be zero for a distribution with a finite support. Our aim is to bring them as close to the random configuration as possible. ▸ Fisher information (FI) The FI is a robust and classical method for establishing a lower bound on the error (variance) of the estimator – the lower the better <cit.>. We mount the HuffDuff attack and observe that the FI is quite low for the first 5 layers (comparable to random). It reduces as we consider deeper layers (due to epistemic uncertainty). HuffDuff relies on the information distortion (number of non-zero(NNZ) values) at the edges of an due to the way the convolution operation handles the edge pixels. This discrepancy decreases with the layer depth as shown in Figure <ref>. The FI in Figure <ref> validates the same fact. A similar behavior was observed in all the other layers and NNs. Note how close our FI is to a truly random source (within 6%). ▸ Mutual information (MI) We estimate the MI between the leaked traces and the hidden parameter (filter size) for all the three configurations. 
We observe a similar pattern here, indicating that the amount of information between the hidden parameter and the leaked traces for is nearly the same as the information between random data and the hidden parameter. Statistical Tests We perform the runs test <cit.> (part of the NIST suite <cit.>) and CVM tests <cit.>, which are considered as the standard approaches to quantify security. ▸ Runs test The average p-value for is 0.41, which is more (better) than the standard threshold value of 0.05 (corresponding to the null hypothesis). This establishes the random nature of the source as per this test. ▸ Cramer–von Mises (CVM) test <cit.> We estimate the CVM distance, which decreases as the difference between the random data and the sampled data decreases. We observe that exhibits the lowest CVM (closest to random, see Figure<ref>). ▸ Correlation coefficient (CC) We calculate the classical CC to estimate the degree of correlation between the leaked signal and the hidden parameter using Pearson's CC (see Figure<ref>). Here also we have make a similar inference. §.§ Case Study 2 – Defense from Reverse Engg. We implemented ReverseEngg<cit.>, which is a state-of-the-art memory-based SCA (side-channel attack) that steals the model architecture in the absence of any CMs; it is the basis for many other attacks that use the same principle such as <cit.>. We implemented two CMs (based on this): (NeurObfuscator <cit.> and DNNCloak <cit.>) – the former introduces dummy computations and the latter uses a combination of dummy accesses, partial encryption and extensive on-chip buffering. We compute the distribution for the two most important architectural hints (exposed via side channels): RW distance and the total traffic distribution (see Table <ref>). We observe minimal differences in the FI between and DNNCloak indicating that both the schemes are effective at mitigating address-based SCAs. However, DNNCloak can be broken using timing and Kerckhoff-based attacks (see Section <ref>), whereas is immune to them. § RELATED WORK §.§ DNN Architecture Extraction: CDTV Metrics Table  <ref> shows a summary of the most popular DNN architecture stealing attacks. Hua et al. <cit.> propose a state-of-the-art attack, which builds constraint equations by observing memory access patterns and the traffic volume. Similarly, DeepSniffer <cit.> exploits the relationship between architectural hints such as memory reads/write volumes and a DNN's architecture. The authors of HuffDuff <cit.> exploit the boundary effect and the timing side-channel for attacking sparse CNN accelerators. The boundary effect occurs at the outermost edge of a convolution layer where we perform fewer multiplications – this helps the attacker discern filter dimensions. Cache Telepathy <cit.> steals the CNN's architecture by estimating the timing information using cache-based prime-probe attack. They use this information to infer the total number of calls for performing multiplication, which helps estimate the model architecture. §.§ DNN Architecture Protection §.§.§ Shuffling-based CMs Liu et al. <cit.> proposed to shuffle the addresses, while Li et al. <cit.> obscured the tile access pattern by shuffling the order of the tile accesses. Sadly, their scheme still preserves counts and average RW distances <cit.>. Additionally, shuffling requires a huge mapping table that is hard to store on chip. <cit.>. 
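To see concretely why relabelling alone does not help, consider the following minimal sketch (our illustration, with hypothetical names): applying a fixed bijection to the addresses of a trace only permutes which label carries which history, so the multiset of per-address access counts and every write-to-read gap survive the shuffle.

#include <stdint.h>

typedef struct { uint32_t addr; int is_write; } event_t;

/* Relabel every address in the trace with a fixed bijection pi[].
 * Accesses to address a simply become accesses to pi[a]: the per-address
 * access counts are merely permuted and the index gap between a write and
 * its matching read is untouched, so the layer signatures discussed
 * earlier remain exploitable. */
void shuffle_addresses(event_t *trace, long n, const uint32_t *pi)
{
    for (long i = 0; i < n; i++)
        trace[i].addr = pi[trace[i].addr];
}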
§.§.§ Redundany-based CMs The number of memory accesses or the process execution time information can be hidden by incorporating dummy memory accesses or by adding delays. ▸ Redundancy in Space Li et al. <cit.> employ different obfuscation techniques such as layer deepening and kernel widening that introduce dummy memory accesses. This makes it necessary to read all the dummy data at least once during the execution of the DNN, and then use or hide the results. Che et al. <cit.> emphasize that attackers can distinguish between layer types by examining memory access intensities. They suggest adding dummy accesses to equalize the number of accesses across the layers, which leads to high performance overheads. The CDTV information is nonetheless available. ▸ Redundancy in Time Wang et al. propose NPUFort <cit.> – a custom HW accelerator that encrypts only security-critical features. Sadly, partial encryption of the weights only provides partial security. Similarly, cache-telepathy <cit.> proposes to run an entire toy NN to obfuscate the timing information of the DNN – this still ends up exposing a lot of CDV information (much more than the state of the art CMs). §.§.§ Access Removal using On-Chip Buffering Che et al. <cit.> choose to buffer a part of the due to which an adversary will find it difficult to infer the size of the (can be broken using schemes shown in Section <ref>). § DISCUSSION AND CONCLUSION The key contribution of this paper is as follows. Assume a threat model where an adversary can observe all the addresses being sent to encrypted, tamper-proof DRAM memory, and record the volume of encrypted data transferred as well as the associated timing information. All known attacks and CMs on NNs use a subset of the CDTV metrics. We are not aware of any fifth metric that can be derived from an observation of DRAM accesses and timing, which is fundamentally different. In the backdrop of this observation, we create a comprehensive universal attack that relies on a combination of SS, KK and SI attacks to break all state-of-the-art CMs in 8-10 hours on current machines. The primary deficiency of current CMs is that they rely on additive noise, hardwire parameters that can be leaked by malicious insiders and do not take cognizance of the side-information available to an adversary. is a novel CM that prevents SS, KK and SI attacks using CDTV metrics in the context of the aforementioned threat model. Its cornerstone is the addition of multiplicative noise of the form Y=CX+N, where the factor C arises naturally from feature map compression. This formulation inherently leads to a mathematical framework based on Mellin transforms where the search space size can be precisely quantified. Its size can further be increased by modulating C (adding random data to layers) depending on the model creator's requirements. This multiplicative formulation naturally leads to a design that represents the 3D computation in a CNN as a read-once-write-once 1D computation using SFCs (novel contribution). We further show that all kinds of corner cases including handling halo pixels and weights that do not fit in on-chip memory can be easily handled using our SFC-based framework. All the proposed HW structures were meticulously coded in Verilog and synthesized for two targets: 28-nm ASIC process and the Virtex-7 FPGA. The HW overheads are quite modest (1.5 mm^2 on the ASIC). 
Our design has a large search space size (10^196 for ResNet18 and 10^70 for Vgg16) with a minimal performance degradation with respect to the unsecure baseline. Moreover, for all the known statistical tests (also part of the NIST test suite), the information leaked by NeuroPlug is statistically very similar to a truly random source for the CDTV metrics, even in the presence of specially crafted inputs (Case Studies 1 and 2). With a minimal loss in performance, it is possible to substantially increase the search space size (e.g. multiply it by 10^50) by modulating the multiplicative parameter. Owing to compression, which is intrinsically aligned with our SFC-based framework, we note a performance improvement of 15.48% over our nearest competitor DNNCloak <cit.>.
Effective equidistribution of orbits under semisimple groups]Effective equidistribution of orbits under semisimple groups on congruence quotients Einstein Institute of Mathematics, Edmond J. Safra Campus, Givat Ram, The Hebrew University of Jerusalem, Jerusalem 91904, Israel andreas.wieser@mail.huji.ac.il This project was supported by the ERC grant HomDyn, ID 833423 as well as the SNF Doc. Mobility grant 195737. § ABSTRACT We prove an effective equidistribution result for periodic orbits of semisimple groups on congruence quotients of an ambient semisimple group. This extends previous work of Einsiedler, Margulis and Venkatesh. The main new feature is that we allow for periodic orbits of semisimple groups with nontrivial centralizer in the ambient group. Our proof uses crucially an effective closing lemma from work with Lindenstrauss, Margulis, Mohammadi, and Shah. [ Andreas Wieser July 22, 2024 ================== § INTRODUCTION Let G be a Lie group and let Γ < G be a lattice. In her seminal work, Ratner <cit.> classified probability measures on X = ΓG invariant and ergodic under a one-parameter unipotent subgroup of G. The classification asserts that any such measure is homogeneous (or algebraic), i.e. it is the invariant probability measure μ_xL on a periodic orbit xL of a closed subgroup L ⊂ G. Subsequent work by Mozes and Shah <cit.> established the same type of rigidity for limits of such measures using the linearization technique by Dani and Margulis <cit.>. Let μ_i be a sequence of probability measures on X so that each μ_i is invariant and ergodic under a one-parameter unipotent subgroup. Suppose that μ_i→μ for a probability measure μ in the weak^∗-topology. Then μ is homogeneous. Giving quantitative equidistribution results in homogeneous dynamics, in particular in the context of orbits of semisimple groups, has been a major theme in recent years, and appears prominently in Margulis list of open problems in homogeneous dynamics <cit.>. In particular the problem of quantifying the Mozes-Shah Theorem has attracted considerable attention. The progress is arguably most complete for orbits of horospherical groups. Here, Margulis' thickening technique <cit.> paired with effective mixing for diagonalizable flows can give a polynomially effective rate – see for example <cit.>. Other known instances with effective rates include the equidistribution of Hecke points (see e.g. <cit.>) and of translates of a fixed orbit of a symmetric group (see e.g. <cit.>). In this article, we consider periodic orbits of semisimple groups in the following arithmetic setup: * G = () where is a connected semisimple -group, and G^+ the identity component of G. * Γ⊂() is a congruence subgroup. * H < G is a connected semisimple subgroup without compact factors. We now turn to phrase the main theorem of the current article; important preceeding results will be discussed below in <ref>. We denote by the family of connected -subgroups of whose radical is unipotent. By a theorem of Borel and Harish-Chandra <cit.>, for any ∈ℋ the orbit Γ() is periodic i.e. Γ∩() is a lattice in (). Translates of these orbits with small volume form natural obstacles to equidistribution of periodic H-orbits. Instead of the volume, we shall use its arithmetic counterpart – the discriminant (·) – to measure the complexity of orbits; see <ref>. 
When X is noncompact, one can construct explicitly (see <ref>) a proper map :X → [0,∞); for a subset B ⊂ X we write (B) = inf{(x): x∈ B} (note that this height has nothing to do with any arithmetic height which is closely related to our notion of discriminant). The following is our main theorem; the main innovation is a different, general treatment of the centralizer of H. There exist δ >0 and d ≥ 1 depending on G and H with the following properties. Let xH = Γ g H be a periodic orbit in X and let μ_xH be the Haar probability measure on xH. Suppose that D >0 is such that (Γ()g) ≥ D for all orbits Γ() g containing xH where ∈ is a proper -subgroup of . Then | ∫_xH f μ_xH - ∫_xG^+ f μ_xG^+| ≪_G,H,Γ D^-δ(xH) _d(f) where _d is an L^2-Sobolov norm of degree d (cf. <ref>). For a given (semisimple) -subgroup 𝐇 of it is often difficult to classify all intermediate -subgroups 𝐇⊂⊂. However, in view of the rate in Theorem <ref> it is usually sufficient to verify (<ref>) for a suitable subcollection of . We illustrate this for orthogonal groups in Section <ref> below and give a more general statement of this kind in Theorem <ref>. §.§ Some context Known effective equidistribution results for periodic H-orbits (in the arithmetic setup of this article) heavily rely on the uniformity of the spectral gap for H-representations L^2_0(xH) where xH runs over all periodic H-orbits in X. This is a deep result (Clozel's property (τ)) due to various authors, among others Selberg <cit.> and Jacquet-Langlands <cit.> (for groups of type A_1), Kazhdan <cit.>, Burger-Sarnak <cit.>, and Clozel <cit.>; we refer to <cit.> for a more detailed account of the history. Using uniformity of the spectral gap, breakthrough work of Einsiedler, Margulis, and Venkatesh <cit.> established the following effective equidistribution result for periodic H-orbits in X. Suppose that the centralizer of H in G is finite. Then there exists δ >0 and d≥ 1 depending on G and H and V_0>0 depending on G, H, and Γ with the following properties. Let xH⊂ X be a periodic H-orbit. For any V ≥ V_0 there exists a connected intermediate group H ⊂ S ⊂ G for which xS is periodic with (xS) ≤ V and so that for any f ∈(X) | ∫_xH f μ_xH - ∫_xS f μ_xS| < V^-δ_d(f). In practice, one can take V to be a sufficiently small power of the volume of xH to find a well-approximated orbit of a larger intermediate group. The exponent δ>0 is in principle computable, see <cit.> for the special case (2,1)< _3(). We also refer to the surveys of Einsiedler <cit.> and Einsiedler, Mohammadi <cit.>. While the centralizer assumption does not seem crucial to the method, it is intricately built into the proof of Theorem <ref>. It does assert among other things that periodic H-orbits appear `discretely'. Indeed, for any periodic orbit xH and any c∈ G centralizing H the orbit xcH is also periodic and hence periodic H-orbits appear in families parametrized by the centralizer. Moreover, the centralizer assumption restricts the possibilities for intermediate subgroups (cf. <cit.>): there are finitely many and all are semisimple. In view of applications, one would like the centralizer assumption to be removed. Previously, this had been achieved in the following special cases: * In <cit.>, Aka, Einsiedler, Li, and Mohammadi established the analogue of Theorem <ref> for = _d (or a real split -form of _d) and H = {[ g_1 0; 0 g_2 ]: g_1 ∈_k(),g_2 ∈_d-k()} for (k,d)≠ (2,4). In this case, the centralizer of H is a one-dimensional (split) torus. 
In particular, an H-orbit can lie far up in the cusp, which is why the statement as in Theorem <ref> needs to be adapted by a suitable height function (as in Theorem <ref>). The case (k,d)= (2,4) is ruled out due to the presence of an intermediate symplectic group. * In <cit.>, Einsiedler and Wirth established the analogue of Theorem <ref> for = _Q where Q is a rational quadratic form of signature (n,1) and H ≃(2,1)^+. In this case, the centralizer is compact (it is isomorphic to (n-2)). The above theorems could also be phrased as equidistribution theorems in the ambient space with a rate polynomial in the minimal intermediate volume (in analogy to Theorem <ref>). The current article removes the centralizer assumption in this phrasing (see also Remark <ref> below). This heavily relies on an effective closing lemma from work of the author with Lindenstrauss, Margulis, Mohammadi, and Shah <cit.> (see Theorem <ref> below). The rate given in any equidistribution result as in Theorems <ref> or <ref> can only be as fast as the minimal volume or discriminant of an intermediate orbit. In that sense, the above theorems are optimal. The decay exponents are likely far from optimal in all of the above theorems. In principle, the techniques of this paper ought to give a similarly statement as Theorem <ref> (the main theorem of <cit.>). Indeed, one might imagine applying Theorem <ref> by 'downward induction': if D >0 does not satisfy (<ref>) for some 𝐌, switch to the equidistribution problem for xH ⊂Γ() g. Difficulties in doing so arise for example from the fact that might not be semisimple or from the (here inexplicit) dependency of the implicit constants on G and Γ. Theorem <ref> has been extended to adelic periods by Einsiedler, Margulis, Mohammadi, and Venkatesh <cit.> for maximal subgroups and by Einsiedler, Rühr, and Wirth <cit.> in a special case for a partial resolution of a conjecture of Aka, Einsiedler, and Shapira <cit.>. The former clarifies the dependency of Theorem <ref> on H. Lastly, we remark that a full resolution of the effective equidistribution problem for semisimple adelic periods promises an abundance of interesting arithmetic applications. This includes an effective version of the integral Hasse principle for representations of quadratic forms (cf. <cit.>). The currently available effective results on unipotent flows do not require orbits to be periodic. Effective analogues of Ratner's equidistribution theorem <cit.> are known in various instances. As mentioned earlier, when the acting unipotent group is horospherical equidistribution is well understood – see for instance <cit.>. In a somewhat different direction, effective equidistribution results have been established for the horospherical subgroup of _n() acting on arithmetic quotients of _n() ⋊ (^n)^k <cit.>. In a recent breakthrough, Lindenstrauss and Mohammadi <cit.> and Lindenstrauss, Mohammadi, and Z. Wang <cit.> proved effective equidistribution for one-parameter unipotent flows on arithmetic quotients of _2() ×_2() or _2(); for quotients of rank two real split groups see <cit.>. We refer to <cit.> for a more detailed account of the history. §.§ A special case: Special orthogonal groups To illustrate Theorem <ref>, we give explicit conditions to verify (<ref>) in a special case related to <cit.>. For the remainder of the section, consider the semisimple -group = _Q< _d where Q is an indefinite rational quadratic form of signature (p,q) where p+q=d≥ 4 and p,q>0. 
We let Γ < _Q() ∩_d() be a congruence subgroup and G = (), X = ΓG as before. As acting group, take H to be the identity component of the pointwise stabilizer subgroup of a positive definite real subspace of ^d of dimension less than p. In particular, H ≃(p',q)^+ for some p'<p and the centralizer of H is a compact group isomorphic to (p-p'). For any collection of vectors ℬ⊂^d we set _ℬ = {g ∈: g.v = v for all v ∈ℬ}. Moreover, we define for any periodic orbit xH = Γ g H '(xH) = inf{(Γ_v() g): v ∈^d with Q(v) ≠ 0 and Γ_v() g ⊃ xH}. Note that many periodic intermediate orbits as in Theorem <ref> are not of the above form Γ_v() g. Nevertheless, Theorem <ref> together with an analysis of intermediate groups yields the following theorem. There exist δ >0 and d ≥ 1 depending on G and H with the following properties. For any periodic H-orbit xH and any f ∈(X) | ∫_xH f μ_xH - ∫_xG^+ f μ_xG^+| ≪_G,H,Γ'(xH)^-δ_d(f). Theorem <ref> can be seen as an extension of the effective equidistribution result of Einsiedler and Wirth <cit.> (where q=1) mentioned earlier. Orbits of special orthogonal subgroups are of particular interest in view of number theoretic applications, most notably towards the integral Hasse principle – see <cit.> and Remark <ref>. As the spin group has strong approximation for indefinite forms, the integral Hasse principle for indefinite forms is readily obtained. (In particular, the current result in Theorem <ref> does not yield any progress towards it.) However, Theorem <ref> does potentially yield equidistribution results in the spirit of <cit.> for representations of positive-definite integral quadratic forms by indefinite integral quadratic forms. One can relate '(xH) to the length of the shortest integer vector in a certain rational subspace as follows. By the Borel-Wang density Theorem <cit.> there exists for any periodic H-orbit xH= Γ gH a rational subspace L ⊂^d (positive-definite over ) such that gHg^-1 = _L()^+. The subspace L depends not only on xH but also on the representative g. Since H has a compact centralizer in G, classical non-divergence results thus imply that (·) is uniformly bounded on periodic H-orbits. Equivalently, any periodic orbit xH = Γ g H has a representative g of absolutely bounded size. Defining the above subspace L with this choice, we have v^⋆≫(Γ_v() g) ≫v^⋆ for any primitive integral vector v ∈ L ∖{0} (if Γ_v() g contains xH, v must be contained in L). In other words, Theorem <ref> provides a rate polynomial in min_0 ≠ v ∈ Lv. §.§ Some ideas of the proof of Theorem <ref> We follow an a priori familiar strategy already used in previous works <cit.> which in essence effectivizes a proof of Ratner's measure classification theorem for H-invariant and ergodic measures on X (cf. <cit.>, <cit.>). The outline below recovers this strategy. We omit exact definitions and some (partially important) details as well as polynomial dependencies where convenient in order to focus on the ideas. Suppose that μ is the Haar probability measure on a periodic H-orbit xH. The proof proceeds by recursion showing that μ is `almost invariant' under larger and larger intermediate Lie algebras 𝔥⊂⊂. Here, no restriction whatsoever is imposed on (e.g.  might have a center). A measure μ is ε-almost invariant under if the exp(Z)-translate of μ is within ε of μ (in a suitable sense – see Definition <ref>) for all Z ∈ with Z≤ 2. If the measure μ is ε-almost invariant under , the spectral gap on xG^+ implies the theorem. Moreover, at each step of the recursion ε will be polynomial in D. 
Assume from now on that μ is ε-almost invariant under some intermediate Lie algebra . As H is semisimple, there exists an H-invariant complement to . We further fix a one-parameter unipotent subgroup U = {u_t: t∈} of H which acts ergodically on xH. §.§.§ From transversal generic points to almost invariance Using effective decay of matrix coefficients for the H-representation L_0^2(μ), Einsiedler, Margulis, and Venkatesh <cit.> have established the existence of a large set of U-generic points where 'generic' is taken to be an effectivization (cf. Definition <ref>) of the usual Birkhoff genericity notion. For the purposes of this outline, assume now that there are two points x_1,x_2 ∈ X with x_2 = x_1 exp(r) for some nonzero small r ∈ and with |1/√(n)∫_n^n+√(n) f(x_iu_t) t - ∫ f μ| ≤ C(f) 1/√(n) for all 1 ≤ n ≤ε^-δ where C(f)>0 depends on f ∈(X) and δ>0 is given. The behaviour of the displacement between x_1u_t and x_2u_t in the time t is governed by the polynomial map p:t ↦_u_-t(r)-r. One distinguishes two situations: * When the supremum of p(t) over t ∈ [0,ε^-δ] is, say, at least 1, then polynomial divergence techniques show that the pieces of the U-orbits are roughly parallel towards the end. In this case, one obtains that μ is ε^⋆-almost invariant under that displacement. * When the supremum is at most 1, the polynomial _u_-t(r) is roughly constant equal to r on a polynomially shorter interval (such as [0,ε^-δ/2]). In this case, μ is ε^⋆-almost invariant under r. In both of the above cases, the size of r needs to be controlled in terms of ε. For instance, if the size r is a lot smaller than ε, the conclusion of (2) is vacuous by Lipschitz continuity. In short, the displacement r ∈ needs to satisfy ε^⋆≤r≤ε^⋆ with suitable exponents from the start. Lastly, we remark that the effective generation results established in <cit.> also apply here to show that if μ is ε-almost invariant under and ε^⋆-almost invariant under some unit vector in , it is ε^⋆-almost invariant under a new intermediate Lie algebra of larger dimension. Hence, the recursion may be continued. §.§.§ Existence of transversal generic points Overall, it remains to discuss why two points x_1,x_2 with the Birkhoff genericity assumption in (<ref>) and a controlled transversal displacement as in (<ref>) exist (see Proposition <ref> for an exact statement). To outline the argument we suppose for simplicity that is absolutely almost simple and that X is compact. Fix T>0 which is implicitly polynomial in D. We cover X with small boxes of the form y exp(B_δ_1^)exp(B_δ_2^) where δ_1 is much larger than δ_2 and both are polynomial in T. Consider the U-orbit for times [0,T] through a point x_0 ∈ xH. We may assume that x_0u_texp(s) is generic for at least 90% of s ∈ B_1^ and t ∈ [0,T] in a sense similar to (<ref>); this follows from an effective ergodic theorem (Proposition <ref>) which was up to minor differences already established in <cit.>. By realignment, we need to show that x_0u_t does not spend a disproportionate amount in any one of the above boxes. This situation is addressed by an effective closing lemma in work with Lindenstrauss, Margulis, Mohammadi, and Shah <cit.>: If x_0u_t spends a disproportionate amount in one of the above boxes, we obtain information about the initial point x_0, namely that its orbit under U stays close to a periodic orbit of some proper subgroup of G for the time T. 
However, effective avoidance results of Lindenstrauss, Margulis, Mohammadi, and Shah <cit.> imply that the set of such points have small measure in xH (cf. Corollary <ref>). In other words, we may choose the initial point appropriately and avoid the above problem. §.§ Overview of this article This article is structured as follows: * In <ref>, we set up notation and give some fundamental definitions (Sobolev norms, almost invariance and heights to name a few). * In <ref>, we give a variant of the effective ergodic theorem proved in <cit.>. * In <ref>, we recall the effective closing lemma from work with Lindenstrauss, Margulis, Mohammadi, and Shah <cit.> and the effective avoidance result from <cit.>. Simple corollaries for periodic H-orbits are derived. * In <ref>, we show how pairs of nearby generic points give rise to additional almost invariance (as outlined in <ref>). * In <ref>, we establish the existence of such generic points using <ref>,<ref>. * In <ref>, we prove Theorem <ref> by an inductive procedure accumulating almost invariance. * In <ref>, we give an extension of Theorem <ref> for suitable collections of subgroups of class . In particular, we prove Theorem <ref>. Acknowledgments: I would like to thank Elon Lindenstrauss and Amir Mohammadi for many discussions on this and related topics including, but not at all limited to, effective closing lemmas. I am also grateful towards Manfred Einsiedler and Zhiren Wang for conversations about effective equidistribution results. § PRELIMINARIES §.§ Basic setup Let be a connected semisimple -group and fix a faithful representation ↪_. We may assume that the adjoint representation occurs as an irreducible subrepresentation. To simplify notation, we identify with its image. The embedding ↪_ fixes an integral structure on which is inherited by its subgroups. More concretely, we denote () = ()∩_N() for any -subgroup <. Let Γ < () be a congruence lattice, G = () and X = ΓG. The Lie algebra of also inherits an integral structure such that () = () ∩𝔰𝔩_(). In particular, () is Γ-invariant under the adjoint representation. We fix a Euclidean norm · on _n() with [X,Y]≤XY for all X,Y. The Euclidean norm restricted to induces a left-invariant metric (·,·) on G which descends to a metric on X also denoted by (·,·). For g ∈_N() we write g = min{g_∞,g^-1_∞} where g_∞ = max_i,j|g_ij|. For an algebraic subgroup < the identity component is denoted by ^∘ whereas for a (Hausdorff-) closed subgroup M < G the identity component in the Hausdorff topology is M^+. We fix a connected semisimple subgroup H< G without compact factors throughout the whole article. We may assume that H is not contained in (the real points of) any proper normal -subgroup of . Indeed, for any g ∈ G and a proper normal subgroup the discriminant of the orbit Γ()g depends only on (by the definition given in <ref> below). Thus, Theorem <ref> is trivial for any periodic orbit Γ g H ⊂Γ()g simply by choosing a large enough implicit constant (depending only on ). Moreover, we fix a homomorphism _2() → H that projects non-trivially onto any simple factor of H. We write U< H for the image of {[ 1 ∗; 0 1 ]} < _2(). We pick a unit vector ∈ so that U= {u_t:t ∈} with u_t = exp(t). The above assumptions imply that U is not contained in any normal -subgroup of . Given any H-invariant subspace V ⊂ there exists a H-invariant complement W i.e. a subspace with V ⊕ W =. 
One can show[Indeed, decomposing = ⊕_i ∈ℐV_i into irreducible subrepresentations it is easy to see that for any H-invariant subspace V there is a subset ℐ' such that W = ⊕_i ∈ℐ'V_i has the required property.] that there is a constant >0 such that for any H-invariant subspace V there exists an H-invariant complement W with v+w^2 ≥(v^2+w^2) for any v ∈ V and w ∈ W. We shall call such an H-invariant complement undistorted. Throughout this article, we shall choose undistorted H-invariant complements for Lie subalgebras containing . As opposed to <cit.>, no information is known on and in particular, the complement cannot be chosen -invariant. §.§.§ Notations regarding implicit constants For two positive quantities A,B (such as real valued positive functions) we write A ≪ B if there exists a constant c>0 such that A ≤ cB. If we want to emphasize the dependency of the implicit c on another quantity, say m, we write A ≪_m B. Throughout this article, most implicit constants are allowed to depend on or () and this dependency is usually omitted. Similarly, we write A^⋆ to mean A^k for an absolute implicit constant k>0 which is allowed to depend on . §.§ Height in the cusp and Sobolev norms Define for any x = Γ g ∈ X the height in the cusp (x) = sup{v^-1: v ∈(g^-1)_ non-zero}. As mentioned earlier, this height is not to be confused with the notions of arithmetic heights defined e.g. in <ref>. By Mahler's compactness criterion the height function : X →_>0 is continuous and proper. Let X_η for any η>0 be the set of points of height at most η^-1. There is >0 such that a uniform injectivity radius on X_η is given by η^- for η sufficiently small. Moreover, by <cit.> there exist constants >0 (depending on ) and >0 (depending on the geometry of X) such that for any x ∈ X there exists a representative g ∈ G with g≤(x)^. We refer to <cit.> for a sharper understanding of the implicit constant in terms of G and Γ. Lastly, we remark that (xg) ≪g(x) for any x ∈ X, g ∈ G. Fix an orthonormal basis of and define for any d ≥ 0 an L^2-Sobolev norm _d of degree d on (X) via _d(f)^2 = ∑_𝒟((·)+1)^d 𝒟f^2 where 𝒟 runs over all monomials of degree ≤ d in the orthonormal basis of . We record some properties of these Sobolev norms proven in <cit.>: * There exists ≥(G) so that for all f ∈(X) and d ≥ f_∞≪_d(f). * For any d >, g ∈ G and f ∈(X) f-g.f_∞≪(e,g)_d(f). * There exists >0 so that for any d ≥ 0, g ∈ G, and f ∈(X) _d(g.f) ≪_d g^ d_d(f). * For any f_1,f_2 ∈(X) _d(f_1f_2) ≪_d _d+(f_1)_d+(f_2). * For any d there exists d' > d such that (_d'|_d)< ∞. We shall not use the relative trace estimate in <ref> explicitly, but mention here that it plays a crucial role in the proofs of the effective ergodic theorems in Propositions <ref> and <ref>. We refer to <cit.> for a thorough discussion influenced by work of Bernstein and Reznikov <cit.>. §.§ Almost-invariant measures Let μ be a probability measure on X. Given g ∈ G write μ^g for the measure given by μ^g(f) = ∫ f(xg) μ(x) for f ∈(X). The following notion was introduced in <cit.>. We say that μ is ε-almost invariant (with respect to _d) under g ∈ G if for all f ∈(X) |μ^g(f)-μ(f)| ≤ε_d(f). Moreover, μ is ε-almost invariant under Z ∈ if it is ε-almost invariant under exp(tZ) for |t|≤ 2. It is ε-almost invariant under a Lie subalgebra ⊂ if it is ε-almost invariant under all Z ∈ with Z≤ 1. Note that for d ≥+1 any measure μ is trivially ≪ε-almost invariant under any g ∈ G with (g,e)≤ε by <ref>, hence almost invariance is delicate for small elements of the group. 
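Indeed, writing 𝒮_d for the L^2-Sobolev norm and d(·,·) for the metric fixed above, and setting (g.f)(x) = f(xg), the bound ‖f - g.f‖_∞ ≪ d(e,g) 𝒮_d(f) from <ref> yields, for any g ∈ G with d(g,e) ≤ ε and any f ∈ C_c^∞(X),

|μ^g(f) - μ(f)| = |∫ (f(xg) - f(x)) dμ(x)| ≤ ‖g.f - f‖_∞ ≪ d(e,g) 𝒮_d(f) ≤ ε 𝒮_d(f).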
The following elementary property is verified in <cit.> and allows one to pass from almost invariance under an element of G to almost invariance under an element of the Lie algebra. * If μ is ε-almost invariant under exp(Z)∈ and 1 ≤ c ≤ 2Z^-1, then μ is ≪ (cε+Z) almost-invariant under cZ. Note that the analogous statement in an S-arithmetic setup (see e.g. <cit.>) fails completely as small elements of (_p) for a prime p cannot be iterated to become of size ≈ 1. The effective generation results established in <cit.> imply the following proposition. There exists a constant >0 depending on H with the following property. Let ⊂ be a Lie subalgebra containing and let be an undistorted H-invariant complement to . Let μ be an H-invariant probability measure on X. Suppose that μ is ε-almost invariant (with respect to _d) under and under a unit vector Z ∈. Then there exists a constant c_1(d) and a Lie subalgebra _∗⊃ with (_∗)>() so that μ is c_1(d)ε^-almost invariant under _∗. Follows from the proof of <cit.>, see also footnote 21 therein. We may and will assume that any proper intermediate Lie algebra ⊂⊊ is not normal in . To that end, note that if H has a periodic orbit in X some conjugate gHg^-1 is defined over (or, more precisely, is equal to ()^+ for some -subgroup ∈). Hence, the intersection of Lie ideals of containing is defined over . But we assumed that no proper Lie ideal defined over contains . To summarize, if H has a periodic orbit in X there is no proper intermediate Lie ideal ⊂⊊ under our standing assumption on H. §.§ Exterior representations and heights of subgroups For any -subgroup < we define the -vector space = ⋀^()𝔤. The integral structure on induces an integral structure on . The Lie algebra of defines the line ⋀^() in and we choose a primitive integral vector in that line (uniquely determined up to signs). Lastly, we write for the (exterior) representation of on induced by the adjoint representation and : →, g ↦(g^-1).v_ for the (right-)orbit map. When it is clear from context, we write simply g.v = (g)v for the action. The height of the subgroup is defined to be () := where · is the Euclidean norm on induced by the choice of Euclidean norm on < 𝔰𝔩_N(). §.§.§ Discriminant of an orbit Recall that a connected -subgroup <_N is said to be of class if its radical is unipotent. Note that a -subgroup < is of class if and only if it has no nontrivial -characters. By a theorem of Borel and Harish-Chandra <cit.>, any orbit of the form Γ M g for M = () and g ∈ G thus possesses a unique g^-1Mg-invariant probability measure. In particular, Γ M g is closed. For ∈ a subgroup of and g ∈ G the discriminant of the orbit Γ() g is (Γ() g) := (g). Note that the discriminant depends only on the orbit as γγ^-1(γ g)= (g) for any γ∈Γ. The discriminant (Γ() g) for semisimple is comparable to the volume (Γ() g). We shall not need such a comparison in the current article and hence omit a more precise statement – see <cit.> for a discussion. § EFFECTIVE ERGODIC THEOREM Let μ be the H-invariant probability measure on a periodic H-orbit in X. In this section, we use the uniform spectral gap of the H-representation on L^2_0(μ) to exhibit an abundance of 'generic' points (a version of an effective ergodic theorem). While spectral gap is also used in the final step of the proof (Proposition <ref> below), the application of spectral gap is deepest here. 
This is in contrast to <cit.> where the gap was also used for an effective closing lemma for intermediate groups (over which the current article has no control). By Clozel's property (τ), there is a constant p_G>1 depending only on () such that the H-represen­tation on L^2_0(μ) is 1/p_G-1-tempered[Here, a unitary representation π of H is 1/m-tempered for m ∈ℕ if π^⊗ m is tempered or equivalently if there is a dense set of vectors 𝒱 such that for all v,w ∈𝒱 the matrix coefficient H ∋ h ↦⟨π(h)v,w⟩ is in L^2m+ε for all ε>0.] — see e.g. <cit.>. We set = 20(p_G+1). We define an effective notion of Birkhoff genericity following <cit.>. For any f ∈ C_c(X) and n>0 the discrepancy D_n(f) of f (with respect to U and μ) is given by D_n(f) = 1/(n+1)^-n^∫_n^^(n+1)^ f(xu_t) t - ∫ f μ. Given positive numbers k_1 < k_2 a point x ∈ X is said to be [k_1,k_2]-generic (for U and μ) with respect to _d if |D_n(f)(x)| ≤1n_d(f) for all n ∈ [k_1,k_2] ∩ and all f ∈ C_c^∞(X). We say that x is k_0-generic if it is [k_0,k_1]-generic for all k_1 > k_0. The following ergodic theorem is established in <cit.>. There is d_1≥ 1 (depending only on G,H,U and not on μ) so that for any d ≥ d_1 the μ-measure of the set of points which are not k_0-generic with respect to _d is ≪_d k_0^-1. We remark that the proof of Proposition <ref> does not use the centralizer assumption while its refinement for almost invariant measures <cit.> does. Indeed, this extension requires an effective generation result for large balls in intermediate Lie subgroups H < S < G (see also <cit.>). In the context of <cit.> these intermediate groups are automatically semisimple as opposed to the current article. Here, we use an adapted version which does not rely on the centralizer assumption, but follows by the same methods. Let d ≥ + 1. There exist β=β(d)>0, d' > d and ε_0>0 with the following property. Suppose that μ is ε-almost invariant under a Lie subalgebra ⊃ with respect to _d for some ε≤ε_0. Let Ω⊂ be a compact neighborhood of 0 contained in the unit ball. For any t_0 >0 and k_0>0, the fraction of points (x,t,Z) ∈ X × [0,t_0] ×Ω for which xu_texp(Z) is not [k_0,ε^-β]-generic with respect to _d' is ≪_d k_0^-1. Here, we take the Lebesgue measure on Ω. The proof is largely analogous to the proof of <cit.>, we give it for completeness. Invariance of μ under U implies for n ∈ 1/t_0(Ω) ∫_X× [0,t_0] ×Ω |D_n(f)(xu_texp(Z))|^2 μ(x) t Z = 1/(Ω)∫_X ×Ω |D_n(f)(xexp(Z))|^2 μ(x) Z = 1/(Ω)∫_X ×Ω |D_n(f)(x)|^2 μ^exp(Z)(x) Z. By ε-almost invariance μ under (w.r.t. _d) we have that |μ^exp(Z)(F)-μ(F)|≤ε_d(F) for all F ∈ C_c^∞(X) and Z ∈Ω. This implies for any Z ∈Ω ∫_X |D_n(f)(x)|^2 μ^exp(Z)(x) ≤∫_X |D_n(f)(x)|^2 μ(x) + ε_d(|D_n(f)|^2). By <ref> and <ref> _d(|D_n(f)|^2) ≪_d _d+(D_n(f))^2 ≪_d n^⋆_d+(f)^2. We obtain ε_d(|D_n(f)|^2) ≪ n^-4_d+(f)^2 for an appropriate choice of β and n ≤ε^-β. From here, one can conclude the proof as in <cit.> (increasing d). § EFFECTIVE AVOIDANCE RESULTS §.§ Diophantine points We recall the notion of Diophantine points introduced by Lindenstrauss, Margulis, Mohammadi, and Shah in <cit.> in slightly adapted notation. Fix a decreasing function : (0,∞)→ (0,1). A point x = Γ g∈ X is (,T)-Diophantine if for any proper non-trivial subgroup < of class we have ∧(g)≥((g)) whenever (g) < T. For convenience of the reader and to introduce constants we recall here results proven in <cit.>. We include the clarifiying dependencies of the constants while noting that they carry little importance within this article. 
There exist constants ≥ 1 and ∈ (0,1) depending on , >0 depending on and (), and >0 depending on , () and Γ with the following property. Let g ∈ G, let T≥ 0, let η∈ (0,1/2), and let t_1 < t_2. Suppose that for all s>0 (s) ≤1s^-η^. Then at least one of the following options hold: * We have |{t∈ [t_1,t_2]: Γ g u_t ∉X_η or Γ g u_t not (,T)-Diophantine}| < η^ |t_2-t_1|. * There exists a nontrivial proper subgroup < of class so that for all t ∈ [t_1,t_2] (gu_t) ≤ (g^+ T^) η^-, ∧(gu_t) ≤ |t_2-t_1|^- (g^+ T^) η^-. * There exists a nontrivial proper normal subgroup < with ∧≤(()^1/η/)^1/. We also record the following simple corollary to Theorem <ref>. As in the introduction, let (xH) be the smallest height in the cusp attained on a periodic H-orbit xH. There exist ≥ 1 depending on and >1 depending on , (), Γ and U with the following property. Let μ=μ_xH be the Haar probability measure on a periodic orbit xH= Γ g H. Let T ≥ 1, η∈ (0,1/2) and suppose that (s)≤1 s^-η^ Then at least one of the following two options holds. * We have μ( {y∈ X: y is not (,T)-Diophantine or y ∉X_η}) ≤η^. * There exists a proper subgroup 𝐌∈ of such that the orbit Γ() g contains xH and satisfies (Γ() g) ≤(xH)^ T^η^-. We assume that (·) satisfies (<ref>) for some sufficiently large ≥ to be determined in the proof. We wish to use <cit.> for a generic point on the H-orbit xH; we may assume without loss of generality that x = Γ g itself is Birkhoff generic with respect to U. Similarly, we may suppose that |g| ≤ 2 (xH)^ by (<ref>). Consider Theorem <ref> for intervals [0,t'], t' ∈, and the point x. Then one of the three options <ref>–<ref> occurs infinitely often and by changing to a subsequence t_ℓ of the integers we assume that it always occurs. If <ref> holds for all intervals [0,t_ℓ], we may pass to the limit as ℓ→∞ to obtain (1) (recall that x is Birkhoff generic). Suppose now that for any ℓ there exists a nontrivial nonnormal subgroup 𝐌_ℓ∈ as in <ref>. As the height is bounded independently of ℓ and for any m>0 there are finitely many -groups of height at most m, we may pass to a subsequence and assume = _ℓ for all ℓ. We have for all t ∈ [0,t_ℓ] (gu_t) ≤ (g^+ T^) η^- ∧(gu_t) ≤ t_ℓ^- (g^+ T^) η^- for all t ∈ [0,t_ℓ]. Letting ℓ→∞ we obtain ∧η_(g)=0 from (<ref>) or equivalently U ⊂ g^-1() g. This shows that xU ⊂Γ() g and hence xH ⊂Γ() g by taking the closure. In particular, H ⊂ g^-1() g and the second case of the proposition holds with the group by (<ref>). Suppose Option <ref> in Theorem <ref> holds. In particular, there is a proper normal subgroup 𝐌 = 𝐌_ℓ with ∧≤((𝐌)^1/η/)^1/. Notice that ∧'≠ 0 for any proper normal subgroup '. Indeed, if ∧' = 0 then U and all its conjugates by elements of H are contained in '(). But the conjugates of U generate H and so H ⊂'() contradicting our assumption in <ref>. Thus, choosing large enough with C_4^-1≪min_≠'∧^⋆ we contradict (<ref>) and hence conclude. In the remainder of this article, we will consider functions (·)=_η(·) of the form _η(s) = 1 s^-η^ for η∈ (0,1/2). For later convenience, we also define (xH) = _(xH) = inf{(Γ() g): ≠𝐌∈ with xH ⊂Γ() g} to abbreviate the second case in the above corollary. Thus, if T < ^-1/(xH)^1/η/(xH) the existence of many (ψ_η,T)-Diophantine points on the H-orbit xH is guaranteed by Corollary <ref>. §.§ Effective closing lemma This paper relies heavily on the following effective closing lemma due to Lindenstrauss, Margulis, Mohammadi, Shah, and the author from <cit.>. For a subalgebra < we write v̂_ for the point defines on the projective space ℙ^()(). 
We equip ℙ^()() with the metric (·,·) (the Fubini-Study metric) given by (v̂,ŵ) = min{v+w,v-w} for unit vectors v ∈v̂,w∈ŵ. There exist constants ,>1 depending only on N, and E>0 depending on N,G,Γ with the following property. Let < be a non-normal subalgebra. Assume τ>0, T>R>Eτ^-. Let x = Γ g ∈ X_τ be a point. Suppose that there exists a measurable subset ℰ⊂ [-T,T] with the following properties: * |ℰ|>TR^-1/. * For any s,t∈ℰ there exists γ_st∈Γ with u_-sg^-1γ_stgu_t≤ R^1/ (u_-sg^-1γ_stgu_t. v̂_, v̂_)≤ R^-1. Then one of the following is true: * There exist a nontrivial proper subgroup ∈ so that the following hold for all t ∈ [-T,T]: η_(gu_t) ≤ R^ ∧η_(gu_t) ≤ T^-1/R^ * There exist a nontrivial proper normal subgroup ∈ with ∧ ≤ R^-1/. We will use the effective closing lemma only in the weaker form of the following corollary. There exists >0 depending only on and >0 depending on N,G,Γ with the following property. Let < be a subalgebra. Let η∈ (0,1/2) and T ≥η^-. Let x ∈ X_η be (ψ_η,T)-Diophantine. Suppose that ℰ⊂ [0,T] is a measurable subset satisfying the following: * |ℰ| > T^1-1/ * For any s,t ∈ℰ there exist g_st∈ G with |g_st| ≤ 2, xu_s = xu_tg_st, and (g_st.v̂_, v̂_) ≤ T^-1. Then is a Lie ideal. We apply Theorem <ref> with τ = η and R = T^θ for some small θ to be determined. In view of our assumptions, we may assume R is bigger than any given fixed constant below. We assert that (a) in Theorem <ref> is satisfied by choosing > / θ. Moreover, writing x = Γ g and using the assumptions there is for any s,t ∈ℰ a lattice element γ_st∈Γ with γ_stgu_s = gu_t g_st. In particular, (<ref>) and (<ref>) yield (b) in Theorem <ref>. By contradiction, assume that is not an ideal. Suppose conclusion (1) in Theorem <ref> holds for some ∈ℋ. In particular, T^-⋆R^⋆≥∧η_(g)≥ψ_η(η_(g)) ≫η^⋆ R^-⋆ and so T ≪η^⋆R^⋆ which is a contradiction in view of our assumptions and for θ sufficiently small. Suppose conclusion (2) in Theorem <ref> holds for some . Similarly to the previous case R^-⋆≫∧≥ψ_η(()) ≫η^⋆(max_() )^-⋆≫η^⋆ where we used that has finitely many normal subgroups (see Lemma <ref> for a more explicit estimate). This proves the corollary. § ADDITIONAL ALMOST INVARIANCE In the following, we use polynomial divergence of U-orbits to show that transversal generic points yield additional almost invariance (see <ref> for an introductory discussion). The following proposition is a refinement of <cit.> to include almost centralized displacements. Let k_0,d ≥ 1 and ε>0. Let μ be the Haar probability measure on a periodic H-orbit and assume that μ is ε-almost invariant under an intermediate Lie algebra ⊂⊂ (with respect to _d). Let ⊂ be an undistorted H-invariant complement to . Suppose that there exist two points x_1,x_2∈ X satisfying that * x_2 = x_1 exp(r) for some r ∈ with ε^1/4≤r≤ 1, * and x_1,x_2 are [k_0,ε^-1]-generic (with respect to _d). Then μ is ≪_d k_0^⋆r^⋆-almost invariant (with respect to _d) under some Z ∈ with Z = 1. In view of the conclusion, we may and will assume k_0 ≪ε^A for some absolute large constant A>0. Indeed, μ is trivially ≪ 1-almost invariant under any vector in of norm 1 by <ref>. To simplify notation, we set t_0 = k_0^ (following the definition of the discrepancy operator in Definition <ref>). We prove the proposition by case distinction according to whether or not sup_t ∈ [t_0,ε^-]_u_-t(r)-r≤ 1. Note that (in any orthonormal basis of 𝔯 any component of) p(t) = _u_-t(r)-r is a polynomial in t with p(0) = 0; there is no control on the speed of growth of p(·) (from below) e.g. 
since r could be U-invariant. Suppose first that (<ref>) holds. By a fundamental property of polynomials, we have (using p(0) = 0) for any t_1 ≤ε^- sup_t ∈ [t_0,t_1]p(t)≪ t_1 ε^. By the genericity assumption, we have for any n ∈ [k_0,ε^-1], f ∈(X) and i=1,2 |μ(f) - 1/(n+1)^-n^∫_n^^(n+1)^ f(x_iu_t) t| ≤1/n_d(f). Also, note that f(x_2u_t) = f(x_1exp(r)u_t) = f(x_1u_texp(_u_-t(r)) =f(x_1u_texp(r + p(t))). Notice that exp(r+p(t)) = exp(r)(I+O(p(t))) so that for all t ∈ [0,2ε^-/2] using <ref> f(x_2u_t) = f(x_1u_t exp(r+p(t)) = f(x_1u_texp(r)) + O(ε^/2_d(f)). Thus, we have for n ∈ [k_0,ε^-1/2] μ(f) = 1/(n+1)^-n^∫_n^^(n+1)^ f(x_2u_t) t + O(1n_d(f)) =1/(n+1)^-n^∫_n^^(n+1)^ f(x_1u_texp(r)) t + O(1n_d(f) + ε^/2_d(f)) =μ(exp(r).f) + O(1n_d(f) + ε^/2_d(f)) where we used _d(exp(r).f) ≪_d(f) (cf. <ref>) in the last step. We now choose n = ⌊ε^-1/2⌋ and obtain that μ is ≪ε^1/2-almost invariant under exp(r). Letting Z = r/r we deduce from <ref> that μ is ≪(1/rε^1/2 + r)-almost invariant under Z. If (<ref>) holds, μ is ≪r-almost invariant under Z. This proves the proposition assuming that (<ref>) holds. We suppose now that (<ref>) fails. In this case, one can proceed exactly as in <cit.>. Define T = 12inf{t ∈ [t_0,ε^-]: p(t)≥} where >0 is as in (<ref>) below. Since the coefficients of p are bounded in terms of r, we have T ≫r^-⋆. The polynomial map q(s) = p(sT) satisfies sup_s∈ [0,2]q(s) = and sup_s∈ [0,2]q'(s)≪ 1. Now let n ∈ be such that n ∈ [T^1/, (2T)^1/-1]. In particular, n ≥ T^1/≫r^-⋆. Also, n ≤ (2 T)^1/≤ε^-1. For any t,t_0∈ [n^,(n+1)^] and f ∈(X) we have f(x_2u_t) = f(x_1 u_t exp(r + p(t)) = f(x_1 u_t exp(p(t))) + O(r_d(f)) = f(x_1u_t exp(p(t_0))) + O((r+ T^-1/)_d(f)) where we applied (<ref>) in the second step and <ref> together with |t-t_0|≪ n^-1≪ T^1-1/ in the third. Overall, we obtain by definition of genericity |μ^exp(q(s_0))(f)-μ(f)| ≪ (r+ T^-1/)_d(f) ≪r^κ_d(f) for s_0 = t_0/T and some absolute κ>0. This proves that μ is ≪_d r^κ-almost invariant under exp(q(s)) for s ∈ [1/2,2]. We shall need at this point a replacement for <cit.>. We claim that (because is undistorted) there is >0 (not depending on or ) so that the following is true: for any r_1,r_2 ∈𝔯 with r_1,r_2≤ one can write exp(r_1)exp(r_2)^-1 = exp(Z_)exp(Z_) for some Z_∈𝔯 and Z_∈ so that Z_≤r_1-r_2,r_1-r_2≤Z_≤1r_1-r_2. Indeed, the derivative of the map Φ:(r,s)∈×↦log(exp(r)exp(s)) at the origin is the identity and, as can be seen from the Baker-Campbell-Hausdorff formula, D_u Φ -id≪u using that is undistorted. This implies (<ref>). Set s'= r^κ and suppose that s,s+s' ∈ [1/2,2] are such that q(s)-q(s+s')≫ s'. As μ is ≪_d r^⋆-almost invariant under exp(q(s)) and exp(q(s+s')), it is also r^⋆-almost invariant under exp(q(s+s'))exp(-q(s)). Applying (<ref>) for v_1 = q(s+s') and v_2 = q(s) and using ε-almost invariance under we obtain that μ is ≪r^κ-almost invariant under exp(Z) for some w ∈ with Z≍r^κ/2. The proposition now follows from <ref> rescaling Z to norm 1. § EXISTENCE OF TRANSVERSAL GENERIC POINTS The purpose of this section is to establish the existence of two generic points with displacement transversal to an arbitrary intermediate Lie algebra ⊂⊊. Such a pair of points gives rise to additional almost invariance via Proposition <ref> (see also <ref>). Let d ≥ 1. There exist constants >1 (depending on ), k_0=k_0(d)≥ 1, d'>d, and >1 (depending on G,Γ,U) with the following property. Let μ be the Haar probability measure on a periodic orbit xH. Let T ≥ with T ≤(xH)^1//(xH). 
Suppose that μ is ε-almost invariant under an intermediate Lie algebra ⊋⊃ with respect to _d and let be an undistorted H-invariant complement to . Then there exist two points x_1,x_2 ∈ X with the following properties: * x_1,x_2 are [k_0,ε^-β(d)]-generic with respect to _d'. * There exists r ∈ so that x_2= x_1 exp(r) and T^-≤r≤ T^-1/ We refer to <ref> for an outline of the proof which, in particular, relies on the effective closing lemma in the work <cit.> with Lindenstrauss, Margulis, Mohammadi and Shah (see also Theorem <ref>) and the effective ergodic theorem in Proposition <ref>. The rest of the section is dedicated to the proof of Proposition <ref>. We fix a small neighborhood Ω⊂ of zero to be specified later. Let d ≥ 1 be arbitrary and let d'>d, β = β(d)>0 be as in Proposition <ref>. The following technical lemma constructs a set of points in X with good properties. There exist k_0 ≥ 1 depending on d as well as η_0 >0 depending on Γ, U with the following property. Let = _η_0 be as defined in (<ref>) and let T>0 with η_0^-≤ T ≤^-1/η_0 (xH)^1//(xH). Then there exists a non-empty subset X_good⊂ X with the following properties: * For any x ∈ X_good the measure of the set of t ∈ [0,T] such that ({Z ∈Ω: xu_texp(Z) is [k_0,ε^-β]-generic w.r.t. _d'}) < 9/10(Ω) is at most 10^-10T. * Any x ∈ X_good is (,T)-Diophantine. * For any x ∈ X_good |{t ∈ [0,T]: xu_t ∉X_η_0}| ≤ 10^-10T. The lemma merely combines already established results. We begin by finding a `large' set of points as in (i). For any x ∈ X and t ∈ [0,T] set f(x,t) = ( {Z ∈Ω: xu_t exp(Z) is not [k_0,ε^-β]-generic})/(Ω). By Proposition <ref> we have 1/T∫_0^T∫_X f(x,t) μ(x) t ≪ k_0^-1. By Chebyshev's inequality, this shows that 1/T∫_0^T f(x,t) t ≤ 10^-11 for x ∈ X outside of a set of μ-measure ≪ k_0^-1. For any such x, we have after applying Chebyshev's inequality again |{t ∈ [0,T]: f(x,t) ≥1/10}| ≤ 10^-10T. Choosing k_0 large enough, we can thus ensure the set of points x ∈ X satisfying (ii) has measure at least 9/10. We turn to the set of points as in (ii). Choosing η_0 small enough (depending on Γ and G) to ensure that η_0^≤ 10^-11 we obtain that the set of points as in (iii) has measure least 1-10^-11 by Corollary <ref> in view of our restrictions on T. For the set of points in (iii) we also apply Corollary <ref> to obtain that μ(X ∖ X_η_0) ≤ 10^-11. This implies by U-invariance of μ ∫_X |{t ∈ [0,T]: xu_t ∉X_η_0}|/Tμ(x) ≤ 10^-11. In particular, the set of points x ∈ X with |{t ∈ [0,T]: xu_t ∉X_η_0}| ≥ 10^-10 has measure at most 1/10. We have shown that the set of points as in (i), (ii), and (iii) has μ-measure at least 7/10 which proves the lemma. Throughout the proof of Proposition <ref>, we shall take k_0, η_0, , and X_good to be as in Lemma <ref>. Moreover, T>0 is assumed to be larger than a fixed large constant and is taken to satisfy (<ref>). Fix a point x ∈ X_good and write 𝒯⊂ [0,T] for the set of times t ∈ [0,T] with the following properties: * xu_t ∈ X_η_0. * The set of Z ∈Ω such that xu_texp(Z) is [k_0,ε^-β]-generic has measure at least 9/10(Ω). By Lemma <ref>, |𝒯| ≥ (1-10^-9)T. Fix small neighborhoods Ω_⊂Ω,Ω_ of zero in resp. . Each of these neighborhoods is assumed to be a small ball around zero with respect to · where the radius is chosen independently of or . We may assume that exp(Ω_)exp(Ω_) is an injective set on the compact set X_η_0. We denote for any y ∈ X_η_0 ϕ_y: (Y_1,Y_2) ∈Ω_×Ω_↦ yexp(Y_1)exp(Y_2). 
We may assume that ϕ_y is a diffeomorphism onto its image Ω(y) := yexp(Ω_)exp(Ω_) where the derivative is uniformly close to the identity independently of and (using the argument after (<ref>) based on being undistorted). By the pigeonhole principle, we may fix y∈ X_η_0 such that |{t ∈𝒯: xu_t ∈Ω(y) }| ≫ T. Write ϕ=ϕ_y for simplicity. Fix κ∈ (0,1) to be determined in the course of the proof. We set for R >0 Ω_(R) := {Y ∈Ω_: Y≤ R^-κ}, Ω_(R) := {Y ∈Ω_: Y≤ R^-2}, B_Y(R) := Y +Ω_(R)+Ω_(R). There is a constant C>1 such that if Ω_,Ω_ are small enough, we have that for any Y ∈Ω_+ Ω_ ϕ(B_Y(R)) ⊂ y'exp(Ω_(CR)) exp(Ω_(CR)) =: Ω(y',CR) for y' = y exp(Y). Furthermore, we may assume that for any y”∈Ω(y',R) Ω(y',R) ⊂Ω(y”,CR). The set of t ∈𝒯 with xu_t ∈ϕ(B_Y(T)) has measure at most T^1-1/(2) for all Y ∈Ω_+ Ω_. To prove the claim, we use the effective closing lemma in the form of Corollary <ref>. Suppose that there exists Y ∈Ω_+ Ω_ so that the set ℰ of times t ∈𝒯 with xu_t ∈ϕ(B_Y(T)) has measure at least T^1-1/(2). For any s,t ∈ℰ we have xu_s,xu_t∈Ω(yexp(Y),CT) by (<ref>). Hence, xu_s∈Ω(xu_t,C^2T) by (<ref>) and there exist X^_st∈, X^_st∈ so that xu_s = xu_texp(X^_st)exp(X^_st) with X^_st≤ (C^2T)^-2, X^_st≤ (C^2T)^-κ. Setting g_st= (exp(X^_st)exp(X^_st))^-1, Corollary <ref> applies whenever T is sufficiently large. But note that is not an ideal (cf. Remark <ref>) which yields a contradiction and proves the claim. Take a cover of Ω(y) by 'boxes' of the form ϕ(B_Y(T)) with O(1) overlaps. The assumption implies that ≫ T^1/(2) of the boxes in that cover contain a point of the form xu_t for t ∈𝒯. Note that the partitioning is chosen not to be `too fine' in the directions of . Indeed, Ω_ is covered by ≪ T^κ() translates of Ω_(T). Thus, ≫ T^1/(2)-κ() many boxes in the direction are reached by the U-orbit. In particular, we may find two times t_1,t_2 ∈𝒯 and Y_1 = Y_1^ + Y_1^, Y_2 = Y_2^+ Y_2^ such that the points x_1 = xu_t_1 = y exp(Y_1^) exp(Y_1^), x_2 = xu_t_2 = y exp(Y_2^) exp(Y_2^) satisfy T^-2≤Y_1^-Y_2^≤ T^-1/2()(1/(2)-κ()). We take κ = (4())^-1. As in <cit.>, we now perturb x_1,x_2 in the -direction. For i=1,2 we denote by W_i the set of Z ∈Ω such that x_iexp(Z) is [k_0,ε^-β]-generic. By construction, we have (W_i) ≥9/10(Ω). As these sets W_i have sufficiently large measure, one may argue as in <cit.> to find Z_i ∈ W_i for i=1,2 such that x_2 exp(Z_2) = x_1 exp(Z_1) exp(Y) for some Y ∈ with T^-2≪Y≪ T^-1/2()(1/(2)-κ()) (assuming that Ω_,Ω,Ω_ are sufficiently small). Thus, the points x_1 exp(Z_1) and x_2exp(Z_2) satisfy the requirements of the proposition and we conclude. § PROOF OF THEOREM <REF> We prove Theorem <ref> by a simple induction process: Starting with _0 = we can apply Proposition <ref> to obtain two generic points with displacement transversal to . Using these two points we obtain that μ is almost invariant under a larger subalgebra _1 ⊋ by Proposition <ref> and Proposition <ref>. We repeat this procedure using _1. After at most (G) steps, one obtains that μ is almost invariant under . (Note that the degree of the Sobolev norm _d needs to be increased at every step.) The following proposition then implies the theorem. There is >0 with the following property. Let μ_xG^+ be the G^+-invariant probability measure on xG^+ ⊂ X. Suppose that μ is a probability measure on xG^+ which is ε-almost invariant under with respect to _d for some d≥ 1. Then |μ_xG^+(f) -μ(f)|≪_Γ,dε^/d_d(f). We may assume that C (xH) ≤(xH)^1/2 for some large constant C>0 where the notation (xH) was introduced in (<ref>). 
Indeed, the theorem follows easily from <ref> otherwise. We proceed recursively as outlined above. Start with _0 = so that μ is ε-almost invariant under _0 for any ε>0 by definition (for any choice of Sobolev norm, say _d_0). Let be an undistorted H-invariant complement to . Apply Proposition <ref> with T = (xH)^1/2. Thus, there exists d_1 ≥ d_0 and two [k_0(d_0),ε^-β(d_0)]-generic points x_1,x_2 (with respect to _d_1) with x_2 = x_1 exp(r) for some r ∈ such that T^-≤r≤ T^-1/. By Proposition <ref>, μ is ≪ T^-⋆-almost invariant under some unit vector Z ∈ (with respect to _d_1). Therefore, by Proposition <ref> the measure μ is ε_1-almost invariant under a Lie subalgebra _1⊃ (with respect to _d_1) of dimension (_1) > () for ε_1 ≪ T^-⋆≪(xH)^-⋆. One can proceed in this manner recursively; we limit ourselves to the second step for notational simplicity. Let _1 be an undistorted H-invariant complement to _1. Let T >0 be sufficiently large satisfying T ≤(xH)^1/2. By Proposition <ref> there exists d_2 ≥ d_1 and two [k_0(d_1),ε_1^-β(d_1)]-generic points x_1,x_2 (with respect to _d_2) with x_2 = x_1 exp(r) for some r ∈_1 such that T^-≤r≤ T^-1/. We choose T to be a sufficiently small power of ε_1^-1 so that (<ref>) is satisfied. By Proposition <ref>, μ is ≪ε_1^⋆-almost invariant under some unit vector Z∈_1. By Proposition <ref> the measure μ is ε_2-almost invariant under a Lie subalgebra _2⊃ of dimension (_2) > (_1) for ε_2 ≪ε_1^⋆≪(xH)^-⋆. Continuing like this proves that μ is ≪(xH)^-⋆-almost invariant under and the theorem follows from Proposition <ref>. § EXTENSIONS A priori, the function (xH) measuring the minimal complexity of all intermediate orbits for a periodic orbit xH is hard to estimate (note that Theorem <ref> is equivalently phrased with a polynomial rate in (xH)). Indeed, it appears to require a classification of all subgroups of class containing gHg^-1 for g ∈ G with Γ gH periodic. In the following, we aim to reduce the set of intermediate groups. Let be a -subgroup of and let A>1, C>1. We say that a subcollection ' ⊂ is (A,C)-exhaustive for if the following property holds: whenever is contained in a proper -subgroup ∈ of with () ≤ T then is also contained in a proper -subgroup ' ∈' with (') ≤CT^A. A subcollection ' ⊂ is exhaustive for if it is (A,C)-exhaustive for some A>1, C>1. We shall give concrete examples of exhaustive subcollections in <ref> below. If xH is a periodic orbit in X, the Borel-Wang Theorem determines a Γ-conjugacy class of semisimple -groups: the Γ-conjugacy class of the Zariski-closure of gHg^-1 when x = Γ g. We say that a subcollection ℋ'⊂ℋ is (A,C)-exhaustive for xH if it is (A,C)-exhaustive for any element of that conjugacy class. Lastly, we define in analogy to (<ref>) _'(xH) = inf{(Γ() g): ≠𝐌∈' with xH ⊂Γ() g}. The following theorem extends Theorem <ref>. There exists d ≥ 1 depending on G and H with the following properties. Let xH = Γ g H be a periodic orbit and let '⊂ be an (A,C)-exhaustive Γ-invariant subcollection for xH. Then there exists δ>0 depending only on A,G,H such that | ∫_xH f μ_xH - ∫_xG^+ f μ_xG^+| ≪_'(xH)^-δ(xH) _d(f) for all f ∈ C_c^∞(X). Here, the implicit constant depends on G, H, Γ, and C. By Definition <ref>, we have (xH) ≤_'(xH) ≪_C(xH)^⋆(xH)^⋆. We may assume that (xH) ≤θ_ℋ'(xH)^δ' for a fixed δ'>0 (else the theorem follows from <ref>). Thus, | ∫_xH f μ_xH - ∫_xG^+ f μ_xG^+| ≪_(xH)^-δ(xH) _d(f) ≪_'(xH)^-⋆δ(xH)^⋆_d(f) implies Theorem <ref> by choosing δ'>0 sufficiently small. 
§.§ Examples of exhaustive subcollections In this subsection we shall give a few examples of exhaustive subcollections with a focus on the setup in Theorem <ref>. §.§.§ General examples Given any -subgroup < we write _ for the normalizer subgroup and ^ℋ for the largest subgroup of which has class . Then (_),(^) ≪()^⋆; see <cit.>. The following lemma is elementary. For any nonnormal subgroup ∈ of there exists a connected nonnormal -subgroup ⊂ containing with () ≪()^⋆ and _^ ∘ = so that one of the following is true: * is a parabolic -group. * is the centralizer of a -torus. * is semisimple with finite centralizer. In particular, the collection of subgroups ^ for as in (i), (ii), or (iii) together with all normal subgroups of is an exhaustive subcollection for . We remark that a centralizer of a -torus is always a Levi subgroup of a parabolic subgroup defined over but in general such a parabolic subgroup cannot be chosen to be defined over . Suppose first that the unipotent radical _ of is non-trivial. We define _0 = and inductively _j = ___j-1^∘ for j ≥ 1. We obtain an increasing sequence of connected -subgroups = _0 ⊂_1 ⊂_2 ⊂… The sequence needs to stagnate after at most () steps and hence we obtain a connected -subgroup containing with () ≪()^⋆ and = __^∘. Such a subgroup must be parabolic <cit.>. Suppose now that _ is trivial and consider the nested sequence defined by _0 = and _j = __j-1^∘. If _j has non-trivial unipotent radical for some j, one can apply the previous step[Alternatively, one can use the fact that the centralizer of a reductive -group is always reductive. ]. Also, note that _j is never a normal subgroup of . Indeed, otherwise _i is a product of some of the simple factors of for every i ≤ j by backwards induction contradicting the non-normality assumption on . If _j has trivial unipotent radical for every j, there is a connected reductive -group containing with () ≪()^⋆ and with = ^∘. If the center of is finite, we are done. Otherwise, the centralizer[Note that ' is not necessarily equal to . For example, take to be the product of the block-diagonally embedded copy of _2 in _4 with its centralizer.] ' of contains and satisfies (') ≪()^⋆. This concludes the lemma. When is given as product of -almost simple groups, one could wish for an exhaustive subcollection of subgroups in product form. Theorem <ref> in this case is particularly useful when trying to establish disjointness. For simplicity, we only state the following lemma for two factors; as it is not used in the remainder of the article, we omit the proof. Suppose that = _1 ×_2 where _1, _2 are -almost simple simply connected -groups. Consider the subcollection ' ⊂ of -groups < of one of the following two forms: * = _1 ×_2 for _1 < _1 or = _1 ×_2 for _2 < _2. * is the graph of an isomorphism _1 →_2. Then ' is exhaustive for any < of class . §.§.§ Special orthogonal groups The aim of this section is to determine an exhaustive subcollection used in the proof of Theorem <ref>. For that purpose, let Q be a nondegenerate[Here, Q is non-degenerate if there is no non-zero v ∈^n with (v,w)_Q = 0 for all w ∈^n.] rational quadratic form on ^n for n ≥ 4 and let = _Q. For any subset ℬ⊂^n we set _ℬ = {g∈_Q: g.w = w for all w ∈ℬ}. Whenever L ⊊^n is a subspace with Q|_L non-degenerate, the group _L≃_Q|_L is semisimple if (L)≤ n-3 and is a torus if (L) = n-2. It is a maximal (connected) subgroup if and only if (L)=1 (this goes back to Dynkin <cit.>). Let L ⊂^n be a subspace of dimension at most n-3. 
Assume that Q|_L is positive definite over . The subcollection ℋ_L' = {_v: 0 ≠ v ∈ L} is (A,C)-exhaustive for _L for some (A,C) depending only on n. We will use the following lemma. Let < _ be a semisimple -subgroup. Then (') ≪_()^⋆ for any normal -subgroup ' ◃. We realize any -simple Lie ideal ' ◃ by integral linear equations with controlled coefficients. To that end, let be the centralizer of () in () where we use coordinates induced by a basis of of integral vectors of norm ≪()^⋆ (which exists by Siegel's lemma). Any Lie ideal ' ◃ (defined over ) is -invariant and, if ' is absolutely simple, the action of on ' is by scalars (in particular, is a -torus). Conversely, if _α is a weight space for a weight α of , then _α is an absolutely simple ideal. Let 𝒜 be the set of weights of the -representation on . The absolute Galois group 𝒢 = (/) acts on 𝒜 and we set _𝒢.α = ⊕_α' ∈𝒢.α_α'. By construction, _𝒢.α⊂ is a -simple Lie ideal and any -simple Lie ideal is of this form. Observe that () ≪()^⋆ and fix an integral basis X_1,…, X_t of () with X_i≪()^⋆. For any α∈𝒜 the eigenvalue α(X_i) of X_i is an algebraic integer and satisfies |α(X_i)|≪X_i^⋆≪()^⋆. By construction, _𝒢.α = {v ∈: ∏_α' ∈𝒢.α(X_i-α'(X_i))v = 0 for all i} and so any -simple ideal is defined by integral linear equations with coefficients of size ≪()^⋆. Taking direct sums the same is true of any ideal defined over . This implies the lemma. Let < be a -group of class containing _L. If n =4, _L is maximal and _L = i.e. there is nothing to prove. So we assume n ≥ 5 throughout and, in particular, = _Q is absolutely almost simple. By Lemma <ref> we may assume that is a subgroup as in (i)–(iii) of that lemma. We begin first by observing that (i) cannot occur. Indeed, the centralizer of _L is equal to _L^⊥≃_Q|_L (up to finite index) and hence -anisotropic (as L is assumed positive definite). Suppose now that is the centralizer of a -torus as in (ii). In particular, commutes with _L which implies that preserves L and the orthogonal complement L^⊥. In fact, since _L acts irreducibly on L^⊥, fixes L^⊥ pointwise. The subspace V = {v ∈^n: t.v= v for all t ∈()} thus contains L^⊥ and the restriction of Q to V^⊥⊂ L is positive-definite over . Since () ≪()^⋆, V and V^⊥ are defined by integral linear equations with coefficients of size ≪()^⋆. In particular, by Siegel's lemma there exists a nonzero integral vector v∈ V^⊥⊂ L with v≪()^⋆. By construction, (_v)≪()^⋆ and _v ⊃_L which proves the lemma in this case. Lastly, suppose that ⊃_L is semisimple with finite centralizer as in (iii). The following argument roughly follows <cit.>. Let '⊂ be the subgroup generated by all -conjugates of _L. It is a normal subgroup defined over and hence (') ≪()^⋆ by Lemma <ref>. We may assume henceforth that = '. Assume first that does not act irreducibly in the standard representation. If V is a nontrivial -invariant subspace and m ∈(), then it is also m_L m^-1= _m.L-invariant. In particular, either V ⊂ m.L or V ⊃ (m.L)^⊥. As is connected, one of these options occurs for all m ∈(); after taking orthogonal complements we may assume the first. Thus, V ⊂ V' = ⋂_m ∈() m.L. Hence, the right-hand side V' is a nontrivial -invariant subspace and V' = {v ∈^n: m.v = v for all m ∈()}. Hence the argument follows as in the previous case. Lastly, assume that acts irreducibly. The argument in <cit.> implies that = and the proposition follows. The proof follows directly from Proposition <ref>, Theorem <ref>, and the fact that (xH) ≪ 1. 
The latter follows from compactness of the centralizer by work of Dani and Margulis <cit.> (see also <cit.>). amsplain
http://arxiv.org/abs/2407.12151v1
20240716201229
Revisiting AGN Placement on the BPT Diagram: A Spectral Decomposition Approach
[ "Hossen Teimoorinia", "Sara Shishehchi", "Finn Archinuk", "Joanna Woo", "Robert Bickley", "Ping Lin", "Zhonglin Hu", "Emile Petit" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.IM" ]
Hossen Teimoorinia hossen.teimoorinia@nrc-cnrc.gc.ca, hossteim@uvic.ca NRC Herzberg Astronomy and Astrophysics, 5071 West Saanich Road, Victoria, BC, V9E 2E7, Canada Department of Physics and Astronomy, University of Victoria, Victoria, BC, V8P 5C2, Canada Signify, Research US, 1 Van de Graaff Drive, Burlington, MA 01803, USA Department of Biomedical Engineering, University of Alberta, Edmonton AB, T6G 1H9, Canada. Department of Physics, Simon Fraser University, 8888 University Dr, Burnaby BC, V5A 1S6, Canada Department of Physics and Astronomy, University of Victoria, Victoria, BC, V8P 5C2, Canada Ciena, 5050 Innovation Drive, Ottawa, ON, K2K 0J2, Canada NRC Herzberg Astronomy and Astrophysics, 5071 West Saanich Road, Victoria, BC, V9E 2E7, Canada Department of Electrical and Computer Engineering, University of Victoria, 3800 Finnerty Road, Victoria BC, V8P 5C2, Canada Departement of Mathematics and Statistics, McGill University, 845 Sherbrooke Street West, Montreal, QC, H3A 0G4, Canada § ABSTRACT Traditional single-fibre spectroscopy provides a single galaxy spectrum, forming the basis for crucial parameter estimation. However, its accuracy can be compromised by various sources of contamination, such as the prominent  emission line originating from both Star-Forming (SF) regions and non-Star-Forming regions (NonSF), including Active Galactic Nuclei (AGN). The potential to dissect a spectrum into its SF and NonSF constituents holds the promise of significantly enhancing precision in parameter estimates. In contrast, Integral Field Unit (IFU) surveys present a solution to minimize contamination. These surveys examine spatially localized regions within galaxies, reducing the impact of mixed sources. Although an IFU survey's resulting spectrum covers a smaller region of a galaxy than single-fibre spectroscopy, it can still encompass a blend of heterogeneous sources. Our study introduces an innovative model informed by insights from the MaNGA IFU survey. This model enables the decomposition of galaxy spectra, including those from the Sloan Digital Sky Survey (SDSS), into SF and NonSF components. Applying our model to these survey datasets produces two distinct spectra, one for SF and another for NonSF components, while conserving flux across wavelength bins. When these decomposed spectra are visualized on a BPT diagram, interesting patterns emerge. There is a significant shift in the placement of the NonSF decomposed spectra, as well as the emergence of two distinct clusters in the LINER and Seyfert regions. This shift highlights the key role of SF `contamination' in influencing the positioning of NonSF spectra within the BPT diagram. § INTRODUCTION Galaxies are complex systems that require systematic classification to enable thorough analysis. These classifications include distinguishing between star-forming and non-star-forming (NonSF) activities, including AGNs, which are potent energy sources located at the centers of galaxies. They emit energy across a broad electromagnetic spectrum, from radio waves to X-rays <cit.>. NonSF galaxies are further subdivided into two categories <cit.> of Seyfert <cit.> and Low Ionization Nuclear Emission-Line Region (LINER) <cit.> galaxies. In these galaxies, hot and old stars play a significant role in producing low-ionization emissions. They are older, more massive, less dusty, less concentrated, and have higher velocity dispersions compared to Seyferts <cit.>. 
The emission from these galaxies can dominate the overall galactic emission, especially at certain wavelengths. This dominance is critical as NonSF emissions can obscure the wavelengths important for identifying star-forming activities, particularly in the ultraviolet and infrared spectra. These wavelengths are vital for accurately estimating the Star Formation Rate (SFR) <cit.>. Given that the spectra under study combine emissions from HII regions, pure Seyfert and LINER AGNs, and LINER-like emissions resulting from shocks or photoionization by old, UV-bright stars, we may use the terminology 'SF' to refer to star-forming regions and 'NonSF' for non-star-forming regions, which may also include true AGNs, in the context of our paper. Overall, these decomposition distributions show a significant narrowing of AGN density. Contamination of AGN spectra is well discussed in the literature, with a notable example being low ionization nuclear Emission-Line Region (LINER) galaxies. These galaxies are known for their diverse properties, which have been the subject of extensive studies (e.g., <cit.>; <cit.>; <cit.>; <cit.>, <cit.>). We refer readers to <cit.>, who provides evidence supporting the role of hot and old stars in generating low-ionization emissions in these galaxies, which can be considered a sort of contamination. Galactic research utilizes diverse data acquisition methodologies, such as photometric data collection and integrated single-fibre spectroscopy. The Sloan Digital Sky Survey (SDSS) (<cit.>; <cit.>) employs a spectrograph to capture galactic spectra. These spectra may not encompass the entire galaxy, highlighting the importance of the covering fraction. The spectrum that emerges is a composite of emissions from various parts of the galaxy, each rich in emission lines necessary for classifying galaxies based on their physical properties. These complex optical spectra can be dissected to isolate contributions from SF and NonSF activities, as demonstrated by <cit.>. The research delves into the challenging task of differentiating emissions caused by star formation from those arising from NonSF activities. Employing the superposition principle <cit.>, the study reveals that emissions from NonSF and star-forming regions within a galaxy can be viewed as distinct but overlapping waveforms, together forming the galaxy's overall emission spectrum. The superposition principle asserts that any resultant waveform at a given point, created by the overlap of multiple waves, is simply the algebraic summation of these individual waves. Consequently, the observed emission from a galaxy often represents a blend of contributions from both NonSF and SF regions. Accurately distinguishing these emissions is crucial for precise astronomical observations and interpretations due to their different mechanisms and inherent properties <cit.>. Regarding the application of the superposition principle, modelling a galaxy's spectrum becomes a complex task due to uncertainties and complexities related to star composition and formation. To streamline this process, researchers often utilize a linear superposition of `Simple Stellar Populations' (SSPs), which treats galaxies as assemblies of stars that share similar ages and chemical compositions. 
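Schematically, and suppressing dust attenuation and kinematic broadening, this amounts to writing the integrated flux as

F(λ) ≈ ∑_i w_i SSP_i(λ),

where each SSP_i is the spectrum of a population of a single age and metallicity and the non-negative weights w_i encode the star formation history. The decomposition pursued in this work applies the same principle at the level of the two emission components, treating an observed spectrum as F_obs(λ) = w F_SF(λ) + (1 - w) F_NonSF(λ) with 0 ≤ w ≤ 1.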
For example, <cit.> illustrates this approach by reconstructing a galaxy's integrated spectrum through a linear combination of individual stellar spectra from various types drawn from an extensive and diverse spectral library to ensure a broad representation of stellar characteristics. Additionally, the stellar initial mass function plays an important role in the formation of SSPs <cit.>. Regarding the above example, the spectrum of a galaxy, represented by N data points, can be viewed as a combination of SSPs, each weighted by an SFR. With a given integrated spectrum, the goal can be to identify the contributing SSPs (each with N data points) along with their respective weights (i.e., the SFRs). These kinds of tasks are analytically complex, as they involve solving numerous equations to determine the N data points of SSPs with the appropriate SFR weights. Alternatively, these kinds of challenging problems may be addressed using machine learning by providing a robust training set of integrated spectra and the contributing SSPs. <cit.> used a deep learning approach to estimate mass-weighted stellar age and metallicities of the SSPs that comprise galaxy spectra. A deep neural network can also be employed to adjust the model's parameters to find the reverse process. In this paper, we propose a method to reverse the procedure by using a data set that includes `pure' SF and NonSF galaxy spectra. We will demonstrate how to decompose a composite spectrum (with N data points) into two components: NonSF and SF, each with the same number of data points and corresponding weights. To classify NonSF and SF galaxies, Baldwin, Phillips, and Terlevich (BPT; <cit.>) introduce a classification system using emission line ratios. For galaxy classification via BPT diagrams, the theoretical and empirical demarcation lines are introduced by Kewley et al. (2001; K01) and Kauffmann et al. (2003; K03). K01 delineated the NonSF-dominated region, while K03 defined the SF galaxy region. These lines were applied to SDSS galaxies, and those situated between these two lines were categorized as composites. The BPT diagram, which uses log [OIII]/H_β vs. log [NII]/H_α for classification, is not the only method to classify NonSF and SF galaxies. For example, the WHAN method (Cid-Fernandes et al., 2010, 2011) uses log EW_Hα vs. log [NII]/H_α and classifies galaxies into four different classes: `strong AGNs,' `weak AGNs,' SF, and passive galaxies. The effectiveness of equivalent width emission lines in classifying SF and NonSF galaxies is also demonstrated by <cit.>, who used the SDSS (DR7) dataset and supervised Deep Neural Networks (DNN). <cit.> employ a machine learning approach, specifically a Gaussian mixture model (an unsupervised method), along with SDSS (DR7) and SEAGal/STARLIGHT, to analyze the BPT and WHAN diagrams. Their best-fit model identifies four Gaussian components that can account for up to 97% of the data variance. However, their analysis does not provide statistical evidence supporting the presence of a Seyfert/LINER dichotomy within their sample. While SDSS, an example of a single-fibre spectroscopic survey, has significantly contributed to our understanding of galaxy formation and evolution, it is limited by averaging the complex internal structures of target galaxies. For a more detailed study, <cit.> use data from the S7 survey <cit.> on NGC 5728 and NGC 7679. 
Particularly, in the paper presented by <cit.>, optical integral field data is used to separate emissions related to star formation, shock excitation, and AGN activity in the central region of the galaxy NGC 613. The paper identifies three 'basis spectra' representing pure star formation, AGN activity, and shock excitation using BPT diagrams that can be used to distinguish the contributions in a spectrum. The study illustrates that in more than 85 percent of cases along each AGN fraction sequence, the emission line luminosities are effectively modelled using linear superposition of the luminosities from one spectrum dominated by AGN activity and another dominated by star formation processes. As another example, <cit.> examines a broad-line `AGN' that was mistakenly classified as an H II galaxy in the BPT diagram. This misclassification is thought to be caused by contamination from SF activities. The issue is addressed by subtracting an average star formation activity from the spectrum. Their study suggests that at least 20% of the star-forming contributions should be considered in the spectrum to rectify the misclassification, which may be a consequence of the limitations inherent in single-fibre spectroscopy. IFU surveys have emerged as a key tool to address the limitations of single-fibre spectroscopic surveys like SDSS in capturing the complex internal structure of galaxies. IFU surveys capture data across smaller two-dimensional fields, providing a more nuanced view of the internal structures of galaxies. This approach complements the broader insights offered by surveys like SDSS. A widely utilized IFU dataset is the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey <cit.>, which provides us with less contaminated SF and NonSF regions. This makes it an exceptional training set for a range of analytical projects. Moreover, the MaNGA survey has significantly contributed to research into the spatially resolved properties of galaxies. This includes the evaluation of the NonSF fraction and its distribution across various galactic regions, a topic comprehensively explored in the study by <cit.>. In this current work, we present a decomposition method that utilizes the BPT diagram along with deep-supervised models to separate galaxy spectra into their constituent NonSF and SF components. This method is refined by training data derived from MaNGA spectra. In this paper, we describe the datasets used in this work in section <ref>. Section <ref> describes the method, and the results will be described in section <ref>. An overview and the conclusion will be presented in section <ref>. § DATA §.§ MaNGA preprocessing Training our models for spectra decomposition involves generating synthetic spectra with known contributions from near-homogenous sources. This section outlines how we processed MaNGA spectra to act as these representative sources. The preprocessing pipeline builds on <cit.>, and we would direct the reader there for a more thorough explanation. Briefly, the MaNGA spectra are spatial pixels (spaxels) from Data Release 15 (DR15, <cit.>). Spaxels with continuum signal-to-noise ratio (SN) < 2 were masked, along with foreground stars. Spaxels were spatially binned following <cit.> and <cit.> into “baxels” to increase SN, accounting for correlated noise in adjacent spaxels following <cit.>. Binning targeted a SN of 20 and this was not always achieved, especially for baxels towards the edge of the galaxies. The preprocessing left 4682 galaxies and 1,170,539 baxels. 
For this work, baxels were resampled to 2Å wavelength bins between 3700Å and 8114Å (therefore, each baxel is defined by 2208 elements). Due to deredshifting, some wavelength bins did not have a signal in this range, and they were removed, resulting in 1.08 million spectra. As this paper intends to decompose emission lines into either SF or NonSF sources, we further filter baxels by the signal-to-noise of their emission lines; all the major emission lines (Hα, Hβ, [O III], [N II], and [S II]) need to have SN values greater than 8. There are 1,082,389 samples before filtering and 406,891 samples after filtering. We use K01 (<cit.>) and K03 (<cit.>) as the lines to separate the AGN and SF samples, as shown in Figure <ref>. There are 41,974 NonSF samples, 173,235 SF samples, and 191,682 composite samples. For data normalization, we divide each spectrum by its median value. §.§ SDSS preprocessing We generate a test set from integrated galaxy spectra taken from the 7th data release of SDSS (DR7). Spectra included must correspond to SDSS objects with spectroscopic classes of 2, 3, or 4 (galaxies, quasars, or high-redshift quasars, respectively) and spectroscopic redshifts below 0.3. In order to select optical/narrow-line region AGN hosts and galaxies without a strong optical AGN contribution with high confidence, we use the MPA-JHU catalogue of spectroscopic data products for DR7 galaxies. Following our method for the MaNGA training set, we require a minimum signal-to-noise ratio of 8 for all four narrow emission lines used for BPT classification. We select samples of star-forming, composite, and NonSF host galaxies, again using the K01 and K03 diagnostics on the BPT diagram to denote the three main regions. The spectra for the objects in the validation set are pre-processed in the same way as the MaNGA training spectra, with additional features added to account for specific systematic issues with SDSS DR7 spectra. Inspection of the data reveals that there is a common artifact in the SDSS spectra between 5570 and 5590Å produced by the over-subtraction of a telluric line. We flattened each spectrum by the local flux of the continuum in this 20Å window to prevent it from affecting the dynamic range of the data later. As with the MaNGA data, we shift the wavelengths of each spectrum measurement to the rest frame based on the SDSS spectroscopic redshift. We used the pysynphot <cit.> package to re-sample each spectrum on an equally spaced linear grid of wavelengths between 3700 and 8114Å. pysynphot conserves total flux in the spectrum, such that emission line fluxes are retained even when specific spectral features are down-sampled during data preparation. Where data was missing on either the short- or long-wavelength side of the spectrum, a linear regression and extrapolation was performed on the spectral data below 4000Å and above 5000Å. Gaussian noise with σ_noise=1/2×σ_data (standard deviation equal to half the standard deviation of the real, nonzero data in the relevant wavelength domain) was also added to the linear extrapolations in order to match the characteristic variance of the spectrum. Although this choice is motivated by realism, we expect the particular characteristics of the noise will have no bearing on the performance of the model since noise is inherently unlearnable and no new information is being passed. §.§ Continuum estimations We extract the continuum from each spectrum by running pPXF <cit.>, which is able to fit both the continuum and emission lines simultaneously.
We used the E-MILES templates <cit.> in a grid of 16 age bins (0.03, 0.05, 0.07, 0.09, 0.2, 0.4, 0.7, 1, 1.5, 2, 3, 5, 7, 9, 11 and 13.5 Gyr) and 10 bins of metallicity ([Z/H] = -1.79, -1.49, -1.26, -0.96, -0.66, -0.35, -0.25, 0.06, 0.15 and 0.26). We used the option to regularize the best-fit template weights with a regularization parameter of 100 (i.e., a regularization error of 0.01). <cit.> found that regularization greatly improved stellar population parameter estimation for the continuum. However, regularization is not expected to change the χ^2 of the fit significantly. § METHOD In this section, we describe the methods used to estimate the parameters needed in this paper, as well as our approach to decomposing galaxy spectra. §.§ Flux ratio estimations Accurate and reliable measurement of spectral flux in astronomy must account for various factors. This may involve correcting spectra for dust and other intrinsic and environmental influences <cit.>. Additionally, when measuring physical parameters, the impact of utilizing different methods and templates <cit.>, along with the role of spectroscopic techniques and data processing algorithms <cit.>, is significant. These factors can lead to variations between different studies or surveys. Our study utilizes spectra from the SDSS and MaNGA surveys, which already have corresponding flux measurements. In a machine learning context, these data sets can potentially be used to train deep neural networks to obtain fluxes. In the next phase of our work, however, we will need to handle spectra generated synthetically by a neural network in our pipeline; for such spectra the flux values are no longer known, and new flux measurements are required. This presents a situation where we need to measure fluxes from different sources. So, we need a method for obtaining the measurable parameters required to construct BPT diagrams from different sources in a consistent and uniform way. One potential machine learning solution for measuring flux could involve training a deep neural network (DNN) with a specific dataset, such as SDSS. For example, the trained model can be used to estimate MaNGA spectra fluxes. Initially, after training a model, this method produces good results when validated against the SDSS validation set. However, when we examine the results, the estimates exhibit a bias when validated with MaNGA spectra. A bias in SDSS flux estimations is also observed when MaNGA spectra are used as the training set. Even when a combined training set of SDSS and MaNGA is employed, individual validation of each survey shows biases in the validation process, with one showing a bias to the left and the other to the right. This might indicate the different nature of the datasets or data processing procedures, or the effect of environmental factors such as dust. In the next step, we utilize the ratio of flux measurements as the target instead of directly measuring the flux. This eliminates systematic issues and prevents biases from being present. In practice, since we need the ratio of the fluxes, we train and use a DNN to measure flux ratios directly (hereafter, ratio-DNN). The comparison between the observed and predicted ratios shows very little scatter, and no bias is seen in the different validation processes; therefore, the classification of spectra by BPT diagrams (SF, NonSF, and composites) achieves very high accuracy. Figure <ref> depicts a validation set showing the difference between the predicted log(OIII/Hβ) and the observed value.
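As a minimal illustration of how predicted line ratios translate into the three BPT classes, the standard K01 and K03 demarcation curves can be applied as follows; this is a sketch, with function and variable names of our own choosing rather than code from the released pipeline.

```python
import numpy as np

def k01(log_nii_ha):
    # Kewley et al. (2001) theoretical "maximum starburst" curve
    return 0.61 / (log_nii_ha - 0.47) + 1.19

def k03(log_nii_ha):
    # Kauffmann et al. (2003) empirical pure-star-formation curve
    return 0.61 / (log_nii_ha - 0.05) + 1.30

def bpt_class(log_nii_ha, log_oiii_hb):
    """Label points in the log([NII]/Halpha) vs log([OIII]/Hbeta) plane."""
    x = np.asarray(log_nii_ha, dtype=float)
    y = np.asarray(log_oiii_hb, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        below_k03 = (x < 0.05) & (y < k03(x))   # below the K03 curve: star-forming
        below_k01 = (x < 0.47) & (y < k01(x))   # below the K01 curve
    labels = np.full(x.shape, "NonSF", dtype=object)
    labels[below_k01 & ~below_k03] = "composite"  # between K03 and K01
    labels[below_k03] = "SF"
    return labels

# e.g. bpt_class(np.array([-0.5, 0.0]), np.array([-0.3, 1.0])) -> ["SF", "NonSF"]
```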
For consistency and uniformity, we utilize this ratio-DNN to make our BPT diagrams. The ratio-DNN model is a regressor, which means the output values are real numbers instead of the labels used for classification tasks. This model takes 2208 data points from a spectrum as input. It includes eight one-dimensional Convolutional Neural Network <cit.> layers. These layers can extract useful features and information from the input spectra. The last layer of the CNN part is connected to a fully connected regressor <cit.>, ending with nodes corresponding to the flux ratios needed to construct BPT diagrams. To avoid overfitting, we employ various regularization methods such as Dropout <cit.> and an Early Stopping method that halts training if the validation set performance drops below that of the training set. §.§ Spectra Continuum Estimations In this paper, we investigate two distinct scenarios. Our approach to analyzing spectra yields results with both the original spectra and the same spectra from which the continuum has been subtracted. One can train various DNNs (as a pipeline) using the original spectra and adopt a more data-driven approach that minimizes dependence on theoretical assumptions and models. However, since NonSF emission lines typically lie in regions with very different stellar populations than SF emission lines, one potential concern is that the DNN will learn from the continuum rather than the emission lines. Therefore, we also repeat the analysis by training another set of DNNs with continuum-subtracted spectra. Our research indicates that using either the original or the continuum-subtracted spectra produces similar results. Similar to the previous section, our method requires us to work with spectra generated by DNNs and estimate the continuum when necessary. To achieve this, we use a machine learning model, a DNN (hereafter, cont-DNN), where the input is a spectrum and the predicted output is the best-fitting continuum. It should be noted that the target for the deep model is the continuum estimated by pPXF; thus, the ML model mimics pPXF in estimating the continuum. Using the data and continuum described in Sec. <ref>, we train our cont-DNN to obtain a continuum for each input spectrum. The results are unsurprisingly excellent, largely because training a deep model on noiseless theoretical models tends to converge quickly and produces outputs with high accuracy, as demonstrated here. The top plot of Figure <ref> shows a spectrum alongside the continuum by pPXF and the continuum generated by cont-DNN. The middle plot illustrates the relative average difference between the pPXF and ML estimates for MaNGA spectra from 50,000 samples. The bottom plot illustrates the same comparison for SDSS spectra. The results are nearly perfect for MaNGA spectra, with slightly more fluctuation for SDSS spectra. Note that although the MaNGA and SDSS datasets were constrained to have the same limiting S/N for the emission lines, they were not constrained to have the same S/N for the continuum. MaNGA baxels were binned to achieve a continuum S/N of 20, while there was no such binning or constraint on the continuum S/N for the SDSS. §.§ Artificial spectra Our method for decomposing an unknown spectrum into SF and NonSF components involves synthesizing a set of known artificial samples. A DNN is then trained to reverse this process. MaNGA spectra have been classified into three regions based on their locations on a BPT diagram (see Figure <ref>).
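The synthesis step, detailed in the next paragraph, amounts to drawing a random weight, forming a convex combination of one 'pure' SF spectrum and one 'pure' NonSF spectrum, and recording the sources and the weight as training targets. A minimal sketch is given below; the array names, shapes, and use of NumPy are assumptions for illustration, not the actual pipeline code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(sf_bank, nonsf_bank, n_samples):
    """Build 'combined' spectra as random convex combinations of pure SF/NonSF spectra.

    sf_bank, nonsf_bank : arrays of shape (n_spectra, 2208) holding the 'pure' samples.
    Returns the combined spectra plus everything recorded as training targets.
    """
    n_bins = sf_bank.shape[1]
    combined = np.empty((n_samples, n_bins))
    sf_src = np.empty((n_samples, n_bins))
    nonsf_src = np.empty((n_samples, n_bins))
    sf_frac = rng.uniform(0.0, 1.0, size=n_samples)        # SF contribution weight

    for i, w in enumerate(sf_frac):
        sf = sf_bank[rng.integers(len(sf_bank))]            # random pure SF spectrum
        nonsf = nonsf_bank[rng.integers(len(nonsf_bank))]   # random pure NonSF spectrum
        combined[i] = w * sf + (1.0 - w) * nonsf            # weighted sum of the sources
        sf_src[i], nonsf_src[i] = sf, nonsf                 # record sources as targets

    return combined, sf_src, nonsf_src, sf_frac

# e.g. one million training spectra: make_synthetic(sf_bank, nonsf_bank, 1_000_000)
```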
A uniformly random value between 0 and 1 is selected as the weight for a randomly chosen SF spectrum, with the remaining fraction applied to a randomly selected NonSF spectrum. These two 'source' spectra are weighted and summed to create a synthetic spectrum. We record both source spectra and their weights as the training dataset. To create the training set, we only utilize 'pure' SF and 'pure' NonSF spectra. Below, we provide two examples of the procedure. The left panel of Figure <ref> provides a visual demonstration, with an SF spectrum (depicted in blue) assigned a weight of 0.6559 and a NonSF spectrum (in red) assigned a weight of 0.3441. The resultant combined spectrum is calculated as the weighted average of these individual NonSF and SF spectra, as shown at the bottom left of the figure. The right panel displays the positions of the two source spectra on the BPT diagram and the placement of the resultant synthetic spectrum. As can be seen, this synthesized spectrum is situated between the K01 and K03 demarcations, a positioning influenced by the randomly selected SF and NonSF sources and their respective weights. Different weights or initial SF/NonSF spectra will result in synthetic spectra that can occupy various regions on the BPT diagram. In Figure, <ref>, we present a scenario where the synthetic spectrum is positioned not between the division lines K01 and K03 but within the NonSF region. This placement demonstrates that the location of the combined spectrum—whether between K01 and K03 or outside this range—depends on the specific weightings and source spectra used. Our training set includes one million synthetic spectra, supplemented by an independent validation set of half a million spectra. A subset of 50,000 synthetic spectra is displayed in Figure <ref> to illustrate their distribution. Similar to Figure <ref>, these spectra can be classified into the three regions of the BPT diagram and coloured accordingly. We record the weights and initial spectra used to create each synthetic spectrum to train our decomposition model. §.§ The Deep Decomposition Model The main purpose of generating synthetic spectra is to use them as known targets in training a decomposition model. Essentially, the process executed by our Deep Decomposition Model (hereafter, DDM) reverses the method used to create the synthetic spectra. Our model comprises a neural network with five distinct outputs, each governed by its own loss function. Detailed information on the structure of all the models and the pipeline can be found https://github.com/shishehchi/SpectraDecompositionhere. From a technical perspective, the model comprises eight consecutive one-dimensional CNN layers to extract valuable features and information from the input spectra. This is followed by five distinct individual regressors corresponding to five outputs. The overall trainable parameters are about 39 million, and the model is evaluated with half a million independent synthetic spectra as validation sets and employs various regularization methods such as Dropouts and the Early Stopping method. In summary, the DDM processes a combined spectrum featuring 2,208 wavelength bins and generates five distinct outputs. Among these, two are spectra: one representing the SF component and the other the NonSF component, each consisting of 2,208 data points. In addition to these spectra, the DDM produces three additional outputs. 
One of the outputs predicts the SF contribution as a fraction, indicating the proportion of the SF component within the combined spectrum (or 1-SF for NonSF). The other two outputs are the flux ratios, expressed in a logarithmic scale, that are needed to create the BPT diagram. These outputs (the ratios) help the model learn and focus on predicting the proper positions needed to construct the BPT diagram. To maintain consistency in terminology, we use the term 'composite' spectra to refer to those that fall between the K01 and K03 lines on a BPT diagram, while 'combined' spectra refer to the input to our model. A combined spectrum may also be a composite. An example model output is displayed in Figure <ref>. The left panel features the input combined spectrum (shown in green) being processed by the model. The predicted SF (in blue) and NonSF (in red) spectra, along with their corresponding contributions, are also illustrated. The right panel displays the placement of the two predicted decomposed source spectra on a BPT diagram alongside the original combined spectrum. For comparison, this example serves as the inverse of Figure <ref>. As can be seen, the SF contribution to the predictions is in good agreement with expectations. Here, we see that the SF component spectrum exhibits higher Hα intensity in comparison to the combined spectrum. This might raise concerns about flux conservation and, consequently, the conservation of the star-formation rate. However, when we consider the contribution percentage of each spectrum, the total flux remains conserved. The total scatter between the SF contribution in making the synthetic spectra and the predicted ones is σ∼ 0.043. A sample of 50,000 predictions is compared with the true values in Figure <ref>. § RESULTS §.§ Flux Conservation Flux conservation is an important factor in assessing the quality of our model. With robust flux conservation, the decomposed SF spectrum can be used to extract emission line fluxes and calculate important physical parameters like SFR. In Figure <ref>, we present a single example of flux conservation where an input spectrum (from the synthetic test set) shown in the top panel is decomposed into two spectra displayed in the middle panels. These panels also indicate the fraction contributed by each spectrum. By taking the weighted average of the two predicted sources, we can reconstruct the input. The relative residuals between the reconstruction and the original input are shown in the bottom panel, where near-zero variance along the wavelength bins indicates that flux conservation is only minimally perturbed at emission lines. While this is just one example, we will explore creative average flux conservation in the following. Figure <ref> displays the relative average residual for 50,000 decomposed spectra, separated by wavelength bin. The reconstructed spectrum is the sum of the predicted SF/NonSF spectra, weighted by their predicted contributing fractions. The upper panel presents the average residuals for MaNGA and SDSS spectra for the original spectra (i.e., not continuum-subtracted). It shows a fairly flat residual with negligible fluctuations at the emission line positions. MaNGA spectra show smaller residuals due to MaNGA baxels being binned to achieve S/N for the continuum of at least 20, while SDSS spectra were not binned. However, both cases exhibit a bit more fluctuation at wavelengths less than 4000 Å. 
On average, this panel indicates good flux conservation, especially at wavelengths greater than 4000 Å, which is crucial for constructing BPT diagrams. The lower panel pertains to the continuum-subtracted spectra, which are used to train the associated DDM. These show less residual at wavelengths below 4000 Å, likely due to the flat spectra (continuum-subtracted) used in training the associated DDM. However, they exhibit higher fluctuations at the positions of emission lines, which is more noticeable for the SDSS dataset. The lower average continuum S/N in the SDSS dataset likely resulted in noisier continuum subtraction. It should also be noted that the SDSS spectra result from the combinations of SF and non-SF emission on top of a generally bulge-dominated central spectrum, whereas the machine learning models have been trained using MaNGA spectra, which are mainly from HII regions in young disk areas and from non-SF regions in old central stellar regions. The difference could cause additional fluctuations or inaccuracies in the SDSS spectra. §.§ Decomposing SDSS_J1042_0018 SDSS J1042-0018 is a typical broad-line AGN; however, the flux ratios of its narrow emission lines categorize it as an H II galaxy in the BPT diagram, prompting further investigation into its unique properties as discussed in various studies <cit.>. In the analysis presented by <cit.>, the emission lines of SDSS J1042-0018 are examined using two distinct models: one employs broad Gaussian functions, and the other utilizes broad Lorentz functions for the broad Balmer lines. These differing approaches lead to variant flux ratios of the narrow emission lines, resulting in the classification of SDSS J1042-0018 as an H II galaxy in the BPT diagram, even though it is actually a broad-line AGN. To explain this misclassification, one plausible theory proposes the presence of significant star-forming activity, suggesting that at least a 20% contribution from star formation is necessary to justify the misclassification. The spectrum of SDSS J1042-0018 has unique features, and visually, it is distinct from our training set and does not resemble average SDSS spectra. As a case study, we fed the spectrum into two DDM models, one trained with and the other without continuum subtraction. In the top panel of Figure <ref>, the spectrum is identified as an 'outlier' SF galaxy. The decomposed SF part contributes approximately 0.471. The associated relative residual is fairly flat; however, it exhibits high fluctuations, particularly around 4800 and 6600Å. The third panel from the top shows results for the continuum-subtracted scenario. Here, the associated model classifies the input spectrum as a composite spectrum with only a 3.5% SF contribution. The bottom panel displays the relative residual, which shows more fluctuation. In particular, the original spectra scenario (the two top panels) aligns more closely with expectations, as it shows more than a 20% contribution from the SF part <cit.> and also presents a better residual profile. As described in <cit.>, parameter estimations are challenging in this case and often yield results that are not definitive. Although our results are not significantly aberrant, conclusive outcomes may not be expected due to the unique nature of this spectrum. Nevertheless, our method can still be effectively utilized for data mining and deriving insights based on a specific training set. 
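Before turning to the statistical results, note that the flux-conservation test used in the figures above reduces to reconstructing each input as the contribution-weighted sum of the two predicted components and inspecting the relative residual in each wavelength bin. A minimal sketch, in which the variable names and the epsilon guard are illustrative assumptions:

```python
import numpy as np

def flux_conservation_residual(combined, sf_pred, nonsf_pred, sf_frac):
    """Relative residual between input spectra and their weighted reconstructions.

    combined, sf_pred, nonsf_pred : arrays of shape (n_spectra, 2208)
    sf_frac : predicted SF contribution per spectrum, shape (n_spectra,)
    """
    w = sf_frac[:, None]
    reconstruction = w * sf_pred + (1.0 - w) * nonsf_pred
    # relative residual per wavelength bin; a small epsilon guards against zeros
    return (combined - reconstruction) / (np.abs(combined) + 1e-8)

# Averaging over a large sample (e.g. 50,000 spectra) gives the residual curves:
# mean_residual = flux_conservation_residual(x, sf, nonsf, f).mean(axis=0)
```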
§.§ Statistical Decomposition Results In this section, we analyze the decomposition of two datasets (MaNGA and SDSS) using large samples from different classes (SF, Composite, and NonSF). It's important to note that each spectrum, represented as a point on the BPT diagram, is split into two distinct points on a (new) BPT diagram. So, for example, if we have a sample of 50,000 SF spectra from the MaNGA dataset (shown in the top left plot of Figure <ref>), after feeding them to the model, we will have 50,000 points of NonSF and 50,000 SF on a new BPT diagram, each with different contribution values. This is depicted in the top middle plot, where the contribution values are shown on a logarithmic scale. It's evident that the Non-SF contributions are very low and mainly in the LINER sub-region. This low-value NonSF contribution is expected from IFU MaNGA spectra, which are more uniform. In other words, although we initially assumed a 'pure' SF, the results show the presence of low NonSF contributions. The top right plot displays the same result as a density plot, which shows the NonSF components concentrated in the LINER sub-region. This 'purification' process can also be extended to other classes on the BPT diagram. In the middle panel of Figure <ref>, we feed the MaNGA `composite' spectra to the model (the left plot). These spectra are already known as a more heterogeneous set in which a more scattered distribution of SF or NonSF is expected to be seen in the decomposition procedure. As shown in the middle plot of this panel, SF and NonSF spectra show more extended populations and higher contributions (compared to the SF sample). In the bottom panel of the figure, the 'purification' is presented for a sample of NonSF spectra. Here, we feed a set of 'pure' NonSF spectra to the model (i.e., the bottom left plot). As expected and shown on the bottom middle plot, there is a low probability of data points appearing in the SF region. On the bottom right plots, we show the associated density plot. This plot demonstrates that with the purification process, two more distinguishable clusters of spectra in the regions of Seyfert and LINERs are now noticeable. In this case, the density is concentrated in the LINER region potentially true Seyfert AGN emission is quite rare in nearby galaxies and may not be present in the MaNGA sample. As we will observe, the clustering pattern becomes quite clear when we test the models using the SDSS dataset, with a dense population in the Seyfert region. Figure <ref> is similar to Figure <ref> (regarding the MaNGA data set), but for the model that utilizes a continuum-subtracted training set. The two figures exhibit very comparable patterns, validating that the network is indeed learning from the emission lines. A data-driven approach typically aims to minimize assumptions and utilize less theoretical models whenever possible. As demonstrated, the model can accurately predict the contributions to the emission lines regardless of the continuum. One advantage of using a data-driven approach (i.e., to use original spectra directly), for example, is that our DDM can detect continuum and bypass the additional step of continuum estimation. In Figure <ref>, we present the same plot as in Figure <ref>, but it pertains to the SDSS dataset. Generally, we expect more 'contamination' and a different pattern here since we have a single-fibre dataset. In the top left plot of this figure, SF spectra from the SDSS dataset are fed into the model. 
The middle and right plots on the top display a more extended pattern in the NonSF region with a higher level of NonSF contribution (contamination) compared to the MaNGA SF spectra. This is not surprising because single-fibre spectroscopy typically results in more mixed classes. The `contaminated' part is mainly located in the LINER region of the NonSF. In the middle panel, the SDSS composite spectra are decomposed. Here, more extended populations with higher contributions (compared to SF) appear in both the SF and NonSF regions. The bottom panel, which displays the NonSF decomposition, shows that after 'purification,' the 'pure' NonSF spectra shift away from K01 (the red line) and clearly delineate two prominent clusters in the Seyfert and LINER regions. In other words, removing SF 'contamination' from SDSS NonSF spectra makes these two classes more distinct, although the contribution from the contamination part is not very high. Figure <ref> resembles Figure <ref>, but it pertains to the model using a continuum-subtracted training set. Both figures present similar patterns, confirming the consistency of the models. § SUMMARY AND DISCUSSION We have devised a technique for decomposing a single spectrum by utilizing a training set and training various deep neural networks. This method employs the BPT diagram to classify spectra as either 'pure' SF (Star Forming) or NonSF (Non-Star Forming). To obtain 'pure' spectra for the training set, we utilized MaNGA spectra and trained three different deep neural networks: ratio-DNN, cont-DNN, and the main network, the Deep Decomposition Model (DDM). ratio-DNN, one of the networks, was trained to predict flux ratios for constructing BPT diagrams. We found that direct ratio estimations displayed high accuracy and did not exhibit systematic errors. When decomposing a spectrum consisting of 2208 data points into two spectra, SF and NonSF, each also with 2208 data points, we considered two different scenarios. In one scenario, we input the spectrum in its original format, while in the other, we feed the model 2208 data points of the spectrum after subtracting the continuum. To subtract the continuum, we use our trained cont-DNN model. When we input an original spectrum into the model, the output is the best-fitted continuum to the spectra. Once we have both the continuum and the original spectra, we can train two different DDMs to check whether the model is learning from the different stellar populations underlying the emission lines or from the emission lines themselves. One DDM is for the original spectra, and another is for continuum-subtracted inputs. The decomposition results from both DDMs show very similar patterns on the BPT diagrams, indicating the model's capability to extract relevant information from the emission lines in the original spectra, not just the continuum. This demonstrates that with a large and informative training set, the model can account for the continuum in discriminating between SF and NonSF sources. This capability can reduce the number of preprocessing steps required to estimate the continuum in training sets. To train a DDM, we need to synthesize 'combined' spectra from known 'pure' SF and NonSF sources, weighted by random numbers between 0 and 1, representing the SF contribution in the spectra. In addition to an independent validation set of synthesized spectra, we have also applied our method and networks to the MaNGA and SDSS datasets, as well as to an unusual SDSS spectrum, SDSS J1042-0018. 
For the MaNGA dataset, although we started with the assumption of 'pure' MaNGA SF and NonSF spectra, the results indicate that some purification is necessary. This is not surprising considering that, although the survey covers smaller regions than single-fibre spectroscopy, it can still encompass a blend of heterogeneous sources. However, this heterogeneous effect has a low impact, i.e., a low contribution of contamination for each spectrum in the training set. The impact is more noticeable when we apply the DDMs to the SDSS spectra. As expected, there is more NonSF 'contamination' in SDSS SF galaxies due to the nature of the single-fibre survey, which is more likely to include mixed regions. An interesting case occurs when SDSS NonSF spectra are fed into the model. Here, the results reveal that once the SF 'contamination' is removed from the input spectra, a vertical distribution pattern emerges on the BPT diagram. This pattern contains two completely separated clusters in the NonSF region. The new distribution moves away from the line that separates NonSF spectra from the rest. The two clusters of the LINER and Seyfert regions are well separated, and this separation is in good agreement with the two LINER and Seyfert regions introduced by <cit.>. The spectrum of SDSS J1042-0018 displays distinct features that differ from those in our training set and do not resemble average SDSS spectra either. Estimating the fluxes, including the flux ratios, is challenging in this case and often leads to uncertain results. However, our models can provide quick results that are reasonably close to expectations and consistent with other studies. Our method can be effectively used for data mining and deriving insights from such a case based on a specific training set. In this paper, we aim to demonstrate the method and the results based on a specific training set, along with the potential applications of our approach. One such application could be to estimate a more reliable SFR from sources potentially contaminated by other emissions. Here, we have presented our method as a two-class decomposition problem using the MaNGA survey. This means that any new input is projected onto the space created by the DDM using the IFU survey. An alternative training approach could expand this to a multi-component scenario. For instance, where suitable data are available, using a large and proper training set of 'pure' SF, LINER, and Seyfert spectra, the decomposition could be expanded to distinguish between the three classes and their respective contributions. A multi-component scenario would be expected to show much better reconstructions. Another potential approach is to use theoretical training sets and compare the results with those from a data-driven approach. In this paper, we have focused on the method and presented various results obtained from it. We have not yet followed up on extracting the physical parameters from the decomposed spectra, with the exception of a few flux ratio estimations. This process requires fitting procedures using relevant software packages like pPXF, which can help find more connections between the combined spectra and their components. These connections might provide better insight into and interpretation of the decomposition results. Finally, we utilized MaNGA data release 15, which was sufficient for our approach, although a newer release with an increased number of spectra is available. § ACKNOWLEDGEMENT HT acknowledges support from an NSERC Discovery Grant.
This research used the facilities of the Canadian Astronomy Data Centre operated by the National Research Council of Canada with the support of the Canadian Space Agency. aasjournal
http://arxiv.org/abs/2407.12538v1
20240717132131
High Frequency Matters: Uncertainty Guided Image Compression with Wavelet Diffusion
[ "Juan Song", "Jiaxiang He", "Mingtao Feng", "Keyan Wang", "Yunsong Li", "Ajmal Mian" ]
eess.IV
[ "eess.IV", "cs.CV" ]
High Frequency Matters: Uncertainty Guided Image Compression with Wavelet Diffusion Juan Song, Jiaxiang He, Mingtao Feng, Keyan Wang, Yunsong Li and Ajmal Mian Juan Song, Jiaxiang He, Mingtao Feng, Keyan Wang and Yunsong Li are with Xidian University, Xi’an 710071, China (email: songjuan@mail.xidian.edu.cn; hjx1255216006@163.com; mintfeng@hnu.edu.cn; kywang@mail.xidian.edu.cn; ysli@mail.xidian.edu.cn). Ajmal Mian is with the Department of Computer Science and Software Engineering, The University of Western Australia, Perth, Crawley, WA 6009, Australia (e-mail: ajmal.mian@uwa.edu.au). Mingtao Feng is the corresponding author. Received February 2024; Accepted July 2024. § ABSTRACT Diffusion probabilistic models have recently achieved remarkable success in generating high-quality images. However, balancing high perceptual quality and low distortion remains challenging in image compression applications. To address this, we propose an efficient Uncertainty-Guided image compression approach with wavelet Diffusion (UGDiff). Our approach focuses on high frequency compression via the wavelet transform, since high frequency components are crucial for reconstructing image details. We introduce a wavelet conditional diffusion model for high frequency prediction, followed by a residual codec that compresses and transmits prediction residuals to the decoder. This diffusion prediction-then-residual compression paradigm effectively addresses the low fidelity issue common in direct reconstructions by existing diffusion models. Considering the uncertainty from the random sampling of the diffusion model, we further design an uncertainty-weighted rate-distortion (R-D) loss tailored for residual compression, providing a more rational trade-off between rate and distortion. Comprehensive experiments on two benchmark datasets validate the effectiveness of UGDiff, surpassing state-of-the-art image compression methods in R-D performance, perceptual quality, subjective quality, and inference time. Our code is available at: https://github.com/hejiaxiang1/Wavelet-Diffusion/tree/main. learned image compression, wavelet transform, diffusion model, uncertainty weighted rate-distortion loss. § INTRODUCTION Given the exponential growth of media data, lossy image compression plays a crucial role in efficient storage and transmission. Established lossy image compression standards, such as JPEG <cit.>, JPEG2000 <cit.>, BPG <cit.>, and VVC <cit.>, follow a sequential paradigm of transformation, quantization, and entropy coding. Each stage is separately optimized with hand-crafted rules, making it challenging to adapt to diverse image content.
Recently, learned image compression methods <cit.> based on the variational auto-encoder (VAE) <cit.> have demonstrated superior rate-distortion performance compared to traditional techniques. Despite these advancements, these models often directly optimize for low distortion in terms of mean squared error (MSE), leading to a degradation in perceptual quality. This degradation typically manifests as over-smoothing or blurring, which has minimal impact on distortion metrics but adversely affects visual perception. As illustrated in Fig.<ref>, high frequency details like textures and edges (e.g., trees and ripples) contain more significant visual information than smooth areas (e.g., sky). These regions with high frequency details often suffer severe distortion during compression, highlighting the need for improved methods that balance rate-distortion performance with perceptual quality. A recent class of generative models focuses on improving the perceptual quality of reconstructions. Agustsson et al. <cit.> and Mentzer et al. <cit.> employed Generative Adversarial Networks (GANs) <cit.> as image decoders to generate reconstructions with rich details, albeit at the cost of some fidelity to the original image. Given the recent success of diffusion-based models in image restoration tasks such as super-resolution <cit.>, deblurring <cit.> and inpainting <cit.>, Yang et al. <cit.> and Ghouse et al. <cit.> leveraged diffusion models for image compression, achieving impressive results in terms of perceptual quality. However, it has been observed that vanilla diffusion models tend to reconstruct images with richer visual details but less fidelity to the original images. These models are prone to generating images with color distortion or artifacts because the reverse process starts from randomly sampled Gaussian noise. This inherent uncertainty reveals the instability of reconstructed pixels, which is closely related to the fidelity of texture and edge recovery. Therefore, considering the uncertainty from the random sampling of the diffusion model is crucial for enhancing the robustness and reliability of compression models. The central challenge in balancing low distortion and high perceptual quality lies in the reconstruction of high frequency details, such as edges and textures. Enhancing high frequency reconstruction is a nontrivial task because high frequency components typically possess less energy and are therefore more susceptible to distortion than low frequency components. Motivated by these issues, we propose an Uncertainty-Guided image compression approach with wavelet Diffusion (UGDiff) to maintain high perceptual quality as well as low distortion. We apply a discrete wavelet transform (DWT) to the image, compressing the low frequency and high frequency components separately. Our focus is on high frequency compression to recover crucial high frequency details, thereby improving overall reconstruction quality. Specifically, we propose a wavelet diffusion model to predict high frequency, followed by a residual compression module that compresses and transmits the prediction residuals between the original and predicted high frequency. This diffusion prediction-then-residual compression paradigm is able to effectively address the low fidelity issue common in direct reconstructions by existing diffusion models.
To facilitate high frequency prediction via wavelet conditional diffusion, we introduce a condition generation module to derive a strong condition from the reconstructed low frequency by leveraging the inter-band relations between low and high frequency components. The advantages of combination of the diffusion model with discrete wavelet transform (DWT) <cit.> are twofold. On the one hand, DWT provides a sparser representation that is easier for a network to learn compared to the pixel domain <cit.>. On the other hand, DWT reduces the image’s spatial size by a factor of four, in accordance with the Nyquist rule <cit.>, thereby expediting the inference speed of the denoising function. Diffusion models are susceptible to generating unexpected chaotic content due to the randomness of sampling from Gaussian noise. The impact of this uncertainty on compression has not been thoroughly considered, meaning that low distortion cannot always be guaranteed although high perceptual quality may be achieved. Large uncertainty in predicted high frequencies results in large residuals, consuming more bits compared to those with low uncertainty. Motivated by this, we propose a novel uncertainty-guided residual compression module. We initially estimate the aleatoric uncertainty of the predicted high frequency using Monte Carlo <cit.> sampling of the diffusion model. Subsequently, we design an uncertainty-weighted rate-distortion (R-D) loss tailored for residual compression. In addition to the hyper-parameter λ that balances the overall R-D level for the entire dataset, we introduce an uncertainty-related weight to the distortion terms to prioritize residuals with large uncertainty, thereby allocating more bits to them. The main contributions are: * We propose a wavelet diffusion based predictive coding for high frequency. Our diffusion prediction-then-residual compression paradigm effectively addresses the low fidelity issue stemming from the direct reconstruction of existing diffusion models. In addition, the combination of DWT and diffusion models can greatly expedite the inference of diffusion model. * We introduce a novel uncertainty guided residual compression module, in which an uncertainty weighted R-D loss is designed to prioritize residuals with high uncertainty and allocate more bits to them. Our proposed uncertainty weighted R-D loss provides more rational trade-off between rate and distortion. * Extensive experiments conducted on two benchmark datasets demonstrated that our proposed method achieves state-of-the-art performance on both distortion metrics and perceptual quality while offering prominent speed up compared to previous diffusion-based methods. § RELATED WORK Learned Image Compression Learned image compression methods have demonstrated substantial advancements in network structure and entropy modeling, resulting in significant enhancements in compression performance. Ballé et al. <cit.> firstly proposed an end-to-end image compression model. Then they incorporated a hyper-prior to effectively capture spatial dependencies in the latent representation <cit.>. Moreover, Minnen et al. <cit.> designed a channel-wise auto-regressive entropy model to achieve parallel computing. Ali et al. <cit.> proposed a novel non-auto-regressive model substitution approach, which reduced spatial correlations among elements in latent space by introducing a correlation loss, thereby enhancing the balance between image compression performance and complexity. Besides, Liu et al. 
<cit.> constructed a mixed Transformer-CNN image compression model, combining the local modeling capabilities of CNN with the non-local modeling capabilities of Transformers. To achieve better perceptual quality which is closer to human perception, a series of methods leveraging generative models to build decoders are proposed.Agustsson et al  <cit.> and Mentzer et al.  <cit.> propose to employ GANs  <cit.> to achieve high perception quality at the cost of some fidelity to the original image. Diffusion Models for Image Compression Denoising diffusion models <cit.> have progressed rapidly to set state-of-the-art for many image restoration tasks such as super-resolution <cit.>, inpainting <cit.>, and deblurring <cit.>. Some researchers have also explored the application of diffusion models in the image compression community. Yang et al.  <cit.> replaced the decoder network in the image compression process with a conditional diffusion model. Alternative approaches were introduced by  <cit.>. They initially optimized an auto-encoder for a rate-distortion loss and subsequently trained a conditional diffusion model on the output of the auto-encoder to enhance its perception quality. Pan et al. <cit.> introduced text embedding as a conditioning mechanism to guide the stable diffusion model in reconstructing the original image. However, diffusion models are prone to compromise fidelity although high perception quality is achieved when utilized as an image decoder. Uncertainty in Bayesian Neural Networks The uncertainty in Bayesian Neural Networks can be roughly divided into two categories <cit.>. Epistemic uncertainty describes how much the model is uncertain about its predictions. Another type is aleatoric uncertainty which refers to noise inherent in observation data. Notably, the integration of uncertainty modeling into deep learning frameworks has enhanced the performance and resilience of deep networks across various computer vision tasks, including image segmentation <cit.>, image super-resolution <cit.> and etc. Badrinarayanan et al. <cit.> incorporated uncertainty estimation into image segmentation tasks, leading to more reliable segmentation results, especially in ambiguous regions. Ning et al. <cit.> extended uncertainty modeling to image super-resolution, leveraging Bayesian estimation frameworks to model the variance of super-resolution results and achieve more accurate and consistent image enhancement. Zhang et al. <cit.> proposed a scalable image compression framework that integrates uncertainty modeling into the scalable reconstruction process. Chan et al. proposed Hyper-diffusion to accurately estimate epistemic and aleatoric uncertainty of the diffusion model with a single model  <cit.>. Nevertheless, there are few works that have investigated the uncertainty of diffusion models in image compression to the best of our knowledge. In this paper, we explore the influence of uncertainty in R-D loss and design an uncertainty weighted R-D loss to guide the residual compression optimization. § PROPOSED METHOD As illustrated in Fig.<ref>, our proposed UGDiff follows a wavelet predictive coding paradigm, where low and high frequency components are compressed separately after DWT. The low frequency image x_l is compressed by a pre-trained VAE based codec whose network structure is illustrated in the Appendix. Our work focuses on the high frequency compression to maintain high perception quality as well as low distortion. 
The condition generation module firstly generates refined high frequency x̅_h from the reconstructed low frequency as the condition, which guides the wavelet diffusion to predict high frequency x̃_h. The uncertainty map is simultaneously estimated upon the Monte Carlo Sampling during the reverse diffusion process. Then the residuals x_r between original and predicted high frequency are compressed using another VAE based codec optimized by our proposed uncertainty weighted R-D loss in which the estimated uncertainty related weight is introduced to the distortion term to prioritize residuals with large uncertainty and allocate more bits to them. Finally, the reconstructed low and high frequency components are inversely transformed by 2D-IDWT (inverse DWT) to obtain reconstructed images. §.§ Wavelet Conditional Diffusion Model With respect to image compression tasks, there are two challenges associated with the conditional diffusion model: 1) Small noise variance permits the assumption that the inverse denoising process approximates a Gaussian distribution. That results in large sampling steps (typically set to 500) and significant inference time <cit.>. 2) The diversity in the sampling process may introduce content inconsistency in the generation results, which makes application of diffusion models in image compression challenging. To address these challenges, we propose a wavelet diffusion based predictive coding for high frequency. Our approach leverages the strength of wavelet transform to speed up the inference process of the diffusion model by quartering the spatial dimensions without sacrificing information. Different from the existing diffusion models to reconstruct the image directly, our approach utilize the diffusion model for high frequency prediction and follow a prediction residual codec to address the low fidelity issues. Discrete Wavelet Transform 2D-DWT employs a convolutional and sub-sampling operator, denoted as W, to transform images from spatial domain to frequency domain, thus enabling the diffusion process solely on high frequency components. Let (x, x̂) ∈𝒟 denote an original-reconstruction image pair. Before applying the diffusion process, the specific wavelet operator W, such as haar wavelet <cit.>, decomposes x into its low frequency component x_l and an array of high frequency components x_h. Mathematically, this can be represented as: ( x_ l , x_h ) = Wx 2D-DWT decomposes the image into four sub-bands, namely, Low Low (LL), Low High (LH), High Low (HL) and High High (HH). The subband LL represents the low frequency component x_l that contains global information akin to a down-sampled version of the image, while the remaining sub-bands, LH, HL and HH, represent the high frequency components x_h that comprise sparse local details, as illustrated in Fig. <ref>. By operating in the wavelet domain, we open the possibility for diffusion models to focus on isolated high frequency details, which is often lost when the data is processed directly. In addition, utilizing 2D-DWT allows for faster inference speed for diffusion models as the spatial size is quartered according to the Nyquist rule. High Frequency Condition Generation The naive way to apply wavelet diffusion in high frequency prediction is to directly corrupt high frequency with additive Gaussian noise in the forward process and then learn to reverse it in the sampling phase, taking reconstructed low frequency x_l as the condition to guide the reverse diffusion process, as the similar way in <cit.>. 
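As a concrete illustration of the wavelet split described above, the one-level Haar decomposition and its inverse can be written with PyWavelets as below; the tensor shapes, per-channel handling, and function names are assumptions for illustration, not the authors' implementation. Why the raw low frequency band is nevertheless a weak diffusion condition is discussed next.

```python
import numpy as np
import pywt

def dwt_split(image):
    """One-level 2D Haar DWT: returns the LL (low frequency) band and the
    stacked high frequency bands (LH, HL, HH), each with quartered spatial size."""
    ll, (lh, hl, hh) = pywt.dwt2(image, "haar", axes=(0, 1))
    high = np.stack([lh, hl, hh], axis=-1)      # keep the three detail bands together
    return ll, high

def idwt_merge(ll, high):
    """Inverse transform: recombine LL with (LH, HL, HH) to recover the image."""
    lh, hl, hh = high[..., 0], high[..., 1], high[..., 2]
    return pywt.idwt2((ll, (lh, hl, hh)), "haar", axes=(0, 1))

x = np.random.rand(256, 256, 3)                 # stand-in for an RGB image
x_l, x_h = dwt_split(x)                         # x_l: (128, 128, 3), x_h: (128, 128, 3, 3)
x_rec = idwt_merge(x_l, x_h)
print(np.allclose(x, x_rec))                    # the Haar DWT is invertible -> True
```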
Nonetheless, low frequency cannot guide the conditional diffusion to generate satisfactory predicted high frequency as expected, since utilizing the low frequency as the condition would enforce the diffusion model to generate an image that resembles low frequency rather than high frequency as will be illustrated in Fig.<ref>. To derive conditions that encapsulate high frequency details for wavelet diffusion, we investigate the correlation between the wavelet low frequency and high frequency sub-bands. We present the tree structure diagram of the wavelet decomposition to provide a clearer illustration of the sub-bands correlations in Fig.<ref>. Inspired by the inter-band correlations of wavelet sub-bands, we design a high frequency condition generation module to convert the reconstructed low frequency x̂_l into refined high frequency x̅_h as the condition for wavelet conditional diffusion, i.e., x̅_h=G_ψ(x̂_l),where G_ψ is a neural network described in the Appendix. Conditional Diffusion Equipped with the refined high frequency as the condition x̅_h, we design a conditional diffusion model in the frequency domain to generate predicted high frequency x_h with high quality. A conditional Denoising Diffusion Probabilistic Model (DDPM) utilizes two Markov chains <cit.>. The first is a forward chain responsible for adding Gaussian noise to the data: q(x_t| x_t-1) = 𝒩(x_t ; √(1-β_t) x_t-1, β_t I) where β_t represents a variance schedule. The other Markov chain is a reverse chain that transforms noise back into the original data distribution. As is illustrated in Fig. <ref>, the key idea of our wavelet conditional diffusion is to introduce the refined high frequency x̅_h as the condition into the diffusion model μ_θ(x_t, t), thereby, μ_θ(x_t, t) becomes μ_θ( x_t,t,x̅_h): p_θ(x_t-1| x_t,x̅_h) = 𝒩(x_t-1 ; μ_θ(x_t, t, x̅_h), Σ_θ) where x̅_h represents conditional guidance that controls the reverse diffusion process. The parameters θ are typically optimized by a neural network that predicts μ_θ(x_t, t, x̅_h) of Gaussian distributions. This is simplified by predicting noise vectors ϵ_θ(x_t, t, x̅_h) with the following objective: L_simple =𝔼_𝐱_0, t, ϵ_t∼𝒩(0, 𝐈)[ϵ_t- ϵ_θ(𝐱_t, t,x̅_h)^2] Subsequently, the sampling phase can commence with X_T ∼𝒩(0,𝐈) using the noise ϵ_θ(x_t, t, x̅_h) predicted from the learned neural network, as follows: 𝐱_t-1=1/√(α_t)(𝐱_t-β_t/√(1-α̅_t)ϵ_θ(𝐱_t, t,x̅_h))+σ_tz where z ∼𝒩 (0,𝐈), α _t = 1 - β _t, and α̅_t = ∏_i=1^tα _i. This iterative sampling process enables the generation of samples 𝐱_t-1 backward in time, where each sample is computed based on the previously generated sample 𝐱_t. During the sampling process, the refined high frequency x̅_h is embedded into each step of the reverse diffusion process, which makes the sampled image consistent with the distribution of x̅_h and thus improves the quality of the predicted high frequency. The pseudo-code for sampling with conditional diffusion is as Alg.<ref>. §.§ Uncertainty-guided Residual Compression Diffusion models exhibits inherent uncertainty due to randomness of sampling from Gaussian noise, which will introduce instability in subsequent residual compression. Large uncertainty in predicted high frequency will produce large residuals that will consumes more bits than those with low uncertainty. A learned image compression model aims to optimize the R-D loss so that the minimum distortion can be achieved with the minimum bitrate. 
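For concreteness, the conditional reverse sampling summarized by the update rule for 𝐱_t-1 and Alg. <ref>, which produces the high frequency predictions whose uncertainty is analyzed below, might look like the following sketch. The denoiser signature and the choice of σ_t are assumptions; the paper's network and noise schedule may differ.

```python
import torch

@torch.no_grad()
def sample_high_freq(eps_model, cond, betas, shape, device="cpu"):
    """Reverse diffusion sampling conditioned on the refined high frequency.

    eps_model : network predicting the noise eps_theta(x_t, t, cond)
    cond      : refined high frequency condition, broadcastable alongside x_t
    betas     : 1-D tensor with the variance schedule beta_1 .. beta_T
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape, device=device)                # x_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        t_batch = torch.full((shape[0],), t, device=device)
        eps = eps_model(x, t_batch, cond)                # predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])  # posterior mean of x_{t-1}
        if t > 0:
            sigma = torch.sqrt(betas[t])                 # one common choice of sigma_t
            x = mean + sigma * torch.randn_like(x)
        else:
            x = mean                                     # no noise added at the final step
    return x
```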
Specifically, a trade-off hyper-parameter λ is utilized to balance the rate and distortion that is applied to the entire dataset, whereas the uncertainty within one image is not considered in this global trade-off. To alleviate the influence of uncertainty from the diffusion prediction on the subsequent residual compression, we propose an uncertainty-guided residual compression module. Firstly, the aleatoric uncertainty of the predicted high frequency x_h is estimated upon the Monte Carlo sampling  <cit.> of the above diffusion model. Subsequently, a novel uncertainty weighted R-D loss is designed in which the aleatoric uncertainty map is incorporated to prioritize pixels with high uncertainty and dynamically allocate more bits to them than those with low uncertainty. The introduction of uncertainty allows for more rational trade-off between rate and distortion considering the disequilibrium of the residuals, ultimately leading to improved compression performance. Uncertainty estimation. Aleatoric uncertainty arises from inherent variability and randomness in the underlying processes being modeled. Unlike other deterministic neural networks relying on Monte Carlo dropout mechanisms <cit.>, uncertainty estimation of the diffusion model is conceptually straightforward due to the randomness of the input noise. To be specific, aleatoric uncertainty can be captured by the variance of samples generated by a diffusion model. The uncertain estimation details are illustrated in Fig.<ref>. For a given conditional input x̅_h, our wavelet conditional diffusion model yields S different prediction samples of high frequency outputs {x_h1, x_h2, ..., x_hk,...,x_hs} after S times Monte Carlo Sampling, where x_hk denotes the prediction obtained from the k-th sampling given different initial input noise. To obtain an estimate of uncertainty regarding the high frequency predictions, the mean and variance of these S predictions is computed. The predicted mean of the output is calculated as: μ̂=1/S∑_k=1^Sx_hk The uncertainty of the prediction can be quantified by the variance of the prediction results, given by: δ=1/S∑_k=1^S(x_hk-μ̂)^2 Here, μ̂ represents the predicted mean, and δ denotes the predicted variance, which serves as an estimate of the uncertainty associated with the prediction results. Uncertainty weighted R-D Loss. Given an arbitrary image x, optimizing the VAE based image compression model for R–D performance has been proven to be equivalent to minimization of the KL divergence by matching the parametric density functions to the transform coding framework as follows <cit.>, L_RD ∝ E_x∼ p_x D_KL [q||p_ỹ|x ] =𝔼 _x∼ p_x𝔼_ỹ∼ q [ -logp_x|ỹ(x|ỹ)_weighted distortion-logp_ỹ (ỹ)_rate ] where ỹ is an approximation to the quantized latent representations ŷ with an additive i.i.d. uniform noise to enable end-to-end training. Specifically, minimizing log likelihood in the KL divergence is equivalent to minimizing the distortion between original x and reconstructed x̃ measured by squared difference when the likelihood p_x|ỹ(x|ỹ)∼ N(x|x̃,(2λ )^-1I ). The second term in Eq.(<ref>) denotes the cross entropy that reflects the cost of encoding ỹ i.e., bitrate R. The R-D loss can be reformulated as L_RD = R+λ x-x̃ _2 where λ is a hyper-parameter used to balance the overall rate-distortion, i.e.,larger λ for larger rate and better reconstruction quality and vice versa. λ is a global hyper-parameter applied to the entire dataset, whereas the uncertainty within one image is neglected in this trade-off. 
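A minimal sketch of this Monte Carlo estimate, reusing the sampler from the previous sketch, is given below; the number of samples and the function names are illustrative assumptions. The resulting variance map δ is what enters the uncertainty-weighted objective derived next.

```python
import torch

@torch.no_grad()
def mc_uncertainty(eps_model, cond, betas, shape, n_samples=8, device="cpu"):
    """Aleatoric uncertainty of the diffusion prediction via Monte Carlo sampling.

    Draws S high frequency predictions from different initial noise and returns
    their per-pixel mean (the prediction) and variance (the uncertainty map delta).
    """
    draws = torch.stack([
        sample_high_freq(eps_model, cond, betas, shape, device)
        for _ in range(n_samples)
    ])                                          # shape (S, *shape)
    mean = draws.mean(dim=0)                    # mu_hat: averaged prediction
    var = draws.var(dim=0, unbiased=False)      # delta: 1/S sum (x_hk - mu_hat)^2
    return mean, var
```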
To address this issue, we reconsider the weighted R-D loss with aleatoric uncertainty. Let x_r denote the residuals to be compressed, f(·) the variational inference in the residual compression module, and δ the aleatoric uncertainty estimated in the above subsection. This way, the compression model can be formulated as: x_r = f(y) + ϵδ where ϵ represents the Gaussian distribution with zero mean and unit variance, which is assumed for characterizing the likelihood function by: p(x_r|y, δ) = 1/√(2 πδ)exp(-x_r-f(y)_2/2 δ) The negative log likelihood then works out to be the uncertainty weighted distortion term between x_r and f(y), -log( p(x_r|y, δ) ) ∝x_r-f(y)_2/2 δ Uncertainty δ is incorporated into the denominator of the distortion term, resulting in residuals with high uncertainty having a relatively minor impact on the overall R-D loss function. Through comparison with the distortion term in Eq.(<ref>), we find that the weights λ and (2δ)^-1 play the same role in the distortion term. They both penalize pixels with large variance. However, in contrast to the role of the hyper-parameter λ in balancing the rate-distortion trade-off, the uncertainty, which serves as the prior of the image to be encoded, indicates that pixels with high uncertainty require a greater allocation of bits compared to those with low uncertainty. Inspired by that, we propose a new adaptive weighted loss named the uncertainty-weighted rate-distortion loss (L_URD) for residual compression. In fact, we need a monotonically increasing function to prioritize pixels with large uncertainty rather than penalize them using (2δ)^-1. Differential entropy is a measure of the information content of a continuous random variable, which reflects the cost of coding. The differential entropy of the random variable X is computed as follows given the probability density function p(X): H(X) = -∫ p(X)log(p(X))dX We substitute the probability distribution in Eq.(<ref>) into Eq.(<ref>) to obtain the differential entropy H(x_r) of x_r, H(x_r)=1/2log (2 π e δ) Eq.(<ref>) shows that the differential entropy increases monotonically with the variance δ. Motivated by this equation, we use the weight log(δ) to prioritize pixels with large uncertainty in the R-D loss function. Combining the hyper-parameter λ to balance the overall trade-off between the rate and distortion, the uncertainty weighted R-D loss function is reformulated as: L_URD = R+ (λ + log(δ)) ·x_r- x̂_r_2 where λ serves as a global weight applied to the entire dataset to balance the rate and distortion, whereas the estimated uncertainty δ serves as a regulator within the image to prioritize pixels with large uncertainty and allocate more bits to them during compression. Our proposed uncertainty weighted R-D loss provides a more rational trade-off between the rate and distortion compared with the regular R-D loss without uncertainty. For the rate term, assuming a non-informative prior p(δ) ∝ 1/δ, the joint posterior of the reconstruction and the uncertainty is p(x̂_r, δ| x_r) ∝ p(x̂_r| x_r, δ) p(δ) =1/√(2 πδ)exp(-x̂_r-x_r_2/2 δ) 1/δ =1/√(2 π)δ^3/2exp(-x̂_r-x_r_2/2 δ). Then p_x in Eq.(<ref>) can be replaced with p(x̂_r| x_r, δ): R_u= Σ p(x̂_r, δ| x_r) ·[-log _2 p_ŷ(Q(g_a(x ; ϕ_g)))] By allocating more bits to regions with higher variance, we can achieve better reconstruction of details, thus improving the quality of image compression. §.§ Training Strategy As analyzed above, the whole training process of UGDiff contains four steps. Firstly, we train a learned image compression network <cit.> for our low frequency codec. Details of the network structure are shown in the Appendix.
The loss function is L_low = R+λ· D = 𝔼_x_l ∼ p_x_l[-log _2 p_ŷ|ẑ(ŷ|ẑ)-log _2 p_ẑ(ẑ)] +λ·𝔼_x_l ∼ p_x_l[d(x_l, x̂_̂l̂)] where λ controls the trade-off between rate and distortion. R represents the bit rate of latent ŷ and side information ẑ, and d(x_l, x̂_̂l̂) is the MSE distortion term. The second step is to train the condition generation module, aiming at converting reconstructed low frequency x̂_l to refined high frequency condition x̅_h. We employed a deep convolutional neural network G_ψ with localized receptive fields, whose structure is illustrated in Appendix, optimized by minimizing MSE between the output and the original high frequency, L_generation = ||G_ψ(x̂_l)-x_h_2 The third step is to train the conditional diffusion model. The design of the denoising network in the diffusion model follows a similar U-Net architecture used in DDPM  <cit.> enhanced with residual convolutional blocks<cit.> and self-attention mechanisms.We use Eq.(<ref>) to optimize the parameters of the denoising network. The final step is to train the uncertainty guided residual compression model. The residual compression model follows the same structure as that for the low frequency compression, while optimized to minimize the uncertainty weighted R-D loss in Eq.(<ref>). § EXPERIMENT §.§ Implement Details Training. We use the OpenImages dataset <cit.> to train our models, consisting of over 9 million images. This dataset is widely used for image compression research. We randomly selected 300k images from OpenImages, resized them to 256 × 256 in each epoch. All models were trained using the Adam optimizer <cit.> for 1.8 million steps with a batch size of 16. The initial learning rate was set to 1× 10^-4 for the first 120k iterations, then reduced to 3× 10^-5 for the following 30k iterations, and further decreased to 1× 10^-5 for the last 30k iterations. We configured the λ within the range 0.01, 0.05, 0.1, 0.2, 0.3 for MSE optimization of low frequency compression network and residual compression module. Additionally, we maintained consistency in the number of channels, setting N = 192 and M = 320 for all models. We conduct the experiments using the Pytorch  <cit.> and CompressAI libraries  <cit.> over one NVIDIA Geforce RTX4090 GPU with 24GB memory. Evaluation. The evaluations are conducted on the Kodak dataset <cit.> and Tecnick dataset <cit.>. The Kodak dataset comprises 24 images with a resolution of 768x512 pixels. The Tecnick dataset includes 100 natural images with 600x600 resolutions. We calculate bit-per-pixel (bpp) for each image to show bitrate. We reach different ranges of bitrates by compressing images with different models trained using different λ. For reconstruction distortion, the common PSNR and MS-SSIM  <cit.> are evaluated for all models. PSNR represents the pixel-level distortion while MS-SSIM describes the structural similarity. In addition, we also compute the Learned Perceptual Image Patch Similarity (LPIPS) metric <cit.> to evaluate the perception loss. §.§ Comparison with the SOTA Methods Rate-Distortion Performance. To show the effectiveness of our UGDiff, we compare its R-D performance with the state-of-the-art (SOTA) image compression methods including diffusion model-based compression methods, such as CDC NeurIPS2024 (ρ=0) <cit.>,DIRAC(single sampling step is adopted to achieve minimal distortion) <cit.>, learned image compression methods, as well as traditional compression standards. The traditional compression standards include JPEG2000 <cit.>, BPG <cit.>, and VVC <cit.>. 
The learned compression methods include the context-free hyperprior model (Balle ICLR2018) <cit.>, the auto-regressive hyperprior models (mbt NeurIPS2018) <cit.>, entropy models with Gaussian Mixture Models and simplified attention (Cheng CVPR2020) <cit.>, Mixed Transformer-CNN model (Liu CVPR2023) <cit.>, lightweight attention model(He ESA2024) <cit.> and implicit neural representation model (Guo2024 NeurIPS 2024) <cit.>. Fig.<ref> and Fig.<ref> compare R-D curves in terms of PSNR and MS-SSIM verus bitrates averaged over the Kodak and Technick datasets respectively. Fig.<ref> compares the perception performance of different compression methods in terms of LPIPS over the Kodak and Technick datasets respectively. The figures demonstrate that our proposed UGDiff significantly outperforms SOTA compression methods including traditional codec, learned image compression methods as well as other diffusion based methods at all bitrate points in terms of PSNR, MS-SSIM, especially at low bitrates. The diffusion based methods, such as CDC, achieve excellent perception quality in terms of LPIPS ,whereas the fidelity is compromised resulting in inferiority to other learned image compression approaches in terms of PSNR and MS-SSIM. The experimental results demonstrate that our proposed UGDiff achieves better trade-off between perception and distortion metric compared to other diffusion methods. The reasons are twofold. Firstly, we utilized a novel diffusion predict-then-residual compress paradigm to address low fidelity issues during direct reconstruction from the diffusion model. Secondly, the uncertainty stemming from random sampling in diffusion is considered in the residual compression to achieve more rational trade-off between rate and distortion. BD-rate Analysis. To further compare the R-D performance quantitatively, we use the Bjøntegaard-delta-rate(BD-rate) metric <cit.> to compute the average bitrate savings for the same reconstruction quality in terms of PSNR. We use VVC intra <cit.> (version 12.1), the current best hand crafted codec, as the anchor to compute BD-rate. The BD-rates are shown in Table.<ref>. It is evident that our UGDiff outperforms the current best traditional codecs VVC, achieving the BD-rate savings of 8.02% and 11.54% on Kodak and Tecnick datasets respectively. And compared with the SOTA learned image Compression method Liu'2023, our UGDiff still achieves the BD-rate savings of 0.64% and 1.51%. Notably, our UGDiff achieves the Bd-rate savings of 23.66% and 31.13% compared with the diffusion based approach CDC, which demonstrates great improvement in terms of distortion metric. These comparisons demonstrate the RD performance superiority of our UGDiff over SOTA image compression methods. Subjective Quality Comparison. We also implement subjective quality evaluations on Kodak dataset. Fig. <ref> presents visual comparisons for original images along with the corresponding reconstructed results produced by various compression methods. As is shown in the second column of Fig.<ref>, JPEG produces severe blocking effects due to the partitioning mechanism in the coding standards. The learned image compression methods,such as Balle'2018, Cheng'2020, suffer from over-smoothing issues, characterized by a loss of textural fidelity in the reconstructed images. 
In particular, the subtle smile in the sign board within the first image are diminished,the numerals within the blue expanse of the sail in the second image are obscured, and the facial features of the statue in the third image are rendered with an excessive degree of smoothness. our proposed UGDiff retains more high frequency details and exhibits superior subjective visual qualities. The details are still perceptible in our reconstructed images,such as the smiling faces on the signboards, the numerals on the blue sail, and the cavities on the face of the statue. Complexity Analysis. We also evaluate the complexity by comparing the inference time of different compression methods on the Kodak dataset with the size of 512×768. Here, we calculate the encoding time and decoding time at the similar R-D points to evaluate the model complexity. For fair comparison, all the models are implemented on the same GPU using their public codes. It can be observed from Table.<ref>, that Balle'2018 demonstrates the lowest model complexity among the learned image codec. The diffusion model CDC suffers from slow decoding speed due to their iterative denoising process (A default sampling step 500 is adopted). Specifically, it takes approximately 55s to decode an image. Benefiting from the proposed wavelet diffusion model only implemented on sparse high frequency components, our UGDiff is at least 40× faster than CDC (A sampling step 10 is needed). Specifically the decoding time of our UGDiff can be reduced from 55s to 1.47s with a even higher PSNR compared with CDC. §.§ Ablation Studies We conduct the ablation studies to further analyze our proposed UGDiff. Firstly, we evaluate the sampling steps in the reverse denoising process. Then, we conduct ablation studies through including or excluding the components to evaluate the effect of high frequency, wavelet diffusion, diffusion condition as well as the uncertainty guidance. The specific components encompasses the low frequency codec, condition generation, wavelet diffusion, residual codec and uncertainty map guidance. Sampling steps. A factor that determines the inference speed of the diffusion models is the sampling step T used in the reverse denoising process. We conducted an ablation study of various settings of diffusion models on the condition and sampling step in Table.<ref>. The table clearly demonstrates that the R-D performance approaches a state of saturation when the sampling step exceeds a threshold of 10 irrespective of the guiding condition employed in the diffusion process, whether it be x̅_h or x̂_l. By contrast, large sampling step T = 500 is essential for CDC 2024 to generate images with the best perception quality. Compared with CDC 2024 that employed the diffusion in the pixel domain, the inference speed of our UGDiff is accelerated through two key factors: the spatial dimension is quartered by 2D-DWT, in addition, high frequency contains more sparse information in the frequency domain compared to the image domain. Effect of high frequency. We conduct an ablation study to validate the effectiveness of the high frequency on the image compression performance. We conduct comparative experiments using 3 variants. Variant 1:Without high frequency. Only low frequency is compressed and transmitted to the decoder to reconstruct the image without high frequency. The R-D performance is shown in the 1st column. Variant 2: Predict high frequency. 
Only the low frequency is compressed and transmitted to the decoder; meanwhile, the high frequency is predicted from the reconstructed low frequency at the decoder. The R-D performance is demonstrated in the 2nd and 3rd columns, utilizing condition generation and wavelet diffusion for prediction, respectively. Variant 3: Reconstruct high frequency. Besides the low frequency, the residual between the original and predicted high frequency is also compressed and transmitted to the decoder. The high frequency is reconstructed by adding the reconstructed residual and the predicted high frequency. The R-D performance is shown in the 4th and 5th columns, utilizing condition generation and wavelet diffusion for prediction, respectively. From the results demonstrated in Table <ref>, we can observe that the high frequency contributes dramatic performance gains although very few bits are consumed. When the high frequency is discarded and only the low frequency is transmitted, the R-D performance drops dramatically. By comparing the 1st column with the 2nd and 3rd columns, we can observe that the high frequency components, which are merely predicted from the reconstructed low frequency without any additional bit consumption, yield a PSNR improvement ranging from 0.4 to 1.0 dB depending upon the prediction mechanism employed. Even the most accurate prediction values exhibit discrepancies when compared to their original counterparts. By comparing the 2nd and 3rd columns with the 4th and 5th columns in the table, a substantial PSNR improvement of about 2.5 dB is further achieved with a minimal increase in bit consumption when the residual between the original and predicted high frequency is also compressed and transmitted to the decoder to facilitate high frequency reconstruction. In conclusion, the high frequency plays a pivotal role in image compression, which motivates us to focus on high frequency compression. Effect of Wavelet Diffusion. To validate the effectiveness of wavelet diffusion on the image compression performance, we conduct an ablation study using 3 variants. Variant 1: Without wavelet diffusion. The refined high frequency x̅_h obtained by the condition generation module is directly used for high frequency reconstruction without reverse diffusion sampling; its R-D performance is shown in the 2nd column of Table <ref>. Variant 2: Utilizing wavelet diffusion for reconstruction. The high frequency is generated from the wavelet diffusion model guided by the refined high frequency derived from the condition generation module; its R-D performance is shown in the 3rd column of Table <ref>. Variant 3: Utilizing wavelet diffusion for prediction. The high frequency is predicted from the wavelet diffusion, and the prediction residual between the original and predicted high frequency is also compressed and transmitted to the decoder; its R-D performance is shown in the 5th column of Table <ref>. By comparing the variants with and without wavelet diffusion in the 2nd and 3rd columns of the table, we observe that wavelet diffusion greatly improves the image compression performance due to its great capacity to generate images. When wavelet diffusion is utilized to reconstruct the high frequency directly, PSNR is improved by 0.72 dB compared with the variant without wavelet diffusion. Furthermore, when wavelet diffusion is utilized to predict the high frequency and the prediction residual is also transmitted, the PSNR is further improved by 2.37 dB, comparing variant 2 in the 3rd column with variant 3 in the 5th column. 
That is mainly because the diffusion models may compromise fidelity when applied to reconstruct images directly. Therefore, prediction is a more suitable strategy than direct reconstruction when the diffusion model is applied to image compression. Effect of Condition on Diffusion. The condition controls the generation content of the diffusion model tightly. We introduced a condition generation module which obtains a condition stronger than the reconstructed low frequency by leveraging the inter-band correlation of wavelet coefficients. To conduct an ablation study which validates the effectiveness of the condition generation module, we implement a baseline by using the reconstructed low frequency x̂_l as the condition directly without condition generation. Table.<ref> compares the overall R-D performance of our proposed UGDiff under different conditions and sampling steps. Compared with that conditioned by reconstructed low frequency, about 0.3dB improvement regarding PSNR and 1.25dB improvement regarding MS-SSIM are achieved by UGDiff conditioned by the refined high frequency x̅_h obtained by the condition generation module. Effect of Uncertainty Guidance. We conduct an ablation study to evaluate the effect of uncertainty guidance on our UGDiff. Fig.<ref> compares the R-D curves of variants with and without uncertainty estimation. From the figure we can observe R-D performance gains at all R-D points when the uncertainty of diffusion model is introduced in the weighted R-D loss to optimize the residual compression model. §.§ Further Analysis Wavelet diffusion condition. The guiding condition exerts strong control over the reverse diffusion process. Fig.<ref> illustrates the visualization results of reverse diffusion process under different conditions. From the first row of the figure, it is evident that the process of reverse diffusion, when conditioned by the reconstructed low frequency x̂_l , leads to acquisition of an image that resembles low frequency representation, consequently manifesting a loss of certain detailed textures. That would make the prediction residual quite large and affect the efficacy of residual compression. As is illustrated in the figure, the generated refined high frequency from the condition generation module contains more high frequency details. The visualization of reverse diffusion process demonstrates that the generated image sampled from reverse diffusion conditioned by refined high frequency resembles original high frequency more than that conditioned by low frequency. That indicates our proposed condition generation module can provide a strong guide for the conditional diffusion model to predict high frequency. Uncertainty weighted rate-distortion loss. To thoroughly investigate the impact of uncertainty weighted rate-distortion loss on residual compression networks, we visually illustrate the residual image, uncertainty map, latent representations, and bit allocation of models with and without uncertainty-guided optimization respectively in Fig.<ref>. The figure demonstrates that the distribution of predictive residuals is found to be consistent with the distribution delineated in the uncertainty map, which demonstrates that the uncertainty map reveals instability of the wavelet diffusion model to predict the high frequency. 
Furthermore, it is observed in the 2nd column that the residual compression model, which employ a regular rate-distortion loss neglecting uncertainty, treats residuals more uniformly and allocates bits more evenly across the entire residual image. By contrast, the uncertainty weighted rate-distortion loss enforces the residual compression network to prioritize the large residuals identified by the uncertainty map. Consequently, the latent representations of these regions become more prominent, leading to more rational bit allocations. § CONCLUSION In this paper, we present an effective Uncertainty Guided image compression approach with wavelet Diffusion (UGDiff). We employ the wavelet diffusion for high frequency prediction rather than direct reconstruction and subsequently utilize a residual compression module to maximally recover high frequency details. This diffusion prediction-then-residual compression paradigm effectively addresses the low fidelity issue common in existing diffusion models. In addition, our wavelet diffusion approach exhibits a significant improvement in inference speed compared to previous diffusion-based methods. We also designed an uncertainty weighted R-D loss for the residual compression module, which provide more rational trade-off between the bitrate and distortion. Experimental results demonstrate that our proposed UGDiff outperforms state-of-the-art learned image compression methods in terms of R-D performance and subjective visual quality. Considering the great capacity of wavelet transform in multi-resolution analysis, we plan to extend our approach to resolution scalable image compression in our future work. § ACKNOWLEDGMENTS This work was supported in part by the National Natural Science Foundation of China under Grant 62373293. Professor Ajmal Mian is the recipient of an Australian Research Council Future Fellowship Award (project number FT210100268) funded by the Australian Government. IEEEtran [ < g r a p h i c s > ] Juan Song received her Ph.D. degree in communication and information system from Xidian University, Xi’an, China in 2012. She is now an Associate Professor in computer science and technology school of Xidian University. Her research interests include low-level vision, image and video compression and graph neural network. She has published more than 20 academic papers in peer-reviewed international journals and conferences. [ < g r a p h i c s > ] Jiaxiang He received his bachelor's degree in Automation from Xi'an University of Architecture and Technology, Xi'an, China, in 2022. He is currently a graduate student in the School of Computer Science and Technology at Xidian University. His research interests include deep learning and image compression. [ < g r a p h i c s > ] Mingtao Feng received the Ph.D. degree in circuits and systems from the College of Electrical and Information Engineering, Hunan University, Changsha, China, in 2019. From 2016 to 2018, he was a visiting Ph.D. student with the University of Western Australia, Perth, WA, Australia. He is currently an Associate Professor with the School of Artificial Intelligence, Xidian University. His research interests include image processing, computer vision, and machine learning. [ < g r a p h i c s > ] Keyan Wang received the M.S. and Ph.D. degree in information and telecommunication engineering from Xidian University, Xi'an, China, in 2005 and 2008, respectively. 
She is currently an Associate Professor with the State Key Laboratory of Integrated Service Networks, Xidian University, Xi'an, China. From September 2014 to September 2015, she worked as a visiting scholar with the Department of Electrical and Computer Engineering, McMaster University, Hamilton, Canada. Her current research interests focus on image/video compression and processing, computer vision, and deep learning. [ < g r a p h i c s > ] Yunsong Li received the M.S.degree in telecommunication and information systems and the Ph.D. degree in signal and information processing from Xidian University, Xi’an, China,in 1999 and 2002, respectively.In 1999, he joined the School of Telecommunications Engineering, Xidian University, where he is currently a Professor and the Director of the State Key laboratory of Integrated Services Networks,Image Coding and Processing Center. His research interests focus on image and video processing and high-performance computing. [ < g r a p h i c s > ]Ajmal Mian is a Professor of Computer Science at The University of Western Australia. He has received several awards including the West Australian Early Career Scientist of the Year Award, the Aspire Professional Development Award, the Vice-chancellors Mid-career Research Award, the Outstanding Young Investigator Award, IAPR Best Scientific Paper Award, EH Thompson Award, and excellence in research supervision award. He has received several major research grants from the Australian Research Council and the National Health and Medical Research Council of Australia with a total funding of over $13 Million. He serves as an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Image Processing and the Pattern Recognition journal. His research interests include computer vision, machine learning, 3D shape analysis, human action recognition, video description and hyperspectral image analysis. Architecture details. The structure for low frequency and residual compression is depicted in Fig.<ref>. The encoder, denoted as E, maps the input image x to a latent representation y. Following quantization by Q, we obtain the discrete representation of the latent variable, denoted as ŷ. This ŷ is subsequently transformed back to a reconstructed image x̂ through the decoder, represented as D. The core process is expressed as follows: [ y=E(x ; ϕ); ŷ=Q(y); x̂=D(ŷ ; θ) ] where ϕ and θ are trainable parameters of the encoder E and decoder D. We model each element ŷ_̂î as a single Gaussian distribution with its standard deviation σ_i and mean μ_i by introducing a side information ẑ_̂î. The distribution p_ŷ_i|ẑ_i of ŷ_i is as follows: p_ŷ_i|ẑ_i(ŷ_i|ẑ_i)=𝒩(μ_i, σ_i^2) To expedite the prediction process of the latent variable ŷ, we employ the Channel-wise Auto-regressive Entropy Model <cit.> for parallel accelerated prediction. This approach results in faster decoding. For our condition generation module, we design a deep U-net-like CNN architecture with a localized receptive field for estimating the score of probability of high frequency coefficients conditioned on low frequency coffcients. Specifically, the encoder's downsampling module comprises two 3 x 3 convolutional layers and a 2 x 2 maximum pooling layer, each of which applied four times. The decoder's upsampling module includes a deconvolutional layer and a feature splicing layer, repeated four times. The network implementation details are shown in Fig<ref>.
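To make the appendix description more concrete, the following PyTorch sketch assembles a toy version of the encode-quantize-decode pipeline (y = E(x; ϕ), ŷ = Q(y), x̂ = D(ŷ; θ)) together with a rate-distortion objective of the form R + λ·D. The layer sizes, the additive-noise quantization proxy, and the simple per-channel Gaussian entropy model are illustrative assumptions only; they do not reproduce the paper's channel-wise auto-regressive entropy model or exact architecture.

# Minimal PyTorch sketch of an encode-quantize-decode codec with an R + lambda*D loss.
# All sizes and the fixed Gaussian entropy model are toy assumptions for illustration.
import torch
import torch.nn as nn

class ToyCodec(nn.Module):
    def __init__(self, ch_in=3, n=64, m=96):
        super().__init__()
        self.encoder = nn.Sequential(                    # E(x; phi)
            nn.Conv2d(ch_in, n, 5, stride=2, padding=2), nn.GELU(),
            nn.Conv2d(n, m, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(                    # D(y_hat; theta)
            nn.ConvTranspose2d(m, n, 5, stride=2, padding=2, output_padding=1), nn.GELU(),
            nn.ConvTranspose2d(n, ch_in, 5, stride=2, padding=2, output_padding=1),
        )
        # Per-channel Gaussian parameters standing in for the hyperprior p(y_hat | z_hat).
        self.mu = nn.Parameter(torch.zeros(m))
        self.log_sigma = nn.Parameter(torch.zeros(m))

    def forward(self, x):
        y = self.encoder(x)
        # Additive uniform noise is the usual differentiable proxy for quantization Q.
        y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5) if self.training else torch.round(y)
        x_hat = self.decoder(y_hat)
        # Likelihood of each quantized latent under N(mu, sigma^2), integrated over a unit bin.
        mu = self.mu.view(1, -1, 1, 1)
        sigma = self.log_sigma.exp().view(1, -1, 1, 1)
        dist = torch.distributions.Normal(mu, sigma)
        likelihood = (dist.cdf(y_hat + 0.5) - dist.cdf(y_hat - 0.5)).clamp_min(1e-9)
        return x_hat, likelihood

def rd_loss(x, x_hat, likelihood, lam=0.1):
    num_pixels = x.shape[0] * x.shape[2] * x.shape[3]
    rate_bpp = -likelihood.log2().sum() / num_pixels     # R: estimated bits per pixel
    distortion = nn.functional.mse_loss(x_hat, x)        # D: MSE term d(x, x_hat)
    return rate_bpp + lam * distortion, rate_bpp, distortion

model = ToyCodec()
x = torch.rand(2, 3, 64, 64)
x_hat, likelihood = model(x)
loss, bpp, mse = rd_loss(x, x_hat, likelihood, lam=0.1)
loss.backward()

In this sketch the same loss structure can be reused for the residual codec by feeding it the prediction residual instead of the low frequency band, optionally weighting the distortion term by an uncertainty map as discussed in the main text.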
http://arxiv.org/abs/2407.13348v1
20240718094909
Multipartite Entanglement versus Multiparticle Entanglement
[ "Marcin Wieśniak" ]
quant-ph
[ "quant-ph" ]
Institute of Theoretical Physics and Astrophysics, Faculty of Mathematics, Physics and Informatics, University of Gdańsk, ul. Wita Stwosza 57, 80-309 Gdańsk, Poland. marcin.wiesniak@ug.edu.pl § ABSTRACT Entanglement is defined as the presence of quantum correlations beyond those achievable by local operations and classical communication. To identify its presence in a generic state, one can, for example, check for the existence of a decomposition into separable states. A natural extension is genuine multipartite entanglement (GME), understood as the nonexistence of a decomposition into biseparable states (later called a biseparable decomposition, BD). In this contribution we revisit the activation of GME. We discuss a few examples of states which are decomposable into a mixture of biproduct states. However, after merging two copies of these states, we certify the nonexistence of a BD with witness operators. This seems to challenge our understanding of GME as a separate resource. It turns out that it requires a careful consideration of the physical context. We stress that activation of GME from multiple copies of GME-free states necessarily involves entangling operations. Multipartite Entanglement versus Multiparticle Entanglement Marcin Wieśniak July 22, 2024 =========================================================== § INTRODUCTION Quantum mechanics diverges from classical physics by including the superposition principle. For an individual quantum system this means a possibility to experience more than one classically understood history. An even more striking effect is entanglement, which allows two, possibly spatially separated, subsystems to be in a joint superposition <cit.>. That is to say, either subsystem's properties are described only in reference to the other, e.g., the spins of both particles are always anticorrelated. With the rise of interest in Bell's theorem <cit.>, entanglement turned out to be an information processing resource useful in, for example, quantum cryptography <cit.>. On the other hand, researchers found it to be surprisingly complex to characterize. A pure state of two systems is entangled iff it cannot be represented as a product of two local states. Entanglement of a mixed state is defined by the nonexistence of a separable decomposition: ρ is entangled iff ρ≠∑_jp_jΠ(|ψ_j⟩⊗|ϕ_j⟩), ∑_jp_j=1, p_j≥ 0, Π(|ψ⟩)=|ψ⟩⟨ψ|. As it is not feasible to consider all possible decompositions, it has not been considered highly practical to verify the presence of entanglement with this definition, and many additional criteria for entanglement were formulated. A negative eigenvalue of the state under partial transposition is a strong indicator of the presence of quantum correlations <cit.>, but not a universal one, as PPT entangled states exist <cit.>. Any form of entanglement can be detected by an application of a positive, but not completely positive, map on the state <cit.>, but for a given state it is unfeasible to check all possible positive maps. By the Jamiołkowski-Choi isomorphism <cit.>, any such map corresponds to an entanglement witness operator <cit.>, which can attain a positive (by the convention taken here) mean value only for a certain class of entangled states. However, until recently we have lacked an efficient way to find the witness operator. The entanglement detection problem affects the efforts in quantifying this resource. 
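As a small numerical illustration of the partial-transposition test mentioned above, the following NumPy sketch computes the negativity of a two-qubit state. The noisy Bell state used as input and the factor of 2 in the negativity are choices made for this example, not taken from the paper.

# Illustrative NumPy check of the PPT criterion: a negative eigenvalue of the
# partially transposed state signals entanglement for two qubits.
import numpy as np

def dm(psi):
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def partial_transpose_B(rho):
    # Reshape the 4x4 matrix into indices (i_A, i_B, j_A, j_B) and transpose subsystem B.
    r = rho.reshape(2, 2, 2, 2)
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def negativity(rho):
    eig = np.linalg.eigvalsh(partial_transpose_B(rho))
    return -2.0 * eig[eig < 0].sum()

psi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)          # |Psi^+> = (|00> + |11>)/sqrt(2)
for p in (1.0, 0.5, 0.2):
    rho = p * dm(psi_plus) + (1 - p) * np.eye(4) / 4    # Bell state mixed with white noise
    print(p, negativity(rho))                           # nonzero only when the PPT test fails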
Obviously, quantum correlations are a quantitative rather than a qualitative feature, and there have been a number of propositions for their proper measures, e.g., <cit.>. But, again, so far, any formulation of such a measure has been a compromise between some reasonable requirements (discussed below), mainly sacrificing its ability to be efficiently computed. Even with our incomplete understanding of bipartite entanglement, it is a natural next step to discuss genuinely multipartite entanglement (GME). For example, multipartite quantum correlations have been shown to lead to a stronger all-versus-nothing conflict between quantum mechanics and local hidden variable models <cit.> than the original Bell inequality for two qubits <cit.>. Correlations between multiple systems may also be necessary to reach the ground state energy in certain interacting spin lattices <cit.>, or to generate word states in an error-correcting code <cit.>. It is thus tempting to generalize definition (<ref>) to ρ is genuinely multipartite entangled iff ρ≠∑_𝒜⊂Ω∑_j_𝒜p_j_𝒜Π(|ψ_j_𝒜⟩_𝒜⊗|ϕ_j_𝒜⟩_𝒜^C), ∑_𝒜⊂Ω∑_j_𝒜p_j_𝒜=1, p_j_𝒜≥ 0. Here, Ω is the set of all parties describing state ρ, 𝒜 goes over all its proper subsets and 𝒜^C is the complement of 𝒜 in Ω. The right-hand side of the first line of definition (<ref>) will be called the biseparable decomposition and abbreviated as BD. Recently, Yamasaki et al. <cit.> have discussed the phenomenon that for multiple copies of certain states, the BD breaks down. The authors of Ref. <cit.> dub this effect activation of genuine multipartite entanglement. In this contribution, we investigate the relation of this effect to the requirements for entanglement measures. As we shall see, the usual interpretation leads to the inevitable conclusion that genuine multipartite entanglement cannot be a proper resource, as it can be produced either by simply taking multiple copies of the same GME-free state, or by applying supposedly local operations. We also investigate the third logical possibility, namely that these operations should not be considered local. In this contribution we investigate a few of the most basic cases, in which, according to definition (<ref>) and Ref. <cit.>, genuine multipartite entanglement is activated with two copies. This is confirmed by a witness operator. However, a single copy cannot be multipartite entangled by construction. While this is well accepted by Yamasaki et al., we believe that this point of view calls for a deeper discussion. While the question of the number of copies and the nature of operations needed to create GME is relevant, it needs to be put in the context of operational feasibility. We shall discuss an alternative formulation of the multipartite entanglement problem, which is free of these difficulties. § REQUIREMENTS FOR BIPARTITE AND MULTIPARTITE ENTANGLEMENT MEASURES It is not surprising that entanglement has been considered a resource, both in the sense of its potential advantage in communication and computation tasks, and in the sense of resource theory, where we distinguish states and operations that are free or rich in the resource. Separable states are free, as all of them can be obtained by LOCC. Likewise, local operations, possibly coordinated among participants, cannot be used to create or increase entanglement, so they are considered free. This opens a whole theory of transformations between different states. To understand which transitions between states are possible, and at what rates, we need some means to quantify the amount of our resource. 
The basic requirements for such a measure E in the bipartite case are <cit.>: * Discriminance: E(ρ)=0 if ρ is not entangled, i.e. there exist probabilities p_j and product states |ψ_j⟩⊗|φ_j⟩, such that ρ=∑_jp_jΠ(|ψ_j⟩⊗|φ_j⟩). * Monotonicity: E(ρ) cannot increase under stochastic local operations and classical communication (SLOCC). * Convexity: E(pσ+(1-p)ρ)≤ p E(σ)+(1-p)E(ρ), where 0≤ p≤ 1. * Asymptotic continuity: Let ρ_m and σ_m denote series of states acting on Hilbert spaces (ℋ_d_1⊗ℋ_d_2)^⊗ m. Then ||ρ_m-σ_m||_1→ 0⇒(E(ρ_m)-E(σ_m))/m→ 0. * Normalization: E(Π(|Ψ^+⟩))=1, where |Ψ^+⟩=(|00⟩+|11⟩)/√(2). * Additivity: E(ρ⊗σ)=E(ρ)+E(σ). * Computability: There is an efficient way to compute E(ρ) for any ρ. Jointly, these conditions have up to date prevented us from formulating a universal measure of bipartite entanglement. Some candidate figures of merit were proposed and proven to satisfy most of the requirements, but they largely fail to be computable. Examples include entanglement cost, distillable entanglement <cit.>, or measures based on extensions to pure states <cit.>. Therefore, some requirements were modified. For example, asymptotic continuity can be efficiently replaced by regular continuity, ρ_m→_m→∞ρ⇒ E(ρ_m)→_m→∞E(ρ), which is much easier to prove. Monotonicity can be replaced by weaker monotonicity under deterministic LOCC, and in this contribution we will consider an even weaker requirement: no production of (genuinely multipartite) entanglement under SLOCC. Likewise, additivity can be relaxed to extensivity, stating that E(ρ^⊗ n)=n E(ρ), but here it will be sufficient to consider an even weaker statement: no production under copying of the state. Negativity-related entanglement indicators <cit.> are straightforwardly computable, but they largely fail the discriminance criterion due to the existence of bound entangled states. On the other hand, one can formulate entanglement indicators and measures based on various distance functions: relative entropy <cit.>, Bures <cit.>, trace <cit.>, and Hilbert-Schmidt (HS) distances <cit.>. Their main advantages are discriminance, and, in particular in the case of the Hilbert-Schmidt distance, convexity. While the exact computation of the distance between a given state and a convex set remains difficult, a recent adaptation of the Gilbert <cit.> algorithm <cit.> allowed us to efficiently obtain both precise upper and lower bounds on the HS distance to the closest separable state. More recently, similar techniques were applied to the Bures distance <cit.>. Interestingly, a brief review on multipartite entanglement measures <cit.> follows Ref. <cit.> in not mentioning additivity or extensivity as a minimal requirement for a measure of genuine multipartite entanglement. However, Refs. <cit.>, to which Ref. <cit.> redirects the reader, mention these properties. Also, it is worth noticing that Ref. <cit.> quotes both subadditivity and superadditivity as possible relaxations of requirement 6. § GILBERT ALGORITHM We now briefly recall the Gilbert algorithm, which yields an approximation of the closest separable state to a given state, with respect to the Hilbert-Schmidt distance, D_HS(A,B)=√(Tr(A-B)(A-B)^†). The generalization of the algorithm to testing the existence of a BD is straightforward: one just replaces a product state in step 1 with a state which is a biproduct with respect to a different partition. After sufficiently many repetitions of the algorithm we obtain three quantities informing us about the (multipartite) entanglement content. 
The first is the last distance found after, e.g., a given number of corrections, D_Last=√(Tr(ρ-ρ')^2). The second is the witness distance. The witness operator is defined as <cit.>: W= (ρ-ρ'-λ1)/D_Last, λ= max_|ψ⟩ is bipartite product⟨ψ|(ρ-ρ')|ψ⟩. The witness distance is defined as D_Wit=max(0, Trρ W). It is also useful to estimate the distance from the decay of D^2_Last, which is stored in the list l. First we create the list l̃ by rejecting the first one third of the entries. Second, we shift l̃ by a and invert the entries. With c being the list of consecutive numbers of corrections, the squared estimated distance is defined as D_Est^2 = Argmax_0≤ a≤ D_Last(R(c,1/(l̃-a))), R(x,y)= (xy-x̄·ȳ)/√((x^2-(x̄)^2)(y^2-(ȳ)^2)). Here, the bar denotes the average value of a list (taken after the entry-wise operation), and the product of two lists is entry-wise. Importantly, D_Last is an upper bound on the actual distance, while D_Wit is a lower bound. In particular, finding D_Wit>0 certifies the nonexistence of a BD. On the other hand, the Algorithm cannot reach D_Last=0, but for a large class of states it can be made arbitrarily low with a sufficiently long runtime, leading us to a strong belief that the studied state is indeed separable. Thus the Gilbert algorithm provides a high level of discriminance and computability of the Hilbert-Schmidt distance. However, it is known that it violates contractiveness under LOCC. § EARLY REALIZATIONS OF GENUINE MULTIPARTITE ENTANGLED STATES In this section we will briefly recall two pioneering quantum optical experiments, which led to the observation of multipartite entanglement. They will turn out to be both conceptually relevant to the examples of GME states discussed below and helpful in considering their feasibility. These realizations were based on spontaneous parametric down-conversion (SPDC) <cit.>. In short, it is a process in which, inside a nonlinear crystal, a high-frequency (pump) photon is converted into a highly correlated pair of low-frequency photons. Ultrastrong, strictly quantum correlations between the output (signal and idler) photons include the frequencies summing to that of the pump photon, times of creation being equal, their positions being symmetrical with respect to a fixed axis, and, in type-II SPDC, polarization. For example, the interaction Hamiltonian between the pump beam and the output field can have a term H_Int= i(za_p,Ha^†_c,Ha^†_d,H-z^*a^†_p,Ha_c,Ha_d,H), with z being a complex amplitude and a_x,Y being the annihilation operator for mode x (p for pump, c for signal, d for idler) and polarization Y=H,V. In the first approximation, the output of SPDC, i.e. the (four-mode) squeezed vacuum, can be written as α|Ω⟩+β(|00⟩+|11⟩)/√(2)=α|Ω⟩+β|Ψ^+⟩, where α and β are some coefficients, |Ω⟩ denotes the vacuum in the output modes, and |00⟩ (|11⟩) denotes two photons with, say, horizontal (vertical) polarization. 
Note that the requiring that both photons reaching PBS have the same polarization is a projection onto two-dimensional space. The other example to be recalled here is obtained when a single SPDC source emits two pairs at once. In the second quantization, we write it as proportional to (â_H^†ĉ_H^†+â_V^†ĉ_V^†)^2|Ω⟩. Next, output channels a and c are split with a non-polarizing beam splitter into modes a, b, and c, d, respectively. With a post-selection condition that one photon propagates to one output channel, the state is <cit.> |Ψ_4⟩ ∝ (2â_H^†b̂_H^†ĉ_H^†d̂_H^†+2â_V^†b̂_V^†ĉ_V^†d̂_V^†+â_H^†b̂_V^†ĉ_H^†d̂_V^† + â_H^†b̂_V^†ĉ_V^†d̂_H^†+â_V^†b̂_H^†ĉ_H^†d̂_V^†+â_V^†b̂_H^†ĉ_V^†d̂_H^†)|Ω⟩ ∝ 1/√(3)(|0000⟩_ABCD+|1111⟩_ABCD. +.|Ψ^+⟩_AC⊗|Ψ^+⟩_BD). This state is equivalent, up to local unitary transformations, to the four-qubit singlet state, in which two pairs spins-1/2 jointly create two spins-1, which then nullify each other. We shall stress that the key ingridients are the beam splitters and postselection. While the description of this scheme requires the second quantization, in which we lack a proper definition of entanglement, combination of these two elements represent an entangling action on two particles. § MULTIPLE COPIES OF SOME STATES LOOSE BD First, let us consider a simple case of a pure state. Let B simultaneously share |Ψ^+⟩ with both A and C, which can be written as |φ_4⟩ = |Ψ^+⟩_AB_1|Ψ^+⟩_B_2C ≈ 1/2∑_i,j=0^1|i⟩_A|ij⟩_B|j⟩_C. This state does not have a biseparable decomposition. In fact, consider tripartite negativity <cit.>, 𝒩_ABC= (𝒩_A|BC𝒩_B|AC𝒩_C|AB)^1/3, 𝒩_I|JK= -2∑_ϵ_i<0ϵ_i(ρ^Γ_I), where ρ^Γ_I is the state partially transposed with respect to subsystem I and ϵ_i(ρ^Γ_I)s are its eigenvalues. The negativity of |φ_4⟩⟨φ_4| transposed with respect to A and C is 1, while for B it equals 3, giving 𝒩_ABC=3^1/3≈ 1.4422. Notice that the state |Ψ^+⟩_AB_1|Ψ^+⟩_B_2C is obtainable both from two copies of GHZ states and two copies of the three-party W state, |W⟩=1/√(3)(|001⟩+|010⟩+|001⟩). and it can be transformed into a single copy of either of them. To pass from two GHZ states to a W state, A measures the second qubit in any basis in the x-y plane, while C does the same for the first qubit. Depending on their measurements and outcomes, the state of the remaining qubits is locally unitarily transformed to the state given in the middle line of <ref>. Now, B probabilisticly projects qubits with P_B'=([ 0 1/√(2) 1/√(2) 0; 1 0 0 0 ]), and coherently attenuates the remaining qubit state |1⟩ with Attn=([ 1 0; 0 1/√(2) ]). The overall probability of this conversion is 3/8. In the converse protocol, one copy of the w state is projected with |0_A_2⟩⟨0_A_2|, while the other is projected with|0_C_1⟩⟨0_C_1|. Consequently, we |Ψ^+⟩_AB_1|Ψ^+⟩_B_2C and B can project onto P_B=([ 1 0 0 0; 0 0 0 1 ]) to obtain the GHZ state. The probability of of these projections being successful is 2/9. Now, let us discuss examples of states, two copies of which in this paradigm reveal genuine multipartite entanglement. Consider state ρ_1(θ)= 1/2(Π(|θ⟩_A)⊗Π((|00⟩_BC+|11⟩_BC)/√(2)). + .Π((|00⟩_AB+|11⟩_AB)/√(2))⊗Π(|θ⟩_C)) with |θ⟩=(cos(θ)|0⟩+sin(θ)|1⟩)/√(2) and Π(|ψ⟩)=|ψ⟩⟨ψ|. By construction, ρ_1 satisfies condition (<ref>). However, let us take two copies of ρ_1 shared between parties A, B, and C, so that each party holds two qubits identically correlated with the rest. 
While we do not have convincing enough arguments that ρ_1(θ)^⊗ 2 are not decomposable into biseparable states, we let each observer perform a projection on a two-dimensional subspace. B performs projection P_B whereas A and C each project onto P_A/C=([ cos^2(θ) cos(θ)sin(θ) cos(θ)sin(θ) sin^2(θ); sin(2θ)/√(2) -cos(2θ)/√(2) -cos(2θ)/√(2) sin(2θ)/√(2) ]). The resulting state is hence ρ_3(θ)=(P_A/C⊗ P_B⊗ P_A/C)ρ_1(θ)^⊗ 2(P_A/C⊗ P_B⊗ P_A/C)^† Figure 2 shows D_Last, D_Est, and D_Wit in function of θ with Algorithm <ref> having performed between 1800 and 3500 corrections. As seen in the figure, the resulting state has no BD for all values of θ, as certified by D_Wit. In particular, for θ=0 we have ρ_3(0)=1/9([ 8 0 0 0 0 0 0 2; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 2 0 0 0 0 0 0 1; ]), for which the witness of decomposability into biseparable states is close to (up to a multiplicative constant) W_3(0)=8|GHZ_3⟩⟨GHZ_3|-1_8× 8, max_|ψ⟩ is biseparable⟨ψ|W_3(0)|ψ⟩=3, where |GHZ_3⟩=1/2(|000⟩+|111⟩). The witness indicates the distance from ρ_3(0) and the set of biseparable states is √(8/63)≈ 0.2376, which is slightly larger than the witness distance found by the algorithm, 0.2333 Another example we want to present here is a four-qubit state, composed of two pairs of Bell states, ρ_2 = 1/2(|Ψ^+⟩_AB⟨Ψ^+|_AB⊗|Ψ^+⟩_CD⟨Ψ^+|_CD. + .|Ψ^+⟩_AD⟨Ψ^+|_AD⊗|Ψ^+⟩_BC⟨Ψ^+|_BC). Again, the parties share two-copies of the state and each performs a projection P_B given by Eq. (<ref>). The resulting state reads ρ_4=1/12([ 4 0 0 1 0 0 1 0 0 1 0 0 1 0 0 4; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 4 0 0 1 0 0 1 0 0 1 0 0 1 0 0 4 ]). The Gilbert algorithm tested ρ_4 against full separability and after 7250 corrections found the following approximation of the closest separable state, with precision up to 10^-5. ρ'_4≈ 10^-5( [ 26475 6 27 2662 -6 4 2669 0 -38 2663 0 -11 2676 -2 0 3079; 6 3552 4 -8 1843 -14 -7 2320 23 -3 6 -2 -14 2311 -3 4; 27 4 3547 -6 1 3 -3 -3 1835 -2 5 2312 6 0 2320 0; 2662 -8 -6 3136 0 9 1550 -16 -11 1536 -5 -6 3027 5 -9 2659; -6 1843 1 0 3574 0 -5 2324 0 -10 -11 6 8 2306 -4 -7; 4 -14 3 9 0 3114 -6 -3 7 -4 -12 -17 4 3 -2 4; 2669 -7 -3 1550 -5 -6 3141 0 3 3028 -5 -11 1532 1 11 2640; 0 2320 -3 -16 2324 -3 0 3526 0 2 -5 -11 -9 1821 -12 -62; -38 23 1835 -11 0 7 3 0 3559 2 -2 2308 0 -8 2301 9; 2663 -3 -2 1536 -10 -4 3028 2 2 3134 -4 0 1522 -7 5 2650; 0 6 5 -5 -11 -12 -5 -5 -2 -4 3099 2 -3 -11 1 -11; -11 -2 2312 -6 6 -17 -11 -11 2308 0 2 3542 7 12 1830 -4; 2676 -14 6 3027 8 4 1532 -9 0 1522 -3 7 3140 13 6 2652; -2 2311 0 5 2306 3 1 1821 -8 -7 -11 12 13 3538 -13 -36; 0 -3 2320 -9 -4 -2 11 -12 2301 5 1 1830 6 -13 3525 -8; 3079 4 0 2659 -7 4 2640 -62 9 2650 -11 -4 2652 -36 -8 26396; ]) Table <ref> lists the maximal values of ⟨ϕ|(ρ_4-ρ'_4)|ϕ⟩ for |ϕ⟩ belonging to different biseparablity classes. Meanwhile, we get Tr(ρ_4(ρ_4-ρ'_4))=0.35834 and √(Tr(ρ_4-ρ'_4)^2)=0.53969, which translates into the found witness distance of 0.01208. The witness operator obtained from ρ_4-ρ'_4 has one leading positive eigenvalue corresponding to state and. 
This state is close to W=4(|GHZ_4⟩⟨GHZ_4|-1/16)/√(15), which indicates the distance to biseparable states to be at least 29/(12√(15))-7/16≈ 0.1865. This scheme seems to saturate rather quickly. While it is only an approximation of an optimal witness of the nonexistence of a BD, we consider W_N=|GHZ_N⟩⟨GHZ_N|-(1/2)1_2^N× 2^N, which attains a mean value of 1/2 for the N-particle GHZ state and nonpositive mean values for all states with a biseparable decomposition. Extending the scheme presented for ρ_4, we get the mean value ⟨ W_N⟩=-1/2+4/(2+2^N/2), which becomes negative for N=6. As the GHZ correlations diminish exponentially, it is reasonable to accept that they are not strong enough to guarantee the presence of GME. More elaborate examples would include projected entangled pair states (PEPS) <cit.>, a technique in which ground states of large spin arrays can be found by merging singlet states. Also, notice that the states discussed in this section are related to the optical realizations mentioned above. |φ_4⟩ appears just in front of the beam splitter in the setup producing |GHZ_4⟩, while ρ_1(θ) and ρ_2 are obtained from |Ψ_4⟩. Ref. <cit.> showed that the four-partite visibility decreases as the pairs produced in SPDC become more distinguishable in the time domain. We can use this effect to completely decohere the pairs by placing two ultrafast shutters with nonoverlapping opening times on one side of the crystal, behind the beam splitter. Subsequently, we reduce the temporal distinguishability by using narrowband spectral filters in all four outputs. Under the condition that all four measuring stations were reached by a photon, the resulting state is ρ_2. If we then find one photon to be polarized in the direction (cosα,sinα), the state of the remaining three photons is ρ_3. § PARTIES AND PARTICLES To interpret the above results we can adopt a few different approaches. The first is straightforwardly transplanted from the theory of bipartite entanglement. The key concepts are parties, who within their locations may have arbitrarily large quantum systems, treated uniformly, and act freely upon them, but the parties are separated. The case of ρ_3 and ρ_4 can be depicted as follows: GME=0→(·)^⊗ 2→“local” projections→ GME>0, which must hold for any potential measure of GME satisfying the discriminance requirement. The left-most side is from the construction of ρ_1 and ρ_2, while the right-most is due to the fact that we have presented witnesses against the BD of ρ_3 and ρ_4. That means that any measure of GME either cannot be additive, even in the weakest sense (taking two copies of a state produces GME), or cannot be monotonic (GME is produced by local action). The lack of a BD for |φ_4⟩, as demonstrated by the nonvanishing tripartite negativity, hints towards nonadditivity of GME measures. However, this understanding seems to be contradictory to the operational approach. Distributing another copy of a state is typically more feasible than operations on composite subsystems. On one hand, we have argued that GME opens access to more features than bipartite entanglement. On the other, here we reach the conclusion that it can be produced by simply distributing GME-free states, which questions its role as a separate resource. Another point of view is to accept parties, but allow each of them to handle more than one particle at once. This approach seems natural for many experimental situations, e.g., quantum teleportation <cit.>, and admits a BD of |φ_4⟩, ρ_3(θ) and ρ_4. We are now left with two options. 
Either genuine multipartite entanglement was created by local operations, or it was already present in a state with a BD. In this paradigm, there seems to be a contrast between the operational and the formal understanding of a party. There is also a third possible approach, which is to assume that each subsystem constitutes its own party, its own location. We trivially find BDs of ρ_1^⊗ 2 and ρ_2^⊗ 2, which prevents them from being recognised as genuinely multiparticle entangled. ρ_3 and ρ_4 do not have a BD, but only due to operations now understood as nonlocal, which happen to be experimentally feasible due to the pairwise proximity of particles. It remains to discuss the nature of the projections P_A/C, P_B', and P_B. While the last one contains two rows representing product states, one row of the former two is an entangled state. Still, these operators represent a component of a von Neumann measurement with degenerate results. That is to say, all states of the form (|00⟩+e^iϕ|11⟩)/√(2) must survive projection P_B intact, thus these measurements must be performed collectively on both particles. A generic state is projected onto the subspace spanned by {|00⟩,|11⟩}, which may, and in this case does, create entanglement. This is a subtle difference between, say, collectively measuring σ_x⊗σ_x on two qubits and measuring σ_x on each of them. While the statistics turn out to be the same, the collapsed states are different. Of course, in general the resulting state may be nonentangled, but GME creation/activation protocols require that it is not. In this way we can argue that nonentangling measurements will prevent us from creating GME. Given a genuine three-particle entanglement measure G3pE, the overall three-particle entanglement can then be defined as G3pE(ρ_AB...N) = ∑_x,y,z∈ S G3pE(ρ_xyz), S= {A_j,B_j,...N_j}_j=1^k, where each party holds k particles and G3pE(ρ_xyz) is the genuine three-particle entanglement of the reduced state of particles x, y, and z. This definition naturally provides additivity and, depending on the definition, it can be contractive under local (single-particle) operations. In this way we can define a proper measure of multi-particle entanglement. A more communication-concerned definition would take into account the quantum links between “macro-locations”, G3pE(ρ_AB...N) = ∑_X,Y,Z∑_x,y,z=1,...,k G3pE(ρ_X_xY_yZ_z), {X,Y,Z}∈{A,...N}, X≠ Y≠ Z≠ X. § CONCLUSIONS We have discussed several examples of bipartite entangled states which, after a series of free (in the sense of multipartite entanglement resource theory) operations, i.e., taking two identical copies of the same state and performing partial measurements on pairs of qubits handled by each observer, lose BD. In the orthodox paradigm of distant laboratories this signifies the presence of multipartite entanglement. Hence, genuine multipartite entanglement is not a resource with this understanding, as it can be created from a resource-free state. We then relax the distant laboratory paradigm, allowing multiple subsystems in one laboratory. This, however, implies that genuine multiparty entanglement can be produced by local actions. By no means do these examples dismiss Definition 2 as a necessary and sufficient condition for multiparticle entanglement. We want to stress, nevertheless, that in these scenarios this form of quantum correlations is created by entangling actions on two subsystems. This is in contrast to the usual activation of entanglement <cit.>. Therein, we also utilize entangling operations, but we amplify existing quantum links. Here, with these operations we create a new form of quantum link, undoubtedly absent before. 
Our examples show that the concept of genuine multipartite or multiparticle entanglement must be considered with a great caution, and with physical context in mind. Some operations, e.g. introduction of new subsystems, are may not be allowed, as they change the resource of interest itself (e.g. four- rather than three-particle entanglement). We therefore suggest the one particle per subsystem policy. Also, it seems beneficial to focus on genuine N-particle entanglement rather than generic multipartite. It then becomes possible to construct a proper measure of multiparticle entanglement. Of course, this reasoning calls for further discussion. For example, one may wonder about other requirements that need to be satisfied by entanglement measures in one particle per party paradigm. Also, additional considerations are required for multiple degrees of freedom. For example, in the case, in which a photon is entangled with one particle in polarization and in orbital angular momentum with another, one would rather see the two degrees of freedom belonging to the same subsystem. § ACKNOWLEDGEMENTS MW akcknowledges Tomasz Paterek for a fruitful discussion. This work is supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No.2022-0-00463, Development of a quantum repeater in optical fiber networks for quantum internet), by NCN SONATA-BIS grant No. 2017/26/E/ST2/01008. Early stage of this work was supported by the Foundation for Polish Science (IRAP project, ICTQT, Contract No. 2018/MAB/5, co-financed by EU within Smart Growth Operational Programme) 99 schrodingerE. Schr ödinger, Die Gegenwörtige Situation in der Quantenmechanik, Naturwissenschaften 23, 152 (1935). bell J. S. Bell, On the Einstein-Podolsky-Rosen Paradox, Physics Physique Fizika 1, 195 (1964). ekert A. K. Ekert, Quantum Cryptography Based on Bell’s Theorem, Physical Review Letters 67, 661 (1991). peres A. Peres, Separability Criterion for Density Matrices, Physical Review Letters 77, 1413 (1996). horodecki1996 R. Horodecki, M. Horodecki, and P. Horodecki, Teleportation, Bell’s Inequalities and Inseparability, Physics Letters A 222, 21 (1996). horodecki1997 P. Horodecki, Separability Criterion and Inseparable Mixed States with Positive Partial Transposition, Physics Letters A 232, 333 (1997). jamiolkowski A. Jamiołkowski, Linear Transformations which Preserve Trace and Positive Semidefiniteness of Operators, Reports on Mathematical Physics 3, 275 (1972). choi M.-D. Choi, Completely positive linear maps on complex matrices, Linear Algebra and its Applications 10, 285 (1975). terhal B. M. Terhal, Bell Inequalities and the Separability Criterion, Physics Letters A 271, 319 (2000). bennett1996 C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Mixed-state Entanglement and Quantum Error Correction, Physical Review A 54, 3824 (1996). vedral V. Vedral and M. B. Plenio, Entanglement Measures and Purification Procedures, Physical Review A 57, 1619 (1998). eisert2003 J. Eisert, K. Audenaert, and M. Plenio, Remarks on Entanglement Measures and non-local state distinguishability, Journal of Physics A: Mathematical and General 36, 5605 (2003). chen K. Chen and L.-A. Wu, Test for Entanglement Using Physically Observable Witness Operators and Positive Maps, Physical Review A 69, 022312 (2004). hiesmayr B. C. Hiesmayr, M. Huber, and P. Krammer, Two Computable Sets of Multipartite Entanglement Measures, Physical Review A 79, 062308 (2009). 
greenberger D. M. Greenberger, M. A. Horne, and A. Zeilinger, Going Beyond Bell’s Theorem, in Bell’s theorem, quantum theory and conceptions of the universe (Springer, 1989) pp. 69–72. guehne O. Gühne, G. T 'oth, and H. J. Briegel, Multipartite Entanglement in Spin Chains, New Journal of Physics 7, 229 (2005). steane A. M. Steane, Error Correcting Codes in Quantum Theory, Physical Review Letters 77, 793 (1996). yamasaki H. Yamasaki, S. Morelli, M. Miethlinger, J. Bavaresco, N. Friis, and M. Huber, Activation of Genuine Multipartite Entanglement: Beyond the Single-copy Paradigm of Entanglement Characterisation, Quantum 6, 695 (2022). bengtsson I. Bengtsson and K. Życzkowski, Geometry of Quantum States: an Introduction to Quantum Entanglement, (Cam- bridge university press, 2017). zyczkowski K. Życzkowski, P. Horodecki, A. Sanpera, and M. Lewen- stein, Volume of the Set of Separable States, Physical Review A 58, 883 (1998). eisert1999 J. Eisert and M. B. Plenio, A Comparison of Entanglement Measures, Journal of Modern Optics 46, 145 (1999). henderson L. Henderson and V. Vedral, Information, Relative Entropy of Entanglement, and Irreversibility, Physical Review Letters 84, 2263 (2000). witte C. Witte and M. Trucks, A Aew Entanglement Measure Induced by the Hilbert–Schmidt Norm, Physics Letters A 257, 14 (1999). pandya P. Pandya, O. Sakarya, and M. Wieśniak, Hilbert-Schmidt Distance and Entanglement witnessing, Physical Review A 102, 012409 (2020). hu Y. Hu, Y.-C. Liu, and J. Shang, Algorithm for Evaluating Distance-Based Entanglement Measures, Chinese Physics B 32, 080307 (2023). ma2023 M. Ma, Y. Li, and J. Shang, Multipartite Entanglement Measures: a Review (2023), arXiv:2309.09459 [quant-ph]. ma2011 Z.-H. Ma, Z.-H. Chen, J.-L. Chen, C. Spengler, A. Gabriel, and M. Huber, Measure of Genuine Multipartite Entanglement with Computable Lower Bounds, Physical Review A 83, 062325 (2011). mintert F. Mintert, A. R. Carvalho, M. Kuś, and A. Buchleitner, Measures and Dynamics of Entangled States, Physics Reports 415, 207 (2005). gilbert E. G. Gilbert, An Iterative Procedure for Computing the Minimum of a Quadratic Form on a Convex Set, SIAM Journal on Control 4, 61 (1966). brandao F. G. Brandão, Quantifying Entanglement with Witness Operators, Physical Review A 72, 022310 (2005). louisell W. Louisell, A. Yariv, and A. Siegman, Quantum Fluctuations and Noise in Parametric Processes. I., Physical Review 124, 1646 (1961). pan J.-W. Pan, M. Daniell, S. Gasparoni, G. Weihs, and A. Zeilinger, Experimental demonstration of four-photon entanglement and high-fidelity teleportation, Physical Review Letters 86, 4435 (2001). gaertner S. Gaertner, M. Bourennane, M. Eibl, C. Kurtsiefer, and H. Weinfurter, High-fidelity source of four-photon entanglement, Applied Physics B 77, 803 (2003). sabin C. Sabín and G. Garc ía-Alcaine, A Classification of Entanglement in Three-qubit Systems, The European Physical Journal D 48, 435 (2008). verstraete F. Verstraete and J. I. Cirac, Renormalization Algorithms for Quantum Many-body Systems in Two and Higher Dimensions (2004), arXiv:cond-mat/0407066 [cond-mat.str-el]. laskowski W. Laskowski, M. Wieśniak, M. Żukowski, M. Bourennane, and H. Weinfurter, Interference contrast in multi-source few-photon optics, Journal of Physics B: Atomic, Molecular and Optical Physics 42, 114004 (2009). bennett1993 C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. 
Wootters, Teleporting an Unknown Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels, Physical Review Letters 70, 1895 (1993). horodecki1999 P. Horodecki, M. Horodecki, and R. Horodecki, Bound Entanglement Can Be Activated, Physical Review Letters 82, 1056 (1999).
http://arxiv.org/abs/2407.12399v1
20240717082532
A Practical Solver for Scalar Data Topological Simplification
[ "Mohamed Kissi", "Mathieu Pont", "Joshua A. Levine", "Julien Tierny" ]
cs.LG
[ "cs.LG", "cs.CG", "cs.CV", "cs.GR" ]
§ INTRODUCTION As acquisition devices and computational resources are getting more sophisticated and efficient, modern datasets are growing in size. Consequently, the features of interest contained in these datasets gain in geometric complexity, which challenges their interpretation and analysis. This motivates the design of tools capable of robustly extracting the structural patterns hidden in complex datasets. This task is the purpose of Topological Data Analysis (TDA) <cit.>, which provides a family of techniques for the generic, robust and multi-scale extraction of structural features. It has been successfully applied to a number of data analysis problems <cit.>, in various applications, including turbulent combustion <cit.>, material sciences <cit.>, nuclear energy <cit.>, fluid dynamics <cit.>, bioimaging <cit.>, quantum chemistry <cit.>, and astrophysics <cit.>. TDA provides a variety of topological data abstractions, which enable the extraction of specific types of features of interest. These abstractions include critical points <cit.>, persistence diagrams <cit.>, merge <cit.> and contour trees <cit.>, Reeb graphs <cit.>, and Morse-Smale complexes <cit.>. A central aspect of TDA is its ability to analyze data at multiple scales. Thanks to various importance measures <cit.>, these abstractions can be iteratively simplified to reveal the prominent structures in a dataset. In practice, this topological simplification can be achieved in two ways: either by (i) a post-process simplification of the abstractions, or by (ii) a pre-process simplification of the data itself. While the post-process approach (i) requires specific simplification mechanisms tailored to the abstraction at hand <cit.>, the pre-process strategy offers a generic framework which is independent of the considered abstraction. This generic aspect eases software design, as simplification needs to be implemented only once <cit.>. Pre-process simplification also has the advantage of being reusable by multiple abstractions when these are combined within a single data analysis pipeline (see <cit.> for real-life examples). Also, pre-process simplification enables the direct visualization of the simplified data itself (e.g. with isosurfaces). Finally, it is also compatible with further post-process simplification if needed. For these reasons, we focus on pre-process simplification in this work. Several combinatorial approaches <cit.> have been proposed for the pre-simplification of persistence pairs involving extrema. However, no efficient combinatorial algorithm has been proposed for the pre-simplification of saddle pairs, hence preventing a more advanced simplification of 3D datasets. In fact, as pointed out by Chambers et al. <cit.>, 
finding a scalar field g which is δ-away from an input field f, such that g has a minimum number of critical points <cit.>) is a more general, hence more difficult, version of the sublevel set simplification problem, itself being NP-hard in 3D <cit.>. Then, there may not even exist polynomial time algorithms for directly solving this problem. This theoretical limitation requires a shift in strategy. A recent alternative consists in considering persistence optimization frameworks <cit.>, which optimize the data in a best effort manner, given criteria expressed with persistence diagrams. However, while one could leverage these frameworks for data pre-simplification (i.e., to cancel noisy features while preserving the features of interest, as much as possible), current frameworks can require up to days of computation for regular grids of standard size (<ref>), making them impractical for real-life datasets. This paper addresses this issue and introduces a practical solver for the optimization of the topological simplification of scalar data. Our approach relies on a number of pragmatic observations from which we derived specific accelerations, for each sub-step of the optimization (Secs. <ref> and <ref>). Our accelerations are simple and easy to implement, but result in significant gains in terms of runtimes. Extensive experiments (<ref>) report × 60 accelerations on average over state-of-the-art frameworks (with both fewer and faster iterations), thereby making topological simplification optimization practical for real-life datasets. We illustrate the utility of our contributions in two applications. First, our work enables the direct visualization and analysis of topologically simplified data (<ref>). This reduces visual clutter in isosurfaces by simplifying their topology (fewer components and handles). We also investigate filament extraction in three-dimensional data, where we show that our approach helps standard topological techniques for removing filament loops. Second, we show how to use our approach to repair genus defects in surface processing (<ref>). §.§ Related work Beyond post-process simplification schemes tailored for specific topological abstractions (e.g. for merge/contour trees <cit.>, Reeb graphs <cit.> or Morse-Smale complexes <cit.>), the literature related to pre-process simplification can be classified into two categories. Combinatorial methods: the first combinatorial approach for the topological simplification of scalar data on surfaces has been proposed by Edelsbrunner et al. <cit.>. This work can be seen as a generalization of previous approaches in terrain modeling where only persistence pairs involving minima were removed <cit.>. Attali et al. <cit.> extended this framework to generic filtrations, while Bauer et al. <cit.> extended it to discrete Morse theory <cit.>. Tierny et al. presented a generalized approach <cit.>, supporting a variety of simplification criteria, which was later extended by Lukasczyk et al. <cit.> with an efficient shared-memory parallel algorithm. Such combinatorial simplification algorithms can be used directly within optimization procedures <cit.>, to remove noise in the solution at each iteration. While most of the above approaches were specifically designed for scalar data on surfaces, they can be directly applied to domains of higher dimensions. However, they can only simplify persistence pairs involving extrema. 
For instance, this means that they cannot remove saddle pairs in three-dimensional scalar fields, thus preventing an advanced simplification of this type of datasets. Optimal simplification <cit.> of scalar data is a more general variant of the sublevel set simplification problem, itself being NP-hard in 3D <cit.>. Then, a polynomial time algorithm solving this problem may not even exist. This theoretical limitation requires a shift in strategy. Numerical methods: in contrast to combinatorial methods, which come with strong guarantees on the result, numerical approaches aim at providing an approximate solution in a best effort manner. In other words, these methods may not fully simplify three-dimensional scalar fields up to the desired tolerance either, but they will do their best to provide a result as close as possible to the specified simplification. As such, this type of approaches appear as a practical alternative overcoming the theoretical limitation of combinatorial approaches discussed above. In geometric modeling, several techniques have been described to generate smooth scalar fields on surfaces, with a minimal number of critical points <cit.>. Bremer et al. <cit.> proposed a method based on Laplacian smoothing to reconstruct a two-dimensional scalar field corresponding to a pre-simplified Morse-Smale complex. This work has been extended by Weinkauf et al. <cit.> to bi-Laplacian optimization, with an additional enforcement of gradient continuity across the separatrices of the Morse-Smale complex. While an extension of this work has been documented for the 3D case <cit.>, it only addresses the simplification of persistence pairs involving extrema, without explicit control on the saddle pairs. Recently, a new class of methods dedicated to persistence optimization has been documented. Specifically, these approaches introduce a framework for optimizing a dataset, according to criteria expressed with persistence diagrams, with applications in various tasks including surface matching <cit.>, point cloud processing <cit.>, classification <cit.> and more. Solomon et al. <cit.> presented an approach based on stochastic subsampling applied to 2D images. Carriere et al. <cit.> presented an efficient and generic persistence optimization framework, supporting a wide range of criteria and applications, exploiting the convergence properties of stochastic sub-gradient descent <cit.> for tame functions <cit.>. Nigmetov et al. <cit.> presented an alternative method, drastically reducing the number of optimization iterations, but at the cost of significantly more computationally expensive steps. As described in <ref>, one can leverage these frameworks for the problem of topological simplification, however, with impractical runtimes for three-dimensional datasets of standard size (e.g. up to days of computation). We address this issue in this work by proposing a practical approach for topological simplification optimization, with substantial accelerations over state-of-the-art frameworks for persistence optimization <cit.>. §.§ Contributions This paper makes the following new contributions: * Algorithm: We introduce a practical solver for the optimization of topological simplification for scalar data (<ref>). 
Our algorithm is based on two accelerations, which are tailored to the specific problem of topological simplification: * We present a simple and practical procedure for the fast update of the persistence diagram of the data along the optimization (<ref>), hence preventing a full re-computation at each step. * We describe a simple and practical procedure for the fast update of the pair assignments between the diagram specified as target, and the persistence diagram of the optimized data (<ref>), also preventing a full re-computation at each step. Overall, the combination of these accelerations makes topological simplification optimization tractable for real-life datasets. * Applications: Thanks to its practical time performance, our work sets up ready-to-use foundations for several concrete applications: * Visualization of topologically simplified data (<ref>): we illustrate the utility of our framework for the direct visualization and analysis of topologically simplified data. Our approach reduces visual clutter in isosurfaces by simplifying connected components as well as, in contrast to previous work, surface handles. We also investigate prominent filament extraction in 3D data, where we show that our approach helps standard topological techniques for removing filament loops. * Surface genus repair (<ref>): we show how to use our framework to repair genus defects in surface processing, with an explicit control on the employed primitives (cutting or filling). * Implementation: We provide a C++ implementation of our algorithm that can be used for reproducibility purposes. § PRELIMINARIES This section presents the background to our work. We refer the reader to textbooks <cit.> for introductions to computational topology. §.§ Input data The input data is provided as a piecewise-linear (PL) scalar field f : →ℝ defined on a d-dimensional simplicial complex (with d ≤ 3 in our applications). If the data is provided on a regular grid, we consider for the implicit Freudenthal triangulation of the grid <cit.>. In practice, the data values are defined on the vertices of , in the form of a data vector, noted _f ∈ℝ^. f is assumed to be injective on the vertices (i.e., the entries of _f are all distinct), which can be easily obtained in practice via a variant of simulation of simplicity <cit.>. §.§ Persistence diagrams Persistent homology has been developed independently by several research groups <cit.>. Intuitively, persistent homology considers a sweep of the data (i.e., a filtration) and estimates at each step the corresponding topological features (i.e., homology generators), as well as maps to the features of the previous step. This enables the identification of the topological features, along with their lifespan, during the sweep. In this work we consider the lexicographic filtration (as described in <cit.>), which we briefly recall here for completeness. Given the input data vector _f ∈ℝ^, one can sort the vertices of by increasing data values, yielding a global vertex order. Based on this order, each d'-simplex ∈ (with d' ∈ [0, d]) can be represented by the sorted list (in decreasing values) of the (d'+1) indices in the global vertex order of its (d'+1) vertices. Given this simplex representation, one can now compare two simplices _i and _j via simple lexicographic comparison, which induces a global lexicographic order on the simplices of . 
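For concreteness, the following Python sketch spells out this lexicographic comparison on a toy triangle. It is an illustration only: the function name, the dictionary-based data layout and the example values are ours and do not come from the reference implementation.

```python
def lexicographic_filtration(vertex_values, simplices):
    """Sort simplices by the lexicographic rule induced by the vertex order.

    vertex_values : dict {vertex: scalar}, with pairwise-distinct values
    simplices     : iterable of tuples of vertices (all dimensions)
    """
    # Global vertex order: rank of each vertex when sorted by data value.
    order = {v: i for i, v in enumerate(sorted(vertex_values, key=vertex_values.get))}

    # Key of a simplex: the ranks of its vertices, sorted in decreasing order;
    # Python tuple comparison then realizes the lexicographic comparison.
    def key(simplex):
        return tuple(sorted((order[v] for v in simplex), reverse=True))

    return sorted(simplices, key=key)

# Toy example: a single triangle with its edges and vertices.
values = {"a": 0.1, "b": 0.7, "c": 0.4}
cells = [("a",), ("b",), ("c",), ("a", "b"), ("a", "c"), ("b", "c"), ("a", "b", "c")]
print(lexicographic_filtration(values, cells))
```

With this key, every face of a simplex precedes it, so each prefix of the sorted list is itself a simplicial complex, which is what makes the order a filtration.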
This order induces a nested sequence of simplicial complexes ∅ = _0 ⊂_1 ⊂…⊂_ = (where is the number of simplices of ), which we call the lexicographic filtration of by f. At each step i of the filtration, one can characterize the p^th homology group of _i, noted _p(_i), for instance by counting its number of homology classes <cit.> (i.e., the order of the group) or its number of homology generators (i.e., the rank of the group, a.k.a. the p^th Betti number, noted _p). Intuitively, in 3D, the first three Betti numbers (_0, _1, and _2) respectively provide the number of connected components, of independent cycles and voids of the complex _i. For two consecutive steps of the filtration i and j, the corresponding simplicial complexes are nested (_i ⊂_j). This inclusion induces homomorphims between the homology groups _p(_i) and _p(_j), mapping homology classes at step i to homology classes at step j. Intuitively, for the 0^th homology group, one can precisely map a connected component at step i to a connected component at step j because the former is included in the latter. In general, a p-dimensional homology class γ_i at step i can be mapped to a class γ_j at step j if the p-cycles of γ_i and γ_j are homologous in _j <cit.>. Then, one can precisely track the homology generators between consecutive steps of the filtration. In particular, a persistent generator is born at step j (with j = i +1) if it is not the image of any generator by the homomorphims mapping _p(_i) to _p(_j). Symmetrically, a persistent generator dies at step j if it merges with another, older homology class, which was born before it (this is sometimes called the Elder rule <cit.>). Each p-dimensional persistent generator is associated to a persistence pair (_b, _d), where _b is the p-simplex introduced at the birth of the generator (at step b) and where _d is the (p+1)-simplex introduced at its death (at step d). A p-simplex which is involved in the birth or the death of a generator is called a critical simplex and, in 3D, we call it a minimum, a 1-saddle, a 2-saddle, or a maximum if p equals 0, 1, 2, or 3 <cit.>, respectively. The persistence of the pair (_b, _d), noted (_b, _d), is given by (_b, _d) = _f(v_d) - _f(v_b), where v_b and v_d (the birth and death vertices of the pair) are the vertices with highest global vertex order of _b and _d. We call zero-persistence pairs the pairs with v_b = v_d. Some p-simplices of may be involved in no persistence pair. These mark the birth of persistent generators with infinite persistence (i.e., which never die during the filtration) and they characterize the homology groups of the final step of the filtration (_ =). The set of persistence pairs induced by the lexicographic filtration of by f can be organized in a concise representation called the persistence diagram (<ref>), noted (f), which embeds each non zero-persistence pair (_b, _d) as a point in a 2D space (called the birth-death space), at coordinates (_f(v_b), _f(v_d)). By convention, generators with infinite persistence are reported at coordinates (_f(v_b), _f(v_max)), where v_max is the last vertex in the global vertex order. §.§ Wasserstein distance between persistence diagrams Two diagrams (f) and (g) can be reliably compared in practice with the notion of Wasserstein distance. For this, the two diagrams (f) and (g) need to undergo an augmentation pre-processing phase. This step ensures that the two diagrams admit the same number of points, which will facilitate their comparison. 
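As an aside, the Elder rule described above can be made concrete with a small union-find sweep. The sketch below is our own toy illustration, restricted to 0-dimensional homology on a 1D path graph; it is not the algorithm used later in this paper, and the example values are arbitrary.

```python
def persistence_0d(values):
    """Finite 0-dimensional persistence pairs of a 1D scalar field (Elder rule).

    values: list of pairwise-distinct scalars on the path graph v_0 - v_1 - ...
    The global minimum yields the single essential (infinite) class, omitted here.
    """
    n = len(values)
    parent = list(range(n))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth, pairs = {}, []
    for v in sorted(range(n), key=lambda i: values[i]):   # sweep by increasing value
        birth[v] = values[v]                              # v enters as its own component
        for u in (v - 1, v + 1):                          # lower neighbors only
            if 0 <= u < n and values[u] < values[v]:
                ru, rv = find(u), find(v)
                if ru != rv:
                    # Elder rule: the younger component (larger birth) dies at v.
                    old, young = sorted((ru, rv), key=lambda r: birth[r])
                    if birth[young] < values[v]:          # discard zero-persistence pairs
                        pairs.append((birth[young], values[v]))
                    parent[young] = old
    return pairs

print(persistence_0d([0.0, 0.9, 0.2, 0.8, 0.4, 1.0]))     # [(0.4, 0.8), (0.2, 0.9)]
```

Higher-dimensional pairs, in particular the saddle pairs targeted later in this paper, require the full boundary or discrete Morse machinery and are not captured by this sketch.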
Given a point p = (p_b, p_d) ∈(f), we note (p) its diagonal projection: (p) = (12(p_b + p_d), 12(p_b + p_d)). Let _f and _g be the sets of the diagonal projections of the points of (f) and (g) respectively. Then, (f) and (g) are augmented by appending to them the set of diagonal points _g and _f respectively. After this augmentation, we have |(f)| = |(g)|. Then, given two augmented persistence diagrams (f) and (g), the L^q Wasserstein distance between them is defined as: q((f), (g)) = min_ϕ∈Φ(∑_p ∈(f) c(p, ϕ(p))^q)^1q, where Φ is the set of all bijective maps between the augmented diagrams (f) and (g), which specifically map points of finite (respectively infinite) r-dimensional persistent generators to points of finite (respectively infinite) r-dimensional persistent generators. For this distance, the cost c(p, p') is set to zero when both p and p' lie on the diagonal (i.e., matching dummy features has no impact on the distance). Otherwise, it is set to the Euclidean distance in birth-death space ||p - p'||_2. The Wasserstein distance induces an optimal assignment ϕ^* from (f) to (g) (<ref>), which depicts how to minimally transform (f) into (g) (given the considered cost). This transformation may induce point displacements in the birth-death space, as well as projections to the diagonal (encoding the cancellation of a persistence pair). §.§ Persistence optimization Several frameworks have been introduced for persistence optimization (<ref>). We review a recent, efficient, and generic framework <cit.>. Given a scalar data vector _f ∈ℝ^ (<ref>), the purpose of persistence optimization is to modify _f such that its persistence diagram (f) minimizes a certain loss , specific to the considered problem. Then the solution space of the optimization problem is ℝ^. Let : ℝ^→ℝ^ be the filtration map, which maps a data vector _f from the solution space ℝ^ to a filtration represented as a vector (_f) ∈ℝ^, where the i^th entry contains the index of the i^th simplex _i of in the global lexicographic order (<ref>). For convenience, we maintain a backward filtration map ^+ : ℝ^→ℝ^, which maps a filtration vector (_f) to a vector in ℝ^, whose i^th entry contains the index of the highest vertex (in global vertex order) of the i^th simplex in the global lexicographic order. Given a persistence diagram (f), the critical simplex persistence order can be introduced as follows. First, the points of (f) are sorted by increasing birth and then, in case of birth ties, by increasing death. Let us call this order the diagram order. Then the set of persistence pairs can also be sorted according to the diagram order, by interleaving the birth and death simplices corresponding to each point. This results in an ordering of the critical simplices, called the critical simplex persistence order, where the (2i)^th and (2i+1)^th entries correspond respectively to the birth and death simplices of the i^th point p_i in the diagram order. Critical simplices which are not involved in a persistence pair (i.e., corresponding to homology classes of infinite persistence) are appended to this ordering, in increasing order of birth values. Let us now consider the persistence map : ℝ^→ℝ^, which maps a filtration vector (_f) to a persistence image ((_f)), whose i^th entry contains the critical simplex persistence order (defined above) for the i^th simplex in the global lexicographic order. For convenience, the entries corresponding to filtration indices which do not involve critical simplices are set to -1. 
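Returning to the Wasserstein distance defined above, the assignment it induces can be made tangible by brute force on toy diagrams. The sketch below is ours: it handles a single homology dimension and finite pairs only, and it enumerates all bijections, so it is meant for very small inputs.

```python
import itertools, math

def wasserstein_q(D_f, D_g, q=2):
    """Brute-force L^q Wasserstein distance between two tiny persistence diagrams."""
    proj = lambda p: ((p[0] + p[1]) / 2.0,) * 2            # diagonal projection
    A = list(D_f) + [proj(p) for p in D_g]                 # augmented D(f)
    B = list(D_g) + [proj(p) for p in D_f]                 # augmented D(g)
    on_diag = lambda p: abs(p[0] - p[1]) < 1e-12

    def cost(p, p2):
        if on_diag(p) and on_diag(p2):
            return 0.0                                     # diagonal-to-diagonal is free
        return math.dist(p, p2)                            # Euclidean cost otherwise

    best = math.inf
    for perm in itertools.permutations(range(len(B))):     # all bijections (toy sizes only)
        total = sum(cost(A[i], B[j]) ** q for i, j in enumerate(perm))
        best = min(best, total ** (1.0 / q))
    return best

# One persistent feature plus one noisy feature in D(f), one feature in D(g):
# the optimal plan matches the noisy pair to its diagonal projection.
print(wasserstein_q([(0.0, 1.0), (0.40, 0.45)], [(0.0, 1.0)]))
```

In practice, this exhaustive enumeration is of course replaced by exact or approximate assignment solvers, as discussed later in the paper.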
Now, to evaluate the relevance of a given diagram (f) for the considered optimization problem, one needs to define a loss term. Let : ℝ^→ℝ be an energy function, which evaluates the diagram energy given its critical simplex persistence order. Then, given an input data vector _f, the associated loss : ℝ^→ℝ is given by: (_f) = ∘∘(_f). Since distinct functions can admit the same persistence diagram, the global minimizer of the above loss may not be unique. However, given the search space (ℝ^), the search for a global minimizer is still not tractable in practice and local minimizers will be searched instead. If is locally Lipschitz and a definable function of persistence, then the composition ∘∘ is also definable and locally Lipschitz <cit.>. This implies that the generic loss is differentiable almost everywhere and admits a well defined sub-differential. Then, a stochastic sub-gradient descent algorithm <cit.> converges almost surely to a critical point of <cit.>. In practice, this means that the loss can be decreased by displacing each diagram point p_i in the diagram (f) according to the sub-gradient. Assuming a constant global lexicographic order, this displacement can be back-propagated into modifications in data values in the vector _f, by identifying the vertices v_i_b and v_i_d corresponding to the birth and death (<ref>) of the i^th point in the diagram order: [ v_i_b = ^+(^-1(2i)); v_i_d = ^+(^-1(2i + 1)), ] and by updating their data values _f(v_i_b) and _f(v_i_d) accordingly. § APPROACH This section describes our overall approach for the optimization of the topological simplification of scalar data. Given the diagram (f) of the input field f, we call a signal pair a persistence pair of (f) which is selected by the user for preservation. Symmetrically, we call a non-signal pair a persistence pair of (f) which is selected by the user for cancellation. Note that this distinction between signal and non-signal pairs is application dependent. In practice, the user can be aided by several criteria, such as persistence <cit.>, geometric measures <cit.>, etc. Then, topological simplification can be expressed as an optimization problem, with the following objectives: * Penalizing the persistence of the non-signal pairs; * Enforcing the precise preservation of the signal pairs. In short, we wish to penalize the undesired features (objective 1), and, at the same time, enforce the precise preservation of the features of the input which are deemed relevant (objective 2). The latter objective is important in practice to preserve the accuracy of the features of interest. As later discussed in <ref> (and illustrated in <ref>), non-signal and signal pairs do interact during the optimization, thereby perturbing signal pairs. In our experiments (<ref>), at each optimization iteration, 11% of the signal pairs are perturbed by non-signal pairs (on average, and up to 32%). In certain configurations, this can drastically alter the persistence of signal pairs. Hence, to address this issue, the precise preservation of the signal pairs should be explicitly constrained. In the following, we formalize this specific optimization problem based on the generic framework described in <ref>. Our novel solver (for efficiently solving it) is presented in <ref>. Let be the target diagram. It can be obtained by copying the diagram (f) of the input field f, and by removing the non-signal pairs. 
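A minimal sketch of this construction, assuming (as one criterion among those mentioned above) that non-signal pairs are selected by a persistence threshold; the names and values are illustrative only.

```python
def target_diagram(diagram, threshold):
    """Copy the input diagram and drop the non-signal pairs.

    Here, non-signal pairs are those whose persistence (death - birth) falls
    below `threshold`; any other selection criterion could be plugged in.
    """
    return [(b, d) for (b, d) in diagram if d - b >= threshold]

D_f = [(0.0, 1.0), (0.10, 0.80), (0.40, 0.45), (0.62, 0.63)]
D_T = target_diagram(D_f, 0.10)   # keeps the two prominent pairs only
print(D_T)
```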
encodes the two objectives of our problem: it describes the constraints for the cancellation of the noisy features of f (objective 1) as well as the lock constraints for its features of interest (objective 2). In general, a perfect reconstruction (i.e., a scalar field g, close to f, such that (g) =) may not exist in 3D (deciding on its existence is NP-hard <cit.>). Thus, a practical strategy consists in optimizing the scalar field f such that its diagram (f) gets as close as possible to (and relaxing ||f-g||_∞). For this, we consider the following simplification energy (to be used within the generic loss , introduced in <ref>): ((f)) = 2((f), )^2. Since the Wasserstein distance is locally Lipschitz and a definable function of persistence <cit.>, the optimization framework of <ref> can be used to optimize = ∘∘ with guaranteed convergence. Specifically, at each iteration, given the optimal assignment ϕ^* induced by the Wasserstein distance between (f) and (<ref>), one can displace each point p_i in (f) towards its individual target ϕ^*(p_i) by adjusting accordingly the corresponding scalar values _f(v_i_b) and _f(v_i_d) (<ref>). In practice, the generic optimization framework reviewed in <ref> computes this displacement (given ϕ^*) via automatic differentiation <cit.> and by using Adam <cit.> for gradient descent. However, depending on the employed step size, a step of gradient descent on _f may change the initial filtration order (<ref>). Thus, after a step of gradient descent, the persistence diagram of the optimized data needs to be recomputed and, thus, so does its optimal assignment ϕ^* to the target . This procedure is then iterated, until the loss at the current iteration is lower than a user-specified fraction s of the loss at the first iteration (or until a maximum number j_max of iterations). We call this overall procedure the baseline optimization for topological simplification. It is summarized in <ref> and illustrated in <ref>. As shown in <ref>, each iteration j of the optimization involves a step of gradient descent, the computation of the diagram (f_j) and the computation of its Wasserstein distance to . While the first of these three steps has linear time complexity, the other two steps are notoriously computationally expensive and both have cubic theoretical worst case time complexity, (^3). In practice, practical implementations for persistence diagram computation tend to exhibit a quadratic behavior <cit.>. Moreover, the exact optimal assignment algorithm <cit.> can be approximated in practice to improve runtimes, for instance with Auction-based <cit.> or sliced approximations <cit.>. However, even when using the above practical implementations for persistence computation and assignment optimization, the baseline optimization approach for topological simplification has impractical runtimes for datasets of standard size. Specifically, for the simplifications considered in our experiments (<ref>), this approach can require up to days of computations per dataset. When it completes within 24 hours, the computation spends 20 % of the time in persistence computation and 75 % in assignment optimization. § ALGORITHM This section describes our algorithm for topological simplification optimization. It is based on a number of practical accelerations of the baseline optimization (<ref>), which are particularly relevant for the problem of topological simplification. 
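Before detailing our accelerations, the baseline procedure just described can be summarized with the following schematic sketch. The persistence and assignment backends are abstracted as user-supplied callables, the actual baseline relies on automatic differentiation with Adam rather than the plain descent step written here, and all names are ours.

```python
def baseline_simplification(f, target, compute_diagram, optimal_assignment,
                            birth_death_vertices, step=1e-4,
                            stop_fraction=0.01, max_iter=1000):
    """Schematic baseline loop: at every iteration, the diagram and its optimal
    assignment to the target are fully recomputed, then data values are updated.

    f : dict {vertex: scalar value}, modified in place.
    compute_diagram, optimal_assignment, birth_death_vertices : callables
    abstracting the persistence and assignment machinery (not provided here).
    """
    initial_loss = None
    for _ in range(max_iter):
        diagram = compute_diagram(f)                       # expensive, recomputed
        phi = optimal_assignment(diagram, target)          # expensive, recomputed
        loss = sum((p[0] - t[0]) ** 2 + (p[1] - t[1]) ** 2 for p, t in phi)
        if initial_loss is None:
            initial_loss = loss
        if loss <= stop_fraction * initial_loss:           # stopping criterion
            break
        for p, t in phi:                                   # move pairs towards their targets
            v_b, v_d = birth_death_vertices(p)
            f[v_b] -= step * 2.0 * (f[v_b] - t[0])
            f[v_d] -= step * 2.0 * (f[v_d] - t[1])
    return f
```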
§.§ Direct gradient descent Instead of relying on automatic differentiation and the Adam optimizer <cit.> as done in the generic framework reviewed in <ref>, similar to <cit.>, we can derive the analytic expression of the gradient of our energy on a per-iteration basis (<ref>) and perform at each iteration a direct step of gradient descent, in order to improve performance. Specifically, at the iteration j (<ref>), given the current persistence diagram (f_j), if the assignments between diagonal points are ignored (these have zero cost, <ref>), <ref> can be re-written as: ((f_j)) = min_ϕ∈Φ∑_p_i ∈(f_j) ||p_i - ϕ(p_i)||_2^2. As the optimal assignment ϕ^*_j (i.e., minimizing the energy for a fixed (f_j)) is constant at the iteration j, the energy can be re-written as: ((f_j)) = ∑_p_i ∈(f_j)(p_i_b - ϕ^*_j(p_i)_b)^2 + (p_i_d - ϕ^*_j(p_i)_d)^2. Then, given <ref>, for the iteration j, the overall optimization loss (_f_j) can be expressed as a function of the input data vector _f_j: (_f_j) = ∑_p_i ∈(f_j)(_f_j(v_i_b) - ϕ^*_j(p_i)_b)^2 + (_f_j(v_i_d) - ϕ^*_j(p_i)_d)^2. Then, for the iteration j, given the constant assignment ϕ^*_j, this loss is convex with _f_j (in addition to being locally Lipschitz) and gradient descent can be considered. Specifically, let ∇ v_i_b∈ℝ^ be a vector with zero entries, except for the i_b^th entry, set to 1. Let ∇ v_i_d∈ℝ^ be the vector constructed similarly for v_i_d. Then, by the chain rule, we have: ∇(_f_j) = ∑_p_i ∈(f_j)( 2(_f_j(v_i_b) - ϕ^*_j(p_i)_b) ∇ v_i_b + 2(_f_j(v_i_d) - ϕ^*_j(p_i)_d) ∇ v_i_d). We now observe that the gradient can be split into two terms, a birth gradient (noted ∇(_f_j)_b) and a death gradient (noted ∇(_f_j)_d): [ ∇(_f_j)_b = ∑_p_i ∈(f_j) 2(_f_j(v_i_b) - ϕ^*_j(p_i)_b) ∇ v_i_b; ∇(_f_j)_d = ∑_p_i ∈(f_j) 2(_f_j(v_i_d) - ϕ^*_j(p_i)_d) ∇ v_i_d. ] Then, given the above gradient expressions, a step of gradient descent is obtained by: _f_j+1 = _f_j - ((α_b ∇(_f_j)_b + α_d ∇(_f_j)_d ), where α_b, α_d ∈ℝ are the gradient step sizes for the birth and death gradients respectively. Such individual step sizes enable an explicit control over the evolution of the persistence pairs to cancel (see <ref>). §.§ Fast persistence update As described in <ref>, each optimization iteration j involves the computation of the persistence diagram of the data vector _f_j, which is computationally expensive (20 % of the computation time on average). Subsequently, for each persistence pair p_i, the data values of its vertices v_i_b and v_i_d will be updated given the optimal assignment ϕ^*_j. A key observation can be leveraged to improve the performance of the persistence computation stage. Specifically, the updated data vector _f_j+1 only contains updated data values for the subset of the vertices of which are the birth and death vertices v_i_b and v_i_d of a persistence pair p_i. Then, only a small fraction of the vertices are updated from one iteration to the next, as shown in <ref>. In practice, for the simplifications considered in our experiments (<ref>), 90 % of the vertices of do not change their data values between consecutive iterations (on average over our datasets, with the baseline optimization). This indicates that a procedure capable of quickly updating the persistence diagram (f_j+1) based on (f_j) has the potential to improve performance in practice. Several approaches focus on updating a persistence diagram based on a previous estimation <cit.>, with a time complexity that is linear for each vertex order transposition between the two scalar fields. 
However, this number of transpositions can be extremely large in practice. Instead, we derive a simple procedure based on recent work for computing persistent homology with Discrete Morse Theory (DMT) <cit.>, which we briefly review here for completeness. Specifically, the Discrete Morse Sandwich (DMS) approach <cit.> revisits the seminal algorithm PairSimplices <cit.> within the DMT setting, with specific accelerations for volume datasets. This algorithm is based on two main steps. First, a discrete gradient field is computed, for the fast identification of zero-persistence pairs. Second, the remaining persistence pairs are computed by restricting the algorithm PairSimplices to the critical simplices (with specific accelerations for the persistent homology groups of dimension 0 and d-1). The first key practical insight about this algorithm is that its first step, discrete gradient computation, is documented to represent in practice, in 3D, 66% of the persistence computation time on average <cit.> (in sequential mode). This indicates that, if one could quickly update the discrete gradient between consecutive iterations, the overall persistence computation step could be accelerated by up to a factor of 3 in practice. The second key practical insight about this algorithm is that the discrete gradient computation is a completely local operation, specifically to the lower star of each vertex v <cit.> (i.e., the co-faces of v containing no higher vertex than v in the global vertex order). Thus, we leverage the above two observations to expedite the computation of the diagram (f_j), based on the diagram (f_j-1). Specifically, we mark as updated all the vertices of for which the data value is updated by gradient descent at iteration j-1 (<ref>). Then, the discrete gradient field at step j is copied from that at step j-1 and the local discrete gradient computation procedure <cit.> is only re-executed for these vertices for which the lower star may have changed from step j-1 to step j, i.e. the vertices marked as updated or which contain updated vertices in their star. This localized update guarantees the computation of the correct discrete gradient field at step j, with a very small number of local re-computations. Next, the second step of the DMS algorithm <cit.> (i.e., the computation of the persistence pairs from the critical simplices) is re-executed as-is. §.§ Fast assignment update As described in <ref>, each optimization iteration j involves the computation of the optimal assignment ϕ^*_j from the current diagram (f_j) to the target , which is computationally expensive. However, for the problem of simplification, a key practical observation can be leveraged to accelerate this assignment computation. In practice, an important fraction of the pairs of (f_j) to optimize (among signal and non-signal pairs) may only move slightly in the domain from one iteration to the next (as illustrated in <ref>), and some do not move at all. For these pairs which do not move at step j, the assignment can be re-used from the step j-1, hence reducing the size of the assignment problem (<ref>), and hence reducing its practical runtime. Given two persistence diagrams (f_j) and (f_j-1), we call a still persistence pair a pair of points (p_i, p'_i) with p_i ∈(f_j) and p'_i ∈(f_j-1) such that v_i_b = v_i'_b and v_i_d = v_i'_d. In other words, a still persistence pair is a pair which does not change its birth and death vertices from one optimization iteration to the next. 
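The bookkeeping behind this reuse can be sketched as follows. The data layout (diagrams keyed by their birth/death vertices) and the `solve_reduced` callable, which stands for an assignment solver run on the reduced problem, are our own illustrative choices, not the released implementation.

```python
def update_assignment(prev_phi, curr_diagram, target, solve_reduced):
    """Reuse the matches of still pairs and re-assign only the remaining ones.

    prev_phi     : dict {(v_birth, v_death): target_point} from iteration j-1
    curr_diagram : dict {(v_birth, v_death): (birth_value, death_value)} at iteration j
    solve_reduced: callable returning a dict of matches for the reduced problem
    """
    phi, moved = {}, {}
    for key, point in curr_diagram.items():
        if key in prev_phi:              # still pair: same birth and death vertices
            phi[key] = prev_phi[key]     # reuse the previous match
        else:
            moved[key] = point           # non-still pair: must be re-assigned
    used = {tuple(t) for t in phi.values()}
    reduced_target = [t for t in target if tuple(t) not in used]
    phi.update(solve_reduced(moved, reduced_target))       # complete the assignment
    return phi
```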
In practice, for the simplifications considered in our experiments (<ref>), 84% of the persistence pairs of (f_j) are still (on average over the iterations and our test datasets, <ref>). This indicates that a substantial speedup could be obtained by expediting the assignment computation for still pairs. Let be the set of still pairs between the iteration j and j-1. Then, for each pair (p_i, p'_i) ∈, we set ϕ^*_j(p_i) ←ϕ^*_j-1(p_i'). Concretely, we re-use at step j the assignment at step j-1 for all the still pairs. Next, let (f_j) be the reduced diagram at step j, i.e., the subset of (f_j) which does not contain still pairs: (f_j) = (f_j) - {p_i ∈(f_j), (p_i, p'_i) ∈}. Similarly, let _j be the reduced target at step j, i.e., the subset of which has not been assigned to still pairs: _j = - {p_i”∈, p_i” = ϕ^*_j(p_i), (p_i, p'_i) ∈}. Then, we finally complete the assignment between (f_j) and by computing the Wasserstein distance between (f_j) and _j, as documented in <ref>. Note that, in the special case where the reduced target _j is empty (i.e., all signal pairs are still), the reduced diagram (f_j) only contains non-signal pairs. Then, the optimal assignment can be readily obtained (without any assignment optimization) by simply assigning each point p_i in (f_j) to its diagonal projection (p_i). However, from our experience, such a perfect scenario never occurs on real-life data, at the notable exception of the very first iteration (before the data values are actually modified by the solver). For the following iterations, many signal pairs are not still in practice. <ref> illustrates this with a simple 2D example involving a multi-saddle vertex. However, in real-life data, such configurations occur very often, and cascade. Also, these configurations get significantly more challenging in 3D. For instance, the birth and death vertices of a given signal pair can both be multi-saddles, themselves possibly involved with non-signal pairs to update (hence yielding perturbations in the signal pair). In certain configurations, this can drastically alter the persistence of the signal pairs affected by such perturbations. This is addressed by our loss (<ref>) which enforces the preservation of the signal pairs via assignment optimization. § RESULTS This section presents experimental results obtained on a computer with two Xeon CPUs (3.0 GHz, 2x8 cores, 64GB of RAM). We implemented our algorithm (<ref>) in C++ (with OpenMP) as a module for TTK <cit.>. We implemented the baseline optimization approach (<ref>) by porting the original implementation by Carriere et al. <cit.> from TensorFlow/Gudhi <cit.> to PyTorch/TTK <cit.> and by applying it to the loss described in <ref>. We chose this approach as a baseline, since its implementation is simple and publicly available, and since it provides performances comparable to alternatives <cit.>. In our implementations, we use the DMS algorithm <cit.> for persistence computation (as it is reported to provide the best practical performance for scalar data) and the Auction algorithm <cit.> for the core assignment optimization, with a relative precision of 0.01, as recommended in the literature <cit.>. Persistence computation with DMS <cit.> is the only step of our approach which leverages parallelism (see <cit.> for a detailed performance analysis). 
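Before turning to the datasets, one iteration of the solver, combining the direct gradient step with the two fast updates above, can be summarized schematically as follows; `backends` and `state` abstract the localized persistence update and the still-pair-aware assignment, and the variable names are ours, not those of the released module.

```python
def solver_iteration(f, target, state, backends, alpha_b=0.5, alpha_d=0.5):
    """One schematic iteration of the accelerated solver.

    f        : dict {vertex: scalar value}, modified in place
    state    : previous diagram, discrete gradient, assignment, updated vertices
    backends : provides update_persistence() and update_assignment()
    alpha_b, alpha_d : separate step sizes for birth and death vertices
    """
    # 1. Fast persistence update: only the lower stars affected by the
    #    vertices updated at the previous iteration are re-processed.
    diagram = backends.update_persistence(f, state)
    # 2. Fast assignment update: still pairs keep their previous match,
    #    the remaining pairs go through a reduced assignment problem.
    phi = backends.update_assignment(diagram, target, state)
    # 3. Direct gradient step, with distinct birth and death step sizes.
    loss = 0.0
    state.updated_vertices = set()
    for (v_b, v_d), (t_b, t_d) in phi.items():
        loss += (f[v_b] - t_b) ** 2 + (f[v_d] - t_d) ** 2
        f[v_b] -= alpha_b * 2.0 * (f[v_b] - t_b)
        f[v_d] -= alpha_d * 2.0 * (f[v_d] - t_d)
        state.updated_vertices.update((v_b, v_d))
    return loss
```

With the per-pair squared distances written here, setting alpha_b = alpha_d = 0.5 moves each birth and death value exactly onto its assigned target, which, for a pair sent to its diagonal projection, amounts to moving both values halfway towards each other.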
Experiments were performed on a selection of 10 (simulated and acquired) 2D and 3D datasets extracted from public repositories <cit.>, with an emphasis on 3D datasets containing large filament structures (and thus possibly, many persistent saddle pairs). The 3D datasets were resampled to a common resolution (256^3), to better observe runtime variations based on the input topological complexity. Moreover, for each dataset, the data values were normalized to the interval [0, 1], to facilitate parameter tuning across distinct datasets. Our algorithm is subject to two meta-parameters: the gradient step sizes α_b and α_d. To adjust them, we selected as default values the ones which minimized the runtime for our test dataset with the largest diagram. This resulted in α_b = α_d = 0.5 (which coincides, given a persistence pair to cancel, to a displacement of its birth and death vertices halfway towards the other, in terms of function range). For the baseline optimization approach (<ref>), we set the initial learning rate of Adam <cit.> to the largest value which still enabled practical convergence for all our datasets (specifically, 10^-4). For both approaches, we set the maximum number of iterations j_max to 1,000 (however, it has never been reached in our performance experiments). §.§ Quantitative performance The time complexity of each iteration of the baseline optimization is cubic in the worst case, but quadratic in practice (<ref>). As discussed in <ref>, our approach has the same worst case complexity, but behaves more efficiently in practice thanks to our accelerations. <ref> provides an overall comparison between the baseline optimization (<ref>) and our solver (<ref>). Specifically, it compares both approaches in terms of runtime, for a basic simplification scenario: non-signal pairs are identified as the input pairs with a persistence smaller than 1% of the function range (see Appendix A for an aggressive simplification scenario). For both approaches, we set the stopping criterion s to 0.01, such that both methods reach a similar residual loss at termination (and hence produce results of comparable quality). This table shows that for a basic simplification scenario, our approach produces results within minutes (at most 35). In contrast, the baseline approach does not produce a result after 24 hours of computation for the largest examples. Otherwise, it still exceeds hours of computation for diagrams of modest size. Overall, our approach results in an average × 64 speedup. This acceleration can be explained by several factors. First, the direct gradient descent (<ref>) requires fewer iterations than the baseline approach (we discuss this further in the next paragraph, presenting <ref>). Second, our approach also results in faster iterations, given the accelerations presented in <ref>. In practice, the overall runtime for our solver is a function of the size of the input and target diagrams (large diagrams lead to large assignment problems). The size of the topological features in the geometric domain also plays a role (larger features will require more iterations). Finally, the number of still signal pairs also plays a role given our fast assignment update procedure (<ref>, a large number of still signal pairs leads to faster assignments). For instance, the total number of pairs (input plus target) for the Neocortical Layer Axon dataset is about ×20 larger than that of the Aneursym dataset, and the ratio between their respective runtime is also about 20. 
Moreover, the Foot and Neocortical Layer Axon datasets have comparable overall sizes. However, the latter dataset results in a computation time ×5 larger. This can be partly explained by the fact that the topological features are larger in this dataset, yielding twice more iterations (hence explaining a ×2 slowdown). Moreover, the per-iteration runtime is also ×2.5 slower (explaining the overall ×5 slowdown), due the higher percentage of non-still signal pairs, increasing the size of the assignment problem. The runtime gains provided by our individual accelerations are presented in <ref>. Specifically, our procedure for fast Persistence update (<ref>) can save up to 41.4% of overall computation time, and 6.6% on average. Our procedure for fast assignment update (<ref>) provides the most substantial gains, saving up to 97% of the overall computation time for the largest target diagram, and 76% on average (see Appendix A for a discussion regarding aggressive simplifications). <ref> compares the quality of the output obtained with the baseline optimization (<ref>) and our algorithm (<ref>), for the simplification parameters used in <ref>. The quality is estimated based on the value of the loss at termination ((_g)), which assesses the quality of the topological simplification. To estimate the proximity of the solution g to the input f, we also evaluate the distances ||f-g||_2 (giving a global error for the entire dataset) and ||f-g||_∞ (giving a pointwise worst case error). We refer the reader to Appendix B for complementary quality statistics. Overall, <ref> shows that our approach provides comparable losses to the baseline approach (sometimes marginally better). In terms of data fitting, our approach also provides comparable global distances ||f-g||_2 (sometimes marginally better). For the pointwise worst case error (||f-g||_∞), our approach can result in degraded values (by a factor 2). This can be explained by the fact that, when tuning the parameters of our approach, we optimized the gradient step size to minimize running time, hence possibly triggering in practice bigger pointwise shifts in data values. In contrast, the baseline approach uses the Adam <cit.> algorithm, which optimizes step sizes along the iterations, possibly triggering milder pointwise shifts in data values. In principle, the ||f-g||_∞ distance could be improved for our solver by considering smaller step sizes, but at the expense of more iterations. §.§ Analyzing topologically simplified data Our approach enables the direct visualization and analysis of topologically simplified data. This is illustrated in <ref>, which shows the processing of an acquired dataset (“Aneurysm”) representing a network of arteries. As documented in the literature <cit.>, this network exhibits a typical tree-like structure, whose accurate geometric extraction is relevant for medical analysis. The filament structure of the arteries can be simply extracted by considering the discrete integral lines <cit.> (a.k.a. v-paths <cit.>) which connect 2-saddles to maxima and which have a minimum function value above 0.1 (scalar fields are normalized). This value 0.1 generates an isosurface (transparent surfaces, <ref>) which accurately captures the geometry of the blood vessels. Hence, selecting the discrete integral lines above that threshold guarantees the extraction of the filament structures within the vessels. 
As shown in <ref>, the diagram (f) contains several saddle pairs, corresponding to persistent 1-dimensional generators <cit.> (curves colored by persistence in the inset zooms), which yields incorrect loops in the filament structure (which is supposed to have a tree-like structure <cit.>). To remove loops in networks of discrete integral lines, an established topological technique, relying on standard discrete Morse theory <cit.>, consists in reversing the discrete gradient <cit.> along saddle connectors. We recap this procedure here for completeness. Given the persistence diagram (f), we process its non-signal saddle pairs in increasing order of persistence. For each saddle pair (_b, _d), its saddle connector is constructed by following the discrete gradient of f from _d down to _b. Next, the pair of critical simplices (_b, _d) is cancelled, in the discrete sense, by simply reversing the discrete gradient along its saddle connector <cit.> (i.e., each discrete vector is reversed to point to the preceding co-face). Such a reversal is marked as valid if it does not create any cycle in the discrete gradient field. The validity of a reversal is important since invalid reversals result in discrete vector fields which no longer describe valid scalar fields, and from which the subsequent extraction of integral lines can generate further loops (which we precisely aim to remove). The cancellation of a saddle pair (_b, _d) is skipped if the reversal of its saddle connector is not valid, or if its saddle connector does not exist. The latter case occurs for instance for nested saddle pairs, when an invalid reversal of a small persistence pair prevents the subsequent reversal of a larger one. Finally, when all the non-signal saddle pairs have been processed, the simplified filament structures are simply obtained from the simplified discrete gradient, by initiating integral lines from 2-saddles up to maxima. However, in the example of <ref>, this saddle connector reversal procedure fails at simplifying the spurious loops in the filament structures, while maintaining a valid discrete gradient (<ref>(b)). As discussed in the literature <cit.>, integral line reversal is indeed not guaranteed to completely simplify saddle pairs (v-path co-location <cit.> as well as specific cancellation orderings <cit.> can challenge reversals, the latter issue being a manifestation of the NP-hardness of the problem <cit.>). This is evaluated in the bottom left histogram, which reports the number of skipped saddle connector reversals as a function of the persistence of the corresponding pair. Specifically, this histogram shows that the reversal of several high-persistence saddle pairs could not be performed, hence the presence of large loops in the extracted filament structures. Our approach can be used to efficiently generate a function g which is close to the input f and from which the removal of saddle pairs has been optimized, while maintaining intact the rest of the features (see the resulting diagram (g), <ref>). Specifically, we set as non-signal pairs all the saddle pairs of the input, and we set as signal pairs all the others (irrespective of their persistence). This enables a direct visualization and analysis of the topologically simplified data, where isosurface handles have been cut (<ref>c, bottom-right zoom vs. <ref>b, bottom-right zoom) and where most spurious filament loops have been consequently simplified (<ref>, top zoom). 
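The reversal procedure recapped above can be summarized schematically as follows; the two callables abstract the discrete gradient machinery, and this is not the implementation used to produce the figures.

```python
def reverse_saddle_connectors(non_signal_saddle_pairs, persistence,
                              find_connector, try_reversal):
    """Cancel saddle pairs by gradient reversal, in increasing order of persistence.

    find_connector(pair): returns the v-path from the 2-saddle down to the
                          1-saddle, or None if no such connector exists.
    try_reversal(path)  : reverses the discrete gradient along the path and
                          returns False (leaving it unchanged) if a cycle
                          would be created.
    """
    skipped = []
    for pair in sorted(non_signal_saddle_pairs, key=persistence):
        path = find_connector(pair)
        if path is None or not try_reversal(path):
            skipped.append(pair)      # missing connector or invalid reversal
    return skipped                    # pairs whose loops could not be removed
```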
Note that, as shown in the bottom right histogram, our optimization modifies the input data f into a function g where reversal skips still occur. This is due to the fact that our solver identifies a local minimum of the simplification energy (<ref>) and that, consequently, a few saddle pairs, with low persistence, may still remain (we recall that sublevel set simplification is NP-hard <cit.>, see <ref> for further discussions). However, the skipped reversals which remain after our optimization (<ref>, bottom right histogram) only involve very low persistence pairs, hence allowing the cancellation of the largest loops overall. <ref> illustrates our simplification optimization for a challenging dataset (“Dark Sky”: dark matter density in a cosmology simulation). The isosurface capturing the cosmic web <cit.> (inset zooms) has a complicated topology (many noisy connected components and handles), which challenges its visual inspection. Its core filament structure also contains many small-scale loops since many persistent saddle connector reversals could not be performed (<ref>, bottom left histogram). Our solver provides a local minimum g to the simplification energy (<ref>) with a number of non-signal pairs reduced by 92% (see Appendix C for further stress experiments). This results in a less cluttered visualization, as the resulting cosmic web (<ref>(c)) has a less complicated topology (noisy connected components are removed and small scale handles are cut, inset zooms). Moreover, our optimization modifies the data in a way that is more conducive to persistent saddle connector reversals (bottom right histogram), hence simplifying more loops and, thus, better revealing overall the large-scale filament structure of the cosmic web. §.§ Repairing genus defects in surface processing Our work can also be used to repair genus defects in surface processing, where surface models, in particular when they are acquired, can include spurious handles due to acquisition artifacts. While several approaches have been proposed to address this issue <cit.>, they typically rely on intensive automatic optimizations, aiming at selecting the best sequence of local simplification primitives (i.e. cutting or filling). In contrast, our approach relies on a simpler and lightweight procedure, which provides control to the user over the primitives to use. Moreover, most existing techniques simplify only one sublevel set, while our approach processes the whole function range. For this, we consider the three-dimensional signed distance field f to the input surface , computed on a regular grid (i.e., f encodes for each grid vertex v the distance to the closest point on the surface , multiplied by -1 if v is located within the volume enclosed by ). For such a field, the zero level set f^-1(0) coincides with . Then, the removal of a handle in can be performed by creating a simplified signed distance field g, where the corresponding saddle pair has been canceled. Finally, the zero level set g^-1(0) provides the simplified surface '. This process is illustrated in <ref> where the handle of a torus is removed. Note that, from a topological point of view, this operation can be performed in two ways: either by cutting the handle (<ref>(b)), or by filling it (<ref>(c)). This can be controlled in our solver by simply adjusting the step sizes for the birth and death gradients (<ref>). Specifically, given a saddle pair to remove p_i ∈(f), handle cutting is obtained by setting α_d to zero. 
Then, the death vertex v_i_d will not be modified (above the zero level set), while only the birth vertex v_i_b (located in the star of the 1-saddle creating the handle) will increase its value above 0, effectively disconnecting the handle in the output surface '. Handle filling is obtained symmetrically, by setting α_b to zero (effectively forcing the 2-saddle to decrease its value below 0). <ref> presents a realistic example of an acquired surface from a public repository <cit.>, which contains a spurious handle, due to acquisition artifacts. First, the signed distance field is computed and its 1-dimensional persistent generators <cit.> are extracted. The shortest generator corresponds to a small handle, which happens to be a genus defect in this example. Then, the user can choose to repair this defect via cutting or filling, resulting in a repaired surface S' which is close to the input S, and from which the spurious handle has been removed. §.§ Limitations Our approach is essentially numerical and, thus, suffers from the same limitations as previous numerical methods for topological simplification (<ref>). Specifically, the non-signal pairs are canceled by our approach by decreasing their persistence to a target value of zero. However, this decrease is ultimately limited by the employed numerical precision (typically, 10^-6 for single-precision floating point values). From a strictly combinatorial point of view, this can result in residual pairs with an arbitrarily small persistence (i.e., in the order of the numerical precision). In principle, this drawback is common to all numerical methods (although sometimes mitigated via smoothing). Then, when computing topological abstractions, these residual pairs need to be removed from the computed abstraction (e.g., with integral line reversal, <ref>). However, as discussed in the literature <cit.>, post-process mechanisms for simplifying topological abstractions may not guarantee a complete simplification of the abstractions either (this is another concrete implication of the NP-hardness of sublevel set simplification <cit.>). However, our experiments (<ref>) showed that our numerical optimization helped such combinatorial mechanisms, by pre-processing the data in a way that resulted eventually in fewer persistent reversal skips (Figs. <ref> and <ref>, right versus left histograms). Similar to previous persistence optimization frameworks, our approach generates a local minimum of the simplification energy (<ref>), and thus it is not guaranteed to reach the global minimum. As a reminder, in 3D, an optimal simplification (i.e., (g) =) may not exist and finding a sublevel set simplification is NP-hard <cit.>. However, our experiments (<ref>) showed that our approach still generated solutions whose quality was on par with the state-of-the-art (comparable losses and distances to the input), while providing substantial accelerations. Moreover, as shown in <ref>, these solutions enabled the direct visualization of isosurfaces whose topology was indeed simplified (fewer components and handles) and they were also conducive to improved saddle connector reversals. § CONCLUSION This paper introduced a practical solver for topological simplification optimization. Our solver is based on tailored accelerations, which are specific to the problem of topological simplification. 
Our accelerations are simple and easy to implement, but result in significant gains in terms of runtime, with × 60 speedups on average on our datasets over state-of-the-art persistence optimization frameworks (with both fewer and faster iterations), for comparable output qualities. This makes topological simplification optimization practical for real-life three-dimensional datasets. We showed that our contributions enabled a direct visualization and analysis of the topologically simplified data, where the topology of the extracted isosurfaces was indeed simplified (fewer connected components and handles). We applied our approach to the extraction of prominent filament structures in 3D data, and showed that our pre-simplification of the data led to practical improvements for the removal of spurious loops in filament structures. We showed that our contributions could be used to repair genus defects in surface processing, where handles due to acquisition artifacts could be easily removed, with an explicit control on the repair primitives (cutting or filling). While it is tailored to the problem of simplification, our solver is still generic and could in principle be used for other persistence optimization problems, although possibly with smaller performance gains. In the future, we will consider other optimization problems and investigate other acceleration strategies for these specific problems. Since our solver can optimize persistence pairs localized within a neighborhood of the field, we will also investigate divide-and-conquer parallelizations. This work is partially supported by the European Commission grant ERC-2019-COG “TORI” (ref. 863464, <https://erc-tori.github.io/>), by the U.S. Department of Energy, Office of Science, under Award Number(s) DE-SC-0019039, and by a joint graduate research fellowship (ref. 320650) funded by the CNRS and the University of Arizona. § APPENDIX § AGGRESSIVE SIMPLIFICATION Section 5.1 (main manuscript) evaluates our approach from a quantitative point of view, for a simplification scenario where all the pairs less persistent than 1% of the function range are considered as non-signal. In this appendix, we report the same experiments, but with a more aggressive threshold (45% of the function range). Specifically, <ref> provides a comparison between the baseline optimization (Section 3, main manuscript) and our solver (Section 4, main manuscript) in terms of runtime. Similarly to the basic simplification scenario (Table 1, main manuscript), our solver computes the simplifications within minutes (at most 39), for the same average speedup over the baseline (×64). For both the baseline and our solver, while the number of iterations increases in comparison to the basic simplification (since more and larger features need to be simplified), the iterations are significantly faster, as the assignment problems are much smaller. The runtime gains provided by our individual accelerations, for this aggressive simplification scenario, are presented in <ref>. Specifically, our procedure for fast persistence update (Section 4.2, main manuscript) can save up to 54.9% of overall computation time, and 19.6% on average, which is a substantial improvement over the basic simplification scenario (Table 2, main manuscript).
As more iterations are required to simplify persistent features (<ref>), less and less vertices are updated along the iterations (since low-persistence features are cancelled in the early iterations), hence advantaging our fast persistence update procedure. For the fast Assignment update (Section 4.3, main manuscript), the average gain decreases to 30% (with regard to the basic simplification scenario, Table 2, main manuscript) since assignment problems become smaller (and so does their importance in the overall computation). Negative entries in <ref> indicate cases where the acceleration actually degrades runtimes. For the fast persistence update, this happens when the number of updated vertices is so large that their identification overweights the gradient computation for the non-updated vertices. Similar remarks can be made for the fast assignment update, where the identification of the still pairs can penalize runtime for small assignment problems. Overall, both our accelerations (fast persistence update and fast assignment update) improve performance in both simplification scenarios, with the fast assignment update being more important for mild simplifications, and the fast persistence update for aggressive ones. <ref> compares the quality of the output obtained with the baseline optimization (Section 3, main manuscript) and our algorithm (Section 4, main manuscript), for the simplification parameters used in <ref>. In particular, this table provides similar observations to the basic simplification scenario (Table 3, main manuscript): our approach provides comparable losses to the baseline approach (sometimes marginally better). In terms of data fitting, our approach also provides comparable global distances ||f-g||_2 (sometimes marginally better). For the pointwise worst case error (||f-g||_∞), similarly to the basic simplification scenario, our approach can result in degraded values (roughly by a factor of 2), as discussed in further details in the main manuscript. § SIGNAL PAIR PRESERVATION EVALUATION Table 3 (main manuscript) provides some quality statistics regarding the output of our algorithm. Specifically, it details the achieved loss ((_g)) as well as pointwise distances between the input and the simplified fields (||f-g||_2 and ||f-g||_∞). In this appendix, we provide complementary quality statistics, where we now evaluate the preservation of the features of interest after our simplification. For this, <ref> reports statistics (minimum, average, maximum) of the displacement in the birth-death space (between 0 and 1) for the signal pairs, both for a mild (white lines) and an aggressive (grey lines) simplification based on persistence (1% and 45% of the function range, respectively). Specifically, displacements are evaluated given the optimal assignment (achieved by the Wasserstein distance) between and (g). Overall, this table shows that the position of the signal pairs in the birth-death space is well constrained by our solver, with a worst displacement of 2.25×10^-02 for a challenging example (aggressive simplification of the Dark Sky dataset, where many multi-saddles are involved in both signal and non-signal pairs). For all datasets, the achieved worst displacement is negligible with regard to the employed persistence threshold (by an order of magnitude). 
§ EXTREME SIMPLIFICATION Section 5.2 (main manuscript) evaluates our approach from a qualitative point of view, for various practical scenarios of simplification: removing saddle pairs (Figure 1) or removing pairs less persistent than an aggressive persistence threshold, i.e. 0.25 (Figure 7). In this appendix, we revisit this latter experiment to stress our approach. Specifically, we consider the challenging Dark Sky dataset (large input diagrams, many saddle pairs, intricate geometry) and specify an extreme simplification. In particular, all the finite persistence pairs are considered as non-signal (bars marked with spheres at their extremities, <ref>) and only the infinite bar involving the global minimum (cropped by convention at the globally maximum data value, bar with an upward arrow, <ref>) is considered as a signal pair. The corresponding results are shown in <ref>. Specifically, this figure shows that, despite this challenging dataset and extreme simplification criterion, our approach still manages to simplify 95% of the non-signal pairs, which is a slight improvement over the original experiment reported in the Figure 7 of the main manuscript (92% for a persistence threshold of 0.25). Moreover, from a qualitative point of view, all the filament loops have been simplified: the persistence diagram does not contain any finite persistence pairs whose life-span crosses the death isovalue 0.4 (dashed horizontal line). Only the infinite bar related to the global minimum (bar with an upward arrow) crosses it. In other words, this means that the cosmic web volume (i.e. the sublevel set for the isovalue 0.4) is made of only one connected component and contains no topological handles.
http://arxiv.org/abs/2407.12318v1
20240717050847
Information Compression in Dynamic Games
[ "Dengwang Tang", "Vijay Subramanian", "Demosthenis Teneketzis" ]
cs.GT
[ "cs.GT", "cs.MA", "cs.SY", "eess.SY", "math.OC", "math.ST", "stat.TH", "90C40, 91A10, 91A15, 91A25, 91A50" ]
[1]Dengwang Tang dwtang@umich.edu 2]Vijay Subramanian vgsubram@umich.edu 2]Demosthenis Teneketzis teneket@umich.edu [1]Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, 90089-2560, USA [2]Electrical and Computer Engineering Division, Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI, 48109, USA One of the reasons why stochastic dynamic games with an underlying dynamic system are challenging is because strategic players have access to enormous amount of information which leads to the use of extremely complex strategies at equilibrium. One approach to resolve this challenge is to simplify players’ strategies by identifying appropriate compression of information maps so that the players can make decisions solely based on the compressed version of information, called the information state. Such maps allow players to implement their strategies efficiently. For finite dynamic games with asymmetric information, inspired by the notion of information state for single-agent control problems, we propose two notions of information states, namely mutually sufficient information (MSI) and unilaterally sufficient information (USI). Both these information states are obtained by applying information compression maps that are independent of the strategy profile. We show that Bayes-Nash Equilibria (BNE) and Sequential Equilibria (SE) exist when all players use MSI-based strategies. We prove that when all players employ USI-based strategies the resulting sets of BNE and SE payoff profiles are the same as the sets of BNE and SE payoff profiles resulting when all players use full information-based strategies. We prove that when all players use USI-based strategies the resulting set of weak Perfect Bayesian Equilibrium (wPBE) payoff profiles can be a proper subset of all wPBE payoff profiles. We identify MSI and USI in specific models of dynamic games in the literature. We end by presenting an open problem: Do there exist strategy-dependent information compression maps that guarantee the existence of at least one equilibrium or maintain all equilibria that exist under perfect recall? We show, by a counterexample, that a well-known strategy-dependent information compression map used in the literature does not possess any of the properties of the strategy-independent compression maps that result in MSI or USI. [JEL Classification]C72, C73, D80 [MSC Classification]90C40, 91A10, 91A15, 91A25, 91A50 [Acknowledgements]The authors would like to thank Yi Ouyang, Hamidreza Tavafoghi, Ashutosh Nayyar, Tilman Börgers, and David Miller for helpful discussions. Information Compression in Dynamic Games [ July 22, 2024 ======================================== § INTRODUCTION The model of stochastic dynamic games has found application in many engineering and socioeconomic settings, such as transportation networks, power grid, spectrum markets, and online shopping platforms. In these settings, multiple agents/players make decisions over time on top of an ever-changing environment with players having different goals and asymmetric information. For example, in transportation networks, individual drivers make routing decisions based on information from online map services in order to reach their respective destinations as fast as possible. Their actions then collectively affect traffic conditions in the future. 
Another example involves online shopping platforms, where buyers leave reviews to inform potential future buyers, while sellers update prices and make listing decisions based on the feedback from buyers. In these systems, players' decisions are generally not only interdependent, but also affect the underlying environment as well as future decisions and payoffs of all players in complex ways. Determining the set of equilibria, or even solving for one equilibrium, in a given stochastic dynamic game can be a challenging task. The main challenges include: (a) the presence of an underlying environment/system that can change over time based on the actions of all players; (b) incomplete and asymmetric information; (c) large number of players, states, and actions; and (d) growing amount of information over time which results in a massive strategy space. As a result of the advances in technology, stochastic dynamic games today are often played by players (e.g. big corporations) that have access to substantial computational resources along with a large amount of data for decision making. Nevertheless, even these players are computationally constrained, and they must make decisions in real-time, hence complicated strategies may not be feasible for them. Therefore, it is important to determine computationally efficient strategies for players to play at equilibria. Compression of players' information and then use of the strategies based on the compressed information is a well-heeled methodology that results in computationally efficient strategies. In this paper we address some of the above-mentioned challenges. We concentrate on the challenges associated with information compression, namely the existence of equilibria under information compression, and the preservation of all equilibrium payoff profiles under information compression. We leave as a topic of future investigation the discovery of efficient algorithms for the computation of equilibria based on strategies that use compressed information. Specifically, our goal is to identify appropriate strategy-independent [Strategy independent information compression maps are maps that are not parameterized by a strategy profile. Examples of strategy-independent information compression maps include those that use a fixed-subset of the game's history (e.g. the most recent observation) or some statistics based on the game's history (e.g. the number of times player i takes a certain action). Strategy-dependent maps are parameterized by a strategy profile (see Section <ref>).] information compression maps in dynamic games so that the resulting compressed information has properties/features sufficient to satisfy the following requirements: (R1) existence of equilibria when all players use strategies based on the compressed information; (R2) equality of the set of all equilibrium payoff profiles that are achieved when all players use full information based-strategies with the set of all equilibrium payoff profiles that are achieved when all players use strategies based on the compressed information. Inspired by the literature on single-agent decision/control problems, particularly the notion of information state, we develop notions of information state (compressed information) that satisfy requirements (R1) and (R2). Specifically, we introduce the notions of Mutually Sufficient Information (MSI) and Unilaterally Sufficient Information (USI). 
We show that MSI has properties/features sufficient to satisfy (R1), whereas USI has properties sufficient to satisfy (R2) under several different equilibrium concepts. The remainder of the paper is organized as follows: In Section <ref> we briefly review related literature in stochastic control and game theory. In Section <ref> we list our contributions. In Section <ref> we introduce our notation. In Section <ref> we formulate our game model. In Section <ref> and Section <ref> we introduce the notion of mutually sufficient information and unilaterally sufficient information respectively. We present our main results in Section <ref>. We discuss these results in Section <ref>. We discuss an open problem, primarily associated with strategy-dependent information compression, in Section <ref>. We provide supporting results in Appendix <ref>. We present alternative characterizations of sequential equilibria in Appendix <ref>. We provide proofs of the results of Sections <ref> and <ref> in Appendix <ref>. We present the details of the discussions in Section <ref> and Section <ref> in Appendix <ref>. §.§ Related Literature We first present a brief literature survey on information compression in single-agent decision problems because it has inspired several of the key ideas presented in this paper. Single-agent decision/control problems are problems where one agent chooses actions over time on top of an ever-changing system to maximize their total reward. These problems have been extensively studied in the control theory <cit.>, operations research <cit.>, computer science <cit.>, and mathematics <cit.> literature. Models like Markov Decision Process (MDP) and Partially Observable Markov Decision Process (POMDP) have been analyzed and applied widely in real-world systems. It is well known that in an MDP, the agent can use a Markov strategy—making decisions based on the current state—without loss of optimality. A Markov strategy can be seen as a strategy based on compressed information: the full information—state and action history—is compressed into only the current state. Furthermore, in finite horizon problems, such optimal Markov strategies can be found through a sequential decomposition procedure. It is also well known that any POMDP can be transformed into an MDP with an appropriate belief acting as the underlying state <cit.>. As a result, the agent can use a belief-based strategy without loss of optimality. A belief-based strategy compresses the full information into the conditional belief of the current state. Critically, this information compression is strategy-independent <cit.>. For general single-agent control problems, sufficient conditions that guarantee optimality of compression-based strategies have been proposed under the names of sufficient statistic <cit.> and information state <cit.>. In these works, the authors transform single-agent control problems with partial observations into equivalent problems with complete observations with the sufficient statistic/information state acting as the underlying state. Multi-agent dynamic decision problems are either teams where all agents have the same objective, or games where agents have different objectives and are strategic. Information compression in dynamic teams has been investigated in <cit.>, and many other works (see <cit.> and <cit.> for a list of references). Dynamic games can be divided into two categories: those with a static underlying environment (e.g. repeated games), and those with an underlying dynamic system. 
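Returning briefly to the single-agent case recalled above, the belief-based compression that turns a POMDP into an MDP can be written in a few lines. The following Python sketch is purely illustrative and is not part of any model in this paper; the transition array T and observation array O are hypothetical placeholders for a generic finite POMDP.

import numpy as np

def belief_update(b, a, o, T, O):
    """One Bayes-rule step: compress (previous belief, action, observation) into a new belief.

    b: belief over states, shape (S,);  a: action index;  o: observation index
    T[a, s, s'] = P(s' | s, a);         O[a, s', o] = P(o | s', a)
    """
    predicted = b @ T[a]                  # P(next state | history, a)
    unnormalized = predicted * O[a, :, o]
    return unnormalized / unnormalized.sum()

# Tiny made-up POMDP with 2 states, 2 actions, 2 observations.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.6, 0.4], [0.4, 0.6]]])
b = np.array([0.5, 0.5])
for a, o in [(0, 1), (1, 0), (0, 0)]:     # an arbitrary action-observation history
    b = belief_update(b, a, o, T, O)
print(b)                                  # the belief summarizes the entire history

The update uses only the model (T, O) and never the agent's strategy, which is precisely the sense in which the compression of the observation-action history into a belief is strategy-independent.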
Over the years, economics researchers have studied repeated games extensively (e.g. see <cit.>). As our focus is on dynamic games with an underlying dynamic system, we will not discuss the literature on repeated games. Among models for dynamic games with an underlying dynamic system, the model of zero-sum games, as a particular class which possesses special properties, has been analyzed in <cit.> and many others (see <cit.> for a list of references). Non-zero-sum games with an underlying dynamic system and symmetric information have also been studied extensively <cit.>. For such dynamic games with perfect information, the authors of <cit.> introduce the concept of Markov Perfect Equilibrium (MPE), where each player compresses their information into a Markov state. Dynamic games with asymmetric information have been analyzed in <cit.>. In <cit.>, the authors introduce the concept of Common Information Based Markov Perfect Equilibrium (CIB-MPE), which is an extension of MPE to partially observable systems. In a CIB-MPE, all players choose their actions at each time based on the Common-Information-Based (CIB) belief (a compression of the common information) and private information instead of full information. The authors establish the existence of CIB-MPE under the assumption that the CIB belief is strategy-independent. Furthermore, the authors develop a sequential decomposition procedure to solve for such equilibria. In <cit.>, the authors extend the result of <cit.> to a particular model where the CIB beliefs are strategy-dependent. They introduce the concept of Common Information Based Perfect Bayesian Equilibrium (CIB-PBE). In a CIB-PBE, all players choose their actions based on the CIB belief and their private information. They show that such equilibria can be found through a sequential decomposition whenever the decomposition has a solution. The authors conjecture the existence of such equilibria. The authors of <cit.> extend the model of <cit.> to games among teams. They consider two compression maps and their associated equilibrium concepts. For the first compression map, which is strategy-independent, they establish preservation of equilibrium payoffs. For the second information compression map, which is strategy-dependent, they propose a sequential decomposition of the game. If the decomposition admits a solution, then there exists a CIB-BNE based on the compressed information. Furthermore, they provide an example where CIB-BNEs based on this specific compressed information do not exist. The example also proves that the conjecture about the existence of CIB-PBEs, made in <cit.>, is false. In addition to the methods of information compression that appear in <cit.>, there are two lines of work on games where the players' decisions are based on limited information. In the first line of work, players face exogenous hard constraints on the information that can be used to choose actions <cit.>. In the second line of work, players can utilize any finite automaton with any number of states to choose actions; however, more complex automata are assumed to be more costly <cit.>. In our work, we also deal with finite automaton based strategies. However, there is a critical difference between our work and both of the above-mentioned lines of work: our primary interest is to study conditions under which a compression-based strategy profile can form an equilibrium under standard equilibrium concepts when unbounded rationality and perfect recall are allowed. 
Under these equilibrium concepts, we do not restrict the strategy of any player, nor do we impose any penalty on complicated strategies. In other words, a compression-based strategy needs to be a best response, in terms of payoff alone, against all possible strategies with full recall. The methodology for information compression presented in this paper is similar in spirit to that of <cit.>. However, this paper is significantly different from those works as it deals with the discovery of information compression maps that lead not only to the existence (in general) of various types of compressed-information-based equilibria but also to the preservation of all equilibrium payoff profiles (a topic not investigated in <cit.>). This paper builds on <cit.>; it identifies embodiments of the two information compression maps studied in <cit.> for a much more general class of games than that of <cit.>, and a broader set of equilibrium concepts. §.§ Contributions Our main contributions are the following: * We propose two notions of information states/compressed information for dynamic games with asymmetric information that result from strategy-independent compression maps: Mutually Sufficient Information (MSI) and Unilaterally Sufficient Information (USI) — Definitions <ref> and <ref>, respectively. We present an example that highlights the differences between MSI and USI. * We show that in finite dynamic games with asymmetric information, Bayes–Nash Equilibria (BNE) and Sequential Equilibria (SE) exist when all players use MSI-based strategies — Theorems <ref> and <ref>, respectively. * We prove that when all players employ USI-based strategies, the resulting sets of BNE and SE payoff profiles are the same as the sets of BNE and SE payoff profiles resulting when all players use full information-based strategies — Theorems <ref> and <ref>, respectively. * We prove that when all players use USI-based strategies, the resulting set of weak Perfect Bayesian Equilibrium (wPBE) payoff profiles can be a proper subset of the set of all wPBE payoff profiles — Proposition <ref>. A result similar to that of Proposition <ref> is also true under Watson's PBE <cit.>. Figure <ref> depicts the results stated in Contributions 3 and 4 above. * We present several examples — Examples <ref> through <ref> — of finite dynamic games with asymmetric information where we identify MSI and USI. Additional contributions of this work are: * A set of alternative definitions of SE — Appendix <ref>. These definitions are equivalent to the original definition of SE given in <cit.> and help simplify some of the proofs of the main results in this paper. * A new methodology for establishing existence of equilibria. The methodology is based on a best-response function defined through a dynamic program for a single-agent control problem. * A counterexample showing that a well-known strategy-dependent compression map, resulting in sufficient private information along with common information-based beliefs, does not guarantee existence of equilibria based on the above-stated compressed information. §.§ Notation We follow the notational convention of the stochastic control literature (i.e. using random variables to define the system, representing information as random variables, etc.) instead of the convention of the game theory literature (i.e. game trees, nodes, information sets, etc.) unless otherwise specified. This allows us to apply techniques from stochastic control, which we rely heavily upon, in a more natural way. 
We use capital letters to represent random variables, bold capital letters to denote random vectors, and lower case letters to represent realizations. We use superscripts to indicate players, and subscripts to indicate time. We use i to represent a typical player and -i to represent all players other than i. We use t_1:t_2 to indicate the collection of timestamps (t_1, t_1+1, ⋯, t_2). For example, X_1:4^i stands for the random vector (X_1^1, X_2^i, X_3^i, X_4^i). For random variables or random vectors represented by Latin letters, we use the corresponding script capital letters to denote the space of values these random vectors can take. For example, ℋ_t^i denotes the space of values the random vector H_t^i can take. The products of sets refers to Cartesian products. We use (·) and [·] to denote probabilities and expectations, respectively. We use Δ() to denote the set of probability distributions on a finite set . For a distribution ν∈Δ(), we use supp(ν) to denote the support of ν. When writing probabilities, we will omit the random variables when the lower case letters that represent the realizations clearly indicate the random variable it represents. For example, we will use (y_t^i|x_t, u_t) as a shorthand for (Y_t^i = y_t^i|X_t = x_t, U_t=u_t). When λ is a function from _1 to Δ(_2), with some abuse of notation we write λ(ω_2|ω_1):=(λ(ω_1))(ω_2) as if λ is a conditional distribution. We use 1_A to denote the indicator random variable of an event A. In general, probability distributions of random variables in a dynamic system are only well defined after a complete strategy profile is specified. We specify the strategy profile that defines the distribution in superscripts, e.g. ^g(x_t^i|h_t^0). When the conditional probability is independent of a certain part of the strategy (g_t^i)_(i, t)∈, we may omit this part of the strategy in the notation, e.g. ^g_1:t-1(x_t|y_1:t-1, u_1:t-1), ^g^i(u_t^i|h_t^i) or (x_t+1|x_t, u_t). We say that a realization of some random vector (for example h_t^i) is admissible under a partially specified strategy profile (for example g^-i) if the realization has strictly positive probability under some completion of the partially specified strategy profile (In this example, that means ^g^i, g^-i(h_t^i) > 0 for some g^i). Whenever we write a conditional probability or conditional expectation, we implicitly assume that the condition has non-zero probability under the specified strategy profile. When only part of the strategy profile is specified in the superscript, we implicitly assume that the condition is admissible under the specified partial strategy profile. In this paper, we make heavy use of value functions and reward-to-go functions. Such functions will be clearly defined within their context with the following convention: Q stands for state-action value functions; V stands for state value functions; and J stands for reward-to-go functions for a given strategy profile (as opposed to Q or V, both of which are typically defined via a maximum over all strategies). § GAME MODEL AND OBJECTIVES §.§ Game Model In this section we formulate a general model for a finite horizon dynamic game with finitely many players. Denote the set of players by ℐ. Denote the set of timestamps by 𝒯={1,2,⋯, T}. At time t, player i∈ℐ takes action U_t^i, obtains instantaneous reward R_t^i, and then learns new information Z_t^i. Player i may not necessarily observe the instantaneous rewards R_t^i directly. The reward is observable only if it is part of Z_t^i. 
Define Z_t=(Z_t^i)_i∈ℐ, U_t=(U_t^i)_i∈ℐ, and R_t=(R_t^i)_i∈ℐ. We assume that there is an underlying state variable X_t and (X_t+1, Z_t, R_t) = f_t(X_t, U_t, W_t), t∈𝒯, where (f_t)_t∈𝒯 are fixed functions. The primitive random variable X_1 represents the initial move of nature. The primitive random vector H_1=(H_1^i)_i∈ℐ represents the initial information of the players. The initial state and information X_1 and H_1 are, in general, correlated. The random variables (W_t)_t=1^T are mutually independent primitive random variables representing nature's move. The vector (X_1, H_1) is assumed to be mutually independent with W_1, W_2, ⋯, W_T. The distributions of the primitive random variables are common knowledge to all players. Define 𝒳_t, 𝒰_t, 𝒵_t, 𝒲_t, ℋ_1 to be the sets of possible values of X_t, U_t, Z_t, W_t, H_1 respectively. The sets 𝒳_t, 𝒰_t, 𝒵_t, 𝒲_t, ℋ_1 are assumed to be common knowledge among all players. In this work, in order to focus on conceptual difficulties instead of technical issues, we make the following assumption. 𝒳_t, 𝒰_t, 𝒵_t, 𝒲_t, ℋ_1 are finite sets, and R_t^i is supported on [-1, 1]. We assume perfect recall, i.e. the information player i has at time t is H_t^i = (H_1^i, Z_1:t-1^i), and player i's action U_t^i is contained in the new information Z_t^i. A behavioral strategy g^i=(g_t^i)_t∈𝒯 of player i is a collection of functions g_t^iℋ_t^i↦Δ(𝒰_t^i), where ℋ_t^i is the space where H_t^i takes values. Under a behavioral strategy profile g=(g^i)_i∈ℐ, the total reward/payoff of player i in this game is given by J^i(g):=^g[∑_t=1^T R_t^i ]. This is not a restrictive model: By choosing appropriate state representation X_t and instantaneous reward vector R_t, it can be used to model any finite-node extensive form sequential game with perfect recall. We initially consider two solution concepts for dynamic games with asymmetric information: Bayes–Nash Equilibrium (BNE) and Sequential Equilibrium (SE). We define BNE and SE below. A behavioral strategy profile g is said to form a Bayes-Nash equilibrium (BNE) if for any player i and any behavioral strategy g̃^i of player i, we have J^i(g)≥ J^i(g̃^i, g^-i). Let g=(g^i)_i∈ℐ be a behavioral strategy profile. Let =(_t^i)_i∈ℐ,t∈𝒯 be a collection of history-action value functions, i.e. _t^iℋ_t^i ×𝒰_t^i ↦ℝ. The strategy profile g is said to be sequentially rational under if for each i∈ℐ, t∈𝒯 and each h_t^i∈ℋ_t^i, supp(g_t^i(h_t^i))⊆u_t^imax _t^i(h_t^i, u_t^i). is said to be fully consistent with g if there exist a sequence of pairs of strategies and history-action value functions (g^(n), ^(n))_n=1^∞ such that * g^(n) is fully mixed, i.e. every action is chosen with positive probability at every information set. * ^(n) is consistent with g^(n), i.e., _τ^(n), i(h_τ^i, u_τ^i) =^g^(n)[∑_t=τ^T R_t^i|h_τ^i, u_τ^i], for each i∈ℐ, τ∈𝒯, h_τ^i∈ℋ_τ^i, u_τ^i∈𝒰_τ^i. * (g^(n), ^(n))→ (g, ) as n→∞. A tuple (g, ) is said to be a sequential equilibrium if g is sequentially rational under and is fully consistent with g. Whereas Definition <ref> of SE is different from that of <cit.>, we show in Appendix <ref> that it is equivalent to the concept in <cit.>. We use Definition <ref> as it is more suitable for the development of our results. In this paper, we are interested in analyzing the performance of strategy profiles that are based on some form of compressed information. Let _t^i be a function of H_t^i that can be sequentially updated, i.e. there exist functions (ι_t^i)_t∈𝒯 such that _1^i = ι_1^i(H_1^i), _t^i = ι_t^i(_t-1^i, Z_t-1^i), t∈𝒯\{1}. 
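As one concrete illustration of a sequentially updated compression map of this form (a sketch only; the window length m and the encoding below are arbitrary illustrative choices, not part of the model), the map that retains just the last m increments of new information admits an update of exactly the shape of ι_t^i above, written here in Python.

from collections import deque

def iota_1(h_1, m=2):
    """Initial compression K_1^i = iota_1^i(H_1^i): here the initial information is kept."""
    k = deque(maxlen=m)
    k.append(("h1", h_1))
    return k

def iota_t(k_prev, z_prev, m=2):
    """Recursive compression K_t^i = iota_t^i(K_{t-1}^i, Z_{t-1}^i): keep the last m increments."""
    k = deque(k_prev, maxlen=m)
    k.append(("z", z_prev))
    return k

k = iota_1("initial signal")
for z in ["z1", "z2", "z3"]:              # new information revealed over three stages
    k = iota_t(k, z)
print(list(k))                            # only the last two increments survive

Whether such a map discards too much, i.e. whether the resulting compressed information still supports equilibrium behavior, is exactly the question addressed by the notions of mutually and unilaterally sufficient information introduced below.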
Write ^i=(_t^i)_t∈𝒯 and =(^i)_i∈ℐ. We will refer to ^i as the compression of player i's information under ι^i = (ι_t^i)_t∈𝒯. A ^i-based (behavioral) strategy ρ^i=(ρ_t^i)_t∈𝒯 is a collection of functions ρ_t^i_t^i↦Δ(𝒰_t^i). A strategy profile where each player i uses a ^i-based strategy is called a -based strategy profile. If a -based strategy profile forms an Bayes-Nash (resp. sequential) equilibrium, then it is called a -based Bayes-Nash (resp. sequential) equilibrium. Note that unlike <cit.>, we require the -based BNE and -based SE to contain no profitable deviation among all full-history-based strategies. §.§ Objectives Our goal is to discover properties/features of the compressed information sufficient to guarantee that (i) there exists -based BNE and SE; (ii) the set of -based BNE (resp. SE) payoff profiles is equal to the set of (general strategy based) BNE (resp. SE) profiles under perfect recall. To achieve the above-stated objectives we proceed as follows: First, we introduce two notions of information state, namely MSI and USI (Section <ref>). Then, we investigate the existence of MSI-based and USI-based BNE and SE, as well as the preservation of the set of all BNE and SE payoff profiles when USI-based strategies are employed by all players (Section <ref>). A key challenge in achieving the above-stated goal is the following: Unlike the case of perfect recall, one may not be able to recover _t-1^i from _t^i. Therefore, ^i-based (behavioral) strategies are not equivalent to mixed strategies supported on the set of ^i-based pure strategies. This fact creates difficulty for analyzing ^i-based strategies since the standard technique of using Kuhn's Theorem <cit.> to transform mixed strategies to behavioral strategies does not apply. To resolve this challenge, we developed stochastic control theory-based techniques that allow us to work with ^i-based behavioral strategies directly rather than transforming from a mixed strategy. In the following sections, when referring to the compressed information _t^i, we will consider the compression mappings ι^i to be fixed and given, so that _t^i is fixed given H_t^i. The space of compressed information _t^i is a fixed, finite set given ι^i. When we use _t^i to represent a realization of _t^i, we assume that it corresponds to the compression of H_t^i=h_t^i under the fixed ι^i. § TWO DEFINITIONS OF INFORMATION STATE Before we define notions of information state in dynamic games we introduce the notion of information state for one player when other players' strategies are fixed. The following definition is an extension of the definition of information state in <cit.>. Let g^-i be a behavioral strategy profile of players other than i. We say that ^i is an information state under g^-i if there exist functions (P_t^i,g^-i)_t∈𝒯, (r_t^i,g^-i)_t∈𝒯, where P_t^i,g^-i_t^i×𝒰_t^i ↦Δ(_t+1^i) and r_t^i,g^-i_t^i×𝒰_t^i ↦ [-1, 1], such that * ^g^i, g^-i(_t+1^i|h_t^i, u_t^i) = P_t^i,g^-i(_t+1^i|_t^i, u_t^i) for all t∈𝒯\{T}; * ^g^i, g^-i[R_t^i|h_t^i, u_t^i] = r_t^i,g^-i(_t^i, u_t^i) for all t∈𝒯, for all g^i, and all (h_t^i, u_t^i) admissible under (g^i, g^-i). (Both P_t^i,g^-i and r_t^i,g^-i may depend on g^-i, but they do not depend on g^i.) In the absence of other players, the above definition is exactly the same as the definition of information state for player i's control problem. When other players are present, the parameters of player i's control problem, in general, depend on the strategy of other players. 
As a consequence, an information state under one strategy profile g^-i may not be an information state under a different strategy profile g̃^-i. §.§ Mutually Sufficient Information We say that =(^i)_i∈ℐ is mutually sufficient information (MSI) if for all players i∈ℐ and all ^-i-based strategy profiles ρ^-i, ^i is an information state under ρ^-i. In words, MSI represents mutually consistent compression of information in a dynamic game: Player i could compress their information to ^i without loss of performance when other players are compressing their information to ^-i. Note that MSI imposes interdependent conditions on the compression maps of all players: It requires the compression maps of all players to be consistent with each other. The following lemma provides a sufficient condition for a compression maps to yield mutually sufficient information. If for all i∈ℐ and all ^-i-based strategy profiles ρ^-i, there exist functions (Φ_t^i, ρ^-i)_t∈𝒯 where Φ_t^i, ρ^-i_t^i ↦Δ(𝒳_t×_t^-i) such that ^g^i, ρ^-i(x_t, _t^-i|h_t^i) = Φ_t^i,ρ^-i(x_t, _t^-i|_t^i), for all behavioral strategies g^i, all t∈𝒯, and all h_t^i admissible under (g^i, ρ^-i), then =(^i)_i∈ℐ is mutually sufficient information. See Appendix <ref>. In words, the condition of Lemma <ref> means that _t^i has the same predictive power as H_t^i in terms of forming a belief on the current state and other players' compressed information whenever other players are using compression-based strategies. This belief is sufficient for player i to predict other player's actions and future state evolution. Since other players are using compression-based strategies, player i does not have to form a belief on other player's full information in order to predict other players' actions. §.§ Unilaterally Sufficient Information We say that ^i is unilaterally sufficient information (USI) for player i∈ℐ if there exist functions (F_t^i, g^i)_t∈𝒯 and (Φ_t^i, g^-i)_t∈𝒯 where F_t^i, g^i_t^i↦Δ(ℋ_t^i), Φ_t^i, g^-i_t^i ↦Δ(𝒳_t×ℋ_t^-i) such that ^g(x_t, h_t|_t^i) = F_t^i, g^i(h_t^i|_t^i)Φ_t^i,g^-i(x_t, h_t^-i|_t^i), for all behavioral strategy profiles g, all t∈𝒯, and all _t^i admissible under g.[In the case where random vectors X_t, H_t^i and H_t^-i share some common components, (<ref>) should be interpreted in the following way: x_t, h_t^i and h_t^-i are three separate realizations that are not necessarily congruent with each other (i.e. they can disagree on their common parts). In the case of incongruency, the left-hand side equals 0. The equation needs to be true for all combinations of x_t∈𝒳_t, h_t^i∈ℋ_t^i and h_t^-i∈ℋ_t^-i.] The definition of USI can be separated into two parts: The first part states that the conditional distribution of H_t^i, player i's full information, given _t^i, the compressed information, does not depend on other players' strategies. This is similar to the idea of sufficient statistics in the statistics literature <cit.>: If player i would like to use their “data” H_t^i to estimate the “parameter” g^-i, then _t^i is a sufficient statistic for this parameter estimation problem. The second part states that _t^i has the same predictive power as H_t^i in terms of forming a belief on the current state and other players' full information. In contrast to the definition of mutually sufficient information, if ^i is unilaterally sufficient information, then ^i is sufficient for player i's decision making regardless of whether other players are using any information compression map. 
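For a fixed strategy profile, the factorization in the definition of USI is a conditional-independence requirement and can be checked numerically in small games. The Python sketch below is illustrative only: it assumes that the joint distribution of (K_t^i, H_t^i, (X_t, H_t^{-i})) under some profile g has already been tabulated (for example by enumerating a small game), and it tests the factorization for that one profile; verifying USI additionally requires that the first factor not vary with g^{-i} and the second not vary with g^i across all strategy profiles.

import itertools

def factorizes(joint, tol=1e-9):
    """joint[k][(h_i, rest)] = P(K_t^i = k, H_t^i = h_i, (X_t, H_t^{-i}) = rest) under one profile.

    Returns True if, for every k with positive probability,
    P(h_i, rest | k) = P(h_i | k) * P(rest | k).
    """
    for k, table in joint.items():
        p_k = sum(table.values())
        if p_k <= tol:
            continue
        h_vals = {h for (h, _) in table}
        rest_vals = {r for (_, r) in table}
        p_h = {h: sum(v for (hh, _), v in table.items() if hh == h) / p_k for h in h_vals}
        p_rest = {r: sum(v for (_, rr), v in table.items() if rr == r) / p_k for r in rest_vals}
        for h, r in itertools.product(h_vals, rest_vals):
            if abs(table.get((h, r), 0.0) / p_k - p_h[h] * p_rest[r]) > tol:
                return False
    return True

# A toy tabulated joint distribution for which the factorization holds.
joint = {
    0: {("a", "x"): 0.12, ("a", "y"): 0.18, ("b", "x"): 0.08, ("b", "y"): 0.12},
    1: {("a", "x"): 0.25, ("b", "x"): 0.25},
}
print(factorizes(joint))                  # True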
§.§ Comparison Using Lemma <ref> it can be shown that if ^i is USI for each i∈ℐ, then =(^i)_i∈ℐ is MSI. The converse is not true. The following example illustrates the difference between MSI and USI. Consider a two stage stateless (i.e. X_t=∅) game of two players: Alice (A) moves first and Bob (B) moves afterwards. There is no initial information (i.e. H_1^A=H_1^B=∅). At time t=1, Alice chooses U_1^A∈{0, 1}. The instantaneous rewards of both players are given by R_1^A = U_1^A, R_1^B = -U_1^A. The new information of both Alice and Bob at time 1 is Z_1^A=Z_1^B = U_1^A, i.e. Alice's action is observed. At time t=2, Bob chooses U_2^B ∈{-1, 1}. The instantaneous rewards of both players are given by R_2^A = U_2^B, R_2^B = 0. Set _t^A=H_t^A and _t^B=∅ for both t∈{1, 2}. It can be shown that is mutually sufficient information. However, ^B is not unilaterally sufficient information: We have ^g(h_2^B|_2^B)=^g(u_1^A) = g_1^A(u_1^A|∅), while the definition of USI requires that ^g(h_2^B|_2^B) = F_t^B, g^B(h_2^B|_2^B) for some function F_t^B, g^B that does not depend on g^A. § INFORMATION-STATE BASED EQUILIBRIUM In this section, we formulate our result on MSI and USI based equilibria for two equilibrium concepts: Bayes–Nash equilibria and sequential equilibria. §.§ Information-State Based Bayes–Nash Equilibrium If is mutually sufficient information, then there exists at least one -based BNE. See Appendix <ref>. The main idea for the proof of Theorem <ref> is the definition of a best-response correspondence through the dynamic program for an underlying single-agent control problem. If =(^i)_i∈ℐ where ^i is unilaterally sufficient information for player i, then the set of -based BNE payoffs is the same as that of all BNE. See Appendix <ref>. The intuition behind Theorem <ref> is that one can think of player i's information that is not included in the unilaterally sufficient information _t^i as a private randomization device for player i: When player i is using a strategy that depends on their information outside of _t^i, it is as if they are using a randomized ^i-based strategy. The main idea for the proof of Theorem <ref> is to show that for every BNE strategy profile g, player i can switch to an “equivalent” randomized ^i-based strategy ρ^i while maintaining the equilibrium and payoffs.[Besides the connection of USI to sufficient statistics, the idea behind the construction of the equivalent ^i-based strategy is also closely related to the idea of the Rao–Blackwell estimator <cit.>, where a new estimator is obtained by taking the conditional expectation of the old estimator given the sufficient statistics.] The theorem then follows from iteratively switching the strategy of each player. Example <ref> can also be used to illustrate that when is an MSI but not an USI, -based BNE exist but -based strategies do not attain all equilibrium payoffs. thmstylethree *example*Example <ref> [Continued] In this example, _t^A = H_t^A, _t^B = ∅ for t=1,2 is MSI. Furthermore, it can be shown that the following strategy profiles are BNE of the game: (E1) Alice plays U_1^A=1 at time 1 and Bob plays U_2^B=1 irrespective of Alice's action at time 1; and (E2) Alice plays U_1^A=0 at time 1; Bob plays U_2^B = 1 if U_1^A=0 and U_2^B=-1 if U_1^A=1. Equilibrium (E1) is a -based equilibrium. However, (E2) cannot be attained by -based strategy profile for the following reason: In any -based equilibrium, Bob plays the same mixed strategy irrespective of Alice's action and his expected payoff at the end of the game is -1. 
At (E2), Bob's expected payoff at the end of the game is 0. Therefore, the payoff at (E2) cannot be attained by any K-based strategy profile. §.§ Information-State Based Sequential Equilibrium If K is mutually sufficient information, then there exists at least one K-based sequential equilibrium. See Appendix <ref>. The proof of Theorem <ref> follows steps similar to those of the proof of Theorem <ref>. The difference is that we explicitly construct a sequence of conjectured history-action value functions ^(n) (as defined in Definition <ref>) using the dynamic program of player i's decision problem. Then we argue that the strategies and the conjectures satisfy Definition <ref>. If K=(K^i)_i∈ℐ where K^i is unilaterally sufficient information for player i, then the set of K-based sequential equilibrium payoffs is the same as that of all sequential equilibria. See Appendix <ref>. The proof of Theorem <ref> mostly follows the same ideas as that of Theorem <ref>: for each sequential equilibrium strategy profile g, we construct an "equivalent" K^i-based strategy ρ^i for player i using a construction similar to the one in the proof of Theorem <ref>. The critical part is to show that ρ^i is still sequentially rational under the concept of sequential equilibrium. § DISCUSSION In this section we first investigate whether USI can preserve the set of equilibrium payoffs achievable under perfect recall when refinements of BNE other than SE, namely, various versions of Perfect Bayesian Equilibrium (PBE), are considered. Then, we identify MSI and USI in specific models that have appeared in the literature. §.§ Other Equilibrium Concepts We first present Example <ref> to show that the result of Theorem <ref> is not true when we replace SE with the concept of weak Perfect Bayesian Equilibrium (wPBE) <cit.>, which is a refinement of BNE that is weaker than SE. Then, we discuss how the result of Proposition <ref>, which is part of Example <ref> and appears below, applies or does not apply to other versions of PBE, namely, those defined in <cit.> and <cit.>. The concept of wPBE is defined as follows: Let (g, μ) be an assessment, where g is a behavioral strategy profile as specified in Section <ref> and μ is a system of functions representing players' beliefs in the extensive-form game representation. Then, (g, μ) is said to be a weak perfect Bayesian equilibrium <cit.> if g is sequentially rational given μ and μ satisfies Bayes rule with respect to g on the equilibrium path. The concept of wPBE does not impose any restriction on beliefs off the equilibrium path. Consider a two-stage game with two players: Bob (B) moves at stage 1; Alice (A) and Bob move simultaneously at stage 2. Let X_1^A, X_1^B be independent uniform random variables on {-1, +1} representing the types of the players. The state satisfies X_1=(X_1^A, X_1^B) and X_2=X_1^B. The sets of actions are 𝒰_1^B = {-1, +1}, 𝒰_2^A = 𝒰_2^B = {-1, 0, +1}. The information structure is given by H_1^A = X_1^A, H_1^B = X_1^B; H_2^A = (X_1^A, U_1^B), H_2^B = (X_1^B, U_1^B), i.e. types are private and actions are observable. The instantaneous payoffs of Alice are given by R_1^A = -1 if U_1^B = -1, and 0 otherwise; R_2^A = 1 if U_2^A = X_2 or U_2^A = 0, and 0 otherwise. The instantaneous payoffs of Bob are given by R_1^B = 0.2 if U_1^B = -1, and 0 otherwise; R_2^B = -1 if U_2^A = U_2^B, and 0 otherwise. Define K_1^A = X_1^A and K_2^A = U_1^B. It can be shown that K^A is unilaterally sufficient information for Alice.[In fact, this example can be seen as an instance of the model described in Example <ref> which we introduce later.] 
Set _t^B = H_t^B, i.e. no compression for Bob's information. Then, ^B is trivially unilaterally sufficient information for Bob. In the game defined in Example <ref>, the set of -based wPBE payoffs is a proper subset of that of all wPBE payoffs. See Appendix <ref>. Note that since any wPBE is first and foremost a BNE, by Theorem <ref>, any general strategy based wPBE payoff profile can be attained by a -based BNE. However, Proposition <ref> implies that there exists a wPBE payoff profile such that none of the -based BNEs attaining this payoff profile are wPBEs. Intuitively, the reason for some wPBE payoff profiles to be unachievable under -based wPBE payoffs in this example can be explained as follows. The state X_1^A in this game can be thought of as a private randomization device of Alice that is payoff irrelevant (i.e. a private coin flip) that should not play a role in the outcome of the game. However, under the concept of wPBE, the presence of X_1^A facilitates Alice to implement off-equilibrium strategies that are otherwise not sequentially rational. This holds due to the following: For a fixed realization of U_1^B, the two realizations of X_1^A give rise to two different information sets. Under the concept of wPBE, if the two information sets are both off equilibrium path, Alice is allowed to form different beliefs and hence justify the use of different mixed actions under different realizations of X_1^A. Therefore, the presence of X_1^A can expand Alice's set of “justifiable” mixed actions off-equilibrium. By restricting Alice to use ^A-based strategies, i.e. choosing her mixed action not depending on X_1^A, Alice loses the ability to use some mixed actions off-equilibrium in a “justifiable” manner, and hence losing her power to sustain certain equilibrium outcomes. This phenomenon, however, does not happen under the concept of sequential equilibrium, since SE (quite reasonably) would require Alice to use the same belief on two information sets if they only differ in the realization of X_1^A. With similar approaches, one can establish the analogue of Proposition <ref> for the perfect Bayesian equilibrium concept defined in <cit.> (which we refer to as “Watson's PBE”). Simply put, this is since Watson's PBE imposes conditions on the belief update for each pair of successive information states in a separated manner. There exist no restrictions across different pairs of successive information states. As a result, for a fixed realization of U_1^B, Alice is allowed to form different beliefs under two realizations of X_1^A just like under wPBE as long as both beliefs are reasonable on their own. In fact, in the proof of Proposition <ref>, the two off-equilibrium belief updates both satisfy Watson's condition of plain consistency <cit.>. Approaches similar to those in the proof of Proposition <ref>, however, do not apply to the PBE concept defined with the independence property of conditional probability systems specified in <cit.> (which we refer to as “Battigalli's PBE”). In fact, Battigalli's PBE is equivalent to sequential equilibrium if the dynamic game consists of only two strategic players <cit.>. We conjecture that in general games with three or more players, if is USI, then the set of all -based Battigalli's PBE payoffs is the same as that of all Battigalli's PBE payoffs. However, establishing this result can be difficult due to the complexity of Battigalli's conditions. 
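Before turning to specific models, the payoff claims made for the two-stage stateless game of Example <ref> can be confirmed by direct enumeration. The Python sketch below is illustrative only; it enumerates pure strategy profiles (which suffices for best-response checks in this finite game, since a best response can always be found among pure strategies), keeps the profiles with no profitable unilateral deviation, and reports which equilibrium payoff profiles survive when Bob is restricted to K^B-based, i.e. constant, strategies.

import itertools

# Example: Alice picks u1 in {0, 1}; Bob observes u1 and picks u2 in {-1, +1}.
# Total payoffs: Alice gets u1 + u2, Bob gets -u1 (Bob's action never affects his own payoff).
def total_payoffs(u1, u2):
    return (u1 + u2, -u1)

alice_strats = [0, 1]                                     # a pure strategy for Alice is an action
bob_strats = list(itertools.product([-1, 1], repeat=2))   # b = (b(u1 = 0), b(u1 = 1))

def outcome(a, b):
    return total_payoffs(a, b[a])

def is_pure_bne(a, b):
    pa, pb = outcome(a, b)
    if any(outcome(a2, b)[0] > pa for a2 in alice_strats):    # profitable deviation for Alice?
        return False
    if any(outcome(a, b2)[1] > pb for b2 in bob_strats):      # profitable deviation for Bob?
        return False
    return True

bne = [(a, b, outcome(a, b)) for a in alice_strats for b in bob_strats if is_pure_bne(a, b)]
constant_bob = [e for e in bne if e[1][0] == e[1][1]]     # Bob ignores u1: a K^B-based strategy
print("pure BNE payoff profiles:          ", sorted({p for _, _, p in bne}))
print("payoffs with Bob restricted to K^B:", sorted({p for _, _, p in constant_bob}))
# Expected: (1, 0), the payoff of (E2), appears only in the first list.

The payoff profile (1, 0) of (E2) appears in the first list but not in the second, in agreement with the discussion above.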
§.§ Information States in Specific Models In this section, we identify MSI and USI in specific game models studied in the literature. Whereas we recover some existing results using our framework, we also develop some new results. Consider stateless dynamic games with observable actions, i.e. X_t=∅, H_1^i=∅, Z_t^i=U_t for all i∈ℐ. One instance of such games is the class of repeated games <cit.>. In this game, H_t^i=U_1:t-1 for all i∈ℐ. Let (ι_t^0)_t∈𝒯 be an arbitrary, common update function and let ^i=^0 be generated from (ι_t^0)_t∈𝒯. Then is mutually sufficient information since Lemma <ref> is trivially satisfied. As a result, Theorem <ref> holds for , i.e. there exist at least one -based BNE. However, in general, is not unilaterally sufficient information. To see that, one can consider the case when player j≠ i is using a strategy that chooses different mixed actions for different realizations of U_1:t-1. In this case ^g^i, g^-i(_t+1^i|h_t^i, u_t^i) would potentially depend on U_1:t-1 as a whole. This means that ^i is not an information state for player i under g^-i, which violates Lemma <ref>. Furthermore for , the result of Theorem <ref> does not necessarily hold, i.e. the set of -based BNE payoffs may not be the same as that of all BNE. Example <ref> can be used to show this. The model of <cit.> is a special case of our dynamic game model where Z_t^i=(X_t+1, U_t), i.e. the (past and current) states and past actions are observable. In this case, =(_t^i)_t∈𝒯, i∈ℐ with _t^i=X_t is mutually sufficient information; note that H_t^i=(X_1:t, U_1:t-1). Consider a ^-i-based strategy profile ρ^-i, i.e. ρ_t^j𝒳_t↦Δ(𝒰_t^j) for t∈𝒯, j∈ℐ\{i}. We have ^g^i, ρ^-i(x̃_t, _t^-i|h_t^i) =^g^i, ρ^-i(x̃_t, _t^-i|x_1:t, u_1:t-1) = 1_{x̃_t = x_t}∏_j≠ i1_{_t^j = x_t} =:Φ_t^i, ρ^-i(x̃_t, _t^-i|x_t). Hence is mutually sufficient information by Lemma <ref>. As a result, there exists at least one -based BNE. Similar to Example <ref>, in general, is not unilaterally sufficient information, and the set of -based BNE payoffs may not be the same as that of all BNE. The argument for both claims can be carried out in an analogous way to Example <ref>. The model of <cit.> is a special case of our dynamic model satisfying the following conditions. * The information of each player i can be separated into the common information H_t^0 and private information L_t^i, i.e. there exists a strategy-independent bijection between H_t^i and (H_t^0, L_t^i) for all i∈ℐ. * The common information H_t^0 can be sequentially updated, i.e. H_t+1^0 = (H_t^0, Z_t^0), where Z_t^0 = ⋂_i∈ℐ Z_t^i is the common part of the new information of all players at time t. * The private information L_t^i can be sequentially updated, i.e. there exist functions (ζ_t^i)_t=0^T-1 such that L_t+1^i = ζ_t^i(L_t^i, Z_t^i). In <cit.>, the authors impose the following assumption. [Strategy independence of beliefs] There exist a function P_t^0 such that ^g(x_t, l_t|h_t^0) = P_t^0(x_t, l_t|h_t^0), for all behavioral strategy profiles g whenever ^g(h_t^0) > 0, where l_t=(l_t^i)_i∈ℐ. In this model, if we set _t^i=(Π_t, L_t^i) where Π_t ∈Δ(𝒳_t×𝒮_t) is a function of H_t^0 defined through Π_t(x_t, l_t) := P_t^0(x_t, l_t|H_t^0), then =(^i)_i∈ℐ is mutually sufficient information. First note that _t^i can be sequentially updated as Π_t can be sequentially updated using Bayes rule. 
Then ^g^i, ρ^-i(x̃_t, l̃_t^-i|h_t^i) = ^g^i, ρ^-i(x̃_t, l̃_t^-i|h_t^0, l_t^i) =^g^i, ρ^-i(x̃_t, l_t^i, l̃_t^-i|h_t^0)^g^i, ρ^-i(l_t^i|h_t^0) =P_t^0(x̃_t, (l_t^i, l̃_t^-i)|h_t^0)∑_x̂_t, l̂_t^-i P_t^0(x̂_t, (l_t^i, l̂_t^-i)|h_t^0) =π_t(x̃_t, (l_t^i, l̃_t^-i)) ∑_x̂_t,l̂_t^-iπ_t(x̂_t, (l_t^i, l̂_t^-i)) =:Φ̃_t^i, ρ^-i(x̃_t, l̃_t^-i|_t^i), for some function Φ̃_t^i, ρ^-i, where π_t is the realization of Π_t corresponding to H_t^0=h_t^0. In steps (<ref>) and (<ref>) we apply Bayes rule on the conditional probabilities given h_t^0, and we use Assumption <ref> to express the belief with the strategy-independent function P_t^0. Note that _t^-i is contained in the vector (_t^i, L_t^-i), hence we conclude that ^g^i, ρ^-i(x̃_t, _t^-i|h_t^i) =:Φ_t^i, ρ^-i(x̃_t, _t^-i|_t^i), for some function Φ_t^i, ρ^-i. By Lemma <ref> we conclude that is mutually sufficient information. Therefore there exists at least one -based BNE. Similar to Examples <ref> and <ref>, in general, is not unilaterally sufficient information, and the set of -based BNE payoffs may not be the same as that of all BNE. The argument for both claims can be carried out in an analogous way to Examples <ref> and <ref>. The following model is a variant of <cit.> and <cit.>. * Each player i is associated with a local state X_t^i, and X_t=(X_t^i)_i∈ℐ. * Each player i is associated with a local noise process W_t^i, and W_t=(W_t^i)_i∈ℐ. * There is no initial information, i.e. H_1^i=∅ for all i∈ℐ. * There is a public noisy observation Y_t^i of the local state. The state transitions, observation processes, and reward generation processes satisfy the following: (X_t+1^i, Y_t^i) = f_t^i(X_t^i, U_t, W_t^i), ∀ i∈ℐ, R_t^i = r_t^i(X_t, U_t), ∀ i∈ℐ. * The information player i has at time t is H_t^i=(Y_1:t-1, U_1:t-1, X_1:t^i) for i∈ℐ, where Y_t=(Y_t^i)_i∈ℐ. * All the primitive random variables, i.e. the random variables in the collection (X_1^i)_i∈ℐ∪ (W_t^i)_i∈ℐ, t∈𝒯, are mutually independent. In the model of Example <ref>, _t^i=(Y_1:t-1, U_1:t-1, X_t^i) is unilaterally sufficient information.[^i-based strategies in this setting are closely related to the “strategies of type s” defined in <cit.>. In <cit.>, the authors showed that strategy profiles of type s can attain all equilibrium payoffs attainable by general strategy profiles. However, the authors did not show that strategy profiles of type s can do so while being an equilibrium.] See Appendix <ref>. Finally, we note that the concept of USI is useful in the context of games among teams as well. We omit the details of the following example due to its complicated nature. In the model of games among teams with delayed intra-team information sharing analyzed in <cit.>, the authors defined the notion of sufficient private information (SPI). It can be shown (through the arguments in <cit.> and <cit.>) that K_t^i = (H_t^0, S_t^i), which consists of the common information H_t^0 and the SPI S_t^i, is unilaterally sufficient information. § AN OPEN PROBLEM Identifying strategy-dependent compression maps that guarantee existence of at least one equilibrium (BNE or SE) or maintain all equilibria that exist under perfect recall is an open problem. A known strategy-dependent compression map is one that compress separately first each agent's private information, (resulting in “sufficient private information”), and then the agents' common information (resulting in “common information based (CIB) beliefs” on the system state and the agents' sufficient private information <cit.>). 
Such a compression does not possess any of the properties of the strategy-independent compression maps that result in MSI or USI. The following example presents a game where belief-based equilibria, i.e. equilibrium strategy profiles based on the above-described compression, do not exist. Consider the following two-stage zero-sum game. The players are Alice (A) and Bob (B). Alice acts at stage t=1 and Bob at stage t=2. The game's initial state X_1 is distributed uniformly at random on {-1, +1}. Let H_t^A, H_t^B denote Alice's and Bob's information at stage t, and U_t^A, U_t^B denote Alice's and Bob's actions at stage t, t=1,2. We assume that H_1^A=X_1, H_1^B=∅, i.e. Alice knows X_1 and Bob does not. At stage t=1, Alice chooses U_1^A∈{-1, 1}, and the state transition is given by X_2 = X_1 · U_1^A. At stage t=2, we assume that H_2^A=(X_1:2, U_1^A) and H_2^B=U_1^A, i.e. Bob observes Alice's action but not the state before or after Alice's action. At time t=2, Bob picks an action U_2^B ∈{U, D}. Alice's instantaneous rewards are given by R_1^A = c if U_1^A=+1 and 0 if U_1^A=-1, and R_2^A = 2 if X_2=+1 and U_2^B=U; 1 if X_2=-1 and U_2^B=D; and 0 otherwise, where c∈ (0, 1/3). The stage reward for Bob is R_t^B = -R_t^A for t=1,2. The above game is a signaling game which can be represented in extensive form as in Figure <ref>. In order to define the concept of belief-based equilibrium for this game, we specify the common information H_t^0, along with Alice's and Bob's private information, denoted by L_t^A, L_t^B, respectively, for t=1,2 as follows: H_1^0 = ∅, L_1^A = X_1, L_1^B = ∅, H_2^0 = U_1^A, L_2^A = X_2, L_2^B = ∅. We prove the following result. In the game of Example <ref>, belief-based equilibria do not exist. See Appendix <ref>. § CONCLUSION In this paper, we investigated sufficient conditions for strategy-independent compression maps to be viable in dynamic games. Motivated by the literature on information states for control problems <cit.>, we provided two notions of information state, both resulting from strategy-independent information compression maps for dynamic games, namely mutually sufficient information (MSI) and unilaterally sufficient information (USI). While MSI guarantees the existence of compression-based equilibria, USI guarantees that compression-based equilibria can attain all equilibrium payoff profiles that are achieved when all agents have perfect recall. We established these results under both the Bayes-Nash equilibrium and the sequential equilibrium concepts. We discussed how USI does not guarantee the preservation of payoff profiles under certain other equilibrium refinements. We considered a strategy-dependent compression map that results in sufficient private information for each agent, along with a CIB belief. We showed, by an example, that this information compression map does not possess any of the properties of the strategy-independent compression maps that result in MSI or USI. The discovery of strategy-dependent information compression maps that lead to results similar to those of Theorems <ref> and <ref> or to those of Theorems <ref> and <ref> is a challenging open problem of paramount importance. Another important open problem is the discovery of information compression maps under which certain subsets of equilibrium payoff profiles are attained when strategies based on the resulting compressed information are used. The results of this paper have been derived for finite-horizon finite games. 
The extension of the results to infinite-horizon games and to games with continuous action and state spaces are other interesting technical problems. §.§ Author Contributions This work is a collaborative intellectual effort of the three authors, with Dengwang Tang being the leader. Due to the interconnected nature of the results, it is impossible to separate the contributions of each author. §.§ Funding This work is supported by National Science Foundation (NSF) Grant No. ECCS 1750041, ECCS 2025732, ECCS 2038416, ECCS 1608361, CCF 2008130, CMMI 2240981, Army Research Office (ARO) Award No. W911NF-17-1-0232, and Michigan Institute for Data Science (MIDAS) Sponsorship Funds by General Dynamics. §.§ Data Availability Not applicable since all results in this paper are theoretical. § DECLARATIONS §.§ Conflict of Interest The authors have no competing interests to declare that are relevant to the content of this article. §.§ Ethical Approval Not applicable since no experiments are involved in this work. § INFORMATION STATE OF SINGLE-AGENT CONTROL PROBLEMS In this section we consider single-agent Markov Decision Processes (MDPs) and develop auxiliary results. This section is a recap of <cit.> with more detailed results and proofs. Let X_t be a controlled Markov Chain controlled by action U_t with initial distribution ν_1∈Δ(𝒳_1) and transition kernels P=(P_t)_t∈𝒯, P_t𝒳_t×𝒰_t↦Δ(𝒳_t+1). Let r=(r_t)_t∈𝒯, r_t𝒳_t×𝒰_t↦ℝ be a collection of instantaneous reward functions. An MDP is denoted by a tuple (ν_1, P, r). For a Markov strategy g=(g_t)_t∈𝒯, g_t𝒳_t↦Δ(𝒰_t), we use ^g, ν_1, P and ^g, ν_1, P to denote the probabilities of events and expectations of random variables under the distribution specified by controlled Markov Chain (ν_1, P) and strategy g. When (ν_1, P) is fixed and clear from the context, we use ^g and ^g respectively. Define the total expected reward in the MDP (ν_1, P, r) under strategy g by J(g; ν_1, P, r) := ^g, ν_1, P[∑_t=1^T r_t(X_t, U_t) ]. Define the value function and state-action quality function by V_τ(x_τ; P, r) := max_g_τ:T^g_τ:T, P[∑_t=τ^T r_t(X_t, U_t)|x_τ], ∀τ∈ [T+1], _τ(x_τ, u_τ; P, r) := r_τ(x_τ, u_τ) + ∑_x̃_τ+1 V_τ+1(x̃_τ+1) P_τ(x̃_τ+1|x_τ, u_τ), ∀τ∈ [T]. Note that V_T+1(·;P,r)≡ 0. <cit.> Let _t=Ψ_t(X_t) for some function Ψ_t. Then, _t is called an information state for (P, r) if there exist functions P_t^_t×𝒰_t↦Δ(_t+1), r_t^_t×𝒰_t↦ℝ such that * P_t(_t+1|x_t, u_t) = P_t^(_t+1|Ψ_t(x_t), u_t); and * r_t(x_t, u_t) = r_t^(Ψ_t(x_t), u_t). If _t is an information state, then _t is also a controlled Markov Chain with initial distribution ν_1^∈Δ(_1) and transition kernel P^=(P_t^)_t∈𝒯, where ν_1^(_1) = ∑_x_11_{_1 = Ψ_1(x_1) }ν_1(x_1). The tuple (ν_1^, P^, r^) defines a new MDP. For a -based strategy ρ=(ρ_t)_t∈𝒯, ρ_t_t↦Δ(𝒰_t), the J, V, functions can be defined as above for the new MDP. We state the following standard result (see, for example, Section 2 of <cit.>). Let _t=Ψ_t(X_t) be an information state for (P, r). Then * V_t(x_t; P, r) = V_t(Ψ_t(x_t); P^, r^) for all x_t; * _t(x_t, u_t; P, r) = _t(Ψ_t(x_t), u_t; P^, r^) for all x_t, u_t. Let g be a Markov strategy, an K-based strategy ρ is said to be associated with g if ρ_t(_t) = ^g, ν_1, P[g_t(X_t)|_t], whenever ^g, ν_1, P(_t) > 0. The following lemma will be used in the proofs in Appendix <ref>. Let (ν_1, P, r) be an MDP. Let _t be an information state for (P, r). Let an -based strategy ρ be associated with a Markov strategy g, then * ^g, ν_1, P(_t)=^ρ, ν_1, P(_t) for all _t∈_t and t∈𝒯; * J(g; ν_1, P, r) = J(ρ; ν_1, P, r). 
In this proof all probabilities and expectations are assumed to be defined with (ν_1, P). Given a Markov strategy g, let ρ be an information state-based strategy that satisfies (<ref>). First, we have ^g(u_t|_t) = ^g[g_t(u_t|X_t)|_t] = ρ_t(u_t|_t) , for all _t such that ^g(_t) > 0. * Proof by induction: Induction Base: We have ^g(_1) = ^ρ(_1) since the distribution of _1=Ψ_1(X_1) is strategy-independent. Induction Step: Suppose that ^g(_t) = ^ρ(_t), for all _t∈_t. We prove the result for time t+1. Combining (<ref>) and (<ref>), and incorporating the information state transition kernel P_t^K defined in Definition <ref>, we have ^g(_t+1) = ∑__t, ũ_t^g(_t+1|_t, ũ_t)^g(ũ_t|_t)^g(_t) =∑__t, ũ_t P_t^(_t+1|_t, ũ_t)ρ_t(u_t|_t)^ρ(_t) =^ρ(_t+1). Therefore we have established the induction step. * Using (<ref>)(<ref>) along with the result of part (1), we obtain ^g[r_t(X_t, U_t)] = ^g[r_t^(_t, U_t)] =∑__t, ũ_t r_t^(_t, ũ_t) ^g(ũ_t|_t) ^g(_t) =∑__t, ũ_t r_t^(_t, ũ_t) ρ_t(ũ_t|_t) ^ρ(_t) =^ρ[r_t(X_t, U_t)], for each t∈𝒯. The result then follows from linearity of expectation. This concludes the proof. § ALTERNATIVE CHARACTERIZATIONS OF SEQUENTIAL EQUILIBRIA This section deals with the game model introduced in Section <ref>. We provide three alternative definitions of sequential equilibria that are equivalent to the original one given by <cit.>. These definitions help simplify some of the proofs in Appendix <ref>. We would like to note that several alternative definitions of sequential equilibria are also given in <cit.>. The definition of weak perfect equilibrium in Proposition 6 of <cit.> is close to our definitions in spirit in terms of using sequences of payoff functions instead of beliefs as a vehicle to define sequential rationality. Notice that fixing the behavioral strategies g^-i of players other than player i, player i's best response problem (at every information set) can be considered as a Markov Decision Process with state H_t^i and action U_t^i, where the transition kernels and instantaneous reward functions depend on g^-i. Inspired by this observation, we introduce an alternative definition of sequential equilibrium for our model, where we form conjectures of transition kernels and reward functions instead of forming beliefs on nodes. This allows us for a more compact representation of the appraisals and beliefs of players. We will later show that this alternative definition is equivalent to the classical definition of sequential equilibrium in <cit.>. For player i∈ℐ, let P^i=(P_t^i)_t∈𝒯\{T}, P_t^iℋ_t^i×𝒰_t^i↦Δ(𝒵_t^i) and r^i=(r_t^i)_t∈𝒯, r_t^iℋ_t^i ×𝒰_t^i ↦ [-1, 1] be collections of functions that represent conjectures of transition kernels and instantaneous reward functions. For a behavioral strategy profile g^i, define the reward-to-go function J_t^i recursively through J_T^i(g_T^i; h_T^i, P^i, r^i) := ∑_ũ_T^i r_T^i(h_T^i, ũ_T^i) g_T^i(ũ_T^i|h_T^i);  J_t^i(g_t:T^i; h_t^i, P^i, r^i) := ∑_ũ_t^i[ r_t^i(h_t^i, ũ_t^i) + ∑_z̃_t^i J_t+1^i(g_t+1:T^i; (h_t^i, z̃_t^i), P^i, r^i) P_t^i(z̃_t^i|h_t^i, ũ_t^i) ] g_t^i(ũ_t^i|h_t^i). defofJ:VT,defofJ:Vt Let g=(g^i)_i∈ℐ be a behavioral strategy profile. Let (P, r)=(P^i, r^i)_i∈ℐ be a conjectured profile. Then, g is said to be sequentially rational under (P, r) if for each i∈ℐ, t∈𝒯 and each h_t^i∈ℋ_t^i, J_t^i(g_t:T^i; h_t^i, P^i, r^i)≥ J_t^i(g̃_t:T^i; h_t^i, P^i, r^i), for all behavioral strategies g̃_t:T^i. 
Conjectured profile (P, r) is said to be fully consistent with g if there exist a sequence of behavioral strategy and conjecture profiles (g^(n), P^(n), r^(n))_n=1^∞ such that * g^(n) is fully mixed, i.e. every action is chosen with positive probability at every information set. * For each i∈ℐ, (P^(n), i, r^(n), i) is consistent with g^(n), -i, i.e. for each i∈ℐ, t∈𝒯, h_t^i∈ℋ_t^i, u_t^i∈𝒰_t^i, P_t^(n), i(z_t^i|h_t^i, u_t^i) =^g^(n), -i(z_t^i|h_t^i, u_t^i), r_t^(n), i(h_t^i, u_t^i) = ^g^(n), -i[R_t^i|h_t^i, u_t^i]. * (g^(n), P^(n), r^(n))→ (g, P, r) as n→∞. A triple (g, P, r) is said to be a “model-based” sequential equilibrium[Here we borrow the terms “model-based” (resp. “model-free”) from the reinforcement learning literature: “Model-based” means that an algorithm constructs the underlying model (P, r), while “model-free” usually means that the algorithm directly constructs state-action value functions .] if g is sequentially rational under (P, r) and (P, r) is fully consistent with g. One can also form conjectures directly on the optimal reward-to-go given a state-action pair (h_t^i, u_t^i). Let g=(g^i)_i∈ℐ be a behavioral strategy profile. Let =(_t^i)_i∈ℐ,t∈𝒯 be a collection of functions where _t^iℋ_t^i ×𝒰_t^i ↦ [-T, T]. The strategy profile g is said to be sequentially rational under if for each i∈ℐ, t∈𝒯 and each h_t^i∈ℋ_t^i, supp(g_t^i(h_t^i))⊆u_t^imax _t^i(h_t^i, u_t^i). The collection of functions is said to be fully consistent with g if there exist a sequence of behavioral strategy and conjectured profiles (g^(n), ^(n))_n=1^∞ such that * g^(n) is fully mixed, i.e. every action is chosen with positive probability at every information set. * ^(n) is consistent with g^(n), i.e., _τ^(n), i(h_τ^i, u_τ^i) =^g^(n)[∑_t=τ^T R_t^i|h_τ^i, u_τ^i], for each i∈ℐ, τ∈𝒯, h_τ^i∈ℋ_τ^i, u_τ^i∈𝒰_τ^i. * (g^(n), ^(n))→ (g, ) as n→∞. A tuple (g, ) is said to be a “model-free” sequential equilibrium if g is sequentially rational under and is fully consistent with g. A slightly different definition is also equivalent: A tuple (g, ) is said to be a “model-free” sequential equilibrium (version 2) if it satisfies Definition <ref> with condition (2) for full consistency replaced by the following condition: (2') For each i, ^(n), i is consistent with g^(n), -i, i.e. _τ^(n), i(h_τ^i, u_τ^i) =^g^(n), -i[R_τ^i|h_τ^i, u_τ^i] + g̃_τ+1:T^imax ^g̃_τ+1:T^i, g^(n), -i[∑_t=τ+1^T R_t^i|h_τ^i, u_τ^i], for each τ∈𝒯, h_τ^i∈ℋ_τ^i, u_τ^i∈𝒰_τ^i. To introduce the last definition of SE, which corresponds to the original definition proposed in <cit.>, we first describe the game in Section <ref> as an extensive-form game tree as follows: To convert the game from a simultaneous move game to a sequential game, we set ℐ={1, 2, ⋯, I}, where the index indicates the order of movement. For convenience, for i∈ℐ, we use the superscript <i (resp. >i) to represent the set of players {1, ⋯, i-1} (resp. {i+1, ⋯, I}) that moves before (resp. after) player i in any given round. At time t=0, nature takes action w_0=(x_1, h_1) and the game enters t=1. For each time t∈𝒯, player 1 takes action u_t^1 first, then followed by player 2 taking action u_t^2, and so on, while nature takes action w_t after player I takes action u_t^I. In this extensive form game, there are three types of nodes: (1) a node where some player i∈ℐ takes action (at some time t∈𝒯), (2) a node where nature takes action (at some time t∈{0}∪𝒯), and (3) a terminal node, where the game has terminated. 
We denote the set of the first type of nodes corresponding to player i and time t as 𝒪_t^i. A node o_t^i∈𝒪_t^i can also be represented as a vector o_t^i = (x_1, h_1, w_1:t-1, u_1:t-1, u_t^<i) which contains all the moves (by all players and nature) before it. As a result, o_t^i also uniquely determines the states x_1:t and information increment vectors z_1:t-1. We denote the set of the terminal nodes as 𝒪_T+1. A terminal node o_T+1∈𝒪_T+1 also has a vector representation o_T+1 = (x_1, h_1, w_1:T, u_1:T). Given a terminal node o_T+1, all the actions of players and nature throughout the game are uniquely determined, hence the realizations of (R_t)_t∈𝒯 defined in Section <ref> are also uniquely determined. Let Λ=(Λ^i)_i∈ℐ, Λ^i𝒪_T+1↦ℝ be the mappings from terminal nodes to total payoffs, i.e. Λ^i(o_T+1) = ∑_t=1^T r_t^i, where r_t^i is the realization of R_t^i corresponding to o_T+1. Also define Λ_τ^i(o_T+1) = ∑_t=τ^T r_t^i for each τ∈𝒯. Now, as we have constructed the extensive-form game, it is helpful to view the nodes in the game tree as a stochastic process. Define O_t^i to be a random variable with support on 𝒪_t^i that represents the node player i is at before taking action at time t. Let O_T+1 be a random variable with support on 𝒪_T+1 that represents the terminal node the game ends at. If we view (𝒯×ℐ) ∪{T+1 } as a set of time indices with lexicographic ordering, the random process (O_t^i)_(t, i)∈𝒯×ℐ∪(O_T+1) is a controlled Markov Chain controlled by action U_t^i at time (t, i). An assessment is a pair (g, μ), where g is a behavioral strategy profile of players (excluding nature) as described in Section <ref>, and μ=(μ_t^i)_t∈ℐ, i∈ℐ, μ_t^iℋ_t^i ↦Δ(𝒪_t^i) is a belief system. Then, g is said to be sequentially rational given μ if ∑_o_t^i^g_t:T^i, g_t^>i, g_t:T^-i [Λ^i(O_T+1)|o_t^i]μ_t^i(o_t^i|h_t^i) ≥∑_o_t^i^g̃_t:T^i, g_t^>i, g_t:T^-i [Λ^i(O_T+1)|o_t^i]μ_t^i(o_t^i|h_t^i), for all i∈ℐ, t∈𝒯, h_t^i∈ℋ_t^i, and all behavioral strategies g̃_t:T^i. The belief system μ is said to be fully consistent with g if there exist a sequence of assessments (g^(n), μ^(n))_n=1^∞→ (g, μ) such that g^(n) is a fully mixed strategy profile and * g^(n) is fully mixed. * μ^(n) is consistent with g^(n), i.e. μ_t^(n), i (o_t^i|h_t^i) = ^g^(n)(o_t^i|h_t^i) for all t∈𝒯, i∈ℐ ,h_t^i∈ℋ_t^i, and o_t^i∈𝒪_t^i. * (g^(n), μ^(n)) → (g, μ) as n→∞. An assessment (g, μ) is said to be a (classical) sequential equilibrium if g is sequentially rational given μ and μ is fully consistent with g. Since the instantaneous rewards R_1:t-1^i have already been realized at time t, replacing the total reward Λ with reward-to-go Λ_t in (<ref>) would result in an equivalent definition. Definitions  <ref>, <ref>, <ref>, and <ref> are equivalent for strategy profiles. We complete the proof via four steps: In each step, we show that if g is a strategy profile satisfying one definition of SE, then it satisfy one of the other definitions of SE as well. We follow the following diagram: Definition <ref> ⇒ Definition <ref> ⇒ Definition <ref> ⇒ Definition <ref> ⇒ Definition <ref>.   Step 1: Classical SE (Definition <ref>) ⇒ “Model-based” SE (Definition <ref>)   Let (g, μ) satisfy Definition <ref>. Let (g^(n), μ^(n)) be a sequence of assessments that satisfies conditions (1)-(3) of fully consistency in Definition <ref>. Set P_t^(n), i(z_t^i|h_t^i, u_t^i) = ^g^(n)(z_t^i|h_t^i, u_t^i) and r_t^(n), i(h_t^i, u_t^i) = ^g^(n)[R_t^i|h_t^i, u_t^i] for all h_t^i∈ℋ_t^i, u_t^i∈𝒰_t^i. 
Recall that we can write O_t^i = (X_1, H_1, W_1:t-1, U_1:t-1, U_t^<i), and (X_1:t, Z_1:t-1) can be expressed as a function of O_t^i. Therefore there exist fixed functions f_t^i, Z, f_t^i, R such that Z_t^i = f_t^i, Z(O_t^i, U_t^i, U_t^>i, W_t), R_t^i = f_t^i, R(O_t^i, U_t^i, U_t^>i, W_t). Furthermore, for all j>i, there also exists functions f_t^j, i, H such that H_t^j = f_t^j, i, H(O_t^i) (since H_t^j=(H_1^i,Z_1:t-1^i)). Since μ_t^(n), i(o_t^i|h_t^i) = ^g^(n)(o_t^i| h_t^i) we have P_t^(n), i(z_t^i|h_t^i, u_t^i) = ∑_o_t^i, ũ_t^>i, w̃_t1_{z_t^i = f_t^i, Z(o_t^i, u_t^i, ũ_t^>i, w̃_t) }(w̃_t) (∏_j=i+1^I g_t^(n), j(ũ_t^j|f_t^j, i, H(o_t^i))) μ_t^(n)(o_t^i|h_t^i), r_t^(n), i(h_t^i, u_t^i) = ∑_o_t^i, ũ_t^>i, w̃_t f_t^i, R(o_t^i, u_t^i,ũ_t^>i, w̃_t) (w̃_t) (∏_j=i+1^I g_t^(n), j(ũ_t^j|f_t^j, i, H(o_t^i))) μ_t^(n)(o_t^i|h_t^i). Therefore, as μ^(n)→μ, g^(n)→ g, we have (P^(n), r^(n) )→ (P, r) for some (P, r). Let τ∈𝒯 and g̃_τ:T^i be an arbitrary strategy. First, observe that one can represent the conditional reward-to-go ^g^(n)[∑_t=τ^T R_t^i|h_τ^i] using μ^(n) or (P^(n), r^(n)). Hence we have ∑_o_τ^i^g̃_τ:T^i, g_τ^(n), >i, g_τ+1:T^(n), -i [Λ_τ^i(O_T+1)|o_τ^i]μ_τ^(n), i(o_τ^i|τ_t^i) = J_t^i(g̃_τ:T^i; h_τ^i, P^(n), i, r^(n), i), where J_t^i is as defined in (<ref>). Observe that the left-hand side of (<ref>) is continuous in (g_τ^(n), >i, g_τ+1:T^(n), -i, μ_τ^(n), i) since it is a sum of products of components of (g_τ^(n), >i, g_τ+1:T^(n), -i, μ_τ^(n), i). Also observe that the right-hand side of (<ref>) is continuous in (P^(n), i, r^(n), i) since it is a sum of products of components of (P^(n), i, r^(n), i) by the definition in (<ref>). Therefore by taking limit as n→∞, we conclude that ∑_o_τ^i^g̃_τ:T^i, g^-i [Λ_τ^i(O_T+1)|o_τ^i]μ_τ^i(o_τ^i|h_τ^i) = J_τ^i(g̃_τ:T^i; h_τ^i, P^i, r^i), for all strategies g̃_τ:T^i. Using sequential rationality of g with respect to μ and (<ref>) we conclude that J_t^i(g_τ:T^i; h_τ^i, P^i, r^i) ≥ J_t^i(g̃_τ:T^i; h_τ^i, P^i, r^i), for all τ∈𝒯, i∈ℐ, h_τ^i ∈ℋ_τ^i, i.e. g is also sequentially rational given (P, r).   Step 2: “Model-based” SE (Definition <ref>) ⇒ “Model-free” SE version 1 (Definition <ref>)   Let (g, P, r) be a sequential equilibrium under Definition <ref>, and let (g^(n), P^(n), r^(n)) satisfy conditions (1)-(3) of full consistency in Definition <ref>. Set _τ^(n), i(h_τ^i, u_τ^i) = ^g^(n)[∑_t=τ^T R_t^i|h_τ^i, u_τ^i ], for all τ∈𝒯, i∈ℐ, h_τ^i ∈ℋ_τ^i, u_τ^i∈𝒰_τ^i. Then ^(n), i satisfies the recurrence relation _T^(n), i(h_T^i, u_T^i) = r_T^(n), i(h_T^i, u_T^i), V_t^(n), i(h_t^i) := ∑_ũ_t^i_t^(n), i(h_t^i, ũ_t^i)g_t^(n), i(ũ_t^i|h_t^i), ∀ t∈𝒯, _t^(n), i(h_t^i, u_t^i) = r_t^(n), i(h_t^i, u_t^i) + ∑_z̃_t^i V_t+1^(n), i((h_t^i, z̃_t^i)) P_t^(n), i(z̃_t^i|h_t^i, u_t^i), ∀ t∈𝒯\{T}. Since (g^(n), P^(n), r^(n)) → (g, P, r) as n→∞, we have ^(n)→ where =(_t^i)_t∈𝒯, i∈ℐ satisfies _T^i(h_T^i, u_T^i) = r_T^i(h_T^i, u_T^i), V_t^i(h_t^i) := ∑_ũ_t^i_t^i(h_t^i, ũ_t^i)g_t^i(ũ_t^i|h_t^i), ∀ t∈𝒯, _t^i(h_t^i, u_t^i) = r_t^i(h_t^i, u_t^i) + ∑_z̃_t^i V_t+1^i((h_t^i, z̃_t^i)) P_t^i(z̃_t^i|h_t^i, u_t^i), ∀ t∈𝒯\{T}. Comparing (<ref>) with the reward-to-go function J_t^i defined in (<ref>), we observe that V_t^i(h_t^i) = J_t^i(g_t:T^i; h_t^i, P^i, r^i), for all t∈𝒯, i∈ℐ, h_τ^i ∈ℋ_τ^i. 
Let g̃_t^i be a strategy such that ĝ_t^i(h_t^i) = η∈Δ(𝒰_t^i), then J_t^i((g̃_t^i, g_t+1:T^i); h_t^i, P^i, r^i) = ∑_ũ_t( r_t^i(h_t^i, ũ_t^i) + ∑_z̃_t^i J_t+1^i(g_t+1:T^i; (h_t^i, z̃_t^i), P^i, r^i) P_t^i(z̃_t^i|h_t^i, ũ_t^i))η(ũ_t^i) = ∑_ũ_t(r_t^i(h_t^i, ũ_t^i) + ∑_z̃_t^i V_t+1^i((h_t^i, z̃_t^i)) P_t^i(z̃_t^i|h_t^i, ũ_t^i))η(ũ_t^i) = ∑_ũ_t_t^i(h_t^i, û_t^i) η(ũ_t^i), where we substitute (<ref>) in (<ref>), (<ref>) in (<ref>), and (<ref>) in (<ref>). By sequential rationality of g with respect to (P, r), we have J_t^i(g_t:T^i; h_t^i, P^i, r^i) ≥ J_t^i((g̃_t^i, g_t+1:T^i); h_t^i, P^i, r^i), which means that ∑_ũ_t_t^i(h_t^i, ũ_t^i) g_t^i(ũ_t^i|h_t^i) ≥∑_ũ_t_t^i(h_t^i, ũ_t^i) η(ũ_t^i), for all η∈Δ(𝒰_t^i) for all t∈𝒯, i∈ℐ, h_τ^i ∈ℋ_τ^i. Hence g is sequentially rational given . Therefore (g, ) is a sequential equilibrium under Definition <ref>.   Step 3: “Model-free” SE version 1 (Definition <ref>) ⇒ “Model-free” SE version 2 (Definition <ref>)   Let (g, ) be a sequential equilibrium under Definition <ref> and let (g^(n), ^(n)) satisfies conditions (1)-(3) of full consistency in Definition <ref>. Then ^(n), i satisfies _T^(n), i(h_T^i, u_T^i) = ^g^(n), -i[R_T^i|h_T^i, u_T^i], V_t^(n), i(h_t^i) := ∑_ũ_t^i_t^(n), i(h_t^i, ũ_t^i)g_t^(n), i(ũ_t^i|h_t^i), ∀ t∈𝒯, _t^(n), i(h_t^i, u_t^i) = ^g^(n), -i[R_t^i|h_t^i, u_t^i] + ∑_z̃_t^i V_t+1^(n), i((h_t^i, z̃_t^i)) ^g^(n), -i(z̃_t^i|h_t^i, u_t^i), ∀ t∈𝒯\{T}, and ^(n)→ as n→∞. Set _τ^(n), i(h_τ^i, u_τ^i) = ^g^(n), -i[R_τ^i|h_τ^i, u_τ^i] + g̃_τ+1:T^imax ^g̃_τ+1:T^i, g^(n), -i[∑_t=τ+1^T R_t^i|h_τ^i, u_τ^i], for each τ∈𝒯, h_τ^i∈ℋ_τ^i, u_τ^i∈𝒰_τ^i. Then ^(n), i satisfies the recurrence relation _T^(n), i(h_T^i, u_T^i) = ^g^(n), -i[R_T^i|h_T^i, u_T^i], V̂_t^(n), i(h_t^i) := max_ũ_t^i_t^(n), i(h_t^i, ũ_t^i), ∀ t∈𝒯, _t^(n), i(h_t^i, u_t^i) = ^g^(n), -i[R_t^i|h_t^i, u_t^i] + ∑_z̃_t^iV̂_t+1^(n), i((h_t^i, z̃_t^i)) ^g^(n), -i(z̃_t^i|h_t^i, u_t^i), ∀ t∈𝒯\{T}. Claim: _t^(n)→_t^i as n→∞. Given the claim, we have (g^(n), ^(n)) satisfying conditions (1)(2')(3) of full consistency in Definition <ref>. Therefore (g, ) is also a sequential equilibrium under Definition <ref>, and we complete this part of the proof. Proof of Claim: By induction on time t∈𝒯. Induction Base: Observe that _T^(n) = _T^(n) by construction. Since _T^(n)→_T we also have _T^(n)→_T. Induction Step: Suppose that the result is true for time t. We prove it for time t-1. By induction hypothesis and g^(n)→ g, we have V̂_t^(n), i(h_t^i) = max_ũ_t^i_t^(n), i(h_t^i, ũ_t^i) max_ũ_t^i_t^i(h_t^i, ũ_t^i). Since ^(n)→ and g^(n)→ g, we have V_t^(n), i(h_t^i) = ∑_ũ_t^i_t^(n), i(h_t^i, ũ_t^i)g_t^(n), i(ũ_t^i|h_t^i) ∑_ũ_t^i_t^i(h_t^i, ũ_t^i)g_t^i(ũ_t^i|h_t^i)=: V_t^i(h_t^i). Since g is sequentially rational given , we have ∑_ũ_t^i_t^i(h_t^i, ũ_t^i)g_t^i(ũ_t^i|h_t^i) = max_ũ_t^i_t^i(h_t^i, ũ_t^i). Combining (<ref>)(<ref>)(<ref>) we have V̂_t^(n), i(h_t^i) → V_t^i(h_t^i) for all h_t^i∈ℋ_t^i. Since ℋ_t^i is a finite set, we have max_h̃_t^i|V̂_t^(n), i(h̃_t^i) - V_t^(n), i(h̃_t^i)| 0. We then have |_t-1^(n), i(h_t^i, u_t^i) -_t-1^(n), i(h_t^i, u_t^i)| = | ∑_z̃_t-1^i[V̂_t^(n), i((h_t-1^i, z̃_t-1^i)) - V_t^(n), i((h_t-1^i, z̃_t-1^i))] ^g_t-1^(n), -i(z̃_t-1^i|h_t-1^i, u_t-1^i) | ≤max_z̃_t-1^i|V̂_t^(n), i((h_t-1^i, z̃_t-1^i)) - V_t^(n), i((h_t-1^i, z̃_t-1^i))| 0, where we substitute (<ref>)(<ref>) in (<ref>). Since _t-1^(n), i(h_t^i, u_t^i) →_t-1^i(h_t^i, u_t^i), we conclude that _t-1^(n), i(h_t^i, u_t^i) →_t-1^i(h_t^i, u_t^i), establishing the induction step.   
Step 4: “Model-free” SE version 2 (Definition <ref>) ⇒ Classical SE (Definition <ref>)   Let (g, ) be a sequential equilibrium under Definition <ref> and let (g^(n), ^(n)) satisfies conditions (1)(2')(3) of full consistency in Definition <ref>. Define the beliefs μ^(n) on the nodes of the extensive-form game through μ^(n)(o_t^i|h_t^i) = ^g^(n)(o_t^i|h_t^i). By taking subsequences, without lost of generality, assume that μ^(n)→μ. Let ĝ_t^i be an arbitrary strategy, then by condition (2') of Definition <ref>, we can write ∑_ũ_t^i_t^(n), i(h_t^i, ũ_t^i) ĝ_t^i(ũ_t^i|h_t^i) = max_g̃_t+1:T^i∑_o_t^i^ĝ_t^i, g̃_t+1:T^i, g_t^(n), >i, g_t+1:T^(n), -i[Λ_t^i(O_T+1)|o_t^i] μ_t^(n), i(o_t^i|h_t^i). For each o_t^i, ^g̃_t^≥ i, g̃_t+1:T[Λ_t^i(O_t+1)|o_t^i] is continuous in (g̃_t^≥ i, g̃_t+1:T) since it is the sum of product of components of (g̃_t^≥ i, g̃_t+1:T). Therefore, ∑_o_t^i^ĝ_t^i, g̃_t+1:T^i, g_t^(n), >i, g_t+1:T^(n), -i[Λ_t^i(O_t+1)|o_t^i] μ_t^(n), i(o_t^i|h_t^i) ∑_o_t^i^ĝ_t^i, g̃_t+1:T^i, g_t^>i, g_t+1:T^ -i[Λ_t^i(O_t+1)|o_t^i] μ_t^i(o_t^i|h_t^i), for each behavioral straetegy g̃_t+1:T^i. Applying Berge's Maximum Theorem <cit.>, and taking the limit on both sides of (<ref>), we obtain ∑_ũ_t^i_t^i(h_t^i, ũ_t^i) ĝ_t^i(ũ_t^i|h_t^i)= max_g̃_t+1:T^i∑_o_t^i^ĝ_t^i, g̃_t+1:T^i, g_t^>i, g_t+1:T^-i[Λ_t^i(O_T+1)|o_t^i] μ_t^i(o_t^i|h_t^i), for all t∈𝒯, i∈ℐ, h_t^i∈ℋ_t^i, and all behavioral strategy ĝ_t^i. Sequential rationality of g to means that g_t^i ∈ĝ_t^imax ∑_ũ_t^i_t^i(h_t^i, ũ_t^i)  ĝ_t^i(ũ_t^i|h_t^i) =ĝ_t^imax max_g̃_t+1:T^i∑_o_t^i^ĝ_t^i, g̃_t+1:T^i, g_t^>i, g_t+1:T^-i[Λ_t^i(O_T+1)|o_t^i] μ_t^i(o_t^i|h_t^i), for all t∈𝒯, i∈ℐ, and all h_t^i∈ℋ_t^i. Recall that the node O_t^i uniquely determines (X_1, W_1:t-1, U_1:t-1). Therefore, the instantaneous rewards R_τ^i for τ≤ t-1 are uniquely determined by O_t^i as well. For τ≤ t-1, let r_τ^i be realizations of R_τ^i under O_t^i=o_t^i. Recall that Λ^i is the total reward function and Λ_t^i is the reward-to-go function starting with (and including) time t. We have ^ĝ_t^i, g̃_t+1:T^i, g_t^>i, g_t+1:T^-i[Λ^i(O_T+1) - Λ_t^i(O_T+1)|o_t^i] = ∑_τ=1^t-1 r_τ^i to be independent of the strategy profile. Therefore we have g_t^i ∈ĝ_t^imax max_g̃_t+1:T^i∑_o_t^i^ĝ_t^i, g̃_t+1:T^i, g_t^>i, g_t+1:T^-i[Λ^i(O_T+1)|o_t^i] μ_t^i(o_t^i|h_t^i). Fixing h_τ^i, the problem of optimizing J_τ^i(g̃_τ:T^i; h_τ^i, μ_τ^i):= ∑_o_τ^i^g̃_τ:T^i, g_τ^>i, g_τ+1:T^-i[Λ^i(O_T+1)|o_τ^i] μ_τ^i(o_τ^i|h_τ^i), over all g̃_τ:T^i is a POMDP problem with * Timestamps T̃ = {τ, τ+1, ⋯, T, T+1}; * State process (O_t^i)_t=τ^T∪ (O_T+1); * Control actions (U_t^i)_t=τ^T; * Initial state distribution μ_τ^i(h_τ^i)∈Δ(𝒪_τ^i); * State transition kernel ^g_t^>i, g_t+1^<i(o_t+1^i|o_t^i, u_t^i) for t<T and ^g_T^>i(o_T+1|o_T^i, u_T^i) for t=T; * Observation history: (H_t^i)_t=τ^T; * Instantaneous rewards are 0. Terminal reward is Λ^i(O_T+1). The belief μ is fully consistent with g by construction. From standard results in game theory, we know that μ_t+1^i(h_t+1^i) can be updated with Bayes rule from μ_t^i(h_t^i) and g whenever applicable. Therefore, (μ_t)_t=τ^T represent the true beliefs of the state given observations in the above POMDP problem. Therefore, through standard control theory <cit.>, (<ref>) is a sufficient condition for g_t:T^i to be optimal for the above POMDP problem, which means that g is sequentially rational given μ. Therefore we conclude that (g, μ) is a sequential equilibrium under Definition <ref>. 
§ PROOFS FOR SECTIONS <REF> AND <REF> §.§ Proof of Lemma <ref> If for all i∈ℐ and all ^-i-based strategy profiles ρ^-i, there exist functions (Φ_t^i, ρ^-i)_t∈𝒯 where Φ_t^i, ρ^-i_t^i ↦Δ(𝒳_t×_t^-i) such that ^g^i, ρ^-i(x_t, _t^-i|h_t^i) = Φ_t^i,ρ^-i(x_t, _t^-i|_t^i), for all behavioral strategies g^i, all t∈𝒯, and all h_t^i admissible under (g^i, ρ^-i), then =(^i)_i∈ℐ is mutually sufficient information. Let g^i be an arbitrary behavioral strategy for player i and ρ^-i be any K^-i-based strategy profile. Let h_t^i be admissible under (g^i, ρ^-i). We have ^g^i, ρ^-i(x̃_t, ũ_t^-i|h_t^i) = ∑_h̃_t^-i^g^i, ρ^-i(ũ_t^-i|x̃_t, h̃_t^-i, h_t^i, u_t^i)^g^i, ρ^-i(x̃_t, h̃_t^-i|h_t^i, u_t^i) =∑_h̃_t^-i(∏_j≠ iρ_t^j(ũ_t^j|_t^j) )^g^i, ρ^-i(x̃_t, h̃_t^-i|h_t^i) =∑__t^-i(∏_j≠ iρ_t^j(ũ_t^j|_t^j) )^g^i, ρ^-i(x̃_t, _t^-i|h_t^i) =∑__t^-i( ∏_j≠ iρ_t^j(ũ_t^j|_t^j) )Φ_t^i, ρ^-i(x̃_t, _t^-i|_t^i), where in (<ref>) we applied the Law of Total Probability. In (<ref>) we combined the realizations of h_t^-i corresponding to the same compressed information k_t^-i. In the final equation, we used the condition of Lemma <ref>. By the definition of the model, Z_t^i = f_t^i,Z(X_t, U_t, W_t) for some fixed function f_t^i,Z independent of the strategy profile. Since the compressed information can be sequentially updated as _t+1^i=ι_t+1^i(_t^i, Z_t^i), this means that we can write _t+1^i = ξ_t^i(_t^i, X_t, U_t, W_t) for some fixed function ξ_t^i. Since W_t is a primitive random variable, we conclude that (_t+1^i| _t^i, x_t, u_t) is independent of any strategy profile. Therefore, ^g^i, ρ^-i(_t+1^i|h_t^i, u_t^i) = ∑_x̃_t, ũ_t^-i(_t+1^i|_t^i, x̃_t, (ũ_t^-i, u_t^i) ) ^g^i, ρ^-i(x̃_t, ũ_t^-i|h_t^i) =∑_x̃_t, ũ_t^-i[ (_t+1^i|_t^i, x̃_t, (ũ_t^-i, u_t^i) ) ∑__t^-i( ∏_j≠ iρ_t^j(ũ_t^j|_t^j) )Φ_t^i, ρ^-i(x̃_t, _t^-i|_t^i) ] =:P_t^i, ρ^-i(_t+1^i|_t^i, u_t^i), for some function P_t^i, g^-i, where in (<ref>) we used the Law of Total Probability, and we substituted (<ref>) in (<ref>). Since R_t^i = f_t^i,R(X_t, U_t, W_t) for some fixed function f_t^i,R and W_t is a primitive random variable, we have [R_t^i|X_t, U_t] to be independent of the strategy profile g. By an argument similar to the one that leads from (<ref>) to (<ref>) we obtain ^g^i, ρ^-i[R_t^i|h_t^i, u_t^i] = ∑_x̃_t, ũ_t^-i[ [R_t^i|x̃_t, (u_t^i, ũ_t^-i)] ∑__t^-i( ∏_j≠ iρ_t^j(ũ_t^j|_t^j) )Φ_t^i, ρ^-i(x̃_t, _t^-i|_t^i) ] =: r_t^i, ρ^-i(_t^i, u_t^i), for some function r_i^i, ρ^-i. With (<ref>) and (<ref>), we have shown that satisfies Definition <ref> and hence is MSI. §.§ Proof of Theorem <ref> If is mutually sufficient information, then there exists at least one -based BNE. The proof will proceed as follows: We first construct a best-response correspondence using stochastic control theory, and then we establish the existence of equilibria by applying Kakutani's fixed-point theorem to this correspondence. For technical reasons, we first consider only behavioral strategies where each action has probability at least ϵ>0 of being played at each information set. We then take ϵ to zero.   Fixing a K^-i-based strategy profile ρ^-i, we first argue that _t^i is a controlled Markov process controlled by player i's action U_t^i. From the definition of an information state (Definition <ref>) we know that ^g̃^i, ρ^-i(_t+1^i|h_t^i, u_t^i) = P_t^i, ρ^-i(_t+1^i|_t^i, u_t^i). Since (_1:t^i, U_1:t^i) is a function of (H_t^i, U_t^i), by the smoothing property of conditional probability we have ^g̃^i, ρ^-i(_t+1^i|_1:t^i, u_1:t^i) = P_t^i, ρ^-i(_t+1^i|_t^i, u_t^i). 
Therefore we have shown that _t^i is a controlled Markov process controlled by player i's action U_t^i. From the definition of information state (Definition <ref>) we know that ^g̃^i, ρ^-i[R_t^i|_t^i, u_t^i ] = r_t^i, ρ^-i(_t^i, u_t^i), for all (_t^i, u_t^i) admissible under (g̃^i, ρ^-i). Therefore, using the Law of Total Expectation we have J^i(g̃^i, ρ^-i) = ^g̃^i, ρ^-i[∑_t=1^T R_t^i] = ^g̃^i, ρ^-i[∑_t=1^T ^g̃^i, ρ^-i[R_t^i|_t^i, U_t^i ] ] =^g̃^i, ρ^-i[∑_t=1^T r_t^i, ρ^-i(_t^i, U_t^i) ]. By standard MDP theory, there exist ^i-based strategies ρ^i that maximize J^i(g̃^i, ρ^-i) over all behavioral strategies g̃^i. Furthermore, optimal ^i-based strategies can be found through dynamic programming. Assume ϵ> 0, let 𝒫^ϵ, i denote the set of ^i-based strategies for player i where each action u_t^i∈𝒰_t^i is chosen with probability at least ϵ at any information set. To endow 𝒫^ϵ, i with a topology, we consider it as a product of sets of distributions, i.e. 𝒫^ϵ, i = ∏_t∈𝒯∏__t^i∈_t^iΔ^ϵ(𝒰_t^i), where Δ^ϵ(𝒰_t^i) = {η∈Δ(𝒰_t^i): η(u_t^i)≥ϵ ∀ u_t^i∈𝒰_t^i }. Define 𝒫^ϵ = ∏_i∈ℐ𝒫^ϵ, i. Denote the set of all ^i-based strategy profiles by 𝒫^0. For the rest of the proof, assume that ϵ is small enough such that Δ^ϵ(𝒰_t^i) is non-empty for all t∈𝒯 and i∈ℐ. For each t∈𝒯, i∈ℐ and _t^i∈_t^i, define the correspondence BR_t^ϵ, i[_t^i]: 𝒫^ϵ, -i↦Δ^ϵ(𝒰_t^i) sequentially through _T^ϵ, i(_T^i, u_T^i;ρ^-i) :=r_T^i, ρ^-i(_T^i, u_T^i), BR_t^ϵ, i[_t^i](ρ^-i) := η∈Δ^ϵ(𝒰_t^i)max ∑_ũ_t^i_t^ϵ, i(_t^i, ũ_t^i;ρ^-i) η(ũ_t^i), V_t^ϵ, i(_t^i;ρ^-i) :=max_η∈Δ^ϵ(𝒰_t^i)∑_ũ_t^i_t^ϵ, i(_t^i, ũ_t^i;ρ^-i) η(ũ_t^i), _t-1^ϵ, i(_t-1^i, u_t-1^i; ρ^-i) :=r_t-1^i, ρ^-i(_t-1^i, u_t-1^i) + + ∑__t^i∈_t^i V_t^ϵ, i(_t^i ; ρ^-i) P_t-1^i, ρ^-i(_t^i|_t-1^i, u_t^i). defofBR:inidefofBR:BRdefofBR:VdefofBR:K Define BR^ϵ: 𝒫^ϵ↦𝒫^ϵ by BR^ϵ(ρ) = ∏_i∈ℐ∏_t∈𝒯∏__t^i∈_t^iBR_t^ϵ, i[_t^i](ρ^-i). Claim: * P_t^i, ρ^-i(_t+1^i|_t^i, u_t^i) is continuous in ρ^-i on 𝒫^ϵ, -i for all t∈𝒯 and all _t+1^i∈_t+1^i, _t^i∈_t^i, u_t^i∈𝒰_t^i. * r_t^i, ρ^-i(_t^i, u_t^i) is continuous in ρ^-i on 𝒫^ϵ, -i for all t∈𝒯 and all _t^i∈_t^i, u_t^i∈𝒰_t^i. Given the claims we prove by induction that _t^ϵ, i(_t^i, u_t^i; ρ^-i) is continuous in ρ^-i on 𝒫^ϵ, -i for each _t^i∈_t^i, u_t^i∈𝒰_t^i. Induction Base: _T^ϵ, i(_T^i, u_T^i; ρ^-i) = r_T^i, ρ^-i(_T^i, u_T^i) is continuous in ρ^-i on 𝒫^ϵ, -i due to part (a) of the claims. Induction Step: Suppose that the induction hypothesis is true for t. Then V_t^ϵ, i(_t^i; ρ^-i) is continuous in ρ^-i on 𝒫^ϵ, -i due to Berge's Maximum Theorem <cit.>. Then, _t-1^ϵ, i(_t-1^i, u_t-1^i; ρ^-i) is continuous in ρ^-i on 𝒫^ϵ, -i due to the claims. Applying Berge's Maximum Theorem <cit.> once again, we conclude that BR_t^ϵ, i[_t^i] is upper hemicontinuous on 𝒫^ϵ, -i. For each ρ^-i∈𝒫^ϵ, -i, BR_t^ϵ, i[_t^i](ρ^-i) is non-empty and convex since it is the solution set of a linear program. As a product of compact-valued upper hemicontinuous correspondences, BR^ϵ is upper hemicontinuous. For each ρ∈𝒫^ϵ, BR^ϵ(ρ) is non-empty and convex. By Kakutani's fixed point theorem, BR^ϵ has a fixed point. The above construction provides an approximate K-based BNE for small ϵ. Next, we show that we can take ϵ to zero to obtain an exact BNE: Let (ϵ_n)_n=1^∞ be a sequence such that ϵ_n > 0, ϵ_n→ 0. Let ρ^(n) be a fixed point of BR^ϵ_n. Then for each i∈ℐ we have ρ^(n), i∈ρ̃^i∈𝒫^ϵ_n, imax J^i(ρ̃^i, ρ^(n), -i). Let ρ^(∞)∈𝒫^0 be the limit of a sub-sequence of (ρ^(n))_n=1^∞. 
Since J^i(ρ) is continuous in ρ on 𝒫^0, and ϵ↦𝒫^ϵ, i is a continuous correspondence with compact, non-empty value, through applying Berge's Maximum Theorem <cit.> one last time, we conclude that for each i∈ℐ ρ^(∞), i∈ρ̃^i∈𝒫^0, imax J^i(ρ̃^i, ρ^(∞), -i), i.e. ρ^(∞), i is optimal among ^i-based strategies in response to ρ^(∞), -i. Recall that we have shown that there exist ^i-based strategies ρ^i that maximizes J^i(g̃^i, ρ^-i) over all behavioral strategies g̃^i. Therefore, we conclude that ρ^(∞) forms a BNE, proving the existence of -based BNE.   Proof of Claim: We establish the continuity of the two functions by showing that they can be expressed with basic functions (i.e. summation, multiplication, division). Let ĝ^i be a behavioral strategy where player i chooses actions uniformly at random at every information set. For ρ^-i∈𝒫^ϵ, -i, we have ^ĝ^i, ρ^-i(_t^i) > 0 for all _t^i∈_t^i since (ĝ^i, ρ^-i) is a strategy profile that always plays strictly mixed actions. Therefore we have P_t^i, ρ^-i(_t+1^i|_t^i, u_t^i) = ^ĝ^i, ρ^-i(_t+1^i| _t^i, u_t^i) = ^ĝ^i, ρ^-i(_t+1^i, _t^i, u_t^i) ^ĝ^i, ρ^-i(_t^i, u_t^i), r_t^i, ρ^-i(_t^i, u_t^i) = ^ĝ^i, ρ^-i[R_t^i| _t^i, u_t^i] =∑_x_t∈𝒳_t, u_t^-i∈𝒰_t [R_t^i| x_t, u_t] ^ĝ^i, ρ^-i(x_t, u_t^-i| _t^i, u_t^i), where [R_t^i| x_t, u_t] is independent of the strategy profile. We know that both ^ĝ^i, ρ^-i(_t+1^i, _t^i, u_t^i) and ^ĝ^i, ρ^-i(_t^i, u_t^i) are sums of products of components of ρ^-i and ĝ^i, hence both are continuous in ρ^-i. Therefore P_t^i, ρ^-i(z_t^i|_t^i, u_t^i) is continuous in ρ^-i on 𝒫^ϵ, -i. The continuity of r_t^i, ρ^-i(_t^i, u_t^i) in ρ^-i on 𝒫^ϵ, -i can be shown with an analogous argument. §.§ Proof of Theorem <ref> If =(^i)_i∈ℐ where ^i is unilaterally sufficient information for player i, then the set of -based BNE payoffs is the same as that of all BNE. To establish Theorem <ref>, we first introduce Definition <ref>, an extension of Definition <ref>, for the convenience of the proof. Then, we establish Lemmas <ref>, <ref>, <ref>. Finally, we conclude the proof of Theorem <ref> from the three lemmas. In the following definition, we provide an extension of the definition of the information state where not only player i's payoff are considered. This definition allows us to characterize compression maps that preserve payoff profiles, as required in the statement of Theorem <ref>. Let g^-i be a behavioral strategy profile of players other than i and 𝒥⊆ℐ be a subset of players. We say that ^i is an information state under g^-i for the payoffs of 𝒥 if there exist functions (P_t^i,g^-i)_t∈𝒯, (r_t^j,g^-i)_j∈𝒥, t∈𝒯, where P_t^i,g^-i: _t^i×𝒰_t^i ↦Δ(_t+1^i) and r_t^j,g^-i: _t^i×𝒰_t^i ↦ [-1, 1], such that * ^g^i, g^-i(_t+1^i|h_t^i, u_t^i) = P_t^i,g^-i(_t+1^i|_t^i, u_t^i) for all t∈𝒯\{T}; and * ^g^i, g^-i[R_t^j|h_t^i, u_t^i] = r_t^j,g^-i(_t^i, u_t^i) for all j∈𝒥 and all t∈𝒯, for all g^i, and all (h_t^i, u_t^i) admissible under (g^i, g^-i). Notice that condition (2) of Definition <ref> means that the information state K^i is sufficient for evaluating other agents' payoffs as well. This property is essential in establishing the preservation of payoff profiles of other agents when player i switches to a compression-based strategy. If ^i is unilaterally sufficient information, then ^i is an information state under g^-i for the payoffs of ℐ under all behavioral strategy profiles g^-i. Let Φ_t^i, g^-i be as in the definition of USI (Definition <ref>), we have ^g(x_t, h_t^-i| h_t^i) = Φ_t^i, g^-i(x_t, h_t^-i|_t^i). 
Applying the Law of Total Probability, ^g(x̃_t, ũ_t^-i|h_t^i) = ∑_h̃_t^-i^g(ũ_t^-i|x̃_t, h̃_t^-i, h_t^i) ^g(x̃_t, h̃_t^-i| h_t^i) = ∑_h̃_t^-i(∏_j≠ ig_t^j(ũ_t^j|h̃_t^j) ) Φ_t^i, g^-i(x̃_t, h̃_t^-i| _t^i) =: P̃_t^i, g^-i(x̃_t, ũ_t^-i|_t^i). We know that _t+1^i = ι_t+1^i(_t^i, Z_t^i) = ξ_t^i(_t^i, X_t, U_t, W_t) for some fixed function ξ_t^i independent of the strategy profile g. Since W_t is a primitive random variable, (_t+1^i | _t^i, x_t, u_t) is independent of the strategy profile g. Therefore, ^g(_t+1^i|h_t^i, u_t^i) = ∑_x̃_t, ũ_t^-i(_t+1^i | _t^i, x̃_t, (ũ_t^-i, u_t^i)) P̃_t^i, g^-i(x̃_t, ũ_t^-i|_t^i) =:P_t^i, g^-i(_t+1^i|_t^i, u_t^i), establishing part (1) of Definition <ref>. Consider any j∈ℐ. Since R_t^j is a strategy-independent function of (X_t, U_t, W_t), [R_t^j|x_t, u_t] is independent of g. Therefore ^g[R_t^j|h_t^i, u_t^i] = ∑_x̃_t, ũ_t^-i[R_t^j|x̃_t, (u_t^i, ũ_t^-i)] P̃_t^i, g^-i(x̃_t, ũ_t^-i|_t^i) =:r_t^j, g^-i(_t^i, u_t^i), establishing part (2) of Definition <ref>. In Lemma <ref>, we show that any behavioral strategy of player i can be replaced by an equivalent randomized USI-based strategy while preserving payoffs of all players. Let ^i be unilaterally sufficient information. Then for every behavioral strategy profile g^i, if the ^i based strategy ρ^i is given by ρ_t^i(u_t^i|_t^i) = ∑_h̃_t^i∈ℋ_t^ig_t^i(u_t^i|h̃_t^i)F_t^i, g^i(h̃_t^i|_t^i), where F_t^i, g^i(h̃_t^i|_t^i) is defined in Definition <ref>, then J^j(g^i, g^-i) = J^j(ρ^i, g^-i), for all j∈ℐ and all behavioral strategy profiles g^-i of players other than i. Let j∈ℐ. Consider an MDP with state H_t^i, action U_t^i and instantaneous reward r̃_t^i, j(h_t^i, u_t^i):=^g^-i[R_t^j|h_t^i, u_t^i]. By Lemma <ref>, ^i is an information state (as defined in Definition <ref>) for this MDP. Hence J^j(g^i, g^-i) = J^j(ρ^i, g^-i) follows from the Policy Equivalence Lemma (Lemma <ref>). In Lemma <ref>, we proceed to show that a behavioral strategy can be replaced with an USI-based strategy while preserving not only the payoffs of all players, but also the equilibrium. If ^i is unilaterally sufficient information for player i, then for any BNE strategy profile g=(g^i)_i∈ℐ there exists a ^i-based strategy ρ^i such that (ρ^i, g^-i) forms a BNE with the same expected payoff profile as g. Let ρ^i be associated with g^i as specified in Lemma <ref>. Set g̅ = (ρ^i, g^-i). Since J^i(ρ^i, g^-i) = J^i(g^i, g^-i) and g^i is a best response to g^-i, ρ^i is also a best response to g^-i. Consider j≠ i. Let g̃^j be an arbitrary behavioral strategy of player j. By using Lemma <ref> twice we have J^j(g̅^j, g̅^-j) = J^j(ρ^i, g^-i) = J^j(g)≥ J^j(g̃^j, g^-j) = J^j(g̃^j, (ρ^i, g^-{i, j})) = J^j(g̃^j, g̅^-j). Therefore g̅^j is a best response to (ρ^i, g^-{i, j}). We conclude that g̅ = (ρ^i, g^-i) is also a BNE. Given any BNE strategy profile g, applying Lemma <ref> iteratively for each i∈ℐ, we obtain a -based BNE strategy profile ρ with the same expected payoff profile as g. Therefore the set of -based BNE payoffs is the same as that of all BNE. §.§ Proof of Theorem <ref> If is mutually sufficient information, then there exists at least one -based sequential equilibrium. The proof of Theorem <ref> follows similar steps to that of Theorem <ref>, where we construct a sequence of strictly mixed strategy profiles via the fixed points of dynamic program based best response mappings. In addition, we show the sequential rationality of the strategy profile constructed. 
Let (ρ^(n))_n=1^∞ be a sequence of -based strategy profiles that always assigns strictly mixed actions as constructed in the proof of Theorem <ref>. By taking a sub-sequence, without loss of generality, assume that ρ^(n)→ρ^(∞) for some -based strategy profile ρ^(∞). Let ^(n) be conjectures of reward-to-go functions consistent (in the sense of Definition <ref>) with ρ^(n), i.e. _τ^(n), i(h_t^i, u_t^i) := ^ρ^(n)[∑_t=τ^T R_t^i|h_τ^i, u_τ^i]. Let ^(∞) be the limit of a sub-sequence of (^(n))_n=1^∞ (such a limit exists since the range of each _τ^(n), i is a compact set). We proceed to show that (ρ^(∞), ^(∞)) forms a sequential equilibrium (as defined in Definition <ref>). Note that by construction, ^(∞) is fully consistent with ρ^(∞). We only need to show sequential rationality. Claim: Let _t^ϵ, i be as defined in (<ref>) in the proof of Theorem <ref>, then _t^(n), i(h_t^i, u_t^i) = _t^ϵ_n, i(_t^i, u_t^i; ρ^(n), -i), for all i∈ℐ, t∈𝒯, h_t^i∈ℋ_t^i, and u_t^i∈𝒰_t^i. By construction in the proof of Theorem <ref>, ρ_t^(n), i(_t^i)∈BR_t^ϵ_n, i[_t^i](ρ^(n), -i). Given the claim, this means that ρ_t^(n), i(_t^i) ∈η∈Δ^ϵ_n(𝒰_t^i) max ∑_ũ_t^i_t^(n), i(h_t^i, ũ_t^i) η(ũ_t^i), for all i∈ℐ, t∈𝒯 and h_t^i∈ℋ_t^i. Applying Berge's Maximum Theorem <cit.> in a similar manner to the proof of Theorem <ref> we obtain ρ_t^(∞), i(_t^i) ∈η∈Δ(𝒰_t^i) max ∑_ũ_t^i_t^(∞), i(h_t^i, ũ_t^i) η(ũ_t^i), for all i∈ℐ, t∈𝒯 and h_t^i∈ℋ_t^i. Therefore, we have shown that ρ^(∞) is sequentially rational under ^(∞) and we have completed the proof. Proof of Claim: For clarity of exposition we drop the superscript (n) of ρ^(n). We know that _t^(n), i satisfies the following equations: _T^(n), i(h_T^i, u_T^i) =^ρ[R_T^i|h_T^i, u_T^i], V_t^(n), i(h_t^i) :=∑_ũ_t^i_t^(n), i(h_t^i, ũ_t^i) ρ^ i_t(ũ_t^i|_t^i), _t-1^(n), i(h_t-1^i, u_t-1^i) := ^ρ[R_t-1^i|h_t-1^i, u_t-1^i] + ∑_h̃_t^i∈ℋ_t^i V_t^(n), i(h̃_t^i) ^ρ(h̃_t^i|h_t-1^i, u_t^i). Since is mutually sufficient information, we have ^ρ(_t+1^i|h_t^i, u_t^i) := P_t^i, ρ^-i(_t+1^i|_t^i, u_t^i), ^ρ[R_t^i|h_t^i, u_t^i] := r_t^i, ρ^-i(_t^i, u_t^i), where P_t^i, ρ^-i and r_t^i, ρ^-i are as specified in Definition <ref>. Therefore, through an inductive argument, one can show then _t^(n), i(h_t^i, u_t^i) depends on h_t^i only through _t^i, and _T^(n), i(_T^i, u_T^i) =r_T^i, ρ^-i(_T^i, u_T^i), V_t^(n), i(_t^i) :=∑_ũ_t^i_t^i(_t^i, ũ_t^i;ρ^-i) ρ^i_t(ũ_t^i|_t^i), _t-1^(n), i(_t-1^i, u_t-1^i) := r_t-1^i,ρ^-i(_t-1^i, u_t-1^i)+ ∑__t^i∈_t^i V_t^(n), i(_t^i) P_t-1^i,ρ^-i(_t^i|_t-1^i, u_t^i). The claim is then established by comparing (<ref>) with (<ref>) and combining with the fact that ρ_t^i(_t^i)∈BR_t^ϵ, i[_t^i] (ρ^-i). §.§ Proof of Theorem <ref> If =(^i)_i∈ℐ where ^i is unilaterally sufficient information for player i, then the set of -based sequential equilibrium payoffs is the same as that of all sequential equilibria. To prove the assertion of Theorem <ref> we establish a series of technical results that appear in Lemmas <ref> - <ref> below. The two key results needed for the proof of the theorem are provided by Lemmas <ref> and <ref>. Lemma <ref> asserts that a player can switch to a USI-based strategy without changing the dynamic decision problems faced by the other players. The result of Lemma <ref> allows to establish the analogue of the payoff equivalence result of Lemma <ref> under the concept of sequential equilibrium. Lemma <ref> asserts that any one player can switch to a USI-based strategy without affecting the sequential equilibrium (under perfect recall) and its payoffs. 
The proof of Lemma <ref> is based on two technical results provided by Lemmas <ref> and <ref>. The proof of Lemma <ref> is based on Lemmas <ref> and <ref> which states that the history-action value function of a player i∈ℐ can be expressed with their USI. Suppose that ^i is unilaterally sufficient information. Then ^g(h_t^i|h_t^j) = ^g(h_t^i|_t^i)^g(_t^i|h_t^j), whenever ^g(_t^i) > 0, ^g(h_t^j) > 0. From the definition of unilaterally sufficient information (Definition <ref>) we have ^g(h̃_t^i, h̃_t^j|_t^i) = F_t^i, g^i(h̃_t^i|_t^i) F_t^i,j, g^-i(h̃_t^j|_t^i), where F_t^i,j, g^-i(h_t^j|_t^i) := ∑_x̃_t, h̃_t^-{i, j}Φ_t^i,g^-i(x̃_t, (h_t^j, h̃_t^-{i, j})|_t^i). Therefore, we conclude that H_t^i and H_t^j are conditionally independent given _t^i. Since _t^i is a function of H_t^i, we have ^g(h_t^i|h_t^j) = ^g(h_t^i, _t^i|h_t^j) =^g(h_t^i|_t^i)^g(_t^i|h_t^j). Suppose that ^i is unilaterally sufficient information for player i∈ℐ. Then there exist functions (Π_t^j,i,g^-{i,j})_j∈ℐ\{i},t∈𝒯, (r_t^i, j,g^-{i, j})_j∈ℐ\{i}, t∈𝒯, where Π_t^i,j,g^-{i, j}: _t^i×ℋ_t^j ×𝒰_t^i ×𝒰_t^j ↦Δ(ℋ_t+1^j), r_t^i,j,g^-{i,j}: _t^i×ℋ_t^j ×𝒰_t^i ×𝒰_t^j ↦ [-1, 1] such that * ^g(h̃_t+1^j|h_t^i, h_t^j, u_t^i, u_t^j) = Π_t^j, i, g^-{i, j}(h̃_t+1^j|_t^i, h_t^j, u_t^i, u_t^j) for all t∈𝒯\{T}; and * ^g[R_t^j|h_t^i, h_t^j, u_t^i, u_t^j] = r_t^i, j, g^-{i, j}(_t^i, h_t^j, u_t^i, u_t^j) for all t∈𝒯, for all j∈ℐ\{i} and all behavioral strategy profiles g whenever the left-hand side expressions are well-defined. Let ĝ^l be some fixed, fully mixed behavioral strategy for player l∈ℐ. Fix j≠ i. First, ^g(x_t, h_t^-{i, j}| h_t^i, h_t^j) = ^ĝ^{i, j}, g^-{i, j} (x_t, h_t^-{i, j}| h_t^i, h_t^j) =Φ_t^i, (ĝ^j, g^-{i, j}) (x_t, h_t^-i|_t^i) ∑_x̃_t,h̃_t^-{i, j}Φ_t^i, (ĝ^j, g^-{i, j})(x̃_t, (h̃_t^-{i, j}, h_t^j)|_t^i) =:Φ_t^i, j, g^-{i, j}(x_t, h_t^-{i, j}| _t^i, h_t^j), for any behavioral strategy profile g, where in (<ref>) we used the fact that since (h_t^i, h_t^j) are included in the conditioning, the conditional probability is independent of the strategies of player i and j <cit.>. In (<ref>) we used Bayes rule and the definition of USI (Definition <ref>). Therefore, using the Law of Total Probability, ^g(x̃_t, ũ_t^-{i,j}|h_t^i, h_t^j) = ∑_h̃_t^-{i, j}^g(ũ_t^-{i,j}|x̃_t, h̃_t^-{i, j}, h_t^i, h_t^j) ^g(x̃_t, h̃_t^-{i, j}| h_t^i, h_t^j) = ∑_h̃_t^-{i, j}(∏_l ∈ℐ\{i,j}g_t^l(ũ_t^l|h̃_t^l) ) Φ_t^i, j, g^-{i, j}(x̃_t, h̃_t^-{i, j}| _t^i, h_t^j) =: P̃_t^i, j, g^-{i, j}(x̃_t, ũ_t^-{i, j}|_t^i, h_t^j), for any behavioral strategy profile g. We know that H_t+1^j = ξ_t^j(X_t, U_t, H_t^j) for some function ξ_t^j independent of the strategy profile g, hence using the Law of Total Probability we have ^g(h̃_t+1^j|h_t^i, h_t^j, u_t^i, u_t^j) = ∑_x̃_t, ũ_t^-{i, j}1_{h̃_t+1^j = ξ_t^i(x̃_t, (u_t^{i, j}, ũ_t^-{i, j}), h_t^j) }P̃_t^i, j, g^-{i, j}(x̃_t, ũ_t^-{i, j}|_t^i, h_t^j) =:Π_t^j, i, g^-{i, j}(h̃_t+1^j|_t^i, h_t^j, u_t^i, u_t^j), establishing part (1) of Lemma <ref>. Since [R_t^j|x_t, u_t] is strategy-independent, for j∈ℐ\{i}, using the Law of Total Expectation we have ^g[R_t^j|h_t^i, h_t^j, u_t^i, u_t^j] = ∑_x̃_t, ũ_t^-i[R_t^j|x̃_t, (u_t^{i, j}, ũ_t^-{i, j})] P̃_t^i, j, g^-{i, j}(x̃_t, ũ_t^-{i, j}|_t^i, h_t^j) =:r_t^i, j, g^-{i, j}(_t^i, h_t^j, u_t^i, u_t^j), establishing part (2) of Lemma <ref>. Suppose that ^i is unilaterally sufficient information. Let g=(g^j)_j∈ℐ be a fully mixed behavioral strategy profile. Let a ^i-based strategy ρ^i be such that ρ_t^i(u_t^i|_t^i) = ∑_h̃_t^ig_t^ i(u_t^i|h̃_t^i)F_t^i, g^i(h̃_t^i|_t^i). 
Then * ^g(h̃_t+1^j|h_t^j, u_t^j) = ^ρ^i, g^-i(h̃_t+1^j|h_t^j, u_t^j) for all t∈𝒯\{T}; and * ^g[R_t^j|h_t^j, u_t^j] = ^ρ^i, g^-i[R_t^j|h_t^j, u_t^j] for all t∈𝒯, for all j∈ℐ\{i} and all h_t^j∈ℋ_t^j, u_t^j∈𝒰_t^j. Fixing g^-i, H_t^i is a controlled Markov Chain controlled by U_t^i and player i faces a Markov Decision Problem. By Lemma <ref>, _t^i is an information state (as defined in <ref>) of this MDP. Therefore, by the Policy Equivalence Lemma (Lemma <ref>) we have ^g^i, g^-i(_t^i) = ^ρ^i, g^-i(_t^i). Furthermore, from the definition of USI we have ^g^i, g^-i(h_t^j|_t^i) = ∑_x̃_t, h̃_t^-{i, j}Φ_t^i,g^-i(x̃_t, (h_t^j, h_t^-{i, j})|_t^i) =: F_t^i,j, g^-i(h_t^j|_t^i). Using Bayes Rule, we then have ^g^i, g^-i(_t^i|h_t^j) = ^g^i, g^-i(h_t^j|_t^i)^g^i, g^-i(_t^i) ∑__t^i^g^i, g^-i(h_t^j|_t^i)^g^i, g^-i(_t^i) =F_t^i, j, g^-i(h_t^j|_t^i) ^g^i, g^-i(_t^i)∑__t^i F_t^i, j, g^-i(h_t^j|_t^i) ^g^i, g^-i(_t^i). Note that (<ref>) applies for all strategies g^i. Replacing g^i with ρ^i we have ^ρ^i, g^-i(_t^i|h_t^j) = F_t^i, j, g^-i(h_t^j|_t^i) ^ρ^i, g^-i(_t^i)∑__t^i F_t^i, j, g^-i(h_t^j|_t^i) ^ρ^i, g^-i(_t^i). Combining (<ref>), (<ref>), and (<ref>) we conclude that ^g^i, g^-i(_t^i|h_t^j) = ^ρ^i, g^-i(_t^i|h_t^j). Using (<ref>), Lemma <ref>, and Lemma <ref> we have ^g(h̃_t+1^j|h_t^j, u_t^j) = ∑_h̃_t^i: ^g(h̃_t^i, h_t^j) > 0∑_ũ_t^i^g(h̃_t+1^j|h̃_t^i, h_t^j, ũ_t^i, u_t^j)^g(ũ_t^i|h̃_t^i, h_t^j, u_t^j)^g(h̃_t^i|h_t^j, u_t^j) = ∑_h̃_t^i, ũ_t^iΠ_t^j, i, g^-{i, j} (h̃_t+1^j|_t^i, h_t^j, ũ_t^i, u_t^j) g_t^i(ũ_t^i|h̃_t^i) ^g(h̃_t^i|h_t^j) =∑_h̃_t^i, ũ_t^iΠ_t^j, i, g^-{i, j} (h̃_t+1^j|_t^i, h_t^j, ũ_t^i, u_t^j) g_t^i(ũ_t^i|h̃_t^i) ^g(h̃_t^i|_t^i)^g(_t^i|h_t^j) =∑__t^i, ũ_t^iΠ_t^j, i, g^-{i, j} (h̃_t+1^j|_t^i, h_t^j, ũ_t^i, u_t^j) (∑_ĥ_t^i g_t^i(ũ_t^i|ĥ_t^i) ^g(ĥ_t^i|_t^i)) ^g(_t^i|h_t^j) =∑__t^i, ũ_t^iΠ_t^j, i, g^-{i, j} (h̃_t+1^j|_t^i, h_t^j, ũ_t^i, u_t^j) ρ_t^i(ũ_t^i|_t^i) ^g(_t^i|h_t^j), where in (<ref>) we utilized Lemma <ref> and the function Π_t^j, i, g^-{i, j} defined in it. In (<ref>) we applied Lemma <ref>. In the last equation we used (<ref>) and the definition of USI. Following a similar argument, we can show that ^ρ^i, g^-i(h̃_t+1^j|h_t^j, u_t^j) =∑__t^i, ũ_t^iΠ_t^j, i, g^-{i, j} (h̃_t+1^j|_t^i, h_t^j, ũ_t^i, u_t^j) ρ_t^i(ũ_t^i|_t^i) ^ρ^i, g^-i(_t^i|h_t^j). Using (<ref>) and comparing (<ref>) with (<ref>), we conclude that ^g(h̃_t+1^j|h_t^j, u_t^j) = ^ρ^i, g^-i(h̃_t+1^j|h_t^j, u_t^j), proving statement (1) of the Lemma. Following an analogous argument, we can show that ^g[R_t^j|h_t^j, u_t^j] =∑__t^i, ũ_t^i r_t^i, j, g^-{i, j} (_t^i, h_t^j, ũ_t^i, u_t^j) ρ_t^i(ũ_t^i|_t^i) ^g(_t^i|h_t^j) ^ρ^i, g^-i[R_t^j|h_t^j, u_t^j] =∑__t^i, ũ_t^i r_t^i, j, g^-{i, j} (_t^i, h_t^j, ũ_t^i, u_t^j) ρ_t^i(ũ_t^i|_t^i) ^ρ^i, g^-i(_t^i|h_t^j), where r_t^i, j, g^-{i, j} is defined in Lemma <ref>. We similarly conclude that ^g[R_t^j|h_t^j, u_t^j] = ^ρ^i, g^-i[R_t^j|h_t^j, u_t^j], proving statement (2) of the Lemma. Suppose that ^i is unilaterally sufficient information for player i. Let g^-i be a fully mixed behavioral strategy profile for players other than i. Define _τ^i through _τ^i(h_τ^i, u_τ^i) = ^g^-i[R_τ^i|h_τ^i, u_τ^i] + g̃_τ+1:T^imax ^g̃_τ+1:T^i, g^-i[∑_t=τ+1^T R_t^i|h_τ^i, u_τ^i]. Then there exist a function _τ^i: _τ^i×𝒰_τ^i↦ [-T, T] such that _τ^i(h_τ^i, u_τ^i) = _τ^i(_τ^i, u_τ^i). By Lemma <ref>, ^i is an information state for the payoff of player i under g^-i. Fixing g^-i, H_t^i is a controlled Markov Chain controlled by U_t^i. Through Definition <ref>, _t^i is an information state of this controlled Markov Chain. 
The Lemma then follows from a direct application of Lemma <ref>. Suppose that ^i is unilaterally sufficient information for player i. Let g be (the strategy part of) a sequential equilibrium. Then there exist a ^i-based strategy ρ^i such that (ρ^i, g^-i) is (the strategy part of) a sequential equilibrium with the same expected payoff profile as g. Recall that in Theorem <ref> we established the equivalence of a variety of definitions of Sequential Equilibrium for strategy profiles. Let (g, ) be a sequential equilibrium under Definition <ref>. Let (g^(n), ^(n)) be a sequence of strategy and conjecture profiles that satisfies conditions (1)(2')(3) of Definition <ref>. Set ρ^(n), i through ρ_t^(n), i(u_t^i|_t^i) = ∑_h̃_t^ig_t^(n), i(u_t^i|h̃_t^i)F_t^i, g^(n), i(h̃_t^i|_t^i), where F_t^i, g^(n), i is defined in Definition <ref>. By replacing the sequence with one of its sub-sequences, without loss of generality, assume that ρ^(n), i→ρ^i for some ρ^i. For the ease of notation, denote g̅^(n) = (ρ^(n), i, g^(n), -i) and g̅ = (ρ^i, g^-i). We have g̅^(n)→g̅. In the rest of the proof, we will show that (g̅, ) is a sequential equilibrium. We only need to show that g̅ is sequentially rational to and (g̅^(n), ^(n)) satisfies conditions (2') of Definition <ref>, as conditions (1)(3) of Definition <ref> are true by construction. Since g̅^-i = g^-i, we automatically have g̅^j to be sequentially rational given ^j for all j∈ℐ\{i}, and ^(n), i to be consistent with g̅^(n), -i for each n. It suffices to establish * ρ^i is sequentially rational with respect to ^i; and * ^(n), j is consistent with g̅^(n), -j for each j∈ℐ\{i}. To establish (i), we will use the Lemma <ref> to show that _t^i(h_t^i, u_t^i) is a function of (_t^i, u_t^i), and hence one can use an _t^i based strategy to optimize _t^i. Proof of (i): By construction, ρ_t^(n), i(_t^i) = ∑_h̃_t^i: _t^i = _t^i g_t^(n), i(h̃_t^i) ·η_t^(n) (h̃_t^i|_t^i), for some distribution η_t^(n)(_t^i) ∈Δ(ℋ_t^i). Let η_t(_t^i) be an accumulation point of the sequence [η_t^(n)(_t^i)]_n=1^∞. We have ρ_t^i(_t^i) = ∑_h̃_t^i: _t^i = _t^i g_t^i(h̃_t^i)·η_t (h̃_t^i|_t^i). As a result, we have supp(ρ_t^i(_t^i)) ⊆⋃_h̃_t^i: _t^i = _t^isupp(g_t^i(h̃_t^i)). By Lemma <ref> we have _t^(n), i(h_t^i, u_t^i) = _t^(n), i(_t^i, u_t^i) for some function _t^(n), i. Since ^(n), i→^i, we have _t^i(h_t^i, u_t^i) = _t^i(_t^i, u_t^i) for some function ^i. By sequential rationality we have supp(g_t^i(h̃_t^i)) ⊆u_t^imax _t^i(_t^i, u_t^i), for all h̃_t^i whose corresponding compression _t^i satisfies _t^i = _t^i. Therefore, by (<ref>) and (<ref>) we conclude that supp(ρ_t^i(_t^i)) ⊆u_t^imax _t^i(_t^i, u_t^i), establishing sequential rationality of ρ^i with respect to ^i.   To establish (ii), we will use the Lemmas <ref> and <ref> to show that when player i switches their strategy from g^(n), i to ρ^(n), i, other players face the same control problem at every information set. As a result, their ^(n), j functions stays the same. Proof of (ii): Consider player j≠ i. Through standard control theory, we know that a collection of functions ^j is consistent (in the sense of condition (2') of Definition <ref>) with a fully mixed strategy profile g̃^-j if and only if it satisfies the following equations: _T^j(h_T^j, u_T^j) = ^g̃^-j[R_T^j|h_T^j, u_T^j], Ṽ_t^j(h_t^j) = max_ũ_t^j_t^j(h_t^j, ũ_t^j), ∀ t∈𝒯, _t^j(h_t^j, u_t^j) = ^g̃^-j[R_t^j|h_t^j, u_t^j] + ∑_h̃_t+1^jṼ_t+1^ j(h̃_t+1^j)^g̃^-j(h̃_t+1^j|h_t^j, u_t^j), ∀ t∈𝒯\{T}. 
By Lemma <ref>, we have ^g^(n), -j(h̃_t+1^j|h_t^j, u_t^j) = ^ρ^(n), i, g^(n), -{i, j}(h̃_t+1^j|h_t^j, u_t^j), ^g^(n), -j[R_t^j|h_t^j, u_t^j] = ^ρ^(n), i, g^(n), -{i, j}[R_t^j|h_t^j, u_t^j], and hence we conclude that ^(n), j is also consistent with g̅^(n), -j = (ρ^(n), i, g^(n), -{i, j}).   Now we have shown that (g̅, ) forms a sequential equilibrium. The second half of the Lemma, which states that g̅ yields the same expected payoff as g, can be shown with the following argument: By Lemma <ref>, g̅^(n) yields the same expected payoff profile as g^(n). Since the expected payoff of each player is a continuous function of the behavioral strategy profile, we conclude that g̅ yields the same expected payoff as g. Finally, we conclude Theorem <ref> from Lemma <ref>. Given any SE strategy profile g, applying Lemma <ref> iteratively for each i∈ℐ, we obtain a -based SE strategy profile ρ with the same expected payoff profile as g. Therefore the set of -based SE payoffs is the same as that of all SE. § PROOFS FOR SECTION <REF> AND SECTION <REF> §.§ Proof of Proposition <ref> In the game defined in Example <ref>, the set of -based wPBE payoffs is a proper subset of that of all wPBE payoffs. Set g_1^B to be the strategy of Bob where he always chooses U_1^B=+1, and g_2^A: 𝒳_1^A ×𝒰_1^B↦Δ(𝒰_2^A) is given by g_2^A(x_1^A, u_1^B) = 0 w.p. 1, if u_1^B=+1; x_1^A w.p. 2/3,  0 w.p. 1/3, otherwise, and g_2^B: 𝒳_1^B×𝒰_1^B ↦Δ(𝒰_2^B) is the strategy of Bob where he always chooses U_2^B=-1 irrespective of U_1^B. The beliefs μ_1^B: 𝒳_1^B↦Δ(𝒳_1^A), μ_2^A: 𝒳_1^A ×𝒰_1^B ↦Δ(𝒳_1^B), and μ_2^B:𝒳_1^B ×𝒰_1^B ↦Δ(𝒳_1^A) are given by μ_1^B(x_1^B) = the prior of X_1^A, μ_2^A(x_1^A, u_1^B) = -1 w.p. 1/2, +1 w.p. 1/2, if u_1^B=+1; x_1^A w.p. 1, otherwise, μ_2^B(x_1^B, u_1^B) = the prior of X_1^A. One can verify that g is sequentially rational given μ, and μ is “preconsistent” <cit.> with g, i.e. the beliefs can be updated with Bayes rule for consecutive information sets on and off-equilibrium paths. In particular, (g, μ) is a wPBE. (It can also be shown that (g, μ) satisfies Watson's PBE definition <cit.>. However, (g, μ) is not a PBE in the sense of Fudenberg and Tirole <cit.>, since μ violates their “no-signaling-what-you-don't-know” condition.) We proceed to show that no -based wPBE can attain the payoff profile of g. Suppose that ρ = (ρ^A, ρ^B) is a -based weak PBE strategy profile. First, observe that at t=2, Alice can only choose her actions based on U_1^B according to the definition of ^A-based strategies. Let α, β∈Δ({-1, 0, 1}) be Alice's mixed action at time t=2 under U_2^A=-1 and U_2^A=+1 respectively under strategy ρ^A. With some abuse of notation, denote ρ^A = (α, β). There exists no belief system under which Alice is indifferent between all of her three actions at time t=2. Therefore, no strictly mixed action at t=2 would be sequentially rational. Therefore, sequential rationally of ρ^A (with respect to some belief) implies that min{α(-1), α(0), α(+1) } = min{β(-1), β(0), β(+1) } = 0. To respond to ρ^A = (α, β), Bob can always maximizes his stage 2 instantaneous reward to 0 by using a suitable response strategy. If Bob plays -1 at t=1, his best total payoff is given by 0.2; if Bob plays +1 at t=1, his best total payoff is given by 0. Hence Bob strictly prefers -1 to +1. Therefore, in any best response (in terms of total expected payoff) to Alice's strategy ρ^A, Bob plays U_1^B = -1 irrespective of his private type. 
Therefore, Alice has an instantaneous payoff of -1 at t=1 and a total payoff ≤ 0 under ρ, proving that the payoff profile of ρ is different from that of g. §.§ Proof of Proposition <ref> In the model of Example <ref>, _t^i=(Y_1:t-1, U_1:t-1, X_t^i) is unilaterally sufficient information. We first prove Lemma <ref>, which establish the conditional independence of the state processes given the common information. In the model of Example <ref>, there exists functions (ξ_t^g^i)_g^i∈𝒢^i, i∈ℐ, ξ_t^g^i: 𝒴_1:t-1×𝒰_1:t-1↦Δ(𝒳_1:t^i) such that ^g(x_1:t|y_1:t-1, u_1:t-1) = ∏_i∈ℐξ_t^g^i(x_1:t^i|y_1:t-1, u_1:t-1), for all strategy profiles g and all (y_1:t-1, u_1:t-1) admissible under g. Denote H_t^0=(_1:t-1, _1:t-1). We prove the result by induction on time t. Induction Base: The result is true for t=1 since H_1^0=∅ and the random variables (X_1^i)_i∈ℐ are assumed to be mutually independent. Induction Step: Suppose that we have proved Lemma <ref> for time t. We then prove the result for time t+1. We have ^g(x_1:t+1, y_t, u_t|h_t^0) = ^g(x_t+1, y_t|x_1:t, u_t, h_t^0)^g(u_t|x_1:t, h_t^0)^g(x_1:t| h_t^0) =∏_i∈ℐ((x_t+1^i, y_t^i|x_t^i, u_t) g_t^i(u_t^i|x_1:t^i, h_t^0) ξ_t^g^i(x_1:t^i| h_t^0) ) =:∏_i∈ℐν_t^g^i(x_1:t+1^i, y_t, u_t, h_t^0) = ∏_i∈ℐν_t^g^i(x_1:t+1^i, h_t+1^0), where the induction hypothesis is utilized in (<ref>). Therefore, using Bayes rule, ^g(x_1:t+1| h_t+1^0) = ^g(x_1:t+1, y_t, u_t| h_t^0)∑_ỹ_t, ũ_t^g(x̃_1:t+1, y_t, u_t| h_t+1^0) =∏_i∈ℐν_t^g^i(x_1:t+1^i, h_t+1^0)∑_x̃_1:t+1∏_i∈ℐν_t^g^i(x̃_1:t+1^i, h_t+1^0) =∏_i∈ℐν_t^g^i(x_1:t+1^i, h_t+1^0)∏_i∈ℐ∑_x̃_1:t+1^iν_t^g^i(x̃_1:t+1^i, h_t+1^0) =:∏_i∈ℐξ_t+1^g^i (x_1:t+1^i| h_t+1^0), where ξ_t+1^g^i (x_1:t+1^i| h_t+1^0) := ν_t^g^i(x_1:t+1^i, h_t+1^0)∑_x̃_1:t+1^iν_t^g^i(x̃_1:t+1^i, h_t+1^0), establishing the induction step. Denote H_t^0=(_1:t-1, _1:t-1). Then _t^i=(H_t^0, X_t^i). Given Lemma <ref>, we have ^g(x_1:t-1^i|_t^i) = ^g(x_1:t^i|h_t^0)^g(x_t^i|h_t^0)=ξ_t^g^i(x_1:t^i|h_t^0) ∑_x̃_1:t-1^iξ_t^g^i((x̃_1:t-1^i, x_t^i)|h_t^0) =:F̃_t^i, g^i(x_1:t-1^i|_t^i). Since H_t^i=(_t^i, X_1:t-1^i), we conclude that ^g(h̃_t^i|_t^i) = F_t^i, g^i(h̃_t^i|_t^i), for some function F_t^i, g^i. Given Lemma <ref>, we have ^g(x̃_1:t^-i|h_t^i) = ^g(x̃_1:t^-i, x_1:t^i|h_t^0)^g(x_1:t^i|h_t^0)=∏_j≠ iξ_t^g^j(x̃_1:t^j|h_t^0). As a result, we have ^g(x̃_1:t^-i, _t^i|h_t^i) = 1_{_t^i = _t^i}∏_j≠ iξ_t^g^j(x_1:t^j|h_t^0) =:Φ̃_t^i,g^-i(x̃_1:t^-i|_t^i). Since (_t, H_t^-i) is a fixed function of (_1:t^-i, _t^i), we conclude that ^g(x̃_t, h̃_t^-i|h_t^i) = Φ_t^i, g^-i(x̃_t, h̃_t^-i|_t^i), for some function Φ_t^i, g^-i. Combining (<ref>) and (<ref>) while using the fact that _t^i is a function of H_t^i, we obtain ^g(x̃_t, h̃_t|_t^i) = F_t^i, g^i(h̃_t^i|_t^i)Φ_t^i, g^-i(x̃_t, h̃_t^-i|_t^i). We conclude that ^i is unilaterally sufficient information. §.§ Proof of Proposition <ref> In the game of Example <ref> belief-based equilibria do not exist. We first characterize all the Bayes-Nash equilibria of Example <ref> in behavioral strategy profiles. Then we will show that none of the BNE corresponds to a belief-based equilibrium. Let α=(α_1, α_2)∈ [0, 1]^2 describe Alice's behavioral strategy: α_1 is the probability that Alice plays U_1^A=-1 given X_1^A=-1; α_2 is the probability that Alice plays U_1^A=+1 given X_1^A=+1. Let β=(β_1, β_2)∈ [0, 1]^2 denote Bob's behavioral strategy: β_1 is the probability that Bob plays U_2^B=U when observing U_1^A=-1, β_2 is the probability that Bob plays U_2^B=U when observing U_1^A=+1. 
Claim: α^*=(1/3, 1/3), β^*=(1/3+c, 1/3-c), is the unique BNE of Example <ref>. Given the claim, one can conclude that a belief based equilibrium does not exist in this game: Bob's true belief b_2 on X_2 at the beginning of stage 2, given his information H_2^B = U_1^A, would satisfy b_2^-(+1) = α_1α_1+1-α_2, if α≠ (0, 1); b_2^+(+1) = α_2α_2+1-α_1, if α≠ (1, 0), where b_2^- represents the belief under U_1^A = -1 and b_2^+ represents the belief under U_1^A = +1. If Alice plays α^*=(1/3, 1/3), then b_2^- = b_2^+. Under a belief-based equilibrium concept (e.g. <cit.>), Bob's stage behavioral strategy β should yield the same action distribution under the same belief, which means that β_1=β_2. However we have β^*=(1/3+c, 1/3-c). Therefore, (α^*, β^*), the unique BNE of the game, is not a belief-based equilibrium. We conclude that a belief-based equilibrium does not exist in Example <ref>. Proof of Claim: Denote Alice's total expected payoff to be J(α, β). Then J(α, β) = 1/2c(1-α_1+α_2) + 1/2α_1 · 2β_1 + 1/2 (1-α_1)(1-β_2) +1/2 (1-α_2)(1-β_1) + 1/2α_2· 2β_2 =1/2c(1-α_1+α_2) + 1/2 (2 - α_1 - α_2) + 1/2(2α_1 + α_2 - 1)β_1 + 1/2(2α_2 + α_1 - 1)β_2. Define J^*(α) = min_β J(α, β). Since the game is zero-sum, Alice plays α at some equilibrium if and only if α maximizes J^*(α). We compute J^*(α) = 1/2c(1-α_1+α_2) + 1/2 (2 - α_1 - α_2) + +1/2min{2α_1 + α_2 - 1, 0 } + 1/2min{α_1 + 2α_2 - 1, 0 }. Since J^*(α) is a continuous piecewise linear function, the set of maximizers can be found by comparing the values at the extreme points of the pieces. We have J^*(0, 0) = 1/2c + 1 - 1/2 - 1/2 = 1/2c; J^*(1/2, 0) = 1/2c ·1/2 + 1/2·3/2 + 1/2· 0 - 1/2·1/2= 1/4c + 1/2; J^*(0, 1/2) = 1/2c·3/2 + 1/2·3/2 - 1/2·1/2 - 1/2· 0= 3/4c + 1/2; J^*(1, 0) = 1/2c · 0+ 1/2· 1 + 1/2· 0 + 1/2· 0 = 1/2; J^*(0, 1) = 1/2c · 2 + 1/2· 1 + 1/2· 0 + 1/2· 0 = c + 1/2; J^*(1/3, 1/3) = 1/2c + 1/2·4/3 + 1/2· 0 + 1/2· 0 = 1/2c + 2/3; J^*(1, 1) = 1/2c + 1/2· 0 + 1/2· 0 + 1/2· 0 = 1/2c. Since c < 1/3, we have (1/3, 1/3) to be the unique maximum among the extreme points. Hence we have max_α J^*(α) = {(1/3, 1/3) }, i.e. Alice always plays α^*=(1/3, 1/3) in any BNE of the game. Now, consider Bob's equilibrium strategy. β^* is an equilibrium strategy of Bob only if α^* ∈max_α J(α, β^*). For each β, J(α, β) is a linear function of α and ∇_α J(α, β) = (-1/2c - 1/2 + β_1 + 1/2β_2, 1/2c - 1/2 + 1/2β_1 + β_2 ), ∀α∈ (0, 1)^2. We need ∇_α J(α, β^*)|_α=α^* = (0, 0). Hence -1/2c - 1/2 + β_1^* + 1/2β_2^* = 0; 1/2c - 1/2 + 1/2β_1^* + β_2^* = 0, which implies that β^*=(1/3+c, 1/3-c), proving the claim.
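The algebra above can also be confirmed numerically. The following sketch is only a sanity check of the closed-form expressions derived in the proof; it fixes an arbitrary value of c with 0 < c < 1/3 (here c = 0.2, an assumption for the illustration) and verifies that J(·, β^*) is constant in α and J(α^*, ·) is constant in β, so that (α^*, β^*) is a saddle point with game value c/2 + 2/3.

def J(alpha, beta, c):
    """Alice's total expected payoff J(alpha, beta) in the closed form above."""
    a1, a2 = alpha
    b1, b2 = beta
    return (0.5 * c * (1 - a1 + a2) + 0.5 * (2 - a1 - a2)
            + 0.5 * (2 * a1 + a2 - 1) * b1 + 0.5 * (2 * a2 + a1 - 1) * b2)

c = 0.2                              # any 0 < c < 1/3 works for the check
alpha_star = (1 / 3, 1 / 3)
beta_star = (1 / 3 + c, 1 / 3 - c)

grid = [i / 10 for i in range(11)]   # coarse grid over [0, 1]
row = [J((a1, a2), beta_star, c) for a1 in grid for a2 in grid]
col = [J(alpha_star, (b1, b2), c) for b1 in grid for b2 in grid]

# J(alpha, beta*) is flat in alpha and J(alpha*, beta) is flat in beta,
# confirming the saddle point; the value of the game is c/2 + 2/3.
assert max(row) - min(row) < 1e-9 and max(col) - min(col) < 1e-9
print(J(alpha_star, beta_star, c))   # 0.7666... = c/2 + 2/3 for c = 0.2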
http://arxiv.org/abs/2407.12525v1
20240717130802
Efficient ensemble uncertainty estimation in Gaussian Processes Regression
[ "Mads-Peter Verner Christiansen", "Nikolaj Rønne", "Bjørk Hammer" ]
physics.comp-ph
[ "physics.comp-ph", "physics.chem-ph" ]
hammer@phys.au.dk Center for Interstellar Catalysis, Department of Physics and Astronomy, Aarhus University, DK‐8000 Aarhus C, Denmark § ABSTRACT Reliable uncertainty measures are required when using data-based machine learning interatomic potentials (MLIPs) for atomistic simulations. In this work, we propose for sparse Gaussian Process Regression (GPR) type MLIPs a stochastic uncertainty measure akin to the query-by-committee approach often used in conjunction with neural network based MLIPs. The uncertainty measure is coined “label noise” ensemble uncertainty, as it emerges from adding noise to the energy labels in the training data. We find that this method of calculating an ensemble uncertainty is as well calibrated as the one obtained from the closed-form expression for the posterior variance when the sparse GPR is treated as a projected process. Comparing the two methods, our proposed ensemble uncertainty is, however, faster to evaluate than the closed-form expression. Finally, we demonstrate that the proposed uncertainty measure better supports a Bayesian search for the optimal structure of Au_20 clusters. Efficient ensemble uncertainty estimation in Gaussian Processes Regression Bjørk Hammer July 22, 2024 ========================================================================== Machine learning interatomic potentials (MLIPs) based on density functional theory (DFT) data are currently replacing full-blown DFT studies in computational materials science and chemistry. The seminal works by Behler and Parrinello <cit.> and Bartok et al. <cit.> led the way to this transition by introducing how MLIPs can be implemented using either a neural network (NN) or a Gaussian Process Regression (GPR) approach. Since those works, the field has seen significant improvements, and nowadays simulations are routinely made for large atomistic systems. Recent examples are MLIP-based molecular dynamics (MD) simulations of amorphous carbon and silicon <cit.>, of dissociative adsorption of hydrogen over Pt surfaces <cit.>, and of vibrational spectroscopy of molecules and nano-clusters <cit.>. MLIPs are also being used in structural search, where they significantly enhance the likelihood of identifying e.g. the shape of metal clusters supported on oxide surfaces <cit.> or where they speed up the search for crystal surface reconstructions <cit.>, along with various other applications of such techniques <cit.>. MLIPs can be trained to a high level of accuracy on databases of structure-energy data points whenever given the right representation and model architecture. Early work on the topic focused on adapting machine learning regression methods for the task of fitting potential energy surfaces, such as designing effective descriptors that encode the invariances of the Hamiltonian <cit.>. Other developments include message-passing or graph neural networks <cit.> and their equivariant counterparts <cit.>. Another area of interest has been the inclusion of long-range effects <cit.>. Clearly, there is substantial interest in both improvements to and applications of MLIPs. With the success of MLIPs, which offer gains in computational efficiency by orders of magnitude, and their application to increasingly complex atomistic systems, the ability to assess the quality of predictions is becoming increasingly important. As such, one recurring concern with MLIPs is their reliability when applied to structures and configurations that overlap little with the training data.
While loss metrics on a test set are useful for comparing model performance, an accurate uncertainty measure allows for assessing the confidence of a model's predictions. As such, approaches for calculating uncertainties and methods of evaluating whether such uncertainty estimates are consistent are active topics of research <cit.>. Additionally, uncertainties can be used to guide data acquisition or to enhance common computational tasks such as structure optimization and exploration. One strategy is to use an uncertainty measure to determine whether to continue an MLIP-based simulation or to first collect new DFT data points for refining the model <cit.>. Likewise, various active learning protocols have been formulated, in which only the most promising candidate based on an MLIP search is selected for investigation at the DFT level, followed by a retraining of the MLIP <cit.>. There are two frequently used uncertainty measures in atomistic simulations. For neural network-based MLIPs, the ensemble method is frequently employed <cit.>, while for GPR models, the closed-form expression for the posterior variance is the natural choice. When employing the ensemble method in conjunction with neural networks, several models of the same architecture are trained on the same data. The models differ since their networks are initialized with independent random weights. When predictions are made with the ensemble method, the mean and variance of the prediction are then deduced from the spread of the individual model predictions. The method is hence also referred to as query-by-committee. For GPR-based MLIP models, the uncertainty can be calculated more directly, as a closed-form expression exists for the posterior variance. This is also the case when using sparse GPR models, where not every local atomic feature present in the training data is used for the prediction. Sparse GPR models are solved as a projected process, where a subset of the atomic features in the training data are used as inducing points. Projected processes also have a closed-form expression for the posterior variance, but it involves large matrices relating to the sparsification and hence eventually becomes time-limiting. When formulating ensembles of neural network models, the obvious means to get different models is to exploit the randomness of the initial network weights. This cannot be carried over to the GPR domain, as the models are deterministic, but ensembles of models could be obtained either * by training individual models on separate subsets of the total training set, * or by varying the hyperparameters or the form of the kernel function for each model. Such approaches have been employed <cit.>. However, both suffer from the obvious drawback that some or all aspects of their use, including sparsification, training, and prediction, would scale linearly with the ensemble size. In the sparsification, each model would identify different inducing points; in the training, each model would require solving an independent set of equations; and in the prediction, each model would require the setup of a different kernel vector. In this work, we present an elegant way of avoiding the most critical parts of this linear increase in computational demand with ensemble size. We do so by establishing each model on replicas of all data with random noise added. In this way, the models can share the sparsification step, the setting up and inversion of the kernel matrix, and the calculation of the kernel vector for a prediction. 
Generally, the formulation of GPR models involves assuming noisy labels, which is evidently beneficial when training on data, such as experimental data, that carry noisy labels. However, whenever DFT energies are the labels, there is no noise in the data, since new DFT calculations for the same structures would reproduce the energies exactly. Regardless, the noisy GPR formulation is typically used to ensure numerical stability. Furthermore, limitations to the model's ability to fit the data, which may arise from representation deficiencies or the limit on model complexity imposed by the kernel function, are handled by this formalism. We extend this by deliberately adding noise to the training labels in a manner that allows defining a computationally efficient ensemble of GP regressors for uncertainty quantification. In practice, we add normally distributed noise to the DFT energies and train each GPR model according to the noisy data. This can be done a number of times, and an uncertainty measure can be obtained from the distribution of predictions by the resulting ensemble of models. The article is outlined as follows: We first introduce the label noise approach and argue how the cost of having several GPR models becomes negligible when used in conjunction with sparse GPR models, where sparsification is the major computational bottleneck, rather than solving for each model. Next, we investigate the quality of the uncertainty measure for the test case of Au clusters. Finally, we demonstrate the usefulness of the label noise uncertainty measure in MLIP-enhanced structural searches for Au_20 clusters. § RESULTS §.§ Ensemble Gaussian Process Regression The central model we propose is an efficient ensemble formulation, as an alternative to the projected-process uncertainty measure that is part of the sparse GPR formalism. This model is based on a sparse GPR model, described in detail in the methods section <ref>, where a local energy prediction results from the expression ϵ(𝐱) = 𝐤_m(𝐱) C 𝐄. Here the matrix C involves the inversion of a matrix of local descriptor covariances. For neural networks an ensemble of models can be constructed simply by having multiple copies of the network with different randomly initialized weights. However, GPR models have no randomly set initial parameters and some other way of setting up different models must be invoked. Choices include bootstrapping the training data, that is, training each model on a different subset of the training set, or selecting different hyperparameters (kernel parameters, inducing points). Note that both options involve calculating both a different C-matrix and a different kernel-vector 𝐤_m(𝐱) for each model, which comes at significant computational cost. Instead, if the differences between the individual models of the ensemble are limited to the 𝐄-term, then C and 𝐤_m(𝐱) will only need to be calculated once. By defining 𝐍 as the number of atoms for each energy observation in 𝐄, our proposed expression for an individual model, k, of the ensemble is given by first adding normally distributed noise to the labels of the training data: Ẽ_k = 𝐄 + 𝐍⊙γ_k - ρ_k 𝐍, where γ_k ∼𝒩(0, σ_l^2) is random noise on each label and ρ_k ∼𝒩(0,σ_p^2) is a shift for model k, ⊙ denotes element-wise multiplication, and 𝒩(μ,σ^2) represents a normally distributed stochastic variable, with mean μ and variance σ^2. With these altered labels the prediction of a local energy from model k can be expressed as: ϵ_k(𝐱) = 𝐤_m(𝐱) CẼ_k + ρ_k. 
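As a concrete illustration of the label construction above, the following minimal NumPy sketch draws the noisy label vectors Ẽ_k for a small ensemble; the array values, ensemble size, and noise scales are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

K = 10          # number of ensemble members (assumed)
sigma_l = 0.05  # label-noise scale sigma_l (assumed, eV/atom)
sigma_p = 1.0   # prior-noise scale sigma_p (assumed, eV/atom)

E = np.array([-10.2, -12.7, -15.1])  # dummy total-energy labels (eV)
N = np.array([10, 12, 14])           # number of atoms in each structure

E_tilde, rho = [], []
for k in range(K):
    gamma_k = rng.normal(0.0, sigma_l, size=E.shape)  # independent noise on each label
    rho_k = rng.normal(0.0, sigma_p)                  # one common shift for model k
    E_tilde.append(E + N * gamma_k - rho_k * N)       # E_k = E + N*gamma_k - rho_k*N
    rho.append(rho_k)
E_tilde = np.array(E_tilde)                           # shape (K, n_structures)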
The two noise terms added are dubbed label noise and prior noise, as they act directly on each label and as a common prior, respectively. The label noise term specified by σ_l is drawn independently for each structure and for each model, while the prior noise term, specified by σ_p, is drawn for each model only. The two noise terms have different functions. The label noise makes the various models in the ensemble associate uncertainty with known data and thereby influence predictions in the neighborhood of training data, while the prior noise makes the various models disagree on unknown data. These noise terms may also be thought of as representing aleatoric and epistemic uncertainties, respectively. With more data, the uncertainty arising from σ_p may be reduced, and as such it is epistemic, whereas the uncertainty produced by σ_l will not be reduced by additional data, since it is aleatoric. See <cit.> for further discussion on the distinction between these two types of uncertainty. The mean prediction of an ensemble with K models is ϵ̅(𝐱) = 1/K∑_k^K ϵ_k(𝐱). As K →∞ this converges to Eq. (<ref>), so the ensemble retains the mean prediction of a GP model without the added noise terms that we use to define an ensemble. So, when calculating the mean prediction of the ensemble we use Eq. (<ref>) rather than Eq. (<ref>). The real objective of the ensemble is to calculate an uncertainty given as the standard deviation of the predictions, i.e.: σ(X) = √(1/K∑_k^K (E_k(X) - E̅(X))^2), where E_k(X) = ∑_i ϵ_k(𝐱_i) and E̅(X) = ∑_i ϵ̅(𝐱_i), with 𝐱_i being the local descriptors of the structure with descriptor X. We illustrate the model in Figure <ref>, from which the two new hyperparameters σ_l and σ_p can be interpreted as the uncertainty at training points and at points sufficiently far from training data that each model just predicts its prior. In Figure <ref> we show three different models: the projected-process sparse GPR, an ensemble GPR utilizing only the prior noise σ_p to introduce an uncertainty, and an ensemble GPR utilizing both the prior noise σ_p and the label noise σ_l. When fitting the potential energy surface, descriptors that obey translational, rotational, and permutational invariance are always used. These descriptors encode invariant properties such as bond lengths and angles, which correlate distinct atomic configurations. This correlation means that a model can and should have low predicted uncertainty about configurations that may naively appear to be far from any training example. We illustrate the effect of this on the three models in Figure <ref>(d-f), by choosing features that introduce a similar transformation of feature space. This means that for most kernel length-scales no point is particularly far from a training example, and as such only a small fraction of σ_p can ever be realized, whereas even a small σ_l can lead to a significant uncertainty. The form of Eq. (<ref>) is advantageous as the training, where the majority of the computational expense is computing the matrix C, only needs to be done once, rather than for each constituent model. Similarly, for making predictions the kernel vector 𝐤_m(𝐱) also only needs to be calculated once, meaning that predictions for every constituent model can be made at barely any extra computational cost. 
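Since C and the kernel vector 𝐤_m(𝐱) are shared by all ensemble members, the per-model coefficient vectors α_k = C Ẽ_k can be precomputed once and the ensemble spread evaluated at essentially the cost of a single model. A minimal sketch with dummy shapes follows; the matrices here are random placeholders, not a trained model.

import numpy as np

rng = np.random.default_rng(1)

K, m, n_struct = 10, 8, 3                    # ensemble size, inducing points, training structures
C = rng.normal(size=(m, n_struct))           # placeholder for the sparse-GPR C matrix
E_tilde = rng.normal(size=(K, n_struct))     # stand-ins for the K noisy label vectors
rho = rng.normal(size=K)                     # per-model prior shifts

alphas = E_tilde @ C.T                       # shape (K, m): alpha_k = C @ E_tilde_k, computed once

def ensemble_total_energy(k_m_locals):
    # k_m_locals: (n_atoms, m) kernel vectors between the local descriptors and the inducing points
    eps = alphas @ k_m_locals.T + rho[:, None]   # (K, n_atoms) local energies for all models at once
    E_models = eps.sum(axis=1)                   # total energy predicted by each ensemble member
    return E_models.mean(), E_models.std()       # the std is the ensemble uncertainty

mean_E, sigma_E = ensemble_total_energy(rng.normal(size=(20, m)))

Note that, as stated above, the paper takes the mean prediction from the noise-free model; the ensemble is only needed for the standard deviation.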
§ CALIBRATION In this section, we compare the quality of the uncertainty prediction made with the standard closed-form posterior uncertainty expression of a projected process (see section <ref> for an introduction) with our proposed ensemble uncertainty method. The methodology of calibration curves and errors is presented in Section <ref>. A dataset of Au_x with x=[10, 12, 14, 16, 18, 20] consisting of 5838 structures has been gathered by selecting structures with unique graphs <cit.> from a set of global structure searches for each cluster size. This dataset is split into a training set, a validation set, and a test set of 225, 25, and 5588 structures, respectively. In Fig. <ref> we show the parity plots for a subset of the test data and calibration plots for both models with optimized hyperparameters. Both models are capable of achieving good fits with the relatively small amount of training data and both produce well-calibrated uncertainties. For this comparison, we train on the training set and use the validation set to calibrate the uncertainties. This calibration amounts to finding hyperparameters. From the outset we choose the kernel form (see kernel definition in Eq. (<ref>)) and fix its length-scale at l = 20. Having fixed the length-scale, the regular sparse GPR has two hyperparameters that can influence the predicted uncertainties, namely the kernel amplitude θ_0 and the noise σ_n. However, if these are chosen independently the model predictions are also affected; to avoid that, we fix the ratio θ_0/σ_n = 100. The calibration error can then be minimized wrt. these two parameters while keeping the ratio fixed. For the ensemble model, we can freely minimize the calibration error wrt. the two additional noise parameters of the ensemble σ_l and σ_p, while fixing θ_0 = 1 and σ_n = 0.01, keeping the same ratio as for the regular sparse GPR. § GLOBAL STRUCTURE SEARCH To probe the utility of this model we employ it in an active learning global structure search algorithm. In each iteration of the algorithm several structural candidates are stochastically generated and subsequently locally optimized in the lower-confidence-bound expression: E_LCB(X) = E(X) - κσ(X), where E(X) is the model's total-energy prediction, σ(X) is the predicted standard deviation, and κ is a hyperparameter balancing the importance of the uncertainty. Among these structural candidates the one with the lowest value of E_LCB(X) is selected for evaluation with DFT and that structure is added to the training set of the model before moving on to the next iteration. The uncertainty therefore plays a large role in efficiently exploring the search space. For the largest of the gold clusters studied in the previous section, 20 atoms, we run many independent searches. For each search we record how many iterations are required in order to find the global minimum structure, a perfect tetrahedron, as a function of the exploration parameter κ. If the uncertainty measure is suited for this task, there must exist a κ≠ 0 that increases the number of successful searches compared to searches with κ = 0. In Figure <ref> we present the success rate, that is, the percentage of searches that find the global minimum (GM) structure, as a function of κ for both the ensemble GPR and the regular GPR using the projected process expression for the uncertainty. For the ensemble there is a clear improvement in the search performance as κ is increased – thus the uncertainty helps the algorithm explore the configuration space more efficiently. 
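Schematically, the candidate-selection step of the search can be written as below; the energy/uncertainty model and the candidate generation are toy placeholders, and in the actual algorithm each candidate is first locally relaxed in E_LCB and the selected structure is evaluated with DFT and added to the training set.

import numpy as np

rng = np.random.default_rng(2)
kappa = 2.0                                            # exploration parameter kappa (assumed value)

def model_energy_and_sigma(x):
    # stand-in for the MLIP ensemble prediction of total energy and uncertainty
    return float(np.sum(x ** 2)), float(abs(np.sin(x).sum()))

def lcb(x):
    E, sigma = model_energy_and_sigma(x)
    return E - kappa * sigma                           # E_LCB(X) = E(X) - kappa * sigma(X)

candidates = [rng.normal(size=5) for _ in range(16)]   # stand-ins for stochastically generated structures
selected = min(candidates, key=lcb)                    # candidate sent to DFT in this iteration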
§ TIMINGS The allure of machine learning models comes in large part from them being orders of magnitude faster than traditional quantum mechanical methods, such as DFT. Even so, for a task such as the global structure search described in the previous section, performing local optimizations in the model constitutes a significant fraction of the total time. When making a prediction with either type of model the vector of coefficients α = C 𝐄 can be precomputed for the current training set. For the ensemble model this means that the α-vector for each model in the ensemble can be stored and the time spent calculating the uncertainty from Eq. (<ref>) mainly involves calculating the kernel vectors 𝐤_m(𝐱). Figure <ref> shows timings for training and prediction with both discussed types of models on configurations of Au_20 clusters. Here training covers computing C𝐄 in Eq. (<ref>) and the required matrices for calculating uncertainties, such that predictions can be made as fast as possible. As a function of the number of training configurations with a fixed number of inducing points the sparsification procedure is the dominating part of the timings, with only a relatively small additional time for the projected-process model as it involves inverting additional matrices. With a fixed set of training data but a varying number of inducing points a more pronounced difference between the two methods can be observed, which again can be attributed to the projected-process uncertainty expression needing the construction and inversion of additional matrices that grow with the number of inducing points. Finally, for predictions on 50 configurations, the ensemble timings are dominated by pre-computing the features and kernel elements between the query configuration and the set of inducing points. For larger sets of inducing points, in contrast, the calculation of uncertainties, and especially of their derivatives, in the projected process starts to take up a significant proportion of the time. § DISCUSSION We have investigated the efficacy of two expressions for predicting uncertainties of GPR models in the realm of atomistic simulations. It has been shown that both models can produce well-calibrated uncertainties. However, in a common materials science simulation task, namely global structure search for atomic configurations, we have found that the proposed ensemble GPR uncertainty expression is advantageous. Further, we document that the ensemble method has superior behavior in terms of computational cost – both when it comes to training and prediction. § METHODS §.§ Sparse Gaussian Process Regression We have previously reported on a sparse local GPR <cit.>, where the covariance between local environment descriptors is given by k(𝐱_i, 𝐱_j) = θ_0 exp(-|𝐱_i-𝐱_j|^2/2l^2). Here, θ_0 and l are the amplitude and length-scale hyperparameters of this radial basis function kernel. With a training dataset X_n and a set of inducing points X_m, we can define covariance matrices K_mm and K_nm as the covariances between all descriptors in X_m and X_m, and between X_n and X_m, respectively. From that, we can define C = [K_mm + (LK_nm)^T Σ_nn^-1(LK_nm)]^-1(LK_nm)^TΣ_nn^-1, where Σ_nn is a diagonal matrix with the noise of each environment, and a local energy may be predicted using ϵ(𝐱) = 𝐤_m(𝐱)C 𝐄. Here 𝐄 are the observed total energies and L is a local energy correspondence matrix. 
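For reference, the sparse-GPR solve and prediction described above can be sketched in a few lines of NumPy. The shapes and the jitter are assumptions, and for brevity the noise is placed on the total-energy observations rather than on each local environment as in the text.

import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, theta0=1.0, l=20.0):
    # k(x_i, x_j) = theta0 * exp(-|x_i - x_j|^2 / (2 l^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return theta0 * np.exp(-d2 / (2.0 * l ** 2))

n_struct, atoms_per, f = 3, 4, 5                      # dummy sizes
Xn = rng.normal(size=(n_struct * atoms_per, f))       # all local environments in the training set
Xm = Xn[::3]                                          # inducing points (toy choice; CUR in practice)
E = rng.normal(size=n_struct)                         # total-energy labels

L = np.zeros((n_struct, n_struct * atoms_per))        # L sums local energies into total energies
for s in range(n_struct):
    L[s, s * atoms_per:(s + 1) * atoms_per] = 1.0

K_mm = rbf(Xm, Xm)
K_nm = rbf(Xn, Xm)
B = L @ K_nm                                          # covariance between observations and inducing points
Sigma_inv = np.eye(n_struct) / 0.01 ** 2              # diagonal noise on the observations (simplification)

M = K_mm + B.T @ Sigma_inv @ B + 1e-8 * np.eye(len(Xm))   # small jitter added for numerical stability
C = np.linalg.solve(M, B.T @ Sigma_inv)               # C as in the projected-process expression

def local_energy(x):
    k_m = rbf(x[None, :], Xm)                         # kernel vector k_m(x), shape (1, m)
    return (k_m @ C @ E).item()                       # epsilon(x) = k_m(x) C E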
Compared to a non-sparse GPR model, which is recovered in the limit where X_m = X_n, the sparse GPR comes at reduced computational cost as the matrix that is inverted is only of size m × m and the vector of kernel elements between a query-point and X_m does not grow with the training set size. From an intuitive point of view this sparsification also necessitates a change to the predicted variance. In the projected process form the covariance is given by <cit.>: ς(𝐱_i, 𝐱_j)^2 = k(𝐱_i, 𝐱_j) - 𝐤^T_m(𝐱_i)K_mm^-1𝐤_m(𝐱_j) +𝐤_m(𝐱_i)^T (K_mm + K_mnΣ^-1_nnK_nm)^-1𝐤_m(𝐱_j). Here ς(𝐱_i, 𝐱_j)^2 is the predicted covariance of two local energies. As Σ_nn involves every local environment in the training set, it can become quite large, which can lead to memory issues unless a sparse matrix implementation is used. Further, the matrix product involving Σ_nn is also much faster with a sparse matrix implementation. For the acquisition function considered in this work we are interested in the standard deviation of the total energy σ(X) = √(∑_𝐱_i ∈ X∑_𝐱_j ∈ Xς(𝐱_i, 𝐱_j)^2) that takes into account the covariances of the local energies. §.§ CUR Sparsification In order to choose the local environments in the set of inducing points X_m we employ the CUR algorithm <cit.>. Here the full feature matrix X_n, which has dimensions (n, f), where n is the number of local environments and f is the number of features, is decomposed by singular value decomposition into three matrices X_n = UΣ V, where Σ is a diagonal matrix with entries in descending order. A probability of selection is then calculated for each local environment, indexed by i, as P_i ∝∑_j<kU_ij^2, where k = min(n, f), which for all but the smallest datasets is equal to f. A predetermined number m of environments is finally picked, based on the calculated probabilities, in order to establish the set of inducing points X_m. §.§ Calibration In the context of uncertainty quantification, calibration refers to ensuring that the true labels fall within a certain confidence interval given by the predicted standard deviation. To this end we use calibration plots, as introduced by <cit.>, to assess the quality of the uncertainty estimate of a given model. When recalibrating models we minimize their calibration error, also defined by <cit.>. These metrics have been employed by other authors in the area of MLIPs <cit.>. § CODE AVAILABILITY The code used for the findings presented in this paper is publicly available as part of AGOX as of version 2.7.0 at https://gitlab.com/agox/agox under a GNU GPLv3 license. § ACKNOWLEDGEMENTS We acknowledge support from VILLUM FONDEN through an Investigator grant, project no. 16562, and by the Danish National Research Foundation through the Center of Excellence “InterCat” (Grant agreement no: DNRF150).
http://arxiv.org/abs/2407.11934v1
20240716172544
Code Documentation and Analysis to Secure Software Development
[ "Paul Attie", "Anas Obeidat", "Nathaniel Oh", "Ian Yelle" ]
cs.SE
[ "cs.SE", "D.2.2; D.2.3; D.2.5; D.2.6" ]
Code Documentation and Analysis to Secure Software Development Paul Attie, Anas Obeidat, Nathaniel Oh, Ian Yelle School of Computer and Cyber Sciences Augusta University Augusta, Georgia 30912 July 22, 2024 ========================================================================================================================================== § ABSTRACT We present the Code Documentation and Analysis Tool (CoDAT). CoDAT is a tool designed to maintain consistency between the various levels of code documentation, e.g., if a line in a code sketch is changed, the comment that documents the corresponding code is also changed. That is, comments are linked and updated so as to remain internally consistent and also consistent with the code. By flagging "out of date" comments, CoDAT alerts the developer to maintain up-to-date documentation. We use a large language model to check the semantic consistency between a fragment of code and the comments that describe it. Thus we also flag semantic inconsistency as well as out of date comments. This helps programmers write code that correctly implements a code sketch, and so provides machine support for a step-wise refinement approach, starting with a code sketch and proceeding down to code through one or more refinement iterations. CoDAT is implemented in the IntelliJ IDEA IDE where we use the Code Insight daemon package alongside a custom regular expression algorithm to mark tagged comments whose corresponding code blocks have changed. CoDAT's backend is structurally decentralized to allow a distributed ledger framework for code consistency and architectural compilation tracking. § INTRODUCTION Documenting source code both inline and through external means provides essential insight and continuity for any developer regardless of time spent on the code base itself. While good comments don't fix bad code, they are essential to enhance both the readability and interpretability of an application <cit.>. It is important to not only comment code, but to comment code well. The goal of commented code should be to allow a user to implement a program or feature successfully through simply reading the inline comments or external documentation <cit.>. Thus, it is important to comment and document a program's development in real time rather than attempting to retroactively narrate the code's purpose and functionality. Oftentimes, good code documentation consists of the following components<cit.>: * Plainly describes the code without needing to include the code itself. * Does not increase the complexity of the program, but rather simplifies it. * Includes any future changes or bug fixes. * Provides references to further reading or additional documentation when needed. While external documentation serves as an important central resource for developers and users alike, inline code commenting is essential for development continuity and debugging<cit.>. Thus, there remains a need to enforce documentation linking for both developers and reverse engineers. In this paper, we will cover CoDAT from both a developer's and a reverse engineer's point of view. §.§ Code Reviews Code reviews are an important milestone in the life cycle of any major software project <cit.>. Traditional code reviews take on a more rigid and formal approach, focusing on depth and coverage rather than speed and scalability <cit.>. Historically, code reviews have been effective in finding software bugs <cit.>, <cit.>. 
However, as code size has increased, the effectiveness of code reviews has correspondingly decreased. Czerwonka et al. show that the traditional approach of formally reviewing code and providing feedback is negatively correlated with the size of the code review <cit.>. Therefore, the implementation of small-scale just-in-time code reviews has shown a greater probability of success than their traditional large-scale counterparts <cit.>. While there are many variables that may impact the quality of a code review, the biggest factors are the thoroughness of the feedback, the reviewer's familiarity with the code, and the quality of the code itself <cit.>. This is important to keep in mind, as it parallels what makes a good inline code comment <cit.>. As we explore the crossroads between change management and comment tracking, it's important to always keep in mind that the reviewer defines the quality and the review is just a means of enforcing it. Another important aspect to discuss about code reviews is the influence code coverage has on the code review's quality <cit.>. Emphasizing code coverage is an important practice that can be easily maintained through small but frequent just-in-time code reviews. With this new focus on the small-scale approach to code reviews, up-to-date and accurate inline code comments are now more important than ever. For example, code review comments and updates may be incorporated into inline comments to describe a particular change or implementation<cit.>. Yet, this effort may fall short if users are not routinely made aware of inline comments that need to be updated. Manual review of code by professional programmers tends to perform better than automated or procedural approaches<cit.>. However, programs continually grow, their code bases becoming increasingly complex and large. As such, manual code reviews are becoming harder to scale and thus more nuanced and novel approaches are needed. §.§ Code Quality A program's quality may vary greatly depending on a programmer's technical background and ability to perform<cit.>. Code quality can be measured by the number of bugs found within a program<cit.>. The greater the number of bugs present, the less the program will perform as intended and the more likely it will be susceptible to external attacks. Thus, it is important to enforce and quantify a program's quality to prevent these attacks and reduce the number of bugs present. One of the metrics historically used to track code quality is the quality of its comments<cit.>. While not measured as a direct one-to-one correlation, existing research assesses code quality through regression testing<cit.> and code smells<cit.>. While approaches may vary across domains and implementations, the program attributes themselves need to adhere to adequate specifications in order to be considered of high quality<cit.>. §.§ Integrated Development Environments, Plugins, and Change Tracking An Integrated Development Environment, or IDE<cit.>, is traditionally defined as a software application that assists developers through various code editing, compiling, and checking techniques. However, oftentimes a particular IDE may not have specific functionality a developer wants when it comes to a niche application or problem. This is often solved through the implementation of third-party helper applications called plugins<cit.>. In our case, we created a plugin for the Java and Kotlin IDE, IntelliJ IDEA<cit.>. 
Our plugin integrates a hyperlink tree structure with state-aware change tracking functionality. Traditionally, change tracking is handled by programs such as Subversion<cit.> or Git<cit.>. Our approach builds on incremental change tracking and alerts the user in real time when a particular change may invalidate the affiliated code sketch or comment. § CODAT: OUR APPROACH TO CODE DOCUMENTATION We advocate code documentation at many levels of abstraction, so that documentation has a hierarchical structure. A key goal is to maintain consistency between abstraction levels. CoDAT provides automated support for this by tracking changes and alerting the developer to related documentation at higher/lower levels. Also, an LLM can be invoked to check consistency between documentation and code, and also between documentation at “adjacent” abstraction levels. The documentation structure that we propose is as follows: * Top tier of documentation: functional specification of modules, i.e., what is a class or method required to do? * Classes: purpose of the class and the major data structures. * Methods: a functional specification, given by two clauses: * Requires clause: constraints on inputs * Effects clause: states what the module does * Second tier of documentation: code sketching, i.e., how the code works at a “high” level. A code sketch expresses the key algorithm underlying the code without getting bogged down in coding details. * Third tier: in-line code comments, a more detailed description of the code, including details of data structures and algorithms. * Further levels add detail, until a level is reached that is straightforward to translate into working code. * The number of levels needed depends on the task complexity. Code sketching provides a form of documentation for both the code itself and the next level of code abstraction. Code sketching can be used in code reviews to help developers better understand the code. CoDAT implements the above vision of code documentation by providing the following functionality: * Automated management of documentation * Change flagging: a change in code/documentation is linked to related documentation that may need to be changed, and the user is alerted. * Consistency checking: an LLM is used to check that comments and the corresponding code match w.r.t. the described functionality; this serves as a form of soft verification. * Views: documentation serves as a code blueprint at various levels of abstraction. CoDAT is implemented in the IntelliJ IDEA development environment. Our proposed documentation structure helps maintain a “mental image” of how the code works, and therefore helps with the cognitive workload of: * Tracing the code * Figuring out how the code works * Understanding how different data structures and methods relate This makes debugging faster, easier, and better. CoDAT provides a way to formalize and partially automate small-scale code reviews that support a larger base of code. In particular, we seek to leverage CoDAT to automate consistency and regression checking via effective code documentation. While CoDAT is functionally independent of machine learning and artificial intelligence, it does have the ability to integrate with a third-party LLM. With an LLM, CoDAT can significantly reduce the manual labor needed to perform consistency checks and regression analysis. The trigger points for these checks are often ad hoc and decided on the spur of the moment. A particular feature or file may change countless times prior to a formal code review. Thus, having a more informal and dynamic approach is preferred. 
Given this dynamic approach, we seek to model CoDAT's behavior on existing source control systems such as Git or Subversion. Our focus with CoDAT is to explore the junction between code documentation and stateful change tracking. Oftentimes developers may change the functionality of a program without updating the inline comments or code sketches. While wikis may serve as a substitute for some developers, we believe that inline code commenting is an essential practice in any code base, especially large projects. As shown in Figure <ref>, CoDAT provides developers with the ability to be notified in real time when inline code comments or sketches need updating. CoDAT does not force the user to update the inline comment or sketch, but it will persistently alert the user that a code change has occurred for a related code sketch or comment. § CODAT IMPLEMENTATION AND FUTURE DEVELOPMENT A key aim of CoDAT is to bridge the semantic gap between what the code appears to do and its actual behavior, reducing the chances of bugs and enhancing documentation quality. CoDAT is implemented as a plugin which integrates seamlessly into IntelliJ IDEA, managing hierarchical documentation structures and linking comments to related code. The main functionality provided is: * Parse and identify comments in the source code. * Manage and store comments using a hierarchical data structure. * Provide functionalities like highlighting, navigating, and updating comments. * Interface with the IDE's text editor for displaying and interacting with comments. §.§ Overview of CoDAT Architecture and Main Data Structures The CoDAT plugin’s architecture is modular, leveraging IntelliJ’s Program Structure Interface (PSI) for seamless integration. Key components are structured into distinct layers: * IDE Environment Layer: Provides fundamental services like text editing, project management, and version control. The CoDAT plugin operates within this environment. * Plugin System Layer: Comprises various modules that handle parsing, management, and user interaction: * Comment Parsing Module: Analyzes source files, extracts comments, and categorizes them hierarchically. * Comment Management Service: Handles the lifecycle of comments across files, linking them to associated code blocks and ensuring consistency. * UI/UX Component: Offers an intuitive user interface for visualizing hierarchical documentation and navigating code. * Core Data Structure Layer: Contains the main data structures representing hierarchical comments. Figure <ref> gives a data model for the current CoDAT implementation. * CommentEntity: A container mapping files to their corresponding comment trees. Acts as the root or container for managing multiple comment threads or trees. It can hold multiple CommentNode instances, each corresponding to different files or sections within files. * CommentNode: Represents individual comments or groups of comments. Forms a hierarchical tree structure, since a CommentNode can contain several other CommentNodes, which are its children in the comment tree. This is useful for nested (multi-level) comments. * SmartPsiElementPointer: Used within CommentNode to safely reference PsiComment objects even as the underlying code changes. * CommentPatterns: Used to categorize or match comments within the nodes based on predefined patterns. 
* TextRange: An object designed to encompass the start and end points of a text block, making it highly relevant in applications involving syntax highlighting, code analysis, and document editing features.

public class CommentEntity {
    private Map<String, Map<String, CommentNode>> fileComments;

    // Initializes an empty structure for managing multiple files.
    public CommentEntity() {
        this.fileComments = new HashMap<>();
    }

    // Adds a new CommentNode to the map for a specific file.
    public void addComment(String fileName, String label, CommentNode node) {
        fileComments.computeIfAbsent(fileName, k -> new HashMap<>()).put(label, node);
    }

    // Retrieves all comments for a particular file.
    public Map<String, CommentNode> getCommentsForFile(String fileName) {
        return fileComments.getOrDefault(fileName, new HashMap<>());
    }
}

public class CommentNode {
    private String label;
    private List<CommentNode> children;
    private List<SmartPsiElementPointer<PsiComment>> psiComments;

    // Initializes a comment node with the given label and empty children.
    public CommentNode(String label) {
        this.label = label;
        this.children = new ArrayList<>();
        this.psiComments = new ArrayList<>();
    }

    // Adds a child comment node.
    public void addChild(CommentNode child) {
        children.add(child);
    }

    // Links a PSI comment to this node.
    public void addPsiComment(SmartPsiElementPointer<PsiComment> psiComment) {
        psiComments.add(psiComment);
    }

    // Retrieves all children of this node.
    public List<CommentNode> getChildren() {
        return children;
    }

    // Gets the PSI comments linked to this node.
    public List<SmartPsiElementPointer<PsiComment>> getPsiComments() {
        return psiComments;
    }
}

§.§ Lessons learned * Hierarchical Documentation is Crucial Organizing comments hierarchically helps improve navigation and makes code reviews more consistent. The CoDAT data structure implemented in CommentEntity and CommentNode organizes comments in a logical tree-like format, which simplifies the process of tracking changes and managing code blocks. * Change Flagging Adds Value Automatically flagging changes in code and comments ensures the documentation remains consistent with the actual implementation. However, fine-tuning this feature to balance sensitivity is crucial. Too many alerts can overwhelm developers, while too few may overlook important changes. * Deep IDE Integration is Challenging IntelliJ’s PSI (Program Structure Interface) API is powerful but complex. Integration requires a deep understanding of how IntelliJ represents program elements. While implementing the CommentLinkMarkerProvider and CommentEditListener, we found that maintaining accurate references to PSI elements is challenging because code constantly evolves. * UI/UX Design Requires Early Prototyping The UI/UX components that manage hierarchical comments need thorough user testing. During development, implementing the graphical interface for comment visualization required frequent adjustments. Early prototyping would have provided valuable user feedback to refine the design and improve the interaction flow. * Modular Code Design Facilitates Extension Designing the plugin as modular components makes it easier to extend. By separating the comment parsing, management, and UI modules, the development team can work on different aspects simultaneously and future extensions can be implemented with minimal disruptions. * Comprehensive Testing is Essential Changes in hierarchical comments can have ripple effects on the documentation structure. 
Implementing unit and integration tests across CommentHandler, CommentEntity, and CommentNode is necessary to ensure accurate parsing, consistent linking, and proper navigation. * Team Collaboration Enhances Documentation Standards CoDAT enables developers to collaborate on documentation across multiple abstraction layers. However, creating a unified documentation standard for the team ensures consistency in coding practices and helps identify anomalies effectively. §.§ Future Work * Advanced Signature Detection Implement a robust change detection system that evaluates code changes and their impact across the entire application. Consider integrating static analysis techniques to identify affected areas in real-time and suggest necessary documentation updates. * NLP-Based Comment Summarization Leverage natural language processing (NLP) models to analyze comments and generate accurate summaries. Develop a recommendation system that suggests changes to comments based on new code patterns. * Enhanced Visualization Develop richer visualizations and interactive components to explore the hierarchical comment structure. Implement “heat maps” to identify areas of code that frequently change and ensure their documentation aligns with recent modifications. Introduce a timeline view to visualize the evolution of comments and code over time. * Version Control System Integration Deeply integrate with source control systems (Git, Subversion) to track and compare changes in code and comments automatically. Build a dashboard that links changes to their corresponding version control history, allowing developers to audit the development process. * Machine Learning-Based Anomaly Detection Develop machine learning models to detect inconsistencies or anomalies in the documentation and code, flagging possible areas needing attention. Train models to predict common documentation pitfalls, suggesting areas that may require further reviews. * Code Review Automation Automate the code review process further by developing rule-based and AI-driven tools to review documentation and ensure comments align with project guidelines. Implement suggestion features that highlight areas where comments can be improved or expanded. * Customized Project Templates Create project templates with pre-defined documentation standards for different development teams. Allow teams to customize templates based on their unique workflows and coding guidelines. § INTEGRATING A LARGE LANGUAGE MODEL INTO CODAT CoDAT has an LLM backend to perform consistency analysis for code documentation. We use the Claude Haiku LLM <anthropic.com>. We use a Python script that takes an HTTP request through localhost to then send to the LLM as before and return the response to the HTTP request using the Python Flask and Requests libraries (claude_chat_v1.py). This requires an API key to use and communicates to Anthropic’s Claude AI. It uses the Python Flask and Requests libraries and is documented thoroughly in the program itself. We use a simple Python script that takes in an input and sends it on localhost as a POST HTTP request (test_send.py). This also uses the Python Requests library. In terms of research into LLMs and the potential for false positives and negatives for our use-case, I would say that there is a very strong possibility of false positives and negatives showing up regardless of input size or complexity. 
Upon further reading regarding how LLMs work beneath the hood, it seems that the potential for false results and hallucinations is not mitigated by a smaller input size necessarily as I thought, though further testing and research should be put in to make sure of this. LLMs are not perfectly reliable, however I am unsure at this moment as to the extent. I would recommend, if money was of no issue, testing a bunch of different LLMs, especially open-source ones through Ollama (since those would be local and would therefore solve the potential problem of having potentially sensitive data being sent to a third-party) and Claude Opus, as that is the highest accuracy version of Claude. As for new LLMs I didn’t get a chance to look at, there are probably hundreds out there and more than likely one has been trained on Hoare Logic and/or comment-code accuracy. The big takeaways I can give from my research is to try to see what training data was used, as that affects the accuracy of results, and the amount of parameters, as that is a generally good metric of the power of the LLM. To go into technical detail on my work, I first chose to work with Codellama and installed a local version on my machine. I did this by first downloading the Ollama API onto my computer as detailed in the Ollama website (https://ollama.com/) and then using the command ollama run codellama. Upon first run of this command it will automatically download the Codellama (or any other LLM you name in the command) onto your device for use through the ollama command in the command line. I then made a Python script that could take in an input and send it to an ollama LLM and then print the response (ollama_chat.py). I then improved upon this design by adding the functionality for it to be hosted on localhost on the machine and then be able to take a HTTP request through localhost to then send to the LLM as before and return the response to the HTTP request using the Python Flask and Requests libraries (ollama_chat_v2.py). Both of these programs should be well documented and readable. To run them, simply run the program and they will stay open as a Flask web app until you close them. I then switched my implementation to Claude using the Claude API and the same general design (claude_chat_v1.py). It runs exactly the same but requires an API key to use and communicates to Anthropic’s Claude AI (so if anthropic.com is blocked on the network the program is run, it will not work). Once again, this uses the Python Flask and Requests libraries and is documented thoroughly in the program itself. For all of these programs, I used a simple Python script that takes in an input and sends it on localhost as a POST HTTP request (test_send.py). This also uses the Python Requests library. §.§ Future work on LLM integration into CoDAT We will attempt different ways of mitigating the LLM’s tendency for false results, including but not limited to a) using multiple unique LLMs for redundancy and lessened chance of a fluke, b) using formal methods and more traditional logic to double check the LLMs output or to send formal methods proofs to the LLM rather than code and comments, or c) develop or find a more fault-tolerant LLM or a way of wording inputs that results in less false results. In addition, the use of LLMs for proving formal methods or the development of a formal methods trained LLM is not something I’ve heard about or found through my research so that might be a field worth pursuing. 
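To make the relay pattern just described concrete, a minimal Flask sketch is given below. The route name, port, and the query_llm stub are assumptions for illustration only; they are not the actual claude_chat_v1.py, and the Anthropic API call itself is omitted.

from flask import Flask, request, jsonify

app = Flask(__name__)

def query_llm(prompt: str) -> str:
    # Placeholder: in the real scripts this is where the Ollama or Claude API would be called.
    return "consistency check result for: " + prompt[:40]

@app.route("/check", methods=["POST"])
def check():
    payload = request.get_json(force=True)          # expects a JSON body such as {"prompt": "..."}
    return jsonify({"response": query_llm(payload.get("prompt", ""))})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)

A companion sender in the spirit of test_send.py would then simply post to this endpoint, e.g. requests.post("http://127.0.0.1:5000/check", json={"prompt": "..."}).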
Hopefully this helps bring more fruitful research in the years to come. § CODAT INTERFACE Figure <ref> shows the CoDAT interface. The main features of the CoDAT interface are as follows: §.§ Click Navigation Direct Navigation to Code Click on any node within the CoDAT tool window to seamlessly navigate to the corresponding code block in the IntelliJ editor. This interaction leverages IntelliJ’s powerful navigation capabilities to ensure that developers are taken directly to the relevant section of the code. Automatic Highlighting Upon navigation, the associated comment and the entire code block are automatically highlighted. This visual cue helps developers quickly understand the context and purpose of the code segment, enhancing their ability to review changes effectively. §.§ Gutter Icons Visual Indicators Gutter icons, which appear as small symbols in the editor’s left margin, serve as visual indicators of where comments are located within the code file. Interactive Icons These icons are interactive, allowing developers to click on them to navigate directly to the corresponding node in the CoDAT tool window. This feature is particularly useful for visual learners and enhances the ability to trace relationships and dependencies across the codebase. Streamlined Workflow The integration of gutter icons simplifies the process of navigating complex code files, making it easier to manage large projects with extensive documentation. §.§ Highlighted Comment Blocks Enhanced Code Visibility Clicking on a node or gutter icon not only navigates to the associated comment but also highlights all lines of code linked to that comment. This comprehensive highlighting approach ensures that developers can see at a glance all parts of the code that are discussed or described by the comment. Impact Analysis Highlighted comment blocks help developers quickly assess the impact of specific sections of code. This is essential for understanding how changes in one part of the code might affect other parts, facilitating more effective debugging and code modification. Documentation Consistency Checks By highlighting the related code segments, developers are better equipped to verify that the documentation remains accurate and up-to-date with the code’s current functionality. This feature supports ongoing maintenance and helps prevent code decay. §.§ Annotating Code Adding Annotations For inline comments, insert a comment using // syntax and follow a pattern like //CS1.1: to categorize it. Example: //CS1.1: Initialize the database connection. For block comments, use block comments (/* ... */) for more comprehensive descriptions. Example: /*CS2.1: Module Configuration ... */. We use CS1, CS2, etc. to label comments within a code sketch, and AS1, AS2, etc. to label Hoare-logic assertions. Managing Annotations The CoDAT tool window organizes and displays comments hierarchically, allowing you to navigate and manage them efficiently. Right-click on a node in the tool window to edit or remove comments. § EXAMPLE: DOCUMENT SEARCH ENGINE We present an example application of CoDAT. The program to be documented and analyzed is a document search engine. There is a given collection of documents, which can be added to. A query consists of a sequence w_1,…,w_k of keywords. A document matches if it contains all the keywords. Matches are ranked by the total occurrence count: the sum of the occurrences of all the keywords. 
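As a minimal Python sketch of this matching and ranking rule (a simplified stand-in, not the Java implementation referenced in the appendix):

from collections import Counter

documents = {
    "doc1": "the quick brown fox jumps over the lazy dog",
    "doc2": "a dog barks at the quick cat and the other dog",
}

def run_query(keywords):
    ranked = []
    for title, body in documents.items():
        counts = Counter(body.split())
        if all(counts[w] > 0 for w in keywords):      # a document matches only if it contains every keyword
            total = sum(counts[w] for w in keywords)  # total occurrence count used for ranking
            ranked.append((total, title))
    return [title for _, title in sorted(ranked, reverse=True)]

print(run_query(["the", "dog"]))   # ['doc2', 'doc1']: doc2 has 4 total occurrences, doc1 has 3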
The required functionality is as follows: * Start a new query by giving an initial keyword w_1. * Add a keyword w_k to an existing query w_1,…,w_k-1. * Add a new document to the current collection. If a query is in progress, update the query with the new document if it is a match. We give the search engine code in Appendix <ref>. The main classes are as follows: * Engine: top-level class that provides the client interface. * Query: creates and updates a query object, which stores the list of currently matching documents, sorted by total occurrence count. * DocCnt: a pair consisting of a document and a count (an integer). * WordTable: maintains a mapping from words to lists of DocCnt objects. (d,c) in the list of w means that w occurs c times in document d. * Doc: represents a document; stores the title and the body of the document, both as strings. * TitleTable: maintains a mapping from titles to their corresponding documents. § CONCLUSION The CoDAT project provides a comprehensive tool for enhancing code reviews and ensuring consistent documentation. The IntelliJ plugin efficiently organizes hierarchical comments, linking them to related code and managing their lifecycle. Future work will involve improving signature detection and LLM-based analysis while integrating more deeply with source control systems to provide seamless documentation tracking. A key advantage of our approach is that it is not subject to known limitations of AI such as hallucinations, since we do not use an LLM to generate code, but only to check the consistency of code versus its documentation (code sketches and inline comments/assertions). The LLM thus acts as an assistant to the programmer, and the code summaries that it generates can be helpful in checking the code. We expect that incorrect/unexpected responses by the LLM will lead to careful review and debugging by the programmer. § CODAT USER MANUAL §.§ Installation • Prerequisites: Make sure you have IntelliJ IDEA (Community or Ultimate Edition) installed. • Download and Install the Plugin: Manual Installation: * Download the CoDAT plugin .zip file or .jar file. * In IntelliJ IDEA, go to File > Settings > Plugins. * Click “Install Plugin from Disk” and choose the downloaded file. * Restart IntelliJ IDEA to complete the installation. §.§ Basic Usage: Navigating and Highlighting Comments §.§.§ Click Navigation * Direct Navigation to Code: Click on any node within the CoDAT tool window to seamlessly navigate to the corresponding code block in the IntelliJ editor. This interaction leverages IntelliJ’s powerful navigation capabilities to ensure that developers are taken directly to the relevant section of the code. * Automatic Highlighting: Upon navigation, the associated comment and the entire code block are automatically highlighted. This visual cue helps developers quickly understand the context and purpose of the code segment, enhancing their ability to review changes effectively. §.§.§ Gutter Icons * Visual Indicators: Gutter icons, which appear as small symbols in the editor’s left margin, serve as visual indicators of where comments are located within the code file. * Interactive Icons: These icons are interactive, allowing developers to click on them to navigate directly to the corresponding node in the CoDAT tool window. This feature is particularly useful for visual learners and enhances the ability to trace relationships and dependencies across the codebase. 
* Streamlined Workflow: The integration of gutter icons simplifies the process of navigating complex code files, making it easier to manage large projects with extensive documentation. §.§.§ Highlighted Comment Blocks * Enhanced Code Visibility: Clicking on a node or gutter icon not only navigates to the associated comment but also highlights all lines of code linked to that comment. This comprehensive highlighting approach ensures that developers can see at a glance all parts of the code that are discussed or described by the comment. * Impact Analysis: Highlighted comment blocks help developers quickly assess the impact of specific sections of code. This is essential for understanding how changes in one part of the code might affect other parts, facilitating more effective debugging and code modification. * Documentation Consistency Checks: By highlighting the related code segments, developers are better equipped to verify that the documentation remains accurate and up-to-date with the code’s current functionality. This feature supports ongoing maintenance and helps prevent code decay. §.§ Basic Usage: Annotating Code §.§.§ Adding Annotations * Inline Comments: Insert inline comments using // syntax and follow a pattern like //CS1.1: to categorize them. Example:
//CS1.1: Initialize the database connection
Database db = new Database(config);
* Block Comments: Use block comments (/* ... */) for more comprehensive descriptions. Example:
/*CS2.1: Module Configuration
This section sets up the necessary parameters for the module to function as intended. */
§.§.§ Managing Annotations * The CoDAT tool window organizes and displays comments hierarchically, allowing you to navigate and manage them efficiently. * Right-click on a node in the tool window to edit or remove comments. §.§.§ Advanced Configuration Custom Comment Patterns: Create custom patterns to categorize comments according to your team’s requirements. For instance, patterns like //SP: or //TODO: can be added to CoDAT’s settings. § SOURCE CODE FOR THE DOCUMENT SEARCH ENGINE §.§ Source Code for the Engine Class §.§ Source Code for the Query Class §.§ Source Code for the WordTable Class §.§ Source Code for the DocCnt Class §.§ Source Code for the Doc Class §.§ Source Code for the TitleTable Class § CODE AND COMMENT ANALYSIS USING CLAUDE We used the Claude LLM to check whether comments and their corresponding code are consistent, that is, whether the comment describes the code accurately. We show two queries and the responses that Claude gave. §.§ First query The first query checks a comment versus code that accurately implements the comment. The response is given verbatim from Claude with some reformatting for clarity. §.§.§ Query to Claude §.§.§ Response from Claude §.§ Second query The second query checks a comment versus code that does not accurately implement the comment. The response is given verbatim from Claude with some reformatting for clarity. We have included, along with in-line code comments in natural language, Hoare-logic style assertions. The presence of these appears to help Claude produce the correct response. We will in future work investigate principles for combining natural language and formal assertions so as to maximize the accuracy of the responses of the LLM. §.§.§ Query to Claude §.§.§ Response from Claude
http://arxiv.org/abs/2407.12449v1
20240717095714
Close the Sim2real Gap via Physically-based Structured Light Synthetic Data Simulation
[ "Kaixin Bai", "Lei Zhang", "Zhaopeng Chen", "Fang Wan", "Jianwei Zhang" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Close the Sim2real Gap via Physically-based Structured Light Synthetic Data Simulation Kaixin Bai, Lei Zhang, Zhaopeng Chen, Fang Wan, Jianwei Zhang July 22, 2024 ===================================================================================== § ABSTRACT Despite the substantial progress in deep learning, its adoption in industrial robotics projects remains limited, primarily due to challenges in data acquisition and labeling. Previous sim2real approaches using domain randomization require extensive scene and model optimization. To address these issues, we introduce an innovative physically-based structured light simulation system, generating both RGB and physically realistic depth images, surpassing previous dataset generation tools. We create an RGBD dataset tailored for robotic industrial grasping scenarios and evaluate it across various tasks, including object detection, instance segmentation, and embedding sim2real visual perception in industrial robotic grasping. By reducing the sim2real gap and enhancing deep learning training, we facilitate the application of deep learning models in industrial settings. Project details are available at https://baikaixin-public.github.io/structured_light_3D_synthesizer/. § INTRODUCTION Data collection for computer and robotic vision tasks, particularly for object segmentation and 6D pose annotation <cit.>, is labor-intensive and challenging. Additionally, gathering industrial data for deep learning models can be problematic due to factory rules, confidentiality, and safety concerns. To address the challenges in obtaining real-world data, sim2real methods have been proposed to generate synthetic RGB images in 3D simulators for tasks such as robotic perception <cit.>, autonomous driving <cit.>, intelligent agriculture solutions <cit.>, consumer and manufacturing applications <cit.>, and medical treatment <cit.>, to reduce manual labor and improve the performance of deep learning models. Domain randomization <cit.> has been employed to generate photorealistic RGB images. This approach minimizes the sim2real gap by altering lighting, object material, and texture, but demands expert rendering knowledge and significant optimization within the 3D simulator. While various tools have been created to generate photo-realistic simulation datasets, they are constrained by the performance and capabilities of their respective simulation engines <cit.>. Differing strengths between game engines and film industry renderers, as well as inconsistencies like the left-handed coordinate system, present challenges for tasks like object pose estimation and robotic applications. Decreasing prices of RGBD cameras have increased their use for computer and robot vision tasks, particularly for 3D visual perception and semantic understanding of the environment. This has led to an increase in using depth images as inputs to neural networks or in combination with RGB images <cit.>, to improve the performance of vision tasks. Additionally, RGBD images are used as inputs for multimodal perception tasks <cit.>. Depth images have been used as inputs for neural networks to train robots for perception and grasping tasks <cit.>. Researchers have attempted to reduce the gap between real and simulated depth images by generating physically-based depth images using stereo cameras or TOF cameras in virtual environments <cit.>, or applying post-processing using neural networks such as GANs <cit.> to further align them with real ones. 
Robotic tasks in industrial settings require visual recognition capabilities for diverse objects in terms of location, placement direction, type, shape, and size. These tasks often require identifying a large number of objects for grasping in cluttered scenes with objects of different sizes, which typically requires matching object instance segmentation or object detection to localize the objects and pose estimation based on point cloud or texture analysis to determine the grasping pose with high accuracy. Structure light cameras are widely used in industries like automotive, and logistics. They can provide 2D and 3D information with high precision and are adaptable to various requirements such as anti-ambient light, high accuracy, high reconstruction speed, and small size. We propose a data generator for physically-based gray code structure light camera simulation. This generates photorealistic RGB and physically-based synthetic depth data, complete with 3D reconstruction algorithm noises, for robotic sim2real tasks. Our key contributions are: * A physically-based gray code structured light camera simulation data generator, built using the Blender Cycles rendering engine and Optix AI denoiser, which generates photorealistic RGB data, physically-based synthetic depth data, and ground truth annotations for object pose, 2D/3D bounding boxes, and segmentation. * A dataset with physically-realistic simulated RGBD data as a training set and real data as a test set, which can be utilized to evaluate the sim2real performance gap and generalization ability of vision perception tasks like object detection and instance segmentation. * We provide a real-world demonstration of the effectiveness of our sim2real data generation and robot perception network based on this data generation method in actual robot tasks. § RELATED WORK §.§ Synthetic Dataset Generation The trend of training robots on synthetic datasets and transferring to real-world datasets is gaining traction. Various simulation data generation tools and plug-ins have emerged, ranging from game engines like Omniverse <cit.> and Unreal Engine 4 <cit.>, to 3D simulation tools like Blender <cit.> and PyBullet <cit.>. These tools vary in supported programming languages, headless rendering capabilities, and real-time rendering performance. While game engines often excel in frame rates using raster-based methods, they may lack accurate light transport simulation due to the absence of ray tracing, which limits their rendering performance for reflective and transparent objects. §.§.§ Sim2real Gap The sim2real gap remains a major hurdle for deep learning methods trained on synthetic datasets for vision tasks and robotic manipulation. This gap arises due to disparities between synthetic RGB data and real-world conditions, influenced by environmental and camera parameters. Minimizing this gap often requires extensive optimization of object materials, lighting, and sensor characteristics. To tackle this, researchers employ domain randomization techniques to vary colors, lighting, and noise <cit.>, and domain adaptation methods to address data domain mismatches, particularly in GANs training <cit.>. §.§ Physically-based Synthetic Depth Sensor Simulation Industrial 3D cameras are commonly classified into passive stereo, active stereo, ToF, and structured light types. Physically-based sensor simulation is crucial for improving the quality of datasets for vision and robotic tasks. 
Depth images are increasingly favored in perception tasks due to their lower sim2real gaps. For example, Danielczuk et al.<cit.> explored object segmentation methods using depth images and non-photorealistic RGB images for feature learning. Various methods have been developed to narrow the sim2real gap in depth images. For instance, Zakharov et al.<cit.> and Danielczuk et al.<cit.> utilize ground truth depth images to train robots for perception tasks. Others like Planche et al.<cit.> and Gschwandtner et al.<cit.> generate synthetic depth images using virtual stereo or TOF cameras. To further align synthetic and real depth images, Zakharov et al.<cit.> and Yuan et al.<cit.> employ neural networks like GANs for post-processing. The significance of depth image simulation across different 3D cameras is thus evident. § METHOD To close the simulation-reality gap for gray code structured light cameras, we've created a data simulator using Blender. Using physically-based rendering, we simulate scenes and apply gray code patterns. Realistic depth images are then generated through 3D reconstruction. Our system employs NVIDIA's OptiX engine and GeForce RTX 3070 Ti GPU for rendering and includes OptiX's AI-accelerated denoiser <cit.> to enhance image render speed. Fig. <ref> outlines the physically-based rendering pipeline for robotic pick-and-place tasks, while Fig. <ref> shows pattern projection and 3D reconstruction. We focus on ray tracing's acknowledged benefits, without comparing it to other rendering techniques like rasterization rendering. §.§ Physically-Based Rendering In our study, we employ physically-based rendering to simulate real-world scenes featuring a wooden box with densely arranged objects. This involves dividing the space above the box into voxel grids, sampling these grids based on object count, and then dropping the objects into the box while considering realistic physics like collisions. The final renders include synthetic RGB and ground-truth depth images, as well as instance segmentation. Real-world RGBD images and point clouds from an empty wooden box are incorporated for domain adaptation. Structured light cameras often suffer from decoding errors due to varying light paths caused by material properties and lighting conditions. This introduces noise into depth maps. To capture this realistically, our simulator uses ray tracing for gray code pattern projection, enabling accurate depth maps with noise characteristics similar to real-world applications. This approach bypasses the shortcomings of rasterization-based methods, which neglect light path calculations, resulting in less realistic images. §.§ Pattern Projection To simulate the rendering process based on the physical principles of structured light cameras in the real world, we present a projector based on a spotlight with textured light in Blender to simulate the pattern projection of structured light cameras. During pattern projection, the gray code pattern images are separately projected onto the top of the object scene using our proposed projector. The corresponding gray images are then rendered, as shown in Fig. <ref>. The rendering of high-quality images is typically time-consuming in the film industry. In synthetic dataset generation, using the same workflow as film rendering is not cost-effective for vision tasks and robotic manipulation. In recent years, AI algorithms have been applied to reduce the rendering time for high-fidelity images <cit.>. 
For example, the AI denoiser with Optix integration in Blender can speed up rendering for pixels with a limited number of samples. It is even possible to achieve real-time rendering after just two frames while maintaining the quality of the rendered images <cit.>. To speed up our dataset generation, we use the AI denoiser to render both RGB images and projected pattern images after rendering 20 frames. The parameter is chosen based on our qualitative comparison experiment. §.§ 3D Reconstruction with Gray Code Pattern To reconstruct the scene from images rendered with gray code pattern projection, we first generate binary images, as illustrated in Fig. <ref>. Each point within the projected area experiences at least one brightness shift, achieved by incorporating both fully black and fully white images. For every pixel, we compute its temporal maximum and minimum values to establish a binarization threshold. A pixel is assigned a value of 1 if the threshold exceeds 0.5, and 0 otherwise. We then use 3D reconstruction structured-light techniques, as described in<cit.>, to reconstruct the scene. In this process, the projector is modeled as a pinhole camera. We obtain decoding images based on the gray code encoding, and the object's imaging position on the camera plane in pixels is represented by (u_c, v_c). The virtual imaging position under the projector camera model is indicated by (u_p,v_p). Using this information, we can model the structured-light system with one projector and one camera using the following formula: s_c[ u_c; v_c; 1 ]=K_c[R_c|t_c][ X; Y; Z; 1 ], s_p[ u_p; v_p; 1 ]=K_p[R_p|t_p][ X; Y; Z; 1 ] where K_c and K_p are the intrinsic parameters of the camera and the projector. When the camera coordinate system coincides with the world coordinate system, rotation matrix R_c can be formulated as unit matrix and translation vector t_c of camera can be formulated as all zero matrix. By decoding the Gray code we can obtain the correspondence of each pixel under the camera and projector model. Finally, we can reconstruct the 3D scene with the following equation: M_c= K_c[R_c| t_c]=[ m_11^c m_12^c m_13^c m_14^c; m_21^c m_22^c m_23^c m_24^c; m_31^c m_32^c m_33^c m_34^c ] M_p= K_p[R_p| t_p]=[ m_11^p m_12^p m_13^p m_14^p; m_21^p m_22^p m_23^p m_24^p; m_31^p m_32^p m_33^p m_34^p ] [ X; Y; Z ]=M[ m_34^cu_c-m_14^c; m_34^cv_c-m_24^c; m_34^pu_p-m_14^p ] M= [ m_11^c-m_31^cu_c m_12^c-m_32^cu_c m_13^c-m_33^cu_c; m_21^c-m_31^cv_c m_22^c-m_32^cv_c m_23^c-m_33^cv_c; m_11^p-m_31^pu_p m_12^p-m_32^pv_c m_13^p-m_33^pu_p ]^-1 § EXPERIMENTS To validate the efficacy of our synthetic data simulator designed for gray code structured light cameras, we assess its performance on tasks like instance segmentation and object detection. For this, we employ a synthetic training dataset and a real-world testing dataset. Initially, we introduce an object database specifically crafted for generating cluttered environments. Subsequently, we produce an extensive, photorealistic dataset featuring single-class, multi-instance scenes, suitable for tasks such as instance segmentation and object detection. A real-world dataset, aligned with our object database, serves as the testing set. §.§ Object Database In our object database, we have gathered two metal parts in a dark gray color from industry, as well as three household objects from the KIT object models database <cit.>. 
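To make the Gray-code decoding and triangulation described in the Method section above concrete, here is a minimal NumPy sketch of the per-pixel binarization, Gray-to-binary decoding, and the 3x3 linear solve for (X, Y, Z). It is an illustration under our own assumptions — column-stripe patterns that encode only the projector coordinate u_p, the usual normalized reading of the 0.5 threshold rule, and function names of our choosing — not the authors' released code.

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Column-stripe Gray-code bit planes (one per bit); tile vertically for full pattern images."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary-reflected Gray code of each projector column
    return ((gray[None, :] >> np.arange(n_bits - 1, -1, -1)[:, None]) & 1).astype(np.uint8)

def binarize(captures, img_white, img_black, thresh=0.5):
    """Per-pixel threshold using the all-white / all-black captures (normalized form of the 0.5 rule)."""
    span = np.clip(img_white - img_black, 1e-6, None)
    return ((captures - img_black) / span > thresh).astype(np.uint8)  # shape (n_bits, H, W)

def decode_gray(bits):
    """Gray -> binary projector column index u_p for every camera pixel; bits has shape (n_bits, H, W)."""
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, bits.shape[0]):
        binary[i] = binary[i - 1] ^ bits[i]
    weights = 2 ** np.arange(bits.shape[0] - 1, -1, -1)
    return np.tensordot(weights, binary, axes=1)  # (H, W) map of u_p

def triangulate(u_c, v_c, u_p, M_c, M_p):
    """Solve the 3x3 linear system from the text for one camera/projector correspondence.
    M_c = K_c [R_c | t_c] and M_p = K_p [R_p | t_p] are the 3x4 camera and projector matrices."""
    A = np.array([
        [M_c[0, 0] - M_c[2, 0] * u_c, M_c[0, 1] - M_c[2, 1] * u_c, M_c[0, 2] - M_c[2, 2] * u_c],
        [M_c[1, 0] - M_c[2, 0] * v_c, M_c[1, 1] - M_c[2, 1] * v_c, M_c[1, 2] - M_c[2, 2] * v_c],
        [M_p[0, 0] - M_p[2, 0] * u_p, M_p[0, 1] - M_p[2, 1] * u_p, M_p[0, 2] - M_p[2, 2] * u_p],
    ])
    b = np.array([
        M_c[2, 3] * u_c - M_c[0, 3],
        M_c[2, 3] * v_c - M_c[1, 3],
        M_p[2, 3] * u_p - M_p[0, 3],
    ])
    return np.linalg.solve(A, b)  # world-frame (X, Y, Z)
```

Applying decode_gray to the binarized captures yields a dense camera-to-projector correspondence map, after which triangulate can be evaluated per pixel to obtain the synthetic depth image.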
These objects encompass common items found in pick and place scenarios, including industrial components and supermarket merchandise frequently encountered in robotic tasks. §.§ Synthetic and Real-world Datasets To explore vision tasks and robotic operations in industrial settings, we create an extensive synthetic dataset featuring single-class, multi-instance scenes from our object database. This dataset comprises rendered RGB and depth images, synthetic depth images using gray code structured light reconstruction, along with ground truths for object poses, 2D bounding boxes, and instance segmentation masks. For the objects in our database, we employ Run-Length Encoding (RLE) for ground truth representation in synthetic data and use polygon encoding for real-world data to streamline manual annotations. In our physics-based rendering simulation, each scene is a single class with multiple instances. For the two types of metal parts, there are a maximum of 255 objects in the physics simulation and rendering in each scene, and for objects in the KIT dataset, there are a maximum of 10 objects in each bin box. In the synthetic domain adaption dataset, we generate 1,000 scenes for each object in the database after simulating pattern projection and 3D reconstruction. For each scene we generate colored image, ground truth depth image, our proposed depth image and instance segmentation map. For the real-world dataset, we collect 100 sets of data for each object as the test set. In addition, we generated a domain randomization dataset using Isaac Sim <cit.> for comparison. §.§ Object Detection and Instance Segmentation Experiments To validate our synthetic data pipeline, we separately examine object detection and instance segmentation tasks using both real and synthetic RGB and depth images. We benchmark using YOLOv3 and SOLOv2 for these tasks, and test on real-world datasets. Quantitative results are summarized in Table <ref>. We opted for YOLOv3 due to its lack of data augmentation like Mosaic used in later versions, ensuring fairness in validating dataset efficacy. To gauge the real-world efficacy of our domain-adapted dataset, we employ YOLOv7 to compare performance against a domain-randomized dataset in perception tasks. YOLOv7's data augmentation features align well with industrial scenarios. The randomized dataset, created with Isaac Sim, varies scene backgrounds, object quantities and poses, and lighting as shown in Fig. <ref>. §.§ Robotic grasping experiment To evaluate our vision perception model in robotic grasping experiments, we build the setup of model-based and half-model-based grasping experiments. Both setups are designed to address the sim2real performance gap. For the model-based grasping experiment, we employ the Diana7 robot and Photoneo L industrial part. Meanwhile, the half-model grasping experiment involves the UR5e robot and a Photoneo M camera as shown in Fig. <ref>. In our experiments, we employ the objects from our dataset to execute bin-picking tasks, adhering to the half-model-based grasping procedure as detailed in <cit.> and Dex-Net3.0 <cit.>. The performance of visual perception has a profound influence on the computational burden of the grasping algorithm and directly affects the success rate of grasping. Our approach involves initially detecting the object in a depth image via an object detection technique, and then applying a grasping detection method. 
We further examine the variations in grasping success rate and manipulation speed following the integration of our visual perception approach. Both setups are devised with the aim of demonstrating that the use of sim2real in visual perception tasks can significantly enhance the performance metrics, such as success rate and algorithm runtime, in robotic grasping applications. § RESULTS §.§ Qualitative Study Assessing the disparity between reconstructed depth data from our synthetic data simulator and real-world scene data, we conduct a qualitative analysis via localized visualizations. Fig. <ref> a) depicts a real-world scene's local depth image of cluttered metal workpiece 1, while Fig. <ref> b) displays synthetic data from a structured-light camera (green) and rendered 3D model data (blue). Our generator can simulate shadow and sharp noise of structured-light cameras based on the difference between its synthetic data and 3D model renderings. Fig. <ref> b) further illustrates the efficacy of our proposed structured light-based data simulator. The depicted point cloud noise from our simulator closely matches the noise from a real structured-light camera, suggesting minimal data disparity between our simulated depth images and point clouds and the real counterparts. This infers that our simulator likely incurs less performance loss in visual perception tasks using depth images or point clouds. §.§ Quantitative Studies §.§.§ Object Detection and Instance Segmentation Table <ref> details the quantitative results of our data simulation methods in object detection and instance segmentation tasks, with varying input types (RGB or depth images) and test datasets (synthetic or real-world). Initially, a sim2real gap emerges when evaluating the trained model on real-world data across all objects. This gap is measured by subtracting the average precision results of intersection over union (IoU) in real-world data from the synthetic data results. This sim2real gap is positive for selected household objects, while qualitative results suggest its presence for two metal industrial objects, attributed to issues like shadows and sharp noise. Depth images are less sensitive to lighting and appearance changes, offering robustness against environmental variations. Using depth images as input can lessen a model's computational load, often containing less information than RGB images. They allow the distinction between object shape and texture, enhancing certain perception tasks' performance. For instance, object recognition often emphasizes shape over texture, and using a depth map as input enables model focus on the object's shape. Moreover, depth images provide additional information about camera-object distances, beneficial for tasks like 3D reconstruction and robot navigation. In addition, we conducted tests on the YOLOv7 model using datasets based on both our domain adaptation approach and domain randomization to ascertain the potential performance of our proposal in real-world projects. The experiments demonstrated that the domain adaptation-based RGB dataset achieved an IoU of 0.628 on the real dataset for the Metal Workpiece 1 object, whereas the model based on our proposed depth image achieved an IoU of 0.648. Both exceeded the IoU of 0.584 derived from the domain adaptation-based RGB dataset. The analysis suggests that in industrial grasping scenarios, which are usually static, the performance boost brought by domain adaptation surpasses that of domain randomization. 
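For reference, the IoU figures quoted above are, as we read them, bounding-box intersection-over-union scores, and the sim2real gap is the synthetic-minus-real difference of that metric. A minimal sketch of the box IoU computation (function name and corner convention are ours):

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# The sim2real gap defined above is then simply: score on the synthetic test set minus score on the real test set.
```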
In conclusion, utilizing depth images as input in deep learning perception tasks can bolster model robustness, efficiency, and performance, providing a more durable representation of 3D structure and facilitating shape-texture separation. Moreover, they exhibit less sensitivity to lighting and appearance changes. §.§ Robotic Grasping Experiment Incorporating sim2real perception into our half-model-based grasping pipeline resulted in an uptick in successful grasping rates from 95.6% to 98.0% with Dex-Net3.0. The introduction of precise object detection concentrates the sampling of grasp points more on a single object, ultimately increasing the probability of grasp points being centered on the object's plane. As a result, the success rate of grasping also improves. The sim2real approach significantly improved the model-based grasping task, which traditionally relies on error-prone and limited manual data annotation for object detection training. This method not only reduced project development time from two weeks to two days but also enhanced detection precision, thus bolstering system robustness. It mitigated issues related to oversized bounding boxes increasing pose estimation time and undersized ones compromising estimation accuracy and grasping success. The vision perception model in our experiment, trained within a simulated environment and leveraging our proposed synthetic structured light-based depth images, performed on par in real-world scenarios, successfully completing the robotic grasping task. The design of our data generation pipeline, mindful of sensor noise generation, enables effective domain adaptation in real-world applications. § CONCLUSION Despite the progress in deep learning for perception, its industrial application remains limited due to the high cost and time required for data annotation and model adaptation. To address this, we introduce a sim2real data generation tool designed for 3D structured light cameras, commonly used in industrial robotics. The tool uses physics-based simulations to generate realistic depth maps and annotations, enabling efficient sim2real transfer learning with minimal performance loss. This innovation is crucial for integrating deep learning into industrial contexts. Our quantitative analysis validates the tool's efficacy in perception tasks, highlighting that our physically realistic synthetic depth inputs accelerate domain adaptation and improve network performance. This significantly reduces the time needed for domain randomization, a common bottleneck in prior works. Looking ahead, our roadmap includes expanding the Fraunhofer IPA Bin-Picking dataset and optimizing pose estimation and robotic grasping algorithms using our generated dataset, catering to a broader range of industrial applications. § ACKNOWLEDGMENT This research has received funding from the German Research Foundation (DFG) and the National Science Foundation of China (NSFC) in project Crossmodal Learning, DFG TRR-169/NSFC 61621136008, partially supported by ULTRACEPT (778602). IEEEtran
http://arxiv.org/abs/2407.12887v1
20240717070426
Self-Adaptive Robust Motion Planning for High DoF Robot Manipulator using Deep MPC
[ "Ye Zhang", "Kangtong Mo", "Fangzhou Shen", "Xuanzhen Xu", "Xingyu Zhang", "Jiayue Yu", "Chang Yu" ]
cs.RO
[ "cs.RO" ]
Self-Adaptive Robust Motion Planning for High DoF Robot Manipulator using Deep MPC Ye Zhang^1,*, Kangtong Mo^1, Fangzhou Shen^1, Xuanzhen Xu^1, Xingyu Zhang^2, Jiayue Yu^2 and Chang Yu^2 ^1,*University of Pittsburgh, PA 15213, USA ^1University of Illinois Urbana-Champaign, IL 61820, USA ^1San Jose State University at San Jose, CA 95192, USA ^1Snap Inc. at Seattle, WA 98121, USA ^2George Washington University, DC 20052, USA ^2Warner Bro. Discovery at Culver City, CA 90232, USA ^2Northeastern University at Boston, MA 02115, USA ^1,*yez12@pitt.edu, ^1mokangtong@gmail.com, ^1fangzhou.shen0922@gmail.com ^1xuanzhenxu@gmail.com, ^2xingyu_zhang@gwmail.gwu.edu, ^2jiy048@gmail.com, ^2chang.yu@northeastern.edu July 22, 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT In contemporary control theory, self-adaptive methodologies are highly esteemed for their inherent flexibility and robustness in managing modeling uncertainties. Particularly, robust adaptive control stands out owing to its potent capability of leveraging robust optimization algorithms to approximate cost functions and relax the stringent constraints often associated with conventional self-adaptive control paradigms. Deep learning methods, characterized by their extensive layered architecture, offer significantly enhanced approximation prowess. Notwithstanding, the implementation of deep learning is replete with challenges, particularly the phenomena of vanishing and exploding gradients encountered during the training process. This paper introduces a self-adaptive control scheme integrating a deep MPC, governed by an innovative weight update law designed to mitigate the vanishing and exploding gradient predicament by employing the gradient sign exclusively. The proffered controller is a self-adaptive dynamic inversion mechanism, integrating an augmented state observer within an auxiliary estimation circuit to enhance the training phase. This approach enables the deep MPC to learn the entire plant model in real-time and the efficacy of the controller is demonstrated through simulations involving a high-DoF robot manipulator, wherein the controller adeptly learns the nonlinear plant dynamics expeditiously and exhibits commendable performance in the motion planning task. MPC, robust control, self-adaptive control, robotics motion planning. § INTRODUCTION In the past decade, adaptive control strategies have gained considerable prominence within control theory and robotics application <cit.>. Their capacity to rectify modeling inaccuracies and mitigate unforeseen disturbances has rendered them highly sought after as more intelligent and resilient control methodologies are pursued. A particularly potent and widely embraced adaptive control variant involves deploying a deep learning method <cit.>. 
With their remarkable ability to approximate nearly any function within an acceptable margin of error, these methods have been extensively applied beyond control theory, notably in data science, for managing large and intricate data sets. Within traditional control theory, deep learning primarily functions as controller estimation mechanisms, enabling real-time learning of uncertainties in plant dynamics or disturbances <cit.>, thereby progressively enhancing controller performance. For instance, in the works of <cit.>, adaptive dynamic inversion controllers were formulated using a neural network state observer—referred to as a modified state observer—within an auxiliary estimation loop to accommodate modeling uncertainties or unknown disturbances in non-affine <cit.>, non-square nonlinear systems <cit.> extended this control scheme into an event-triggered control framework. The amalgamation of MPC with neural networks leverages the predictive capabilities of MPC while enhancing its adaptive nature through neural network-based learning. This synergy allows for real-time adjustments to control strategies based on the continuous influx of new data <cit.>, thereby maintaining optimal performance despite uncertainties and external perturbations. The ability to anticipate future states and control inputs within MPC, combined with the adaptive learning prowess of neural networks, culminates in a sophisticated control methodology that is both robust and highly adaptable to dynamic environments. Recent strides in deep learning have culminated in the advancement of MPC, which stands as more potent and intrinsically more intricate iterations of the classical control method, typically characterized by substantially larger architectures. A thorough analysis of MPC, encompassing their evolution, predominant models <cit.>, applications, and prospective research avenues, is presented in <cit.>. Within the sphere of control theory, MPC are extensively harnessed for reinforcement learning applications <cit.>. The deep reinforcement learning framework is comprehensively delineated in <cit.>, accompanied by discussions on its practical applications and extant implementations. In another investigation, combine MPC and reinforcement learning to regulate a quadcopter drone in  <cit.> show successful implementation and high performance. Furthermore, <cit.> explored MPC with DRL for UAV trajectory tracking, employing a baseline controller that concurrently performs initial control while training the deep MPC controller <cit.>. Upon the MPC controller achieving sufficient policy generation<cit.>, control is transitioned to the traditional controller <cit.>, with online training continuing for varied trajectories not included in the initial training set. However, few studies focus on self-adaptive control with deep learning methods on motion planning tasks for high-DoF robot manipulation with unknown environment disturbance. This paper focuses on combining the reinforcement learning method with MPC for high-DOF robot manipulators in motion planning tasks with external disturbance on its different parts. The contributions of the paper are: * we develop a novel deep MPC algorithm concerning the unknown disturbance from external environment. The robustness of our method are examine through our simulation work. * Our deep MPC method shows high performance and strong robustness not only for the high-DoF robot manipulator, but also in other robot platform. 
We examine our method on different types of robot in simsultion environment, which shows strong extensity. § METHODOLOGY Consider the dynamics of an uncertain, nonlinear system: ẋ_k = f(x_k, u_k) + Bu_k+ξ(u_k) where f(x_k) is the nonlinear, known system dynamics, B is the control matrix, u_k is the control signal, and d(x_k) is the state-dependent uncertainty. The system can be simplified by distributing B: ẋ_k = f(x_k) + f̌(x_k) + Bu_k f̌(x_k) = Bξ(x_k) The objective of performance is to maneuver the uncertain system towards a target system characterized by the specified dynamics is: ẋ^d_k = f^*(x^d_k,u_L) where u_L is some nominal control signal. To achieve this, we utilize a dynamic inversion controller to substitute the original system's dynamics with the desired ones. Nevertheless, a notable drawback of dynamic inversion controllers is their susceptibility to modeling inaccuracies in the system dynamics; consequently, the inherent uncertainty of the plant presents a substantial challenge. To remedy this, past literature <cit.> have relied on learning the uncertainty using a modified state observer. Given that the primary objective of the controller is to steer the actual system state x_k towards the desired system state x^d_k, we examine the following equation: ẋ_k - ẋ^d_k + Γ(x_k-x^d_k) = 0 which ensures first-order asymptotically stable error dynamics <cit.>. It is important to note that Γ is a user-defined, positive-definite gain matrix. By substituting the system dynamics into equation (<ref>): f̌(x_k) + Bu_k - f^*(x^d_k,u_L) + Γ(x_k-x^d_k)=0 Let's reorganize above and then we can get: Bu_k = f^*(x^d_k,u_L) + f̌(x_k))-Γ(x_k-x^d_k) Assuming that the uncertainty estimate, f̂(x_k), is available and f̂(x_k) ≈f(x_k) + f̌(x_k), this can be substituted for the system dynamics as: Bu_k = f^*(x^d_k,u_L)-f̂(x_k)-Γ(x_k-x^d_k) Then, if B is invertible, the dynamic inversion controller can be completed by multiplying by the inverse as: u_k =B^-1(f^*(x^d_k,u_L)-f̂(x_k)-Γ(x_k-x^d_k)) If matrix B is non-invertible, slack variables can be seamlessly integrated <cit.>. This involves augmenting the formulation by introducing and subtracting the slack matrix B_s and the slack control signal u_s, thus transforming the control matrix into an invertible configuration <cit.>. First, the tracking error, estimation error, and estimate tracking error are defined as follows: e_r ≜x_k - x^d_k The tracking error can now be expressed as a combination of the estimation error and the estimate tracking error as follows: e_r = x_k - x̂_k+X̂_k -x^d_k e_r=e_a+ê_r Here let's expand the estimate tracking error derivative as: [ ê̇_r= f̂(x_k)+Bu_k -Λ(x_k-x̂_k) ] Now using Bu_k = f^*(x^d_k,u_L)-f̂(x_k)-Γ(x_k-x^d_k): ê̇_r= -(Γ(x_k-x^d_k)+Λ(x_k-X̂_k)) Finally, we can get: ė_r=f̃(x_k) - Γ(x_k-x^d_k) The proposed method integrates concepts from <cit.>, extending the framework to accommodate robust scenarios. The following optimization problem characterizes the proposed Robust Model Predictive Control scheme for setpoint tracking <cit.>, effectively eliminating the necessity to specify m_x and m_u: 𝒮_N(x_k,y^d_k) = min J_N(x_k,y^d_k;v(·|k),y^ζ,x^ζ,v^ζ) subject to x(0|k) = x_k, x_k+1|k = f(x_k+1|k, v_k+1|k), g_j,κ(x_k+1|k,v_k+1|k) + c_η1-ρ_d^k/1-ρ_dw_d ≤0, x^s = f(x^ζ,v^ζ), y^ζ= o_κ(x^ζ,v^ζ), x_L|k ∈𝒳_f(x^ζ,α), k = 0,...,L-1, η= 1,...,n, with the objective function J(x_k,y^d_k;v(·|k),x^ζ,v^ζ) := ∑_k=0^N-1(||x_k+1|k-x^ζ||_ℚ^2 + ||v(k+1|k)-v^ζ||_ℝ^2), ℚ,ℝ≻ 0. 
The terminal set is given as 𝒱(x^ζ,α) := {x ∈ℝ^n | √(𝒱_δ(x,x^ζ)) + 1-ρ_d^N/1-ρ_dw_d≤α}. The optimization problem (<ref>) is solved at time k with the initial state x_k. The optimal input sequence is denoted by v^*(·|k) with the control law denoted as ℒ(x_k,y_k^d)=v^*(0|k). The predictions over the horizon N are carried out with respect to the nominal system description in (<ref>). § EXPERIMENT RESULTS Through the subsequent simulation experiments, we aim to validate the theoretical insights and intuitive conjectures previously discussed <cit.>. Specifically, our objectives are to demonstrate the algorithm's efficacy in managing a sparse binary reward structure <cit.>, to establish that incorporating the running cost facilitates superior convergence, and to underscore the critical significance of the Model Predictive Control (MPC) time horizon during the learning process <cit.>. We did 6 experiments based on the UR5 robot arm in the MuJoCo <cit.> simulation platform, following 6 different paths in different motion planning tasks. Fig.<ref> shows all the tracking errors for each scenario. From Fig.<ref> to Fig.<ref>, the results show that our method shows strong robustness and the error will converge to zero with time growing on. The training process entailed utilizing images distributed uniformly across each class, with the training regimen employing batches comprising 10 samplings each. The model's performance evaluation was conducted by calculating cross-entropy loss and assessing accuracy for each batch over successive epochs, as depicted in Fig. <ref>. Figure <ref> illustrates the efficacy of incorporating regularization techniques during the training of our model: the validation accuracy remains constrained throughout the training process. § CONCLUSION This paper introduced a self-adaptive deep MPC algorithm, which builds upon prior research integrating trajectory optimization with global value function learning. Initially, we demonstrate that leveraging a value function within the heuristic framework yields a temporally global optimal solution. Furthermore, we illustrate that employing the running cost enhances the convergence of value function estimation through an importance sampling scheme and effectively accounts for system uncertainties. We intend to expand the value function for future research to encapsulate additional information. For instance, conditioning the value function on a local robot-centric map could enable decision-making in dynamic environments, facilitating obstacle avoidance. IEEEtran
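Referring back to the adaptive dynamic inversion law derived in the Methodology section, u_k = B^{-1}(f^*(x^d_k, u_L) - f̂(x_k) - Γ(x_k - x^d_k)), the sketch below applies that law to a toy two-state plant under a simple Euler discretization. All numerical choices here (plant, gains, desired dynamics) are ours and purely illustrative, and the uncertainty estimate f̂ is taken as ideal rather than learned online by the deep MPC as in the paper.

```python
import numpy as np

def dynamic_inversion_step(x, x_d, f_hat, f_star, B, Gamma, u_L=None):
    """u_k = B^{-1} ( f*(x_d, u_L) - f_hat(x_k) - Gamma (x_k - x_d) ),
    which enforces first-order stable tracking-error dynamics."""
    rhs = f_star(x_d, u_L) - f_hat(x) - Gamma @ (x - x_d)
    return np.linalg.solve(B, rhs)

# Toy illustration (our choices, not the paper's): plant x_dot = f(x) + B u + uncertainty.
dt, Gamma, B = 0.01, np.diag([5.0, 5.0]), np.eye(2)
f_true = lambda x: np.array([x[1], -np.sin(x[0])]) + 0.2 * np.tanh(x)  # plant including its uncertainty
f_hat  = lambda x: np.array([x[1], -np.sin(x[0])]) + 0.2 * np.tanh(x)  # ideal estimate (would be learned online)
f_star = lambda x_d, u_L: -1.0 * x_d                                   # desired dynamics: x_d' = -x_d

x, x_d = np.array([1.5, 0.5]), np.array([1.0, 0.0])
for _ in range(1000):
    u = dynamic_inversion_step(x, x_d, f_hat, f_star, B, Gamma)
    x = x + dt * (f_true(x) + B @ u)      # Euler step of the true plant under the computed input
    x_d = x_d + dt * f_star(x_d, None)    # Euler step of the reference (desired) dynamics
print("tracking error:", np.linalg.norm(x - x_d))  # decays like exp(-Gamma t) when f_hat matches the plant
```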
http://arxiv.org/abs/2407.12548v1
20240717132934
Exact path-integral representation of the Wright-Fisher model with mutation and selection
[ "David Waxman" ]
q-bio.PE
[ "q-bio.PE", "cond-mat.stat-mech" ]
Exact path-integral representation of the Wright-Fisher model with mutation and selection David Waxman July 22, 2024 ========================================================================================== § INTRODUCTION The Wright-Fisher model describes a biological population containing a finite number of individuals<cit.>. It represents a fundamental model (or class of models) within population genetics that continues to be of relevance <cit.>. Such a model, at heart, describes the stochastic fluctuations of allele frequencies that occur in finite populations. The fluctuations arise because the parents of one generation do not all make the same contribution to the next generation. The Wright-Fisher model thus describes random genetic drift. In addition to random genetic drift, the model can also incorporate the dynamical effects of selection, mutation and other evolutionary forces, and has been used in many different situations. For example, the model lies at the heart of forward simulations<cit.>, as well as in inference<cit.>, in evolutionary games<cit.>, and it connects intimately with the coalescent<cit.>. The Wright-Fisher model is a discrete state, discrete time, Markov chain. The discrete states correspond to possible allele frequencies (or sets of allele frequencies), and the discrete time corresponds to generations. The model has been analysed under the diffusion approximation<cit.>, where states and times are treated as taking continuous values. Recently, the Wright-Fisher model of a biallelic locus, subject to multiplicative selection but in the absence of mutation, has been considered under the diffusion approximation, and the time-dependent transition density has been represented as a path-integral<cit.>. Path integrals were introduced into quantum theory by Feynman<cit.>, and into the mathematics of diffusion by Wiener<cit.>. Generally, a probability distribution (or closely related object), associated with a diffusion-like process, is represented as a `sum' of contributions over all paths or trajectories between two points or states. The `sum' over paths is often an integral, and this is the origin of the name `path integral'. An alternative name is `functional integral', since integration over paths/trajectories can also be thought of as integration over functions - namely the trajectories themselves. Related approaches, termed functional methods, have a large variety of applications<cit.>. The introduction of path integrals in physics has provided alternative calculational routes to problems. Beyond this, path integrals, because they focus on visualisable trajectories, may be a source of new intuition, and may suggest new ways to proceed and new approximations<cit.>. In the present paper we work fully within the framework of a Wright-Fisher model where both states and time are discrete; we do not employ the diffusion approximation, although we relate some results to this approximation. We consider a randomly mating sexual population, where a general scheme of selection acts at an unlinked locus, which is also subject to mutation. We derive an exact path-integral representation of the time-dependent transition probability of this model. We first consider a locus with two alleles, and then generalise to the locus having n alleles, where n can take the values 2, 3, 4, ... . There are numerous examples of studies of loci with two alleles, and there are increasing numbers of examples where multiple (>2) alleles exist at a locus<cit.>.
§ THEORETICAL BACKGROUND FOR THE CASE OF TWO ALLELES Consider a population that is diploid and sexual, and reproduces by random mating. We assume there is an equal sex ratio and no sexual dimorphism. Time is measured in generations, and labelled by t=0,1,2,3,…. The organisms in the population are subject to mutation and selection at a single unlinked locus. The locus has two alleles that we refer to as A_1 and A_2. We shall focus on just one of the two alleles, say A_1, and often call it the focal allele. With two alleles, the sum of their frequencies is unity. Thus specification of the frequency of the focal allele at any time determines the frequency of the other allele at this time. The state of the population can thus be described just in terms of the frequency of the focal allele, which we will often just call the frequency. §.§ Effectively infinite population We first consider a very large (effectively infinite) population. We take a generation to begin with adults, who sexually reproduce via random mating and then die. Each mating yields the same very large number of offspring. If the frequency of the focal allele in adults is x, then the frequency of the focal allele in offspring is x^∗=f_mut(x) where f_mut(x) takes into account frequency changes caused by mutation. This function is given by f_mut(x)=( 1-u) x+v(1-x) where u is the probability that an A_1 allele in a parent mutates to an A_2 allele in an offspring, and v is the corresponding A_2 to A_1 mutation probability. In the absence of mutation f_mut(x)=x which is equivalent to x^∗=x. We assume viability selection determines the probability of different offspring surviving to maturity. If the frequency of the focal allele in offspring (i.e., after mutation) is x^∗, then the frequency of this allele after viability selection has acted is f_sel(x^∗), where f_sel(x^∗) takes into account frequency changes of x^∗ due to selection. Some non selective thinning may occur at this point, but providing the population size remains very large, this does not cause any further changes in allele frequencies. Selection acts on variation in the population, and when there is no variation there are no effects of selection. There is no variation when carriers of only one allele are present in the population, which corresponds to x=0 and x=1. We take f_sel(x)=x+σ(x)x(1-x) where the function σ(x) (with |σ(x)|<∞) is determined by the particular scheme of selection that is operating, and the effect of selection in f_sel(x), namely σ(x)x(1-x), has the required property of vanishing at both x=0 and x=1. A few examples of σ(x) are as follows. * If the relative fitnesses of the three genotypes A_1A_1, A_1A_2 and A_2A_2 are 1+s, 1+hs and 1, respectively, then σ(x)=s×[ ( 1-2h) x+h] /[1+sx^2 +2hsx(1-x)]. To leading order in s (assuming |s|≪1), we have σ(x)=s×[ ( 1-2h) x+h], in which case any h≠1/2 will lead to σ(x) varying with x. * If selection is additive, and the relative fitnesses of the three genotypes are 1+2s, 1+s and 1, respectively, then σ(x)=s/(1+2sx), and to leading order in s we have σ(x)=s, i.e., a constant. * If selection is multiplicative, and the relative fitnesses of the three genotypes are ( 1+s) ^2, 1+s and 1, respectively, then σ(x)=s/(1+sx), and to leading order in s we have σ(x)=s., i.e., a constant. Thus weak multiplicative selection is very similar, in effect, to weak additive selection. 
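As a concrete illustration of the deterministic recursion X(t+1) = F(X(t)) defined above, the following short Python sketch composes f_mut and f_sel and iterates the map, using the first example of σ(x) (genotype fitnesses 1+s, 1+hs and 1). Parameter values and function names are ours and are chosen only for illustration.

```python
def f_mut(x, u, v):
    """Frequency of the focal allele in offspring: A1 -> A2 with probability u, A2 -> A1 with probability v."""
    return (1.0 - u) * x + v * (1.0 - x)

def sigma_dominance(x, s, h):
    """sigma(x) for genotype fitnesses 1+s, 1+hs, 1 (the first example in the text)."""
    return s * ((1.0 - 2.0 * h) * x + h) / (1.0 + s * x**2 + 2.0 * h * s * x * (1.0 - x))

def f_sel(x, sigma):
    """Frequency after viability selection; the selective term vanishes at x = 0 and x = 1."""
    return x + sigma(x) * x * (1.0 - x)

def F(x, u, v, sigma):
    """One deterministic generation: mutation followed by selection."""
    return f_sel(f_mut(x, u, v), sigma)

# Illustrative parameter values (ours, not the paper's): iterate X(t+1) = F(X(t)).
u, v, s, h = 1e-5, 1e-5, 0.02, 0.5
sigma = lambda x: sigma_dominance(x, s, h)
x = 0.05
for t in range(500):
    x = F(x, u, v, sigma)
print(x)  # frequency of the focal allele after 500 generations in an effectively infinite population
```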
We note that while small selection coefficients (i.e., small values of s, and more generally small values of σ(x)) are common in nature <cit.>, strongly selected alleles do sometimes occur, for example alleles that are appreciably deleterious <cit.>. Accordingly, we will not make the assumption that |σ(x)| is small, and will simply assume that σ(x) follows, without any approximation, from a selection scheme. The frequency of the A_1 allele in offspring, after selection and mutation have acted, can be expressed in terms of the frequency, x, in adults, as f_sel(x^∗)≡ f_sel(f_mut(x)) and we write F(x)=f_sel(f_mut(x)). Let X(t) denote the frequency of the A_1 allele in the adults of generation t. Then in a very large population, the frequency obeys the deterministic equation X(t+1)=F(X(t)). §.§ Finite population We now consider a finite population, where N adults are present at the start of each generation. The processes of reproduction, mutation and viability selection occur as in an effectively infinite population. However, after viability selection there is a round of non selective sampling/number regulation of the mature offspring, that leads to N individuals being present in the population. These become the N adults of the next generation. The behaviour of this population can be described by a Wright-Fisher model, as is shown in textbooks<cit.>. We will now use such a model (which can, like the diffusion approximation, incorporate an effective population size<cit.>). For the population under consideration, let 𝐌 denote the transition matrix of the Wright-Fisher model. We write the (i,j) element of 𝐌 as M_i,j, where i and j can take the values 0,1,2,…,2N. Then for a population where the focal allele has the frequency j/(2N) in one generation, the probability that the focal allele will have the frequency i/(2N) in the next generation is M_i,j. With ab=a!(a-b)!b! a binomial coefficient, we have<cit.> M_i,j=2Ni[ F( j/2N) ] ^i[ 1-F( j/2N) ] ^2N-i. The transition matrix is always normalised in the sense that ∑_i=0^2NM_i,j=1 for all j, and invoking normalisation can resolve ambiguities (for example, when F(x)=0, normalisation ensures M_0,0=1 so ∑_i=0^2NM_0,j=1 for all j). The transition matrix is key to many calculations. If 𝐏(t) is a column vector containing the probabilities of all 2N+1 possible frequency states of the population in generation t, i.e., the probability distribution for generation t, then using the transition matrix we can determine the probability distribution for generation t+1, namely 𝐏 (t+1)=𝐌𝐏(t). Furthermore, using the elements of the transition matrix, we can determine the probability that the population passes through a particular set of frequency states over time, i.e., displays a particular frequency trajectory. For example, if the population starts with frequency l/(2N) in one generation, then the probability that in the next 3 generations the population will have the frequencies k/(2N), j/(2N) and i/(2N), respectively, is given by M_i,jM_j,kM_k,l. §.§.§ Alternative notation for the transition matrix We shall now write elements of 𝐌 in a different notation that will be useful for our purposes. We introduce the notion of an allowed frequency of an allele which is given by allowed frequency=integer/2N where the integer can take any of the values 0,1,2,…,2N. To keep the notation as simple as possible, we shall, for a locus with two alleles, reserve the use of a, x, x^', x(r) (for various integral r) and z, as values of allowed frequencies. 
In terms of the allowed frequencies x^' and x, we write the elements of 𝐌 as M(x^'|x)=2N2Nx^'[ F(x)] ^2Nx^'[ 1-F(x)] ^2N(1-x^') which gives the probability of a transition from the population state (i.e., frequency) x, in one generation, to state x^' in the next generation. Thus if x^'=i/(2N) and x=j/(2N), with i and j any two of the integers 0,1,2,…,2N, then M(x^'|x) coincides with M_i,j in Eq. (<ref>). We shall refer to a locus that is not subject to selection (but which may be subject to mutation), as a neutral locus. The transition matrix of a neutral locus, written M^(0)(x^'|x), is obtained from M(x^'|x) by setting σ(x) to zero for all x. With x^∗=f_mut(x) this leads to M^(0)(x^'|x)=2N2Nx^'( x^∗) ^2Nx^'( 1-x^∗) ^2N(1-x^'). A central aspect of the analysis we present is that the form of F(x) in Eq. (<ref>) allows us to write the transition matrix M(x^'|x) of Eq. (<ref>) as the exact product of two factors: M(x^'|x)=M^(0)(x^'|x)× e^C(x^'|x) where M^(0)(x^'|x) is the neutral result given in Eq. (<ref>), while, with x^∗ given by Eq. (<ref>), we have C(x^'|x)=2N×[x^'ln(1+σ(x^∗)( 1-x^∗) )+(1-x^')ln(1-σ(x^∗)x^∗)] - see the first subsection of Methods for details. The factorisation in Eq. (<ref>) says the transition matrix M(x^'|x) depends on a `core' neutral/mutational part, M^(0) (x^'|x), and a factor e^C(x^'|x) that is `selectively controlled' in the sense that if there is no selection (i.e., if σ(x) vanishes for all x) then C(x^'|x) vanishes and the factor e^C(x^'|x) is simply unity. We note that while C(x^'|x) depends on selection, and precisely vanishes in the absence of selection, it also depends on the population size, N, and the mutation rates u and v. §.§ Trajectories and path integral We now consider a trajectory of the frequency, which starts at frequency x(0) in generation 0, has frequency x(1) in generation 1,…, and frequency x(t) in generation t. To represent such a trajectory, which runs from time 0 to time t, we use the notation x]_0^t=( x(0),x(1),...,x(t)). This expresses the trajectory as a row vector with t+1 elements, all of which are allowed frequencies. The probability of occurrence of the trajectory [x]_0^t in Eq. (<ref>) is obtained by multiplying together the appropriate M(x^'|x) and is given by ∏_r=1^tM(x(r)|x(r-1)=∏_r=1^tM^(0)(x(r)|x(r-1))× e^∑_r=1^tC(x(r)|x(r-1)). We write this as probability of [x]_0^t=W^(0)( [x]_0^t) × e^C( [x]_0^t) where W^(0)( [x]_0^t) =∏_r=1^tM^(0) (x(r)|x(r-1)) and C( [x]_0^t) =∑_r=1^tC(x(r)|x(r-1)). Equation (<ref>) says that the trajectory [x]_0^t, in the presence of selection, has a probability which we can write as the product of the probability of the trajectory under neutrality, W^(0)( [x]_0^t), and the factor e^C( [x]_0^t) which is selectively controlled. The presence of e^C( [x]_0 ^t) in Eq. (<ref>) indicates how non-zero selection (in combination with other forces, via its N, u and v dependence), modifies the probability of occurrence of an entire trajectory under neutrality. Let K(z,t|a,0) denote the overall probability of going from an initial allowed frequency of a at time 0 to the allowed frequency of z at time t. In conventional (Markov chain) language, K(z,t|a,0) is determined from matrix elements of 𝐌^t, where 𝐌 is the transition matrix that was introduced above (with elements given in Eq. (<ref>)). By contrast, in trajectory language, all possible trajectories, between the end-points (a at time 0 and z at time t) contribute to K(z,t|a,0). 
We can thus write K(z,t|a,0) as sum of the trajectory probability, W^(0)[𝐱]× e^C( [x]_0^t), over all possible trajectories. That is K(z,t|a,0)=x(t)=zx(0)=a∑⋯∑W^(0)( [x]_0^t) e^C( [x]_0^t) . The notation in Eq. (<ref>) denotes a sum over all trajectories whose end points, x(0) and x(t), have the specific allowed frequencies a and z, respectively, while x(1), x(2), ...,x(t-1), which give the state of the population at intermediate times, take values that cover all allowed frequencies. In Figure 1 we illustrate two trajectories that contribute to a transition probability. Equation (<ref>) is an exact `path integral' or `sum over paths' representation of the finite time transition probability in a two allele Wright-Fisher model where states and times are discrete, and the model incorporates mutation and a general form of selection. Since, by construction, W^(0)( [x]_0^t) is independent of selection, and since C( [x]_0^t) vanishes when selection vanishes, the transition probability corresponding to that in Eq. (<ref>), when there is no selection, namely the neutral probability, is written as K^(0)(z,t|a,0) and given by K^(0)(z,t|a,0)=x(t)=zx(0)=a∑⋯∑W^(0)([x]_0^t). The quantity K(z,t|a,0) in Eq. (<ref>) can also be interpreted as the probability distribution of the frequency (of the A_1 allele) at time t, which is a random variable that we write as X(t). In particular, K(z,t|a,0) is the value of the distribution of X(t), when evaluated at frequency z, given that the frequency X(0) had the definite value a. Thus, for example, the expected value of X(t), given X(0)=a, is E[X(t)|X(0)=a]=∑_zz× K(z,t|a,0) where the sum runs over all allowed values of z. §.§ Approximation when there is no mutation and selection is weak We now consider a special case of the distribution K(z,t|a,0). We proceed under the following assumptions. * There is no mutation (u=v=0). Equation (<ref>), with no mutation, entails replacing x^∗ by x and we obtain the no mutation, neutral (no selection) form of the transition matrix that we write as M^(0,0)(x^'|x)=2N2Nx^'x^2Nx^'( 1-x) ^2N(1-x^'). * Selection is multiplicative. We take the A_1A_1, A_1A_2 and A_2A_2 genotypes to have relative fitnesses of ( 1+s) ^2, 1+s and 1, respectively. We then have σ(x)=s/(1+sx) and from Eq. (<ref>) obtain C(x^'|x)=2N[ x^'ln( 1+s) -ln( 1+sx) ] . * Selection is weak (|s|≪1) In terms of the scaled strength of selection R=2Ns which, unlike s need not be small, the expansion of C(x^'|x) in s is given by C(x^'|x)≃ R·( x^'-x) -R^2/4N( x^'-x^2) with corrections of order s^3. This yields C( [x]_0^t)≡∑_r=1^tC(x(r)|x(r-1) ≃(R-R^2/4N)[ x(t)-x(0)] -∑_r=0^t-1U( x(r)) where U( x) =R^2/4Nx( 1-x) . Thus in the absence of mutation, but with weak selection, we have the approximation K(z,t|a,0)≃ e^[ R-R^2/(4N)] ( z-a) x(t)=zx(0)=a∑⋯∑W^(0,0)( [x]_0^t) e^-∑_r=0^t-1U( x(r)) . The path integral representation of the transition probability density, under the diffusion approximation, which involves continuous frequencies and continuous time, can be written as K_diffusion(z,t|a,0)=e^R·( z-a) ∫_x(0)=a ^x(t)=zP([x]_0^t)e^-∫_0^tU(x(r))drd[x] where the integration is over all continuous trajectories that start at frequency a at time 0 and end at frequency z at time t, with P([x]_0^t) a `weight' associated with neutral trajectories, and d[x] the measure of the path integral<cit.>. A comparison of the approximate Wright-Fisher transition probability in Eq. (<ref>) and the diffusion transition probability density in Eq. (<ref>) indicates that the two results are similar. 
In particular, corresponding to the expressions e^[ R-R^2/(4N)](z-a) and e^-∑_r=1^tU(x(r)) that are present in the Wright-Fisher result are, respectively, the expressions e^R·(z-a) and e^-∫_0^tU(x(r))dr in the diffusion result. The analogue of the Wright-Fisher neutral, mutation-free, trajectory probability, W^(0,0)( [x]_0 ^t), that is present in Eq. (<ref>), is the neutral weight, P([x]_0^t), that is present in Eq. (<ref>). § THEORETICAL BACKGROUND FOR MULTIPLE ALLELES We shall now generalise the above. We again consider a population that is diploid, reproduces sexually by random mating, has an equal sex ratio, exhibits no sexual dimorphism, and evolves in discrete generations. Selection again occurs at a single unlinked locus, but now there are n alleles at the locus, where n is arbitrary (i.e., n=2,3,4,...) and we write allele i (for i=1,2,...,n) as A_i. When there are three or more alleles, the difference, compared with two alleles, is that knowledge of the frequency of one allele is not enough to specify the state of the population. In fact, we need to follow the behaviour of n-1 allele frequencies, while one allele frequency can be treated as being determined by all other allele frequencies (since allele frequencies sum to unity). However, we shall not proceed in this way; we shall treat all alleles as being on an equal footing, and follow the behaviour of all n allele frequencies. §.§ Effectively infinite population We first consider a very large (effectively infinite) population. In what follows, we shall use 𝐱 to denote an n component column vector whose i'th element, x_i, is the frequency of allele A_i in adults (i=1,2,...,n). The frequency of all alleles in offspring is then 𝐱^∗ =𝐐𝐱 where 𝐐 is an n× n matrix whose (i,j) element, Q_i,j, is the probability that an A_j allele mutates to an A_i allele. Elements of 𝐐 are non-negative, and satisfy ∑_i=1^nQ_i,j=1 for all j (so the sum of all mutated frequencies is unity). We next assume that viability selection acts and determines the probability of different offspring surviving to maturity. The frequencies, after viability selection, are given by 𝐟_sel(𝐱^∗), where 𝐟_sel (𝐱^∗) takes into account frequency changes of 𝐱 ^∗ due to selection, and is an n component column vector. We shall shortly exploit a property of 𝐟_sel(𝐱), that follows because selection acts on variation in a population. In particular, if the vector of allele frequencies, 𝐱, has an i'th element which is zero (x_i=0), then the i'th element of 𝐟_sel(𝐱), which we write, as f_sel,i(𝐱), also vanishes, since selection alone cannot salvage an allele after its absence from a population. This motivates us to take f_sel,i(𝐱) in the form f_sel,i(𝐱)=x_i× [1+G_i(𝐱)] where G_i(𝐱) is finite (| G_i(𝐱)| <∞) and is determined by the specific form of selection acting. Generally, ∑_i=1^nx_iG_i(𝐱)=0 and G_i(𝐱)≥-1 (ensuring that after selection, the sum of all allele frequencies is unity, and all alleles frequencies are non-negative). The set of allele frequencies in offspring, after selection and mutation have acted, can be expressed in terms of the set of frequencies 𝐱, in adults, as 𝐟_sel(𝐐𝐱) and we write 𝐅(𝐱)=𝐟_sel(𝐐𝐱). We now consider dynamics. Let 𝐗(t) denote an n component column vector containing the set of allele frequencies in generation t. Because we have an effectively infinite population, 𝐗(t) obeys the deterministic equation 𝐗(t+1)=𝐅(𝐗(t)). §.§ Finite population Consider now a finite population, where N adults are present in each generation. 
The quantity 𝐱 is still an n component vector whose i'th element, x_i, is the frequency of allele A_i in adults, but it has the added feature that all elements have values which are allowed frequencies (Eq. (<ref>)). That is, x_i≥0, ∑_i=1^n x_i=1, and each x_i is an integer divided by 2N. We shall call a vector that has this property an allowed set of allele frequencies. In the multiallele case we shall reserve the use of 𝐚, 𝐱, 𝐱^', 𝐱(r) (for various r), and 𝐳, for allowed sets of allele frequencies. We now write the transition matrix element for the probability of a transition from state 𝐱 to state 𝐱^' as M(𝐱^'|𝐱)=2N2N𝐱^'∏_i=1^n[ F_i(𝐱)] ^2Nx_i^' where 2N𝐦, with 𝐦 an n component column vector with integer elements, denotes a multinomial coefficient for n categories. We note that the transition matrix element, M(𝐱^'|𝐱), in its conventional matrix form, is an element of a matrix with vector indices, not scalars<cit.>. The transition matrix of a neutral locus has elements which are the zero selection limit of M(𝐱^'|𝐱), which we write as M^(0)(𝐱^'|𝐱), and which is given by M^(0)(𝐱^'|𝐱)=2N2N𝐱^'∏_i=1^n( x_i^∗) ^2Nx_i^' where 𝐱^∗ is given by 𝐱^∗=𝐐𝐱. As for the case of two alleles, a factorisation is possible; the form of 𝐟_sel(𝐱) in Eq. (<ref>) allows us to write the transition matrix, M(𝐱^'|𝐱) of Eq. (<ref>), as the exact product of two factors: M(𝐱^'|𝐱)=M^(0)(𝐱^'|𝐱 )× e^C(𝐱^'|𝐱) where M^(0)(𝐱^'|𝐱) is given in Eq. (<ref>) and C(𝐱^'|𝐱)=2N∑_i=1^nx_i^'ln(1+G_i(𝐱^∗)) - see the second subsection of Methods for details. §.§ Trajectories and path integral We now write a trajectory as 𝐱]_0^t=( 𝐱(0),𝐱(1),...,𝐱 (t)) in which each 𝐱(r) is an n component column vector containing an allowed set of allele frequencies, which gives the state of the population at time r. It follows that the trajectory [𝐱]_0^t in Eq. (<ref>), is an n×(t+1) matrix. The probability of this trajectory is ∏_r=1^t M(𝐱(r)|𝐱(r-1)= ∏_r=1^t M^(0)(𝐱(r)|𝐱(r-1))×exp( ∑_r=1 ^tC(𝐱(r)|𝐱(r-1))). We write this as probability of [𝐱]_0^t=W^(0)( [𝐱 ]_0^t) × e^C( [𝐱]_0^t) where W^(0)( [𝐱]_0^t) = ∏_r=1^t M^(0)(𝐱(r)|𝐱(r-1)) and C( [𝐱]_0^t) =∑_r=1^tC(𝐱 (r)|𝐱(r-1)). Let K(𝐳,t|𝐚,0) denote the overall probability of going from an initial state of the population corresponding to the allowed set of frequencies, 𝐚 at time 0, to state 𝐳 at time t, which is an another allowed set of frequencies. All possible trajectories between these end-points contribute to K(𝐳,t|𝐚,0). We thus write K(𝐳,t|𝐚,0) as sum of the probabilities W^(0)( [𝐱]_0^t) × e^C( [𝐱 ]_0^t) over all possible trajectories. That is K(𝐳,t|𝐚,0)=𝐱(t)=𝐳𝐱(0)=𝐚∑⋯∑W^(0)( [𝐱]_0^t) e^C( [𝐱]_0^t) . The notation in Eq. (<ref>) denotes a sum over all trajectories whose end points, 𝐱(0) and 𝐱(t), have the specific (allowed set) values 𝐚 and 𝐳, respectively, while 𝐱(1), 𝐱(2), ...,𝐱(t-1), which give the state of the population at intermediate times, take values that cover all allowed sets of frequencies. Equation (<ref>) is an exact `path integral' representation of the finite time transition probability in a multiple (n) allele Wright-Fisher model where states and times are discrete. Since, by construction, W^(0)( [𝐱]_0^t) is independent of selection, and since C( [𝐱]_0^t) vanishes when there is no selection, the probability of going from state 𝐚 at time 0 to state 𝐳 at time t, when there is no selection, is K^(0)(𝐳,t|𝐚,0)=𝐱 (t)=𝐳𝐱(0)=𝐚∑⋯∑ W^(0)( [𝐱]_0^t). §.§ Approximation when there is no mutation and selection is weak We now consider a special case of the distribution K(𝐳 ,t|𝐚,0) of Eq. 
(<ref>), when there is no mutation and selection is multiplicative and weak. When there is no mutation the matrix 𝐐 becomes the n× n identity matrix. Under multiplicative selection, we take the A_iA_j genotype to have a fitness proportional to (1+s_i)(1+s_j). It follows that F_i (𝐱)=x_i( s_i-∑_j=1^ns_jx_j) /( 1+∑_j=1^ns_jx_j) hence G_i (𝐱)=( s_i-∑_j=1^ns_jx_j) /[ 1+∑_j=1^ns_jx_j] and C(𝐱^'|𝐱)=2N[ ∑_i=1^nx_i^'ln( 1+s_i) -ln( 1+∑_i=1^ns_ix_i) ] . We take weak selection to correspond to |s_i|≪1 for all i, then similar to the case of two alleles, we obtain approximate results by expanding C(𝐱^'|𝐱) in the s_i, and discarding third and higher order terms. We shall express results in terms of scaled selection strengths that are given by 𝐑=2N𝐬 or R_i=2Ns_i where 𝐬 is a column vector of the s_i. With δ_i,j denoting a Kronecker delta, a T superscript denoting the transpose of a vector, and 𝐕(𝐱) denoting an n× n matrix with elements V_i,j(𝐱)=x_iδ_i,j-x_ix_j we obtain C( [𝐱]_0^t) ≡∑_r=1^tC(𝐱 (r)|𝐱(r-1) ≃𝐑^T[ 𝐱(t)-𝐱(0)] +ϕ( 𝐱(t)) -ϕ( 𝐱(0)) -∑_r=0 ^t-1U( 𝐱(r)) where ϕ( 𝐱) =-∑_i=1^nR_i^2/4N x_i and U( 𝐱) =1/4N𝐑^T𝐕 (𝐱)𝐑. Using Eqs. (<ref>), (<ref>) and (<ref>) in Eq. (<ref>), combined with W^(0,0)( [𝐱]_0 ^t), which denotes the probability of trajectory [𝐱 ]_0^t in the absence of mutation and selection (W^(0,0)([𝐱]_0^t) is constructed from a product of terms of the form 2N2N𝐱^'∏_i=1 ^nx_i^2Nx_i^' - cf. Eq. (<ref>)). we obtain the approximation K(𝐳,t|𝐚,0)≃ e^𝐑^T( 𝐳 -𝐚) +ϕ( 𝐳) -ϕ( 𝐚) 𝐱(t)=𝐳𝐱 (0)=𝐚∑⋯∑W^(0,0)( [𝐱]_0^t) e^-∑_r=0^t-1U( 𝐱(r)) . § DISCUSSION In this work we have derived an exact `path integral' representation of the time-dependent transition probability in a Wright-Fisher model. We have explicitly considered the case of two alleles, where the population's description is in terms of a focal allele, and the case of an arbitrary number of n alleles, where the description is in terms of all n allele frequencies, with all frequencies treated as having the same status. For the case of two alleles, we have compared the Wright-Fisher transition probability with a path integral representation of the corresponding quantity (a transition density) under the diffusion approximation. The result for the diffusion approximation result was derived for multiplicative selection, in the absence of mutation, and we have established the relation of this with the exact Wright-Fisher result in this case. The Wright-Fisher path integral, derived in this work for two alleles, applies for a wider class of fitness functions than just multiplicative fitness, and can incorporate mutation. The general form of the path integral, for two alleles is given in Eq. (<ref>), and takes the form of a sum over trajectories of a product the two terms: (i) a `weight' W^(0)( [𝐱]_0^t) which gives the probability of a trajectory under neutrality, i.e., when only random genetic drift and mutation are operating, and (ii) the factor e^C( [𝐱]_0^t) which while depending on parameters such as mutation rates, is primarily determined by selection - this factor incorporates all effects of selection, and C( [𝐱]_0^t) vanishes in the absence of selection. This separation into two factors represents an underlying property of the transition probability, K(z,t|a,0), that we know from other analyses, namely that at long times (t→∞) the quantity K(z,t|a,0) is a smooth function of selection, but the long time properties are very different for zero and non-zero mutation rates. 
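The different long-time behaviours for zero and non-zero mutation rates can be made concrete with a small numerical sketch that builds the exact two-allele transition matrix and iterates it, so that K(z,t|a,0) is obtained by implicitly summing over all trajectories. The population size, selection strength, mutation rate and initial frequency are assumed, illustrative values, and selection is taken to be multiplicative.

```python
import numpy as np
from scipy.stats import binom

# Exact two-allele Wright-Fisher transition matrix with multiplicative selection
# and symmetric mutation, used to contrast the long-time behaviour of K(z,t|a,0)
# for zero versus non-zero mutation rates.  N, s, u and a are assumed values.
N, s = 50, 0.01
states = np.arange(2 * N + 1) / (2 * N)             # allowed frequencies z = j/(2N)

def transition_matrix(u):
    x_star = states * (1 - u) + (1 - states) * u    # mutation step
    F = x_star * (1 + s) / (1 + s * x_star)         # multiplicative selection step
    # M[j, i] = P(next state j/(2N) | current state i/(2N))
    return binom.pmf(np.arange(2 * N + 1)[:, None], 2 * N, F[None, :])

def K(u, a_index, t):
    M = transition_matrix(u)
    p = np.zeros(2 * N + 1); p[a_index] = 1.0       # start from frequency a
    for _ in range(t):
        p = M @ p
    return p

a_index = N // 2                                     # a = 0.25
for u in (0.0, 5e-3):
    p = K(u, a_index, 5000)
    print(f"u={u}: mass at z=0 and z=1 -> {p[0]:.3f}, {p[-1]:.3f}; "
          f"mass at interior states -> {p[1:-1].sum():.3f}")
```

The two printed mass distributions make the contrast explicit.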
For non-zero mutation rates, the long-time form of K(z,t|a,0) is non-zero for all possible values of z (i.e., all allowed frequencies), and independent of the initial frequency, a. By contrast, for vanishing mutation rates, only the terminal frequency classes (0 and 1) have non-zero probabilities at long times, and furthermore, these probabilities depend on the initial frequency, a. Thus K(z,t|a,0), as t→∞, behaves discontinuously, as a function of mutation rates, in the sense that allowing mutation rates to tend to zero, and having mutation rates exactly equal to zero, yield different results. A diffusion analysis shows this most clearly, where singular spikes (Dirac delta functions) at the terminal frequencies are generally present in the transition probability density when mutation rates are zero, and are absent when mutation rates are non-zero <cit.>. The separation of a probability of a trajectory into the product of W^(0)( [𝐱]_0^t) and e^C( [𝐱]_0^t) is thus natural and a reflection of different behaviours arising from different features of the dynamics. On the matters of fixation and loss, we note that since a Wright-Fisher model can describe these phenomena (in the absence of mutation), an exact path integral representation associated with this model can also, generally, describe features such as fixation and loss. This will also carry over to a path integral representation, based on the diffusion approximation, since the diffusion approximation is also known to encompass fixation and loss, albeit in a singular form <cit.>. Such singular behaviour seems likely to make the analysis of the path integral representation, based on the diffusion approximation, to be more complex, than in its absence. As an elementary illustration of how fixation is incorporated into the path integral representation of the transition probability, K(z,t|a,0), we note the when all mutation rates are zero, the probability of ultimate fixation of the focal allele is lim_t→∞K(1,t|a,0). Let us revisit the case considered above where there is no mutation and weak multiplicative selection acting. We can expand K(1,t|a,0) in s by first expanding K(1,t|a,0) in C( [ x_0^t] ), and then expanding C( [ x_0^t] ) in s. To linear order in s we obtain (from Eq. (<ref>)) K(1,t|a,0)≃[1+2Ns(1-a)] ×x(t)=1x(0)=a∑⋯∑W^(0,0)( [x]_0^t). Since lim_t→∞x(t)=1x(0)=a∑⋯∑W^(0,0) is the probability of fixation ultimately occurring, under neutrality, this limit thus coincides with the initial frequency, a. In this way, we arrive at a fixation probability of P_fix(a)≃ a+2Nsa(1-a), which contains the neutral result and a term which is first order in s, which is the leading correction due to selection. Expansion of K(z,t|a,0) (and related quantities) to higher order in s, can be achieved, again by exploiting the factorisation between drift/mutation and selection that occurs in Eq. (<ref>). Expansions in s beyond linear order involve more complicated calculations than that of the linear case. In the case of two alleles, we have seen the relation between the path integral of the `fully discrete' Wright-Fisher model and the path integral of the diffusion approximation, for this model. For the case of an arbitrary number of n alleles there is, at the present time, no such path integral for the diffusion approximation. However, from the lessons learned for two alleles we can infer this some of the properties of the general n case, under the diffusion approximation. In particular, when selection is multiplicative, and in the absence of mutation. 
we infer from Eq. (<ref>) that K_diffusion(𝐳,t|𝐚,0) = e^𝐑^T( 𝐳 -𝐚) ∫_𝐱(0)=𝐚 ^𝐱(t)=𝐳 P([𝐱]_0^t)e^-∫_0^tU( 𝐱(r)) drd[𝐱] where 𝐑 is a column vector containing the set of scaled selection strengths (Eq. (<ref>)), the quantity P([𝐱]_0^t) is the analogue of the neutral, mutation-free, probability of a trajectory in a Wright-Fisher model, W^(0,0)( [𝐱]_0^t), while U(𝐱) is given by Eq. (<ref>). An interesting feature is the way selection enters Eq. (<ref>), in both the prefactor, e^𝐑^T( 𝐳-𝐚) and within U( 𝐱) in forms that involve the vectors and matrices that occur in the problem. Additionally, a diffusion analysis would suggest that all occurrences of the population size, N, are replaced by the effective population size, N_e. In the special cases considered above, of no mutation and weak selection, the `selectively controlled' quantities C([x]_0^t) and C( [𝐱]_0^t), for two and n alleles, respectively, both naturally split into two terms (see Eqs. (<ref>) and (<ref>)). One of the terms has dependence on only the initial and final frequencies of the transition probability, and has no dependence of the frequencies taken by trajectories at intermediate times; it is natural to call this a boundary term. To leading order in selection coefficients, the boundary term changes sign when the sign of all selection coefficients are reversed (for two alleles reversal entails s→-s; for n alleles, 𝐬→-𝐬). The boundary term is thus the primary place that the deleterious or beneficial effect of a mutation manifests itself. The other term (U([x]_0^t) and U([𝐱]_0^t), respectively) depends on the frequencies taken by trajectories at all times, from the initial time to the final time. The U terms, when large, have the effect of suppressing the contribution of a trajectory. They are a manifestation of the `probabilistic cost of selection' of an entire trajectory. Interestingly, the U terms cannot take negative values and remain unaltered when the sign of all selection coefficients are reversed. In summary, we have presented an exact representation of the transition probability of a Wright-Fisher model in terms of a path integral (in reality a sum over paths/trajectories). Let us conclude with some possible ways that the path integral representation may be of use. We shall restrict our considerations to the case of two alleles, where the main result is given in Eq. (<ref>), since very similar considerations apply to the n allele case in Eq. (<ref>). * The path integral representation may make it easy to carry out an expansion in a small parameter, such as a selection coefficient. This has been carried out for the transition density at intermediate frequencies, under the diffusion approximation<cit.>. In the present work we have shown that expansion in selection coefficients can also be applied to phenomena such as fixation and loss. There may be many other applications of expansion in a small parameter.. * A path integral involves trajectories whose contributions generally have different probabilities of occurrence. A possible approximation is where the most probable trajectory, along with trajectories that have with small fluctuations around the most probable trajectory, are used to estimate the path integral. The most probable trajectory may be of interest in its own right, since it may typify the way the population makes a transition between two states of the population over time. 
* The path integral representation involves a fundamental separation of mutation and drift from the process of what is primarily selection, as manifested by the two factors W^(0)( [x]_0^t) and e^C([x]_0^t) in Eq. (<ref>). To exploit this separation, we note that while, in this work, we have implicitly assumed that all parameters are independent of the time, a straightforward generalisation of the exact results allows parameters to be time dependent. Then one immediate case of application occurs when just selection fluctuates over time, with selection coefficients drawn each generation from a given distribution, or generated by a random process. In the absence of further knowledge, it is plausibly the case that the relevant transition probability follows from an average over all such selection coefficients. With the average denoted by an overbar, the average of Eq. (<ref>) reads K(z,t|a,0) =x(t)=zx(0)=a∑⋯∑W^(0)( [x]_0^t) e^C( [x]_0^t). Thus only the selectively controlled factor is averaged, and this may lead to an effective theory that has new/modified selective terms, compared with the case where selection coefficients are simply set equal to the time-averaged value<cit.>. * A different approach, compared to the above three approaches, is to rewrite Eq. (<ref>) in the form K(z,t|a,0)=K^(0)(z,t|a,0)× D(z,t|a,0) where K^(0)(z,t|a,0) is the neutral result (Eq. (<ref>)) and D(z,t|a,0)=. x(t)=zx(0)=a∑⋯∑ W^(0)( [x]_0^t) e^C( [x]_0^t) / x(t)=zx(0)=a∑⋯∑W^(0)( [x]_0 ^t) . We can interpret D(z,t|a,0) as an average of the quantity e^C( [x]_0^t) over all neutral trajectories that start at allowed frequency a at time 0 and end at allowed frequency z at time t. Applying Jensen's inequality<cit.> to Eq. (<ref>) yields D(z,t|a,0)≥ D_J(z,t|a,0) with D_J(z,t|a,0)=exp( . x(t)=zx(0)=a∑⋯∑W^(0)( [x]_0^t) C( [x]_0^t) / x(t)=zx(0)=a∑⋯∑W^(0)( [x]_0^t) ) . Thus we find K(z,t|a,0)≥ K^(0)(z,t|a,0)× D_J(z,t|a,0), where the exponent of D_J(z,t|a,0) is a conditional average of C( [x]_0 ^t) over all neutral trajectories that start at frequency a at time 0 and end at frequency z at time t § METHODS Here we give details of the calculations underlying Eqs. (<ref>) and (<ref>). §.§ Factorisation of the transition matrix: two alleles For the case of two alleles, the transition matrix can be expressed as a product of two factors, one of which is independent of selection. We begin with Eq. (<ref>) for the transition matrix, which we reproduce here for convenience. We have M(x^'|x)=2N2Nx^'[ F(x)] ^2Nx^'[ 1-F(x)] ^2N(1-x^') where F(x)=f_sel(f_mut(x)). In the absence of selection, F(x) reduces to f_mut(x) and M(x^'|x) reduces M^(0)(x^'|x), as given by M^(0)(x^'|x) =2N2Nx^'[ f_mut(x)] ^2Nx^'[ 1-f_mut(x)] ^2N(1-x^')≡2N2Nx^'( x^∗) ^2Nx^'( 1-x^∗) ^2N(1-x^') where we have set x^∗=f_mut(x). To establish factorisation we use the adopted form of selection in Eq. (<ref>), namely f_sel(x)=x+σ(x)x(1-x) to write F(x)=x^∗[ 1+σ(x^∗)(1-x^∗)]. Similarly we have 1-F(x)=( 1-x^∗) [ 1-σ(x^∗)x^∗]. These allow us to write Eq. (<ref>) as M(x^'|x) =2N2Nx^'{ x^∗[ 1+σ(x^∗)(1-x^∗)] } ^2Nx^'{( 1-x^∗) [ 1-σ(x^∗)x^∗] } ^2N(1-x^') =2N2Nx^'( x^∗) ^2Nx^'( 1-x^∗) ^2N(1-x^')×[ 1+σ(x^∗)(1-x^∗)] ^2Nx^'[ 1-σ(x^∗)x^∗] ^2N(1-x^') ≡ M^(0)(x^'|x)× e^C(x^'|x) where C(x^'|x)=2N×[x^'ln(1+σ(x^∗)( 1-x^∗) )+(1-x^')ln(1-σ(x^∗)x^∗)]. 
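The factorisation just derived can be verified numerically. The sketch below compares M(x'|x) with M^(0)(x'|x)e^C(x'|x) over all pairs of allowed frequencies for an assumed population size, mutation rate and (here constant) selection function σ; since the factorisation is an exact identity, the two sides agree to floating-point precision.

```python
import numpy as np
from scipy.stats import binom

# Numerical check of the exact two-allele factorisation
#   M(x'|x) = M^(0)(x'|x) * exp(C(x'|x))
# derived above.  Population size, mutation rate and the (here constant)
# selection function sigma are assumed, illustrative values.
N, u, s = 20, 1e-3, 0.05
sigma = lambda x: s                           # any bounded sigma works for the identity

f_mut = lambda x: x * (1 - u) + (1 - x) * u   # x* = f_mut(x)
def F(x):
    xs = f_mut(x)
    return xs + sigma(xs) * xs * (1 - xs)     # f_sel(f_mut(x))

max_err = 0.0
for i in range(2 * N + 1):                    # current state x = i/(2N)
    x, xs = i / (2 * N), f_mut(i / (2 * N))
    for j in range(2 * N + 1):                # next state x' = j/(2N)
        xp = j / (2 * N)
        M   = binom.pmf(j, 2 * N, F(x))       # full transition element
        M0  = binom.pmf(j, 2 * N, xs)         # neutral element (mutation + drift only)
        C   = 2 * N * (xp * np.log1p(sigma(xs) * (1 - xs))
                       + (1 - xp) * np.log1p(-sigma(xs) * xs))
        max_err = max(max_err, abs(M - M0 * np.exp(C)))
print("max |M - M0 * exp(C)| =", max_err)     # at the level of floating-point rounding
```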
Equation (<ref>) represents an exact decomposition of the transition matrix, M(x^'|x), into the product of a transition matrix, M^(0)(x^'|x), that is independent of selection, and a factor e^C(x^'|x) which depends on selection, and is unity in the absence of selection. §.§ Factorisation of the transition matrix: n alleles For the case of n alleles, the transition matrix can again be expressed as a product of two factors, one of which is independent of selection. We begin with Eqs. (<ref>) and (<ref>), which we reproduce here for convenience: M(𝐱^'|𝐱)=2N2N𝐱^'∏_i=1^n[ F_i(𝐱)] ^2Nx_i^' and F_i(𝐱)=x_i^∗× [1+G_i(𝐱^∗)] in which 𝐱^∗=𝐐𝐱. In the absence of selection, 𝐅(𝐱) reduces to 𝐱^∗and M(𝐱^'|𝐱) reduces to M^(0)(𝐱^'|𝐱), as given by M^(0)(𝐱^'|𝐱)=2N2N𝐱^'∏_i=1^n( x_i^∗) ^2Nx^'. To establish a factorisation we note that Eq. (<ref>) allows us to write Eq. (<ref>) as M(𝐱^'|𝐱) =2N2N𝐱^'∏_i=1^n{ x_i^∗[ 1+G_i(𝐱^∗)] } ^2Nx_i^' =2N2N𝐱^'∏_i=1^n( x_i^∗) ^2Nx_i^'×∏_i=1^n[ 1+G_i(𝐱^∗)] ^2Nx_i^' =M^(0)(𝐱^'|𝐱)× e^C(𝐱^'|𝐱) where M^(0)(𝐱^'|𝐱)=2N2N𝐱^'∏_i=1^n( x_i^∗) ^2Nx_i^' and C(𝐱^'|𝐱)=2N∑_i=1^nx_i^'ln( 1+G_i(𝐱^∗)) . Equation (<ref>) represents an exact decomposition of the transition matrix, M(𝐱^'|𝐱), into the product of a transition matrix, M^(0)(𝐱^'|𝐱), that is independent of selection, and a factor e^C(𝐱^'|𝐱) which depends on selection, and is unity in the absence of selection.
http://arxiv.org/abs/2407.12258v1
20240717020134
Facial Affect Recognition based on Multi Architecture Encoder and Feature Fusion for the ABAW7 Challenge
[ "Kang Shen", "Xuxiong Liu", "Boyan Wang", "Jun Yao", "Xin Liu", "Yujie Guan", "Yu Wang", "Gengchen Li", "Xiao Sun" ]
cs.CV
[ "cs.CV" ]
Hefei University of Technology, Hefei, China {shenkang,liuxuxiong,yaojun,wangboyan,2023170653}@mail.hfut.edu.cn {320522038,2312789664}@mail.hfut.edu.cn, ligengchen6599@163.com, sunx@hfut.edu.cn Facial Affect Recognition based on Multi Architecture Encoder and Feature Fusion for the ABAW7 Challenge Kang Shen1 Xuxiong Liu1 Jun Yao Boyan Wang Yu Wang Yujie Guan Xin Liu Gengchen Li Xiao Sun July 22, 2024 ======================================================================================================== § ABSTRACT In this paper, we present our approach to addressing the challenges of the 7th ABAW competition. The competition comprises three sub-challenges: Valence-Arousal (VA) estimation, Expression (Expr) classification, and Action Unit (AU) detection. To tackle these challenges, we employ state-of-the-art models to extract powerful visual features. Subsequently, a Transformer Encoder is utilized to integrate these features for the VA, Expr, and AU sub-challenges. To mitigate the impact of varying feature dimensions, we introduce an affine module that aligns the features to a common dimension. Overall, our results significantly outperform the baselines. § INTRODUCTION Sentiment analysis, as a key area in pattern recognition, is driving human-computer interaction toward a deeper emotional dimension. Affective behavior analysis, through in-depth study of non-verbal signals such as facial expressions, speech, and body language, aims to improve the emotion-understanding ability of AI systems and to achieve more natural and empathetic human-computer communication. With deepening research on affective behavior analysis in natural scenarios, we are gradually developing more advanced AI technologies, which play a key role in a variety of fields such as human-computer interaction, healthcare, driving safety, advertising optimization, education and learning, and virtual reality. Facial Expression Recognition (FER) shows great potential in the aforementioned areas; yet, despite significant advances in facial recognition technology, the fine-grained nuances of emotion understanding remain an unsolved challenge. Facial information is the most intuitive and genuine element of emotional expression; we therefore focus on facial emotion recognition and present a comprehensive solution aimed at addressing the three major challenges of affective behavior analysis in-the-wild (ABAW): first, valence-arousal (VA) estimation, i.e., accurately determining positive and negative emotions and their activation levels; second, expression (Expr) recognition, which aims at identifying the six basic expressions (Anger, Disgust, Fear, Happiness, Sadness, Surprise) plus Neutral; and third, Action Unit (AU) detection, which analyzes facial muscle movements to capture subtle facial gestures and decode complex emotional expressions. This comprehensive solution is dedicated to improving the accuracy and utility of sentiment analysis. Building on the rich multimodal nature of the Aff-Wild2 dataset, we are committed to mining the affective information embedded in it and enhancing the real-world utility of the analysis method. To achieve this goal, we adopt three key strategies. First, a self-supervised Masked Auto Encoder (MAE) model is utilized to learn high-quality sentiment feature representations from large facial image datasets to optimize subsequent task performance.
Second, through the Transformer-based model architecture, we effectively fuse information from multimodal data to facilitate the interaction of different modalities such as audio, video, and text. Finally, using an integrated learning strategy, we subdivide the dataset into multiple sub-datasets, assign them to different classifiers, and integrate the outputs of these classifiers in order to obtain more comprehensive and accurate sentiment analysis results. Our proposed method is innovative and breakthrough in three aspects: (1) We integrate and optimize a large-scale facial expression dataset, and through fine-tuned processing, we successfully construct an efficient facial expression feature extractor, which significantly improves the performance of our model on specific facial expression recognition tasks. (2) We introduced a Transformer-based multimodal integration model to address the three sub-challenges of the ABAW competition. This model effectively facilitates complementarity and fusion between different modal data, thus enhancing the extraction and analysis of expression features. (3) We adopted an integrated learning strategy to address the need for sentiment analysis in different scenarios. This approach trains classifiers independently on multiple sub-datasets with unique contexts, and improves the overall accuracy and generalization ability by integrating the prediction results of these classifiers, which ensures the stable operation of our model in diverse application scenarios. § RELATED WORK §.§ Multimodal Features In prior editions of the ABAW<cit.>, a plethora of multimodal features, encompassing visual, auditory, and textual characteristics, have been extensively employed. By extracting and analyzing these multifaceted features, the performance of affective behavior analysis tasks can be significantly enhanced. In the visual modality, facial expressions constitute a crucial aspect for understanding and analyzing emotions. The human face is represented by a specific set of facial muscle movements, known as Action Units (AUs)<cit.>, which have been widely adopted in the study of facial expressions. Within the context of affective computing, auditory features—typically encompassing energy characteristics, temporal domain features, frequency domain features, psychoacoustic features, and perceptual features—have been extensively utilized. They have demonstrated promising performance in tasks such as expression classification and VA estimation<cit.>. These features can be extracted using pyAudioAnalysis<cit.>, and akin to visual features, deep learning has also been extensively applied in the extraction of acoustic characteristics. In the previous iterations of the ABAW competition<cit.>, numerous teams have leveraged multimodal features. Meng et al.<cit.> proposed a model that utilized both auditory and visual features, ultimately securing the top position in the vocal affect estimation track. To fully exploit affective information in the wild, Zhang et al.<cit.> harnessed multimodal information from images, audio, and text, and proposed a unified multimodal framework for Action Unit detection and expression recognition. This framework achieved the highest scores in both tasks. These methodologies have convincingly demonstrated the efficacy of multimodal features in the realm of affective behavior analysis tasks. 
§.§ Multimodal Structure In early studies, <cit.> employed connected multimodal features to train Support Vector Machine (SVM) models, which were found to be deficient in effectively modeling multimodal information. Recent research on multimodal affective analysis has primarily utilized deep learning models to simulate the interaction of information within and between modalities. <cit.> developed a neural network model known as Visual Aspect Attention Network (VistaNet), which leverages visual information as a sentence-level alignment source. This multimodal architecture enables the model to focus more on these sentences when classifying emotions. Presently, the use of transformers for multimodal learning has become the mainstream in multimodal algorithms. In the domain of image-text matching, ALBEF<cit.>, to some extent inspired by the CLIP<cit.> model, has introduced the concept of multimodal contrastive learning into multimodal models, achieving a unification of multimodal contrastive learning and multimodal fusion learning. In previous iterations of the ABAW<cit.>, have harnessed the transformer architecture and achieved remarkable results. § FEATURE EXTRACTION We fuse features from different neural networks to obtain more reliable emotional features and utilize these fused features for downstream tasks. By combining information from various feature extraction models such as ResNet and POSTER, we achieve a more comprehensive and accurate representation of emotions. §.§ Resnet-18 ResNet<cit.>(He et al. 2016) is a deep convolutional neural network (CNN) architecture designed to address the common issues of vanishing gradients and exploding gradients during the training of deep neural networks. Its core idea is the introduction of residual blocks, which incorporate skip connections, making the network easier to train. Instead of directly learning the mapping of each layer, the output of the residual block learns the residual between the input and output. This structure effectively mitigates the vanishing gradient problem. The pre-trained model of ResNet-18 can be used as a feature extractor, which first pretained on the MS-Celeb-1M<cit.>, and finally obtain a 512-dimensional visual feature vector. transforming images into high-dimensional feature vectors for use in other machine learning tasks, such as image retrieval and image similarity computation. §.§ POSTER The two-stream Pyramid crOss-fuSion TransformER network (POSTER)<cit.> is a novel deep learning model specifically designed for video understanding tasks, such as action recognition and video classification. POSTER combines a pyramid structure with a two-stream architecture, leveraging cross-layer fusion and transformer networks to enhance video understanding performance. Extensive experimental results demonstrate that POSTER outperforms SOTA methods on RAF-DB with 92.05%, AffectNet<cit.> (7 cls) with 67.31%, and AffectNet (8cls) with 63.34%, respectively . The dimension of the visual feature vectors is 768. §.§ POSTER2 The proposed POSTER2<cit.> aims to improve upon the complex architecture of POSTER, which focus primarily on improvements in cross-fusion, dual-stream design, and multi-scale feature extraction.In cross-fusion, POSTER2 replaces traditional cross-attention mechanisms with window-based cross-attention mechanisms. The dual-stream design eliminates the branch from images to landmarks. 
Regarding multi-scale feature extraction, POSTER2 integrates multi-scale features of images and landmarks, replacing POSTER's pyramid design to alleviate computational burden. Experimental results demonstrate that POSTER2 achieves state-of-the-art Facial Expression Recognition (FER) performance on multiple benchmark datasets with minimal computational cost. It retains the same visual feature dimensionality of 768 as POSTER. §.§ FAU Facial Action Units (FAU), originally introduced by Ekman and Friesen<cit.>, are strongly associated with the expression of emotions .In the fields of computer vision and human-computer interaction, Facial Action Units (FAUs) are widely employed in the development of facial expression analysis and facial recognition systems. By detecting and recognizing combinations of individual FAUs, it is possible to infer overall facial expressions and their emotional meanings. We utilize the OpenFace<cit.> framework (Baltrusaitis et al., 2018) for FAU feature extraction, resulting in a 17-dimensional feature vector. § METHOD The 7th ABAW encompasses a total of two challenges, and we have participated in the first challenge. Drawing upon the design of classical transformer models, our entire process consists of four stages. As illustrated in <ref> Initially, we utilize existing pre-trained models or toolkits to extract visual and auditory features from each frame of the video. Subsequently, each sequence of visual or auditory features is fed into an affine module to achieve features of the same dimension. Thirdly, these features are concatenated and then input into the transformer encoder to simulate temporal relationships. Lastly, the output of the encoder is fed into the output layer to obtain the corresponding outputs. The figure illustrates the overall framework of our proposed method. §.§ Affine Module In our experiments, the input consists of one or more visual features, but their feature dimensions often differ, and the discrepancies can be quite substantial. It can be observed that the EAC features span 2048 dimensions, while FAU has only 17 dimensions. We posit that excessively large dimensional disparities may diminish the effectiveness of the useful features. To address this, we have designed an affine module. For the first three challenges, we employ a linear layer to affinely transform features of varying dimensions to a uniform dimension. Furthermore, adhering to the setup of classical transformer<cit.> models, we add positional encoding (PE) to each feature sequence to convey its contextual temporal information. The formula is as follows: f̂_i = (Kf_i + c) + PE where f_1,f_2,...,f_n denote all the features, n is the number of features. §.§ Transformer Encoder Vectors from different feature extraction models may contain redundant or irrelevant information. To combine these features and construct a more suitable vector that retains more useful information for downstream classification tasks, we use a basic transformer encoder<cit.>. In the context of classification tasks, the transformer<cit.> model typically employs only the encoder part. The final vector is processed to capture the contexts and interdependencies of the data components. The output from the encoder is then typically passed through one or more dense layers to perform classification tasks such as Arousal-Valence and Action Units. f̂= [ f_1 ; f_2;⋯ ; f_n ] where [ ; ; ] denotes the concatenation operation. T=TE(f̂) where T denotes the temporal feature. 
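A minimal sketch of how the affine module and the Transformer encoder described above can be wired together is given below. The feature dimensions follow the extractors listed earlier (512 for ResNet-18, 768 for POSTER and POSTER2, 17 for FAU), while the model width, number of heads and layers, positional encoding, pooling and output head are assumed choices rather than the exact configuration used in our experiments.

```python
import math
import torch
import torch.nn as nn

# Sketch of the affine module + Transformer encoder described above.  Feature
# dimensions follow the text; d_model, nhead, depth, pooling and head size are
# assumed placeholders.
class AffineFusionEncoder(nn.Module):
    def __init__(self, feat_dims=(512, 768, 768, 17), d_model=256,
                 nhead=4, num_layers=2, num_outputs=12):
        super().__init__()
        # One affine map K f_i + c per feature stream, to a common dimension.
        self.affine = nn.ModuleList([nn.Linear(d, d_model) for d in feat_dims])
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_outputs)      # e.g. 12 AUs, 8 expressions, or 2 VA values
        pe = torch.zeros(len(feat_dims), d_model)        # fixed sinusoidal positional encoding
        pos = torch.arange(len(feat_dims)).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2], pe[:, 1::2] = torch.sin(pos * div), torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, feats):                            # feats: list of (batch, d_i) tensors
        tokens = torch.stack([aff(f) for aff, f in zip(self.affine, feats)], dim=1)
        tokens = tokens + self.pe                        # f_hat_i = (K f_i + c) + PE
        encoded = self.encoder(tokens)                   # T = TE(f_hat)
        return self.head(encoded.mean(dim=1))            # pooled representation -> predictions

model = AffineFusionEncoder()
feats = [torch.randn(8, d) for d in (512, 768, 768, 17)]
print(model(feats).shape)                                # torch.Size([8, 12])
```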
P=Mdl(T) where Mdl represents the invoked model, and P represents the resulting probability distribution. §.§ Loss Function For the distinct tasks within Challenge 1, we employ loss functions tailored to the specific requirements of each task. We utilize the Mean Squared Error (MSE) and CCC losses for VA analysis, the Cross-Entropy loss for Expression Recognition, and a Weighted Asymmetric Loss for the Action Unit (AU) problem. L(p,p̂) = 1/N∑_i=1^N (p_i-p̂_i)^2 where p_i and p̂_i are the label and prediction of valence or arousal, and N is the number of frames in a batch. L(p,p̂) = -∑_i=1^N ∑_j=1^C p_ijlogp̂_ij where p_ij and p̂_ij are the label and prediction of expression, N is the number of frames in a batch, and C=8 denotes the number of expression classes. L(p,p̂) = -1/N∑_i=1^N w_i[p_ilogp̂_i+(1-p_i)p̂_ilog(1-p̂_i)] where p̂_i, p_i and w_i are the prediction (occurrence probability), ground truth, and weight of the i^th AU. The weight w_i is determined by the occurrence rate of the i^th AU in the whole training set. § EXPERIMENTS §.§ Dataset The challenge utilizes the s-Aff-Wild2 database. s-Aff-Wild2 is the static version of the Aff-Wild2 database; it comprises frames selected from Aff-Wild2. In total, approximately 221K images are employed, which include annotations for valence-arousal; the six basic expressions, along with a neutral state, plus an "other" category (encompassing emotional states not included in the other categories); and 12 action units, namely AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU15, AU23, AU24, AU25, and AU26. Regarding the pre-trained visual feature extractors, two datasets are used: RAF-DB, a large-scale database containing around 30,000 facial images from numerous individuals, each image annotated approximately 40 times and subsequently refined using the EM algorithm to ensure the reliability of the annotations; and AffectNet, a substantial facial expression recognition dataset comprising over one million images sourced from the internet, approximately half of which are manually annotated with seven discrete facial expressions as well as the intensity of valence and arousal. §.§ Experimental Results The results, as presented in the table, indicate that the combination of POSTER2, ResNet18, and FAU features excels across various metrics, including Valence-Arousal estimation, expression recognition, and Action Unit (AU) detection, thereby underscoring their robustness. Specifically, the ResNet18 architecture has a notable influence on the recognition of facial expressions, while the FAU features are particularly impactful for the detection of Action Units. § CONCLUSION In this paper, we introduce the methodologies presented at the 7th ABAW competition, encompassing three distinct sub-challenges: Valence-Arousal (VA) estimation, Expression (Expr) classification, and Action Unit (AU) detection. We leveraged a robust suite of visual feature extractors and designed an affine module to standardize the varying dimensions of the individual feature sets. Our comprehensive experimental protocol demonstrates that our approach significantly surpasses the benchmarks, achieving strong performance across all sub-challenges.
http://arxiv.org/abs/2407.12553v1
20240717133405
End-to-end Stroke imaging analysis, using reservoir computing-based effective connectivity, and interpretable Artificial intelligence
[ "Wojciech Ciezobka", "Joan Falco-Roget", "Cemal Koba", "Alessandro Crimi" ]
cs.LG
[ "cs.LG", "cs.CV" ]
End-to-end Stroke imaging analysis, using reservoir computing-based effective connectivity, and interpretable Artificial intelligence. Wojciech Ciezobka1,2, Joan Falcó-Roget2, Cemal Koba2, Alessandro Crimi 1,* 1 AGH University of Krakow, Kraków, Poland 2 Sano, center for computational medicine, Kraków, Poland * alecrimi@agh.edu.pl § ABSTRACT In this paper, we propose a reservoir computing-based and directed graph analysis pipeline. The goal of this pipeline is to define an efficient brain representation for connectivity in stroke data derived from magnetic resonance imaging. Ultimately, this representation is used within a directed graph convolutional architecture and investigated with explainable artificial intelligence (AI) tools. Stroke is one of the leading causes of mortality and morbidity worldwide, and it demands precise diagnostic tools for timely intervention and improved patient outcomes. Neuroimaging data, with their rich structural and functional information, provide a fertile ground for biomarker discovery. However, the complexity and variability of information flow in the brain requires advanced analysis, especially if we consider the case of disrupted networks as those given by the brain connectome of stroke patients. To address the needs given by this complex scenario we proposed an end-to-end pipeline. This pipeline begins with reservoir computing causality, to define effective connectivity of the brain. This allows directed graph network representations which have not been fully investigated so far by graph convolutional network classifiers. Indeed, the pipeline subsequently incorporates a classification module to categorize the effective connectivity (directed graphs) of brain networks of patients versus matched healthy control. The classification led to an area under the curve of 0.69 with the given heterogeneous dataset. Thanks to explainable tools, an interpretation of disrupted networks across the brain networks was possible. This elucidates the effective connectivity biomarker's contribution to stroke classification, fostering insights into disease mechanisms and treatment responses. This transparent analytical framework not only enhances clinical interpretability but also instills confidence in decision-making processes, crucial for translating research findings into clinical practice. Our proposed machine learning pipeline showcases the potential of reservoir computing to define causality and therefore directed graph networks, which can in turn be used in a directed graph classifier and explainable analysis of neuroimaging data. This complex analysis aims at improving stroke patient stratification, and can potentially be used with other brain diseases. § INTRODUCTION Stroke is one of the leading causes of morbidity and mortality worldwide. Accurate classification can aid in effective treatment and management. Magnetic resonance imaging (MRI) has emerged as a powerful tool for stroke diagnosis, providing detailed images of brain structures and abnormalities. However, the analysis of MRI data poses significant challenges due to its complexity and the need for efficient and reliable classification algorithms, especially when we want to understand the dynamics of the brain. The classification of stroke using medical images has been the primary focus of previous studies <cit.>. However, most of the approaches carried out so far are focused on the extent of lesions and limited correlation to functional damages such as aphasia and motor deficits <cit.>. 
Recent studies have started investigating the brain's inner functioning from the point of view of the influence of one brain region on another one, and how lesions compromise those interactions <cit.>. Indeed, brain connectivity encompasses the complex interactions between neurons and their intricate network of connections. It is a broad term that encompasses connections between neurons at various levels of granularity and with different connection characteristics. Within this domain, three distinct types of connectivity have emerged: structural (SC), functional (FC), and effective connectivity (EC). Each of these holds clinical and predictive value, offering valuable insights into the brain's intricate workings <cit.>. Effective connectivity investigates the causal link between the time series of two regions of the brain and can be represented as directed graphs. Classification and explanation of directed graphs have not been fully investigated and the study of stroke with those tools provides the opportunity to create a pipeline exploring all those elements. More specifically, local ischemia damages neurons and structural neural connections at the site of injury. This affects primarily subcortical regions, subsequently altering long-range functional connectivity between cortical areas. Decreases in functional connectivity alterations suggest deficits but cannot reveal the directionality or time scale of the information flow, leaving several open questions related to the directionality and functioning of the brain after a non-traumatic injury such as a stroke. Allegra and colleagues carried out previous studies where this transfer of information view of the brain of stroke patients was investigated through Granger Causality (GC) analyses <cit.>, where they observed a significant decrease in inter-hemispheric information transfer in stroke patients compared to matched healthy controls. GC has been used largely in computational neuroscience studies due to its low computational costs compared to other methods <cit.>. Practically, the method estimates autoregressor variables relating to different time series which are then further validated by F-statistics to establish causality. Yet, due to the potential confounding characteristics that each autoregressor may generate <cit.>), there are still ongoing disagreements on whether this can help define causal interaction between brain regions <cit.> using this framework, and some authors consider GC as just a relation measures <cit.>. To overcome these limitations, researchers have explored the use of reservoir computing in a completely detached paradigm to extract causality <cit.>. Reservoir computing is a computational framework that leverages the dynamics of recurrent neural networks to process and classify complex temporal data effectively, by exploiting the inherent memory and non-linear dynamics of reservoirs <cit.>. It has also been used to classify electroencephalography data from stroke patients <cit.>, though as a classifier itself, not to estimate the structure of the human brain. Finally, capturing both spatial and temporal patterns can help understand stroke beyond traditional voxel-based lesion-symptom mapping <cit.> to consider specific information transfer and interactions in the brain <cit.>. Technically, this will produce a directed graph representation that can be classified and explored with explainable AI tools. 
In summary, using reservoir computing we i) defined causality in stroke patients, and, given the generated representation of causality as directed graphs, investigated ii) the value of the resulting directed maps together with their classification, and iii) the explainability of the classification to provide insights into the overall brain network disruption in stroke patients (Fig. <ref>). To our knowledge, no study has classified directed graphs and explained their significance in computational neuroscience and neurology. Thus, incorporating these features into classification algorithms could improve stroke diagnosis accuracy and efficiency. § METHODS §.§ Data and preprocessing The dataset was previously collected by the School of Medicine of the Washington University in St. Louis and complete procedures can be found in <cit.>. They collected MRI data and behavioral examinations of stroke patients and healthy controls. The imaging data comprise structural and functional MRI from controls and patients suffering from hemorrhagic and ischemic stroke. Acquisitions were done within the first two weeks of the stroke onset (i.e., acute). Structural scans include T1-weighted, T2-weighted, and diffusion tensor images. Functional images include a resting state paradigm. Scanning was performed with a Siemens 3T Tim-Trio scanner. Briefly, we closely followed the pre-processing steps outlined in <cit.>. Following a quality control of fMRIPrep outputs, 104 stroke subjects and 26 control subjects were qualified for further analysis. For our purposes, it suffices to say that structural scans were used in combination with functional acquisitions to co-register all participants into a common template. Gray matter signal was finally obtained after artifact removal and parcellated into 100 regions of interest (ROIS) <cit.>. For every subject and patient, these 100 time series (i.e., one for each ROI) were fed into the pipeline outlined below to obtain the subject-specific effective connectivity maps. The dataset is not public but it is available upon request to the original authors <cit.>. The used code is instead available at the URL <https://github.com/Wotaker/Effective-Connectivity-Reservoir-Computing>. §.§ Reservoir computing Reservoir computing networks (RCN), despite being known for more than two decades, have been largely eclipsed by other frameworks. A reservoir network is a set of artificial neurons that are randomly connected between themselves thus forming a recurrent architecture <cit.>. Sometimes this is also called echo-state network since the internal dynamics of the reservoir (or "echo state") maintain information about the system's input history. In this framework, an input series 𝐮_t is fed into this high dimensional dynamical system of N units through a non-linear activation function, 𝐫^in_t = f^in(𝐖^in𝐮_t), where 𝐖^in is an N x (N_in+1) matrix of random weights including biases, N_in is the dimensionality of the multivariate input, and f_in is the non-linearity. At each time step t the former projection is used to drive the reservoir units 𝐫_t. The current state of each unit is a combination of the past states as well as the current input, 𝐫_t = (1-λ) 𝐫_t-1 + λ f (𝐫^in_t + 𝐖𝐫_t-1), where 𝐖 is an N x N matrix of random weights, and λ is the leakage that controls the importance of the reservoir's history to the current time stamp t. The final component of the reservoir is a set of readout weights 𝐖^out that extract information from the hidden state and map onto specific predictions. 
That is, 𝐲_t = 𝐖^out𝐫_t. The predictions 𝐲_t might be of arbitrary dimension N^out and, importantly, are linear w.r.t. to the reservoir states. Within this paradigm, only that readout weights 𝐖^out are trained via incremental linear regression optimization <cit.>, 𝐖^out = (𝐑𝐑^T + α𝐈)^-1 (𝐘𝐑^T), with α being a regularization parameter, 𝐑 is the matrix obtained after concatenating all the reservoir states, and 𝐘 contains the outputs. Once again, the readout weights contain a set of N_out biases. Noteworthy, as opposed to other architectures suited for time series forecasting, only a reduced set of output weights needs to be trained, thus increasing its computational efficiency. The random weights 𝐖^in are drawn from a uniform distribution bounded between -1 and 1. The recurrent connections are drawn from a standard normal distribution and are later scaled by the spectral radius. The latter largely ensures that the network possesses the echo-state property, although there is recent evidence disagreeing with this aspect <cit.>. Briefly, the main idea behind reservoir-like computing is that a given input pushes the reservoir to specific locations in a high-dimensional manifold <cit.>; the output weights are then optimized to retrieve information from the nearby regions. Were the input to move the reservoir away to other points, the output weights would not be able to recover meaningful information hence completely missing the prediction. Further evidence suggests that RCNs supersede deep learning-based models for temporal series prediction even on the verge of chaos <cit.>. Richer approaches aim to train the reservoir connections themselves and have been proven to be useful in understanding the dynamical properties of cortical networks <cit.>, offering an interesting framework for similar use cases. The parameter values used in our experiments can be found in Table <ref>. §.§ Reservoir computing networks to map causal interactions in lesioned brains Traditionally, effective connectivity in neuroimaging can be estimated in different ways, as dynamic causal modeling <cit.>, GC <cit.>, continuous-time implementations <cit.>, or information theory <cit.>. Granger-like interpretations are often preferred due to their relative computational costs and implementation, though they are not exempt from controversy <cit.> thus justifying alternative approaches. An unrelated proposal relies on the properties of the state-space of the dynamical system to reconstruct asymmetric mappings between delayed embeddings of each component of the system <cit.>. That is, it leverages Taken's theorem to find the optimal neighborhood as well as the exact delay at which the reconstruction is optimal. Recent extensions <cit.> have incorporated non-linear methods as well as reducing the number of ad-hoc parameters. Most prominently, reservoir computing has proven to be an efficient and accurate alternative to automatize the process almost in its entirety <cit.>. Let's consider the relationship between two one-dimensional variables, x and y, where it hypothesizes that the delay at which interactions take place is not smaller than the sampling rate (e.g., Time of Repetition in functional MRI). The prediction skill, denoted by ρ_x → y(τ), is defined as the Pearson correlation between the true time series, 𝐲(t+τ), and the predicted series 𝐲̂(t+τ) from the input 𝐱(t). ρ_x → y(τ) := corr [𝐲(t+τ), 𝐲̂(t+τ)]. 
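A compact, self-contained sketch of this prediction-skill computation is given below: a leaky echo-state reservoir is driven by x, the readout is fitted by ridge regression to reconstruct y(t+τ), and ρ_x→y(τ) is the correlation between the reconstruction and the truth. The reservoir size, leakage, spectral radius, regularisation and the toy coupled time series are illustrative assumptions and do not reproduce the exact configuration reported in Table <ref>.

```python
import numpy as np

# Echo-state-network sketch of the prediction skill rho_{x->y}(tau): a reservoir
# driven by x is read out with ridge regression to reconstruct y(t+tau), and the
# skill is the Pearson correlation between truth and reconstruction.
rng = np.random.default_rng(0)

def reservoir_states(u, n_units=200, leak=0.3, spectral_radius=0.9):
    w_in = rng.uniform(-1, 1, size=(n_units, 2))             # input weights + bias column
    w = rng.standard_normal((n_units, n_units))
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
    r, states = np.zeros(n_units), np.empty((len(u), n_units))
    for t, u_t in enumerate(u):
        r_in = np.tanh(w_in @ np.array([u_t, 1.0]))           # r^in_t = f^in(W^in u_t)
        r = (1 - leak) * r + leak * np.tanh(r_in + w @ r)     # leaky reservoir update
        states[t] = r
    return states

def prediction_skill(x, y, tau, alpha=1e-6, washout=50):
    """rho_{x->y}(tau): correlation between y(t+tau) and its reconstruction from x."""
    R = reservoir_states(x)
    t_idx = np.arange(max(washout, washout - tau), len(x) - max(tau, 0))
    R, target = R[t_idx], y[t_idx + tau]
    w_out = np.linalg.solve(R.T @ R + alpha * np.eye(R.shape[1]), R.T @ target)
    return np.corrcoef(target, R @ w_out)[0, 1]

# Toy unidirectionally coupled pair: x is an AR(1) process and y copies x with a lag of 2.
T = 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
y = np.roll(x, 2) + 0.2 * rng.standard_normal(T)
print("rho_{x->y}(tau=2) =", round(prediction_skill(x, y, 2), 3))   # high: x drives y
print("rho_{y->x}(tau=2) =", round(prediction_skill(y, x, 2), 3))   # lower: y does not drive x
```

For this toy pair, in which x drives y with a lag of two samples, the skill in the x→y direction is markedly higher than in the reverse direction.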
Noteworthy, the Pearson correlation between the true and reconstructed series (ρ) is used to estimate directedness, though other metrics like mean squared error could also be used. Directionality can still be assessed using the same hypothesis testing mechanisms <cit.>. Moreover, the time series are fed into the reservoir all-at-once, letting the network project all of them. The neighboring points in the variable's embedding are then remapped to the target embedding via the training of the output weights. It should noted that this represents a deviation from more canonical usages <cit.>. To investigate the causal relationship between variables, we first calculate both ρ_x → y(τ) and ρ_y → x(τ) in a given temporal domain. We then examine the values of τ at which either ρ_x → y(τ) or ρ_y → x(τ) reaches its peak value <cit.>. To streamline the subsequent description, we introduce the following notation: τ_x → y := _τρ_x → y(τ) τ_y → x := _τρ_y → x(τ). Empirically, directionality is then defined as follows <cit.>: * if τ_x → y is positive, and τ_y → x is negative, we say that x causes y; * if τ_x → y is negative, and τ_y → x is positive, we say that y causes x; * if both τ_x → y and τ_y → x are negative, we say that x and y causes each other. Despite seeming counterintuitive, information of 𝐲 is present in earlier observations of 𝐱 and, consequently, that current information of the cause 𝐱 is useful to predict future observations of the consequence 𝐲 (see <cit.> for a comprehensive explanation). In certain systems, predictability scores peak at negative lags τ<0 for both directions, being the height of the peaks informative of the coupling strength <cit.>. However, the existence of this bidirectionality does not necessarily invalidate the former statements <cit.>. It was quickly noted that in large and noisy networks, such as the brain, it is unlikely that the predictability scores in Eq. <ref> reach clear and distinct peaks. Functional signals are notoriously noisy <cit.>, and indeed prediction with this approach is challenging <cit.>. A solution to this issue relies on assessing the minimal requirements that are needed to suggest causal interactions <cit.>. For that, the difference between prediction scores should be evaluated and contrasted with proper surrogate predictions <cit.>. That is, Δ_x → y(τ) := ρ_x → y(τ) - ρ_y → x(τ), which can be interpreted as an indication of the potential causality direction (Table <ref>). The scores in Eqs. <ref> and <ref> can be contrasted against the 95% confidence interval obtained from a surrogate testing procedure <cit.>. It has been shown that the requirements to define causality can be compressed into a reduced set of δ-scores <cit.>, δ_x → y(τ) := (1-p_ρ_x → y(τ) > 0)(1-p_Δ_x → y(τ) > 0) if τ>0 (1-p_ρ_y → x(τ) > 0)(1-p_Δ_y → x(τ) > 0) if τ<0 for directed interactions, and δ_y ↔ x(τ) := (1-p_ρ_x → y(τ) > 0)(1-p_ρ_y → x(τ) > 0)p_Δ_x → y≠ 0 for bidirectional interactions. p_H_1 is the p-value after testing the alternative hypothesis H_1 against the surrogate data (Fig. <ref>). For instance, p_ρ_x → y is a p-value for the hypothesis that x influences y. The values of the δ-scores range from 0 to 1, with higher values indicating greater confidence in the existence of a causal relationship with a coupling delay of τ between the examined variables. 
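The bookkeeping that turns prediction skills and surrogate predictions into δ-scores can be sketched as follows for a fixed positive lag τ. The observed skills and the surrogate distributions below are stand-in numbers used only to illustrate the computation; in practice they would be produced by the reservoir predictor described above, with surrogates obtained from shuffled targets.

```python
import numpy as np

# Surrogate-based delta-scores for a positive lag tau:
#   delta_{x->y}  = (1 - p[rho_{x->y} > 0]) * (1 - p[Delta_{x->y} > 0])
#   delta_{x<->y} = (1 - p[rho_{x->y} > 0]) * (1 - p[rho_{y->x} > 0]) * p[Delta_{x->y} != 0]
rng = np.random.default_rng(1)

def p_greater(observed, surrogates):
    """Empirical p-value: fraction of surrogate values at least as large as the observation."""
    return (np.sum(np.asarray(surrogates) >= observed) + 1) / (len(surrogates) + 1)

def delta_scores(rho_xy, rho_yx, surr_xy, surr_yx):
    delta = rho_xy - rho_yx                                 # Delta_{x->y}(tau)
    delta_surr = np.asarray(surr_xy) - np.asarray(surr_yx)
    p_xy, p_yx = p_greater(rho_xy, surr_xy), p_greater(rho_yx, surr_yx)
    p_delta_pos = p_greater(delta, delta_surr)
    p_delta_two = (np.sum(np.abs(delta_surr) >= abs(delta)) + 1) / (len(delta_surr) + 1)
    d_uni = (1 - p_xy) * (1 - p_delta_pos)                  # confidence that x drives y at this lag
    d_bi = (1 - p_xy) * (1 - p_yx) * p_delta_two            # confidence in bidirectional coupling
    return d_uni, d_bi

# 100 shuffled-target surrogates per direction, as in the text; skills are stand-in values.
surr_xy, surr_yx = rng.normal(0, 0.05, 100), rng.normal(0, 0.05, 100)
d_uni, d_bi = delta_scores(rho_xy=0.8, rho_yx=0.1, surr_xy=surr_xy, surr_yx=surr_yx)
print(f"delta_x->y = {d_uni:.2f}, delta_x<->y = {d_bi:.2f}")
```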
Then, for a given lag τ, a matrix 𝐀_τ collects the δ-scores, where each element [x,y] represents the causal relationship from node signal x to ROIs signal y, A_τ[x, y] = δ_x → y(τ) unidirectional δ_x → y(τ) + δ_x ↔ y(τ) bidirectional. The effective connectivity (RC) matrix 𝐀_τ is a final representation of the effective connectivity network of every subject; it is directed, non-symmetric, and can incorporate bidirectional causality connections. For our experiments, for every possible interaction x → y, we trained 20 different reservoirs and tested against 100 shuffled targets, strictly following what was outlined in <cit.>. Furthermore, only unidirectional connections were kept from the adjacency matrix in Eq. <ref>. In our experiments, we investigated the classification of pathological groups with the effective connectivity matrices used as features (Fig. <ref> TOP), and we also compared those to the effective connectivity matrices obtained by Granger causality, representing one of the state-of-art approaches. As a last step, for each entry A_τ[x,y], we standardized all samples by subtracting the mean connectivity of the control group and dividing by the standard deviation. Finally, these standardized causal relationships (i.e., directed graphs) were fed into two simple graph classifiers to explore and explain the most informative nodes and links to detect stroke occurrence. §.§ Graph convolutional neural networks Graph convolutional neural networks (GNNs) are a variation of traditional convolutional neural networks which capitalize on graph data representations and can learn non-trivial representations by leveraging the complex topological organization of the data <cit.>. Intuitively, a graph constitutes a non-Euclidean geometric space where complex relationships between data points can be embedded and forwarded as inputs into a GNN <cit.>. More formally, a graph 𝒢 = (𝒱, ℰ) is defined as a set of nodes 𝒱 = {1, …, n} and a set of edges ℰ = {(i,j) | i,j ∈𝒱} where (i,j) represents a link or interaction between the i-th and j-th nodes. Initially, each node i ∈𝒱 is associated with a column feature vector 𝐡_i^(0)∈ℝ^d^(0). Every layer l of a GNN updates the hidden representation of each node by aggregating information from the neighborhoods: 𝐡_i^(l+1) = f_θ( 𝐡_i^(l), F( {𝐡^(l)_j | j ∈𝒩_i }) ), where h_i^(l+1)∈ℝ^d^(l+1) are the new node representations, 𝒩_i is the neighborhood of the i-th node, f_θ denotes a nonlinearity, and F is a permutation-invariant aggregator. Several proposals exist for the aggregation operator, determining the expressive power, interpretability, learning stability, and scalability of the network <cit.>. The non-symmetric effective connectivity maps derived are also non-attributed, that is, there are no node features to be aggregated in Eq. <ref>. Although non-attributed graphs are classifiable, they dramatically increase the problem's difficulty. Fortunately, the Local Degree Profile (LDP) method effectively decreases the challenge by setting the attributes of each node to local neighboring properties <cit.>. Thus, we computed the in and out degree of each node as well as the minimum, maximum, mean, and standard deviation of the in/out degree of its neighbors. This created a feature vector 𝐡_i^(0) of dimension 10 that was propagated through the directed adjacency matrix for every subject. 
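A short sketch of the LDP attribution on a directed effective connectivity matrix is given below; the random binary matrix and the choice of neighbourhood (the union of in- and out-neighbours) are illustrative assumptions.

```python
import numpy as np

# Local Degree Profile (LDP) attributes for a directed, binarised effective
# connectivity matrix A (A[i, j] = 1 if region i drives region j).  Each node
# gets a 10-dimensional vector: its in- and out-degree plus min/max/mean/std of
# the in- and out-degrees of its neighbours.
rng = np.random.default_rng(2)
n_rois = 100
A = (rng.random((n_rois, n_rois)) < 0.05).astype(float)   # random stand-in for an EC graph
np.fill_diagonal(A, 0)

def ldp_features(A):
    out_deg, in_deg = A.sum(axis=1), A.sum(axis=0)
    feats = np.zeros((A.shape[0], 10))
    for i in range(A.shape[0]):
        nbrs = np.where((A[i] > 0) | (A[:, i] > 0))[0]     # union of in- and out-neighbours
        feats[i, :2] = in_deg[i], out_deg[i]
        if len(nbrs):
            for k, d in enumerate((in_deg[nbrs], out_deg[nbrs])):
                feats[i, 2 + 4 * k: 6 + 4 * k] = d.min(), d.max(), d.mean(), d.std()
    return feats

H0 = ldp_features(A)            # node features h_i^(0), shape (100, 10)
print(H0.shape, H0[0])
```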
The neural network consisted of l=2 hidden layers and was trained for 150 epochs with a learning rate of 0.005 to minimize the binary cross entropy between the predicted and true classes (Fig. <ref> BOTTOM). The metrics were computed with a balanced class weight to account for the different number of samples in each class. The model was tested in a 10-fold cross-validation scheme and used a validation set to test for overfitting. §.§ Local Topology Profile A recent extension of the LDP attribution outlined before incorporates other local properties to the already-mentioned descriptors. This Local Topology Profile <cit.> has been shown to improve the accuracy over its parent version, namely LDP. Following the original proposal, we extended the feature vector 𝐡_i^(0) with the edge betweenness centrality <cit.>, the overlap between node neighborhoods (i.e., Jaccard index), and the local degree score <cit.>. However, as an attempt to further reduce the complexity of the workflow, we used the 13 LTP features with a random forest classifier of max depth 2 and a maximum number of features equal to 5. As in the GCN classifier, we used class weights to balance the dataset and used a 10-fold cross-validation scheme. The architecture used in practice is summarized in Figure <ref>. §.§ Local Interpretable Machine-Agnostic Explanations To explain the features allowing the classification we used the LIME (Local Interpretable Model-agnostic Explanations) approach. This technique explains the prediction of any classifier by learning the model locally around the prediction <cit.>. In our case, this was used to highlight the edges that contributed to the classification performance the most. LIME assigns a coefficient to each edge on the EC matrix based on the contribution to the final classification score. Positive values were useful in identifying the stroke group, whereas negative values were consistent in identifying the control group. The total explainability values of each ROI were calculated for both groups separately. These values were thresholded with the arbitrary threshold of 0.02 for the stroke group and -0.02 for the control group (because these directions helped the correct decisions). Edges associated with wrong decisions were not studied due to their lack of meaning in neurological terms. § RESULTS §.§ Effective connectivity maps derived from Reservoir Computing EC maps were not readily interpretable given the complex interactions expected to occur at different spatial and temporal scales. Consensus stipulates that information transfer is obscured by the hemodynamic response function, which effectively masks the corresponding temporal delay between cause-consequence associations. We computed effective connectivity maps between 100 ROIs at two different delays (Time of Repetition = 1 and 2; see Fig. <ref>). The average maps showed clear patterns of hemispheric segregation while at the same time exhibiting strong connectivity between homotopic regions. In canonical functional connectivity studies, this a priori segregated structure can be considered as an initial quality assessment of the resulting maps, forming the basis for an accurate description of the functional relationships expected to occur in brain disease. Even though stroke occurrence is not entirely random <cit.>, their exact morphologies and functional disconnection patterns are highly variable. 
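For reference, the classification stage described above can be sketched in a few lines: one fixed-length feature vector per subject (here a random stand-in for aggregated LTP descriptors) is fed to a depth-2 random forest with balanced class weights and evaluated with 10-fold cross-validation. Group sizes mirror the dataset, but the synthetic features make no attempt to reproduce the reported scores.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Sketch of the classification stage: random forest (max depth 2, at most 5
# features per split, balanced class weights) with 10-fold cross-validation.
# Feature vectors are random stand-ins for aggregated LTP descriptors; group
# sizes mirror the dataset (104 stroke, 26 controls).
rng = np.random.default_rng(3)
n_stroke, n_control, n_feats = 104, 26, 13 * 4        # e.g. mean/std/min/max of 13 LTP features

X = rng.standard_normal((n_stroke + n_control, n_feats))
y = np.array([1] * n_stroke + [0] * n_control)
X[y == 1] += 0.3                                       # inject a weak, synthetic group difference

clf = RandomForestClassifier(max_depth=2, max_features=5,
                             class_weight="balanced", random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold ROC AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```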
We further examined the properties of the directed networks by computing the average directed connectivity for controls, subjects suffering from right-hemispheric stroke, and subjects suffering from a stroke located on the left hemisphere (Fig. <ref>). Global hemispheric connectivity was computed by averaging the EC maps within and between hemispheres. That is, averaging the values in each on of the 4 visible squares in the average EC maps (Fig. 4). Briefly, intra- and inter-hemispheric connectivity was severely altered in all patients, showing a clear break of symmetric communication w.r.t. the control group, especially for right-impaired subjects <cit.>. §.§ Classification results The results of the classification are reported in Tables <ref> and <ref> respectively for the GCN and LTP classifiers. Results are reported for both the proposed method and Granger Causality: Average AUC, accuracy, precision, recall, and F1 are reported. As expected, the LTP (augmented with a random forest classifier) generally increased the classification metrics, although both models are comparable. It should be noted that classifying effective connectivity graphs is a complicated task due to sample heterogeneity <cit.>, and that very similar scores compared to the chance levels (e.g., an increase of 0.2-0.3) are found in the literature <cit.>. §.§ Node and edge importance in stroke detection We used the LIME explainability framework on the LTP classifier due to its slightly better performance and higher computational efficiency to highlight the most descriptive ROIs and edges related to stroke onset. Importantly, the explanations were done on top of the EC matrices obtained with the reservoir method and not the granger one. For each node in the EC networks, we summed all the explainability coefficients to assess the contribution of each connection arising in each node to the correct classification (i.e., sum over all columns). Lastly, binarized and thresholded explainability values were projected back to the surface mesh (Fig. <ref>; see also Methods and <cit.>). The resulting maps show that regions in visual, dorsal, and ventral attention have the most contribution to the classification performance for stroke subjects, while ventral attention and frontoparietal networks contributed the most to the detection of control subjects. § DISCUSSION This study addresses the critical need for precise diagnostic tools in stroke management, highlighting the complexity and variability of MRI data and the limitations of conventional machine learning approaches in capturing dynamic network disruptions. The proposed pipeline begins by employing reservoir computing to define effective connectivity of the brain <cit.>. Effective connectivity using reservoir computing has been recently proposed to unravel more precise interactions in large neural systems <cit.>. However, studies that thoroughly assess the quality of the resulting causal mappings remain unseen. We propose to evaluate them by first studying existing asymetries in brain information transfer. These maps lead to directed graph representations, which have been loosely explored by graph convolutional network classifiers. Later, we used these directed maps in a AI classification and explainability paradigm; that is, disentangling regions and connections that are important for each control or stroke group. Functional and effective connectivity asymmetries have been previously characterized in two different formats. 
Using a Granger-based methodology, Allegra and colleagues <cit.> described a connectivity imbalance between lesioned and healthy hemispheres. With the maps obtained with the whole brain reservoir computing causality methodology, we observed a similar pattern which was exacerbated in subjects suffering from right-sided lesions (Figs. <ref> and <ref>). Furthermore, upon examining the connectivity between hemispheres, the same type of broken balance was significantly visible as well. Future work could assess how this asymmetry relates to subject behavior. With respect to this, Koba and colleagues <cit.> explored hemispheric asymmetry in functional connectivity gradients <cit.> finding a slightly higher correlation between behavior and functional aberrancy in subjects with right-sided lesions. Hence, our findings agree with the fact that the location of the stroke conveys different functional and effective information at a connectomic scale strengthening the need for a more accurate characterization of the expected behavioral dysfunctions and prognosis <cit.>. Regarding the classification paradigm, graph-structured data is ubiquitous across various disciplines, yet the use of specific graph convolutional neural networks is relatively recent (see <cit.> for an extensive review). Extensions of methods for directed graph analysis have also been proposed <cit.>, modifying the architecture to perform node classification or link prediction. In this study instead, we focused on overall directed graph classification which was achieved by using conventional graph convolutions with directed adjacency matrices. We are then aggregating these Local Degrees and Topological Profiles based on the message passing across these directed connections. The pipeline achieves promising results, yielding an area under the curve of 0.69, superior to the state-of-art method (GC) using the GCN classification model. This should be considered a promising result given the highly heterogeneous dataset (stroke lesions were present in different parts of the brain), where similar scores relative to chance levels are often observed <cit.>. Furthermore, it was also possible to employ explainable AI tools to interpret disrupted networks despite these diversified lesions across brain networks. This elucidates the contribution of effective connectivity biomarkers that can capture aspects at a general level despite those individual differences, offering insights into disease mechanisms and treatment responses. Previous studies on structural connectome of stroke patients highlighted network dysfunctions <cit.>. Stroke-related modulations in inter- and intra-hemispheric coupling were recently investigated highlighting asymmetry and inter-areal interactions after stroke, related to broad changes in inter-areal communication and resulting in several deficits <cit.>. Moreover, Erdogan and colleagues argued that the global fMRI signal is affected by the stroke lesion generating a delay of the blood-oxygen-level-dependent (BOLD) signal depending on the lesions <cit.>. Our results were in line with those previous analyses. We found inter-hemispheric connectivity was severely altered in all patients, showing a clear break of symmetric communication w.r.t. the control group. The differences were particularly pronounced in the case of stroke lesions in the right hemisphere. 
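For reference, the intra-/inter-hemispheric comparison above reduces to a simple block-averaging of each directed EC matrix. A minimal sketch is given below; the ROI-to-hemisphere assignment is assumed to be available from the parcellation, and the array names are hypothetical.

```python
import numpy as np

def hemispheric_blocks(ec, is_left):
    """ec: (n_roi, n_roi) directed EC matrix; is_left: boolean array, True for
    left-hemisphere ROIs. Returns the mean coupling of the four blocks:
    within-left, left-to-right, right-to-left, within-right."""
    is_left = np.asarray(is_left, dtype=bool)
    L = np.where(is_left)[0]
    R = np.where(~is_left)[0]
    return {"LL": ec[np.ix_(L, L)].mean(),
            "LR": ec[np.ix_(L, R)].mean(),
            "RL": ec[np.ix_(R, L)].mean(),
            "RR": ec[np.ix_(R, R)].mean()}
```

Averaging these four numbers across the subjects of each group (controls, left-lesioned, right-lesioned) yields the comparisons discussed here.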
One possible hypothesis for these more pronounced right-hemisphere differences is that the integrity of the within-hemispheric networks is sustained through language-related connections: the right hemisphere is less involved in speech generation and therefore suffers more from the injury <cit.>. Indeed, the explainability maps of the control subjects resemble the vision and language networks. It is possible that the algorithm exploited the connections from and to the language network to detect control subjects. Aphasia is a common symptom of ischemic stroke, so the connections of the language network in the stroke group may show different characteristics. A similar hypothesis may also hold for stroke subjects, because the supplementary motor area, which plays an important role in language processing, was likewise useful for accurate classification. Importantly, alterations in the ventral and dorsal attention networks are often present in stroke <cit.>, which is in line with our explainability maps in Fig. <ref>. Nevertheless, these claims should be confirmed with larger datasets. Undoubtedly, there are several ways to discriminate control subjects from stroke patients that are less computationally demanding <cit.>, and previous studies have also shown a correlation between functional and effective connectivity, with the former being easier to compute than the latter <cit.>. Here, we emphasized the use of a classification task for two reasons: 1) to further assess the effective connectivity maps, and 2) to provide a strong basis on which to implement explainability pipelines. With this we also propose an approach to classify directed graphs. However, we also showed that a further mapping onto anatomical atlases is needed to obtain acceptable explainability. In conclusion, this work proposes an end-to-end pipeline for studying effective connectivity in brain disorders, capitalizing on a dedicated approach to directed graph classification and explainability. This analytical framework not only enhances clinical interpretability but can also inspire confidence in decision-making processes, which is crucial for translating research findings into clinical practice, as it turns complex neuroimaging features into simple visualizations. The study lays the groundwork for improved patient stratification in other brain diseases as well, with the ultimate goal of assisting doctors, and demonstrates the potential of reservoir computing causality, graph convolutional networks, and explainable analysis. § ACKNOWLEDGEMENT The authors thank Prof. Maurizio Corbetta for sharing the dataset used in this study. The publication was created within the project of the Minister of Science and Higher Education "Support for the activity of Centers of Excellence established in Poland under Horizon 2020" on the basis of the contract number MEiN/2023/DIR/3796. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 857533. This publication is supported by the Sano project carried out within the International Research Agendas programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund. This research was supported in part by the PLGrid infrastructure. Computations have been partially performed on the ARES supercomputer at ACC Cyfronet AGH.
http://arxiv.org/abs/2407.13263v1
20240718081448
Regularisation for the approximation of functions by mollified discretisation methods
[ "Camille Pouchol", "Marc Hoffmann" ]
math.NA
[ "math.NA", "cs.NA", "math.ST", "stat.TH" ]
http://arxiv.org/abs/2407.11932v1
20240716172329
Impossibility of latent inner product recovery via rate distortion
[ "Cheng Mao", "Shenduo Zhang" ]
math.ST
[ "math.ST", "cs.IT", "cs.SI", "math.IT", "stat.ML", "stat.TH", "62B10" ]
Impossibility of latent inner product recovery via rate distortion
Cheng Mao, Shenduo Zhang
§ ABSTRACT In this largely expository note, we present an impossibility result for inner product recovery in a random geometric graph or latent space model using the rate-distortion theory. More precisely, suppose that we observe a graph A on n vertices with average edge density p generated from Gaussian or spherical latent locations z_1, …, z_n ∈^d associated with the n vertices. It is of interest to estimate the inner products ⟨ z_i, z_j ⟩ which represent the geometry of the latent points. We prove that it is impossible to recover the inner products if d ≳ n h(p) where h(p) is the binary entropy function. This matches the condition required for positive results on inner product recovery in the literature. The proof follows the well-established rate-distortion theory, with the main technical ingredient being a lower bound on the rate-distortion function of the Wishart distribution which is interesting in its own right. § INTRODUCTION Random graphs with latent geometric structures comprise an important class of network models used across a broad range of fields <cit.>. In a typical formulation of such a model, each vertex of a graph on n vertices is assumed to be associated with a latent location z_i ∈^d where i=1,…,n. With A ∈{0,1}^n × n denoting the adjacency matrix of the graph, each edge A_ij follows the Bernoulli distribution with probability parameter κ(z_i,z_j), where κ : ^d ×^d → [0,1] is a kernel function. In other words, the edges of the graph are formed according to the geometric locations of the vertices in a latent space. Given the graph A, the central question is then to recover the latent geometry, formulated as estimating the inner products ⟨ z_i, z_j ⟩[One can also formulate the problem as estimating the pairwise distances {‖ z_i - z_j ‖_2}_i,j=1^n, which is essentially equivalent to inner product estimation. The problem is not formulated as estimating the latent locations {z_i}_i=1^n themselves, because the kernel function κ is typically invariant under an orthogonal transformation of z_1, …, z_n, making them non-identifiable.]. In the study of this class of random graphs, a Gaussian or spherical prior is often imposed on the latent locations z_1, …, z_n, including in the early work on latent space models <cit.> and in the more recent work on random geometric graphs <cit.>. In particular, the isotropic spherical or Gaussian prior allows the latter line of work to use the theory of spherical harmonics to analyze spectral methods for estimating the latent inner products. For a class of kernels including the step function κ(z_i, z_j) = {⟨ z_i, z_j ⟩≥τ} for a threshold τ, it is known (see Theorem 1.4 of <cit.>) that the inner products can be estimated consistently if d ≪ n h(p), where p is the average edge density of the graph and h(p) is the binary entropy function. However, a matching negative result was not established (as remarked in Section 1.3 of <cit.>).
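For intuition, here is a minimal simulation sketch of the model just described (not part of the original note): latent locations are sampled uniformly on the sphere and the graph is generated with the step kernel κ(z_i, z_j) = 1{⟨ z_i, z_j ⟩ ≥ τ}, with τ chosen empirically so that the average edge density is approximately a target p. The function names and parameter values are illustrative.

```python
import numpy as np

def sample_spherical(n, d, rng):
    z = rng.standard_normal((n, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)   # rows uniform on S^{d-1}

def step_kernel_graph(n=500, d=20, p=0.1, seed=0):
    rng = np.random.default_rng(seed)
    z = sample_spherical(n, d, rng)
    gram = z @ z.T                                         # inner products <z_i, z_j>
    upper = gram[np.triu_indices(n, k=1)]
    tau = np.quantile(upper, 1.0 - p)                      # threshold giving density ~ p
    A = (gram >= tau).astype(int)
    np.fill_diagonal(A, 0)
    return A, gram

A, gram = step_kernel_graph()
print("average edge density:", A[np.triu_indices_from(A, k=1)].mean())
# The estimation problem is to recover `gram` (up to small error) from `A` alone;
# the note shows this is impossible once d is of order n h(p) or larger.
```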
In this largely expository note, we close this gap by proving in Corollary <ref> that it is information-theoretically impossible to recover the inner products in a random geometric graph model if d nh(p), thereby showing that d ≍ n h(p) is indeed the recovery threshold[Another related statistical problem is testing a random geometric graph model against an Erdős–Rényi graph model with the same average edge density <cit.>. This testing threshold, or detection threshold, is conjectured to be d ≍ (n h(p))^3, and the lower bound is still largely open. See <cit.>.]. In fact, it is not difficult to predict this negative result from entropy counting: It is impossible to recover the geometry of n vectors in dimension d from n2 binary observations with average bias p if nd ≳n2 h(p) since there is not sufficient entropy. And this argument does not rely on the specific model (such as the kernel function κ) for generating the random graph A. To formalize the entropy counting argument, the rate-distortion theory <cit.> provides a standard approach (see also <cit.> for a modern introduction). The key step in this approach is a lower bound on the rate-distortion function of the estimand, i.e., X ∈^n × n with X_ij := ⟨ z_i, z_j ⟩ in our case. If z_1, …, z_n are isotropic Gaussian vectors, then X follows the Wishart distribution. Therefore, our main technical work lies in estimating the rate-distortion function for the Wishart distribution (and its variant when z_1, …, z_n are on a sphere), which has not been done explicitly in the literature to the best of our knowledge. See Theorem <ref>. The technical problem in this note is closely related to a work <cit.> on low-rank matrix estimation. To be more precise, Theorem VIII.17 of <cit.> proves a lower bound on the rate-distortion function of a rank-d matrix X = Z Z^⊤ where Z ∈^n × d. Our proof partly follows the proof of this result but differs from it in two ways: First, the result of <cit.> assumes that Z is uniformly distributed on the Stiefel manifold, i.e., the columns of Z are orthonormal, while we assume that Z has i.i.d. Gaussian or spherical rows. Without the simplification from the orthonormality assumption, our proof requires different linear algebraic technicalities. Second, the result of <cit.> focuses on d ≤ n, while we also consider the case d > n which requires a completely different proof. Finally, as a byproduct of the lower bound on the rate-distortion function of X, we present in Corollary <ref> an impossibility result for one-bit matrix completion. While one-bit matrix completion has been studied extensively in the literature <cit.>, less is known for the Bayesian model where a prior is assumed on the matrix X to be estimated <cit.>. Similar to inner product estimation from a random geometric graph, the goal of one-bit matrix completion is to estimate a (typically low-rank) matrix X from a set of binary observations. It is therefore plausible that many techniques for random graphs can be used for one-bit matrix completion, and vice versa. This note provides such an example. § MAIN RESULTS In this section, we study the rate-distortion function for the Wishart distribution and its spherical variant. Let I(X;Y) denote the mutual information between random variables X and Y. The rate-distortion function is defined as follows (see Part V of <cit.>). Let X be a random variable taking values in ^ℓ, and let P_Y | X be a conditional distribution on ^ℓ given X. 
Let L be a distortion measure (or a loss function), i.e., a bivariate function L : ^ℓ×^ℓ→_≥ 0. For D>0, the rate-distortion function of X with respect to L is defined as R_X^L(D) := inf_P_Y | X : L(X,Y) ≤ D I(X;Y). The main technical result of this note is the following lower bound on the rate-distortion function of a Wishart matrix. For positive integers n and d, let Z := [z_1 … z_n]^⊤∈^n × d where the i.i.d. rows z_1, …, z_n follow either the Gaussian distribution (0, 1/d I_d) or the uniform distribution on the unit sphere ^d-1⊂^d. Let X := Z Z^⊤. Define a loss function[The normalization in the definition of L is chosen so that the trivial estimator X = I_n of X has risk L(X,X̂) = 1 in the case of Gaussian z_i, since [X_ij^2] = [z_i,z_j^2] = 1/d for i≠ j and [(X_ii - 1)^2] = [(z_i,z_i - 1)^2] = 2/d.] L(X,X̂) := d/n(n+1)X-X̂_F^2. Let n d := min{n,d}. There is an absolute constant c>0 such that for any D ∈ (0, c), we have R_X^L(D) ≥ c n (n d) log1/D . For d < n, the n × n matrix X is rank-deficient and is a function of Z ∈^n × d, so we expect the order nd for the rate-distortion function; for d ≥ n, we expect the order n^2 considering the size of X. The matching upper bound on the rate-distortion function can be obtained using a similar argument as that in Section <ref> for small d and through a comparison with the Gaussian distribution for large d (see Theorem 26.3 of <cit.>). Since it is in principle easier to obtain the upper bound and only the lower bound will be used in the downstream statistical applications, we do not state it here. Moreover, at the end of this section, we discuss the best possible constant c in the above lower bound. The bulk of the paper, Section <ref>, will be devoted to proving Theorem <ref>. With this theorem in hand, we first establish corollaries for two statistical models via entropy counting. Fix positive integers n, d and a parameter p ∈ (0,1). Suppose that we observe a random graph on n vertices with adjacency matrix A with average edge density p, i.e., ∑_(i,j) ∈[n]2[A_ij] = n2 p. Suppose that A is generated according to an arbitrary model from the latent vectors z_1, …, z_n given in Theorem <ref>, and the goal is to estimate the inner products X_ij := ⟨ z_i, z_j ⟩ in the norm L defined in (<ref>). If d ≥ c n h(p) where c > 0 is any absolute constant and h(p) := -p log p - (1-p) log (1-p) is the binary entropy function, then for any estimator X̂ measurable with respect to A, we have L(X, X̂) ≥ D for a constant D = D(c) >0. The estimand X, the observation A, and the estimator X̂ form a Markov chain X → A →X̂. By the data processing inequality, we have I(X; X̂) ≤ I(A; X̂) ≤ H(A) , where H(A) denotes the entropy of A. Since ∑_(i,j) ∈[n]2[A_ij] = n2 p, by the maximum entropy under the Hamming weight constraint (see Exercise I.7 of <cit.>), we get H(A) ≤n2 h(p) . If L(X,X̂) ≤ D, then combining the above inequalities with Theorem <ref> gives c n(n d) log1/D≤ R_X^L(D) ≤ I(X, X̂) ≤n2 h(p) . Taking D > 0 to be a sufficiently small constant, we then get n d < c n h(p), i.e., d < c n h(p). As a second application of Theorem <ref>, we consider one-bit matrix completion with a Wishart prior. Fix positive integers n, d and a parameter p ∈ (0,1). Suppose that X ∈^n × n is a rank-d matrix to be estimated. Assume the prior distribution of X as given in Theorem <ref>. 
For each entry (i,j) ∈ [n]^2, suppose that with probability p_ij, we have a one-bit observation A_ij∈{0,1} according to an arbitrary model, and with probability 1-p_ij, we do not have an observation, denoted as A_ij = ∗. Let p be the average probability of observations, i.e., ∑_i,j=1^n p_ij = n^2 p. Let L be the loss function defined in (<ref>). If d ≥ c n (h(p) + p) where c>0 is any absolute constant and h(p) := -p log p - (1-p) log (1-p), then for any estimator X̂ measurable with respect to A, we have L(X, X̂) ≥ D for a constant D = D(c) > 0. The argument is the same as the proof of Corollary <ref>, except the bound on the entropy of A. Let Z ∈{0,1}^n × n have Bernoulli(p_ij) entries such that Z_ij = {A_ij∗}. Then we have the conditional entropy H(Z | A) = 0. Conditional on any value of Z, the entropy of A is at most log 2^Z_1. As a result, H(A | Z) ≤_Z log 2^Z_1 = n^2 p log 2 . We therefore obtain H(A) = H(A | Z) + I(Z; A) = H(A | Z) + H(Z) ≤ n^2 (h(p) + p log 2). The rest of the proof is the same as that for the random geometric graph model. Open problems. Several interesting problems are left open. * Sharp constant: Recall that the lower bound on the rate-distortion function of the Wishart distribution in Theorem <ref>. While the order n (n d) log1/D is believed to be optimal, we did not attempt to obtain the sharp constant factor. In the case d ≥ n, the rate-distortion function can be bounded from above by that of a Gaussian Wigner matrix, and the best leading constant is 1/4 (see Theorems 26.2 and 26.3 of <cit.>). Indeed, the end result of Section <ref> indeed shows a lower bound with the constant 1/4 in the leading term if D → 0. In the case d/n → 0, Lemma <ref> suggests that the best constant may be 1/2, but we did not make the effort to obtain it as the end result. The most difficult situation appears to be when d < n = O(d), in which case our techniques fail to obtain any meaningful constant factor. * Optimal rate: Combined with the work <cit.>, Corollary <ref> gives the recovery threshold d ≍ n h(p) for random geometric graphs with Gaussian or spherical latent locations. However, it remains open to obtain an optimal lower bound on L(X,X̂) as a function of d,n,p in the regime d ≪ n h(p). We believe the simple approach of entropy counting is not sufficient for obtaining the optimal rate and new tools need to be developed. * General latent distribution: Existing positive and negative results for estimation in random geometric graph models are mostly limited to isotropic distributions of latent locations, such as Gaussian or spherical in <cit.> and this work. It is interesting to extend these results to more general distributions and metric spaces; see <cit.> for recent work. Even for random geometric graphs with anisotropic Gaussian latent points, while there has been progress on the detection problem <cit.>, extending the recovery results to the anisotropic case remains largely open. § PROOF OF THEOREM <REF> Let ∈ (0,1) be some absolute constant to be determined later. We first consider the Gaussian model where z_i ∼(0, 1/d I_d). The proof is split into three cases d ≤ n, d ≥ n, and n < d < n, proved in Sections <ref>, <ref>, and <ref> respectively. We then consider the spherical model in Section <ref>. §.§ Case d ≤ n To study the rate-distortion function of X = ZZ^⊤, we connect it to the rate-distortion function of Z in the distortion measure to be defined in (<ref>). The strategy is inspired by <cit.>, but the key lemma connecting the distortion of X to that of Z is different. 
For Z, Ẑ∈^d × d, define a loss function for recovering Z up to an orthogonal transformation ℓ(Z,Ẑ) := 1/ninf_O∈(d)Z-Ẑ O _F^2 , where (d) denotes the orthogonal group in dimension d. The normalization is chosen so that ℓ(Z, Z) = ℓ(Z,0) = 1. We start with a basic linear algebra result. Let A, B ∈^n × d. For the loss functions L and ℓ defined by (<ref>) and (<ref>) respectively, we have ℓ(A,B) ≤√(n+1/n L(AA^⊤, BB^⊤)) . Consider the polar decompositions A = (A A^⊤)^1/2 U and B = (B B^⊤)^1/2 V where U, V ∈(d). Then we have ℓ(A, B) = 1/ninf_O∈(d)A - B O _F^2 ≤1/n(A A^⊤)^1/2 U - (B B^⊤)^1/2 V (V^⊤ U) _F^2 = 1/n(A A^⊤)^1/2 - (B B^⊤)^1/2_F^2 . The Powers–Størmer inequality <cit.> gives (A A^⊤)^1/2 - (B B^⊤)^1/2_F^2 ≤A A^⊤ - B B^⊤_* , where ·_* denotes the nuclear norm. In addition, A A^⊤ and B B^⊤ are at most rank d, so ℓ(A,B) ≤1/nA A^⊤ - B B^⊤_* ≤√(d)/nA A^⊤ - B B^⊤_F = √(n+1/n L(AA^⊤, BB^⊤)) . Next, we relate the rate-distortion function of X = ZZ^⊤ in the loss L to the rate-distortion function of Z in the loss ℓ. Let Z and X be defined as in Theorem <ref>, and let L and ℓ be defined by (<ref>) and (<ref>) respectively. Recall the notation of the rate-distortion function in Definition <ref>. For D > 0, we have R_X^L(D) ≥ R_Z^ℓ(√(8D)) . Fix a conditional distribution P_Y | X such that L(X,Y) ≤ D. Define Z̃ = _W ∈^n × dY - W W^⊤_F , where the non-unique minimizer Z̃ is chosen arbitrarily. Then we have Z Z^⊤ - Z̃Z̃^⊤_F ≤Z Z^⊤ - Y_F + Y - Z̃Z̃_F ≤ 2 Z Z^⊤ - Y_F . In other words, L(Z Z^⊤, Z̃Z̃^⊤) ≤ 4 L(X,Y) . By Lemma <ref>, ℓ(Z, Z̃) ≤√(2 L(Z Z^⊤, Z̃Z̃^⊤))≤√(8 L(X,Y)) . Jensen's inequality then yields ℓ(Z, Z̃) ≤√(8 L(X,Y))≤√(8 L(X,Y))≤√(8D) . Let O be a uniform random orthogonal matrix over (d), independent from everything else. In view of the definition of ℓ, we have ℓ(Z O, Z̃) = ℓ(Z, Z̃) ≤√(8D) . Therefore, by the definition of the rate-distortion function R_Z^ℓ (see Definition <ref>), I(Z O; Z̃) ≥ R_ZO^ℓ(√(8D)) = R_Z^ℓ(√(8D)) , where the equality follows from the orthogonal invariance of the distribution of Z. Next, we note that I(Z O; Z̃) ≤ I(Z Z^⊤; Z̃) . (In fact, equality holds because the reverse inequality is trivial by data processing.) To see this, given Z Z^⊤, take any A ∈^n × d such that Z Z^⊤ = A A^⊤, and let Q be a uniform random orthogonal matrix over (d) independent from everything else. Since A = Z P for some P ∈(d), we have (A Q, Z̃) = (Z P Q, Z̃) d= (Z O, Z̃), where d= denotes equality in distribution. Hence, the data processing inequality gives I(Z Z^⊤; Z̃) ≥ I(AQ; Z̃) = I(ZO; Z̃). Combining the above two displays and recalling that Z̃ is defined from Y, we apply the data processing inequality again to obtain I(X; Y) ≥ I(ZZ^⊤; Z̃Z̃^⊤) ≥ R_Z^ℓ(√(8D)) . Minimizing P_Y | X subject to the constrain L(X,Y) ≤ D yields the the rate-distortion function R_X^L(D) on the left-hand side, completing the proof. Let Z be defined as in Theorem <ref>, let ℓ be defined by (<ref>), and let R_Z^ℓ be given by Definition <ref>. There is an absolute constant C>0 such that for any D ∈ (0,1/4), we have R_Z^ℓ(D) ≥nd/2log1/4D - d^2/2logC/D . Fix a conditional distribution P_Ẑ| Z such that ℓ(Z,Ẑ) ≤ D. Let O = O(Z,Ẑ) ∈(d) be such that 1/nẐ O - Z_F^2 = ℓ(Z,Ẑ). Then we have Ẑ O - Z_F^2 ≤ nD. Let N((d),ϵ) be an ϵ-net of (d) with respect to the Frobenius norm, where ϵ^2 = nD/Z_2^2∧ d. For O = O(Z, Ẑ), choose Ô = Ô(Z, Ẑ) ∈ N((d),ϵ) such that Ô - O_F^2 ≤ϵ^2. Define W := ẐÔ. 
We have W - Z_F^2 = ẐÔ - Z _F^2 = Ẑ - ZÔ^-1_F^2 ≤ 2Ẑ - Z O^-1_F^2 + 2 Z O^-1 - ZÔ^-1_F^2 ≤ 2Ẑ O - Z_F^2 + 2 Z^2O^-1-Ô^-1_F^2 ≤ 2nD + 2 ϵ^2 Z^2 = 4nD , where · denotes the spectral norm. By Theorem 26.2 of <cit.> (with d replaced by nd and σ^2 replaced by 1/d), the rate-distortion function of Z with respect to the Frobenius norm L_0(Z,W) := Z-W_F^2 is R_Z^L_0(D) = nd/2logn/D . Since W-Z_F^2 ≤ 4nD, we obtain I(Z;W) ≥ R_Z^L_0(4nD) = nd/2log1/4D . Moreover, we have I(Z;W) ≤ I(Z;Ẑ,Ô) = I(Z;Ẑ) + I(Z;Ô|Ẑ) ≤ I(Z;Ẑ) + H(Ô) , where the three steps follow respectively from the data processing inequality, the definition of conditional mutual information I(Z;Ô|Ẑ), and a simple bound on the mutual information by the entropy. The above two inequalities combined imply I(Z; Ẑ) ≥ I(Z; W) - H(Ô) ≥nd/2log1/4D - H(Ô) . Since Ô∈ N((d),), the entropy H(Ô) can be bounded by the metric entropy of (). By Theorem 8 of <cit.>, there is an absolute constant C_0>1 such that the covering number of (d) with respect to the Frobenius norm is at most √(C_0 d)/ϵ^d^2 for any ϵ∈ (0,√(d)). We have = √(nD/Z^2)√(d)≥ c_1 √(dD) for an absolute constant c_1 > 0, where the bound follows from the concentration of Z at order O(√(n) + √(d)/√(d)) (see, e.g., Corollary 5.35 of <cit.>) and that d ≤ n. Therefore, H(Ô) ≤log |N((d)| ≤d^2/2logC_0 d/^2≤d^2/2logC_0/c_1^2 D . Putting it together, we obtain I(Z; Ẑ) ≥nd/2log1/4D - d^2/2logC_0/c_1^2 D , finishing the proof in view of the definition of R_Z^L(D). Combining Lemmas <ref> and <ref>, we conclude that R_X^L(D) ≥nd/2log1/4√(8D) - d^2/2logC/√(8D) ≥nd/8log1/D provided that D ∈ (0,c^*) and d ≤ c^* n for a sufficiently small constant c^* > 0. §.§ Case d ≥ n In the case d ≥ n, the Wishart distribution of X = ZZ^⊤ has a density on the set of symmetric matrices ^n(n+1)/2, and we can apply the Shannon lower bound <cit.> on the rate-distortion function. See Equation (26.5) and Exercise V.22 of the book <cit.> (with the norm taken to be the Euclidean norm and r=2) for the following result. Let Y be a continuous random vector with a density on ^N. For D>0, let R_Y^L_0(D) be the rate-distortion function of Y with respect to the Euclidean norm L_0(Y, Ŷ) := Y - Ŷ_2^2. Let (Y) denote the differential entropy of Y. Then we have R_Y^L_0(D) ≥(Y) - N/2log2 π e D/N . As a result, for the loss L defined by (<ref>) and the random matrix X distributed over ^n(n+1)/2, we have R_X^L(D) ≥(X) - n(n+1)/4log4 π e D/d . The differential entropy (X) of the Wishart matrix X is known. For X defined in Theorem <ref>, we have (X) = n(n+1)/2log2/d + logΓ_nd/2-d-n-1/2ψ_n d/2+nd/2, where Γ_n is the multivariate gamma function and ψ_n is the multivariate digamma function. The above two results combined give the lower bound R_X^L(D) ≥n(n+1)/2log2/d + logΓ_nd/2-d-n-1/2ψ_n d/2+nd/2 - n(n+1)/4log4 π e D/d = nd/2 + n(n+1)/4log1/π e D d + logΓ_nd/2-d-n-1/2ψ_n d/2 . We now analyze the functions Γ_n and γ_n. By Stirling's approximation for the gamma function (see Equation 6.1.40 of <cit.>), we have logΓ(x+1/2) ≥ x log (x+1/2) - x - 1/2 + log(2π)/2 for x ≥ 0. Together with the definition of the multivariate gamma function Γ_n, this gives logΓ_nd/2 = n(n-1)/4logπ + ∑_i=1^n logΓd+1-i/2 ≥n(n-1)/4logπ + ∑_i=1^n ( d-i/2logd+1-i/2 - d+1-i/2 + log(2π/e)/2) ≥n^2/4log(π e) - nd/2 + ∑_i=1^n ( d-i/2logd+1-i/2) - O(n) . Moreover, by Equation (2.2) of <cit.>, the digamma function satisfies log x - 1/x < ψ(x) < log x for x>0. 
Combining this with the definition of the multivariate digamma function ψ_n, we obtain d-n-1/2ψ_n d/2 = d-n-1/2∑_i=1^n ψ( d+1-i/2) ≤d-n-1/2∑_i=1^n logd+1-i/2 + O(n) , where we note that the O(n) term is only necessary in the case that d=n and d-n-1/2 is negative. Plugging the above two estimates into (<ref>), we see that R_X^L(D) ≥n(n+1)/4log1/D d + ∑_i=1^n ( n+1-i/2logd+1-i/2) - O(n) . If d ≥ 2n, then R_X^L(D) ≥n(n+1)/4log1/D d + ( logd+1-n/2) ∑_i=1^n n+1-i/2 - O(n) = n(n+1)/4log1/D + n(n+1)/4logd+1-n/2d - O(n) ≥n(n+1)/4log1/D - O(n^2) . For n ≤ d < 2n, we first note that the term n+1-i/2logd+1-i/2 with i=n can be dropped from the sum in (<ref>), because n+1-n/2logd+1-n/2 < 0 only if d=n, in which case the negative quantity 1/2log1/2 is subsumed by the -O(n) term. Furthermore, since the function x ↦n+1-x/2logd+1-x/2 is decreasing on [1,n], we have ∑_i=1^n-1( n+1-i/2logd+1-i/2) ≥∫_1^n n+1-x/2logd+1-x/2 dx = 2dn - d^2/4logd/d+1-n + n^2-1/4log (d + 1 - n) + O(n^2) , where the integral can be evaluated explicitly but we suppress O(n^2) terms for brevity. Plugging this back into (<ref>), we obtain R_X^L(D) ≥n(n+1)/4log1/D + 2dn - d^2 - n^2 + 1/4logd/d+1-n - O(n^2) . Since 2dn - d^2 - n^2 ≤ 0 and logd/d+1-n≤n-1/d+1-n≤n-1/d-n, it holds that 2dn - d^2 - n^2 + 1/4logd/d+1-n≥2dn - d^2 - n^2/4·n-1/d-n = -1/4 (d-n)(n-1) . (While the above argument relied on d>n due to the presence of d-n in the denominator, the conclusion clearly holds for d=n.) Consequently, we again have R_X^L(D) ≥n(n+1)/4log1/D - O(n^2) . This readily implies the desired lower bound. §.§ Case c^* n < d < n This case can be easily reduced to the case d ≥ n. Fix a conditional distribution P_Y | X such that L(X,Y) ≤ D. Let X_d be the top left d × d principal minor of X and define Y_d similarly. Then X_d clearly has the Wishart distribution as X in Theorem <ref> with n replaced by d. Let L_d be the loss L in (<ref>) with n replaced by d. Then we have L_d(X_d, Y_d) = d/d(d+1)X_d - Y_d_F^2 ≤d/(c^*)^2 n(n+1)X - Y_F^2 = 1/(c^*)^2 L(X,Y) , so L_d(X_d,Y_d) ≤ D/(c^*)^2. Applying the result for the case d = n, we get I(X_d; Y_d) ≥d(d+1)/4log(c^*)^2/D - O(d^2) ≥c^* nd/4log(c^*)^2/D - O(nd) . Since I(X;Y) ≥ I(X_d;Y_d), to complete the proof, it remains to take D ≤ c for a sufficiently small constant c>0 depending only on c^* and the hidden constant in O(nd). §.§ Spherical case We now consider the case Z = [z_1 … z_n]^⊤ and X = Z Z^⊤ where z_1, …, z_n are i.i.d. uniform random vectors over the unit sphere ^d-1⊂^d. The proof is via a reduction from the Gaussian case. Let w_1, …, w_n be i.i.d. (0, 1/d I_d) vectors and let β_i := w_i_2, so that z_i = w_i/β_i and w_i = β_i z_i. Let B ∈^n × n be the diagonal matrix with β_1, …, β_n on its diagonal. Let Y = BXB. Then Y has the distribution of X in the case where z_1, …, z_n are Gaussian vectors, so the result of the Gaussian case gives R_Y^L(D) ≥ c n (n d) log1/D . Fix a conditional distribution P_X̂| X such that L(X, X̂) ≤ D. Let g_1, …, g_n be i.i.d. (0, δ^2) random variables independent from everything else, where δ > 0 is to be chosen. Define β̂_i := β_i + g_i, and let B̂∈^n × n be the diagonal matrix with β̂_1, …, β̂_n on its diagonal. Define Ŷ := B̂X̂B̂. Since z_i is independent from β_i, we see that (X, X̂) is independent from (B, B̂). Hence, I(Y; Ŷ) ≤ I(X,B; X̂, B̂) = I(X; X̂) + I(B; B̂). For the term I(B; B̂), the independence across the pairs (β_i, β̂_i) for i = 1, …, n implies I(B; B̂) = ∑_i=1^n I(β_i; β̂_i) = n I(β_1 ; β̂_1). 
We have (β_1) = (w_i_2) = 1/d (d - 2 Γ((d+1)/2)^2/Γ(d/2)^2) ≤ 1/(2d) using the variance of the χ_d distribution and basic properties of the gamma function. Let g' ∼(0,1/(2d)). Then the Gaussian saddle point theorem (see Theorem 5.11 of <cit.>) gives I(β_1; β̂_1) ≤ I(g';g'+g_1) = 1/2log( 1 + 1/2dδ^2) . The above three displays combined yield I(X; X̂) ≥ I(Y; Ŷ) - n/2log( 1 + 1/2dδ^2). It remains to bound I(Y; Ŷ) from below. To this end, note that Ŷ - Y_F^2 = B̂X̂B̂ - BXB_F^2 ≤ 2 B̂X̂B̂ - B̂ X B̂_F^2 + 2 B̂ X B̂ - B X B_F^2 = 2 ∑_i,j=1^n β̂_i^2 β̂_j^2 (X̂_ij - X_ij)^2 + 2 ∑_i,j=1^n X_ij^2 (β̂_i β̂_j - β_i β_j)^2 . Since β̂_i = β_i + g_i, we have [β̂_i^2] = [β_i^2] + [g_i^2] = 1+δ^2. Moreover, we have [X_ii^2] = [(z_i^⊤ z_i)^2] = 1 and [X_ij^2] = [(z_i^⊤ z_j)^2] = 1/d for i j. Finally, [(β̂_i β̂_j - β_i β_j)^2] = [(β_i g_j + β_j g_i + g_i g_j)^2] = 2 δ^2 + [g_i^2 g_j^2] + 2 [β_i β_j] [g_i g_j] so [(β̂_i^2 - β_i^2)^2] = 4 δ^2 + 3 δ^4 and [(β̂_i β̂_j - β_i β_j)^2] = 2 δ^2 + δ^4 for i j. Since β̂_1, …, β̂_n are independent and B, B̂, X are mutually independent, we conclude that Ŷ - Y_F^2 ≤ 2 (1+δ^2)^2 X̂ - X_F^2 + 2 n (4 δ^2 + 3 δ^4) + 2 n(n-1)/d (2 δ^2 + δ^4) ≤ 8 n(n+1)/d D + 14 n/d D + 6 n(n-1)/d^2 D , where we used that L(X,X̂) ≤ D for the loss L defined in (<ref>) and chose δ^2 = D/d < 1. Hence, we have L(Y, Ŷ) ≤ 28 D. This together with (<ref>) implies that I(Y; Ŷ) ≥ c n (n d) log1/28D. Plugging this bound into (<ref>), we obtain I(X; X̂) ≥ c n (n d) log1/28D - n/2log( 1 + 1/2 D) . The above bound completes the proof if d ≥ C for some constant C > 0 depending only on c. For the case d ≤ C (in fact, for the entire case d ≤ c^* n), it suffices to note that the proof in Section <ref> also works for the spherical model. To be more precise, there are only three places where the Gaussianity assumption is used. First, the proof of Lemma <ref> uses the orthogonal invariance of the distribution of the rows of Z, which is also true for the spherical model where z_i is uniform over ^d-1. Second, (<ref>) uses the rate-distortion function of the entrywise Gaussian matrix Z. In the case where Z have i.i.d. rows distributed uniformly over ^d-1, it suffices to replace this formula by a lower bound: By Theorems 27.17 and 24.8 of <cit.>, we have R_Z^L_0(D) ≥n(d-1)/2log1/D - n C_2 for an absolute constant C_2 > 0, which is sufficient for the rest of the proof. Third, the proof of Lemma <ref> also uses that Z^2 is of order n+d/d, which is obviously true if d is of constant size and the rows of Z are on the unit sphere. § ACKNOWLEDGMENTS This work was supported in part by NSF grants DMS-2053333, DMS-2210734, and DMS-2338062. We thank Shuangping Li, Eric Ma, and Tselil Schramm for generously sharing their different approach to a similar result on random geometric graphs; the two works were developed concurrently and independently. We thank Yihong Wu and Jiaming Xu for helpful discussions on the rate-distortion theory. alpha
http://arxiv.org/abs/2407.13099v1
20240718021609
Asymmetric Hard X-ray Radiation of Two Ribbons in a Thermal-Dominated C-Class Flare
[ "Guanglu Shi", "Li Feng", "Jun Chen", "Beili Ying", "Shuting Li", "Qiao Li", "Hui Li", "Ying Li", "Kaifan Ji", "Yu Huang", "Weiqun Gan", "the LST team" ]
astro-ph.SR
[ "astro-ph.SR", "physics.plasm-ph" ]
addressref=aff1,aff2]G.L.Guanglu Shi0000-0001-7397-455X addressref=aff1,aff2,corref,email=lfeng@pmo.ac.cn]L.Li Feng0000-0003-4655-6939 addressref=aff1]J.Jun Chen0000-0003-3060-0480 addressref=aff1]B.L.Beili Ying0000-0001-8402-9748 addressref=aff1,aff2]S.T.Shuting Li0000-0003-2694-2875 addressref=aff1,aff2]Q.Qiao Li0000-0001-7540-9335 addressref=aff1,aff2]H.Hui Li0000-0003-1078-3021 addressref=aff1,aff2]Y.Ying Li0000-0002-8258-4892 addressref=aff4]K.F.Kaifan Ji0000-0001-8950-3875 addressref=aff1,aff2]Y.Yu Huang0000-0002-0937-7221 addressref=aff1,aff2]Y.P.Youping Li0000-0001-5529-3769 addressref=aff1]J.W.Jingwei Li addressref=aff1]J.Jie Zhao0000-0003-3160-4379 addressref=aff1]L.Lei Lu0000-0002-3032-6066 addressref=aff1]J.C.Jianchao Xue0000-0003-4829-9067 addressref=aff1]P.Ping Zhang addressref=aff1]D.C.Dechao Song0000-0003-0057-6766 addressref=aff1,aff2]Z.Y.Zhengyuan Tian0000-0002-2158-0249 addressref=aff1,aff2]Y.N.Yingna Su0000-0001-9647-2149 addressref=aff1]Q.M.Qingmin Zhang0000-0003-4078-2265 addressref=aff1]Y.Y.Yunyi Ge addressref=aff1,aff2]J.H.Jiahui Shan0009-0001-4778-5162 addressref=aff1,aff2]Y.Yue Zhou0000-0002-3341-0845 addressref=aff1,aff2]J.Jun Tian0000-0002-1068-4835 addressref=aff1,aff2]G.Gen Li addressref=aff1,aff2]X.F.Xiaofeng Liu0000-0002-3657-3172 addressref=aff1,aff2]Z.C.Zhichen Jing0000-0002-8401-9301 addressref=aff1]S.J.Shijun Lei addressref=aff1,aff3]W.Q.Weiqun Gan0000-0001-9979-4178 [id=aff1]Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, China [id=aff2]School of Astronomy and Space Science, University of Science and Technology of China, Hefei 230026, China [id=aff4]Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, China [id=aff3]University of Chinese Academy of Sciences, Nanjing 211135, China Shi et al. Asymmetric HXR Radiation in a C4.4 Flare § ABSTRACT The asymmetry in hard X-ray (HXR) emission at the footpoints (FPs) of flare loops is a ubiquitous feature closely associated with nonthermal electron transport. In this study, we analyze the asymmetric HXR radiation at two flare ribbons which is thermal-dominated during a long-duration C4.4 flare that occurred on March 20, 2023, combining multi-view and multi-waveband observations from the Advanced Space-based Solar Observatory (ASO-S), Solar Orbiter (SolO), and Solar Dynamics Observatory (SDO) spacecraft. We find that the Lyman-alpha () emission captures similar features to the λ304 emission in both light curve and spatio-temporal evolution of a pair of conjugate flare ribbons. The spectra and imaging analysis of the HXR emission, detected by Spectrometer Telescope for Imaging X-rays (STIX) in 4-18 keV, reveal that the two-ribbon flare radiation is thermal dominated by over 95%, and the radiation source mainly concentrates on the northern ribbon, leading to an asymmetric distribution. To understand the underlying reasons for the HXR radiation asymmetry, we extrapolate the magnetic field within the active region using the nonlinear force-free field (NLFFF) model. For 78% of the magnetic field lines starting from the northern flare ribbon, their lengths from the loop-tops (LTs) to the northern FPs are shorter than those to the southern FPs. For 62% of the field lines, their magnetic field strengths at the southern FPs exceed those at the northern FPs. 
In addition, considering the larger density, ≈1.0×10^10 cm^-3, of the low-lying flare loops (< 32 Mm), we find that the shorter path from the LT to the northern FP enables more electrons to reach the northern FP more easily after collisions with the surrounding plasma. Therefore, in this thermal-dominated C-class flare, the asymmetric location of the flare LT relative to its two FPs plays a dominant role in the HXR radiation asymmetry, while such asymmetry is also slightly influenced by the magnetic mirror effect resulting in larger HXR radiation at the FPs with weaker magnetic strength. Our study enriches the understanding of particle transport processes during solar flares. § INTRODUCTION Solar flares are known to accelerate particles and heat plasma, producing hard X-ray (HXR) bursts and gamma-ray bursts <cit.>. To investigate the underlying physical processes of plasma heating, particle acceleration, and magnetic reconnection, spectra and imaging analysis of the HXR emission during flares serve as crucial methods <cit.>. The enhancement of X-ray fluxes mainly originates from the thermal emission that heats the plasma to a high temperature and nonthermal bremsstrahlung emission associated with the energy release of energetic particles <cit.>. HXR sources are often observed at the footpoints (FPs), loop-top (LT) and above the LT of flare loops, which have been extensively studied using instruments such as the Yohkoh <cit.> and the Reuven Ramaty High-Energy Solar Spectroscopic Imager <cit.>. In observations, the HXR sources located at the FPs of solar flares typically present asymmetry, which can be classified into two main types: S-type <cit.> and N-type (non-Sakao type). These asymmetric phenomena are generally believed to be associated with the transport processes of nonthermal electrons in the flare loop. The S-type asymmetry of the HXR sources is often explained by the magnetic mirror effect <cit.>. In this scenario, the brighter HXR FP is located in a region where the strength of the photospheric magnetic field is weak. The magnetic mirror effect plays a role in guiding and trapping nonthermal electrons, leading them to concentrate and emit more X-rays at one FP than the other. The N-type asymmetry is featured as the brighter FP of the HXR emission corresponds to the region with a stronger magnetic field, which can be interpreted as the reconnection site generated by interactions with other flare loops being closer to that FP <cit.>. The magnetic reconnection reduces the effect of magnetic field convergence, allowing numerous electrons to precipitate at the brighter FP. The Advanced Space-based Solar Observatory <cit.>, launched in Oct. 2022, is the first comprehensive solar space observatory in China. The scientific objectives of ASO-S are focused on “1M2B” <cit.>, namely the magnetic fields and two types of violent eruptions on the sun – solar flares and coronal mass ejections (CMEs). There are three coordinated payloads on board the ASO-S, the Full-disk vector MagnetoGraph <cit.>, the Lyman-alpha () Solar Telescope <cit.>, and the Hard X-ray Imager <cit.>. Three payloads can be used to investigate the relationships among solar flares, CMEs, and magnetic fields. The LST payload consists of three instruments: the Solar Disk Imager (SDI), the Solar Corona Imager (SCI), and the White-light Solar Telescope (WST). These instruments enable simultaneous dual-waveband observations from the solar disk to the low corona, focusing on the studies of flare, prominence and CME eruptions. 
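Before describing the present analysis, note that the S-type (magnetic-mirror) asymmetry discussed above admits a simple back-of-the-envelope illustration. For an isotropic electron population at the loop-top, the adiabatic loss-cone condition sin^2(α_lc) = B_LT/B_FP gives the precipitating fraction at each footpoint; the field strengths below are hypothetical and chosen only to illustrate the trend, not taken from this event.

```python
import numpy as np

def precipitating_fraction(B_lt, B_fp):
    """Fraction of an isotropic electron population streaming from the loop-top
    toward a footpoint that falls inside the loss cone and precipitates,
    for loop-top field B_lt and footpoint field B_fp (B_fp > B_lt).
    Standard adiabatic-mirror estimate: sin^2(alpha_lc) = B_lt / B_fp."""
    return 1.0 - np.sqrt(1.0 - B_lt / B_fp)

B_lt = 50.0                      # hypothetical loop-top field, G
for B_fp in (300.0, 600.0):      # two footpoints with different convergence
    print(f"B_fp = {B_fp:.0f} G -> precipitating fraction "
          f"{precipitating_fraction(B_lt, B_fp):.3f}")
# The footpoint with the weaker field traps fewer electrons and therefore
# receives the larger precipitating fraction, i.e. the S-type picture.
```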
In this work, we analyze the asymmetric HXR sources in a two-ribbon flare dominated by over 95% thermal radiation during the very long rise phase of about two hours in a long-duration event (LDE). We further explain this phenomenon by combining the results of the nonlinear force-free field (NLFFF) extrapolation. Section <ref> provides an overview of the LDE observations. Section <ref> presents our analysis and results: (1) STIX HXR spectra and imaging analysis; (2) comparisons between and 304 Å emissions; (3) the configuration of the magnetic field extrapolated by the NLFFF model. Our conclusions and discussion are given in Section <ref>. § OVERVIEW OF OBSERVATIONS The two-ribbon flare (SOL2023-03-20T15:34) studied in this paper was produced by the eruption of a hot channel and filament system in the active region NOAA AR 13258. This event was observed by various instruments, including the SDI instrument of the LST payload on board the ASO-S, Spectrometer Telescope for Imaging X-rays <cit.> on board the Solar Orbiter <cit.>, the Atmospheric Imaging Assembly <cit.> on board the Solar Dynamics Observatory <cit.>, the Geostationary Operational Environmental Satellite (GOES), and the Chinese Hα Solar Explorer <cit.>. The SolO was located at a longitude separation of 18.4 from Earth and had a heliocentric distance of 0.51 AU. Multi-view and multi-waveband observations provide detailed insights into the dynamics of the flare and enable a more comprehensive understanding of the associated particle behaviors. This event is a GOES C4.4 flare that started at 13:30 UT and peaked at 15:34 UT on March 20, 2023, shown in Figure <ref>a, which is a long-duration event (LDE) lasted for an impressive duration of 11 hours. The curve of the time derivative (purple line) presented in Panel a is derived from the GOES data <cit.>. The light curves of HXR emissions at photon energies of 4-10 keV (green line) and 10-18 keV (magenta line), detected by STIX, are shown in Panel b. Panel c displays the light curves of AIA 304 Å (black line), AIA 94 Å (cyan line), and SDI (blue line), obtained by integrating over the entire flare region. All of these light curves display a significant enhancement during the flare. This event is classified as a relatively small C-class flare, only low-energy photons (4-18 keV) were detected by STIX at a distance of about 0.51 AU. Additionally, the peak time of the SXR emission, marked by a red dashed line, is close to that of the HXR emission. It implies that the energy released during this event is probably not dominated by nonthermal electrons (further evidence is presented in Section <ref>). Unfortunately, due to the instrument being in the test status, the SDI only has available observations starting from 13:57 UT during this period. Despite this limitation, it is still possible to observe a good temporal correlation between the HXR emissions and the light curves of and 304 Å, implying that they might have similar radiation origins. In the flare rising phase, and 304 Å radiation were from flare ribbons. However, clear distinctions are evident between the HXR and 94 Å. Figure <ref> presents the flare-related structures observed by multiple instruments during this LDE. Panels a and b clearly show the associated filament and hot channel observed by CHASE Hα and AIA 94 Å, respectively. As the filament rotates and rapidly rises, it subsequently develops into a CME accompanied by the formation of an elongated current sheet (CS), a series of blobs, and supra-arcade downflows (SADs). 
Simultaneous observations from AIA 304 Å and SDI capture the spatio-temporal evolution of a pair of conjugate flare ribbons, as shown in Panels c and d, presenting a good consistency with each other. During the flare rising phase, the two ribbons rapidly separate towards the northern and southern directions, respectively. The flare ribbons can also be observed in the Hα images. There is no significant brightening in the AIA 1600 Å and 1700 Å images, while brightening in these two passbands is generally believed to be associated with the motion of nonthermal electrons in the flare loops. These phenomena indicate that the flare ribbons are mainly dominated by thermal emission, and formed in the middle-to-upper chromosphere and transition region. § ANALYSIS AND RESULTS §.§ HXR Spectra and Imaging Analysis To determine the nonthermal/thermal nature of the two-ribbon flare and clarify the origin of the HXR emission, we conduct an analysis of both the spectra and imaging of the HXR detected by STIX during the flare rising phase. Figure <ref> and Figure <ref> present the results for the selected time intervals marked by the pink and blue shadows in Figure <ref>, respectively. First, we use a double thermal model that describes optically thin thermal bremsstrahlung radiation to fit the observed HXR spectra. Figure <ref> presents the spatially integrated count spectra (black solid lines) detected by STIX for various time intervals from 2 to 4 minutes, after subtracting the pre-flare background (black dashed lines) from 13:20 to 13:22 UT. The chosen time intervals are to ensure sufficient count rates in 4 to 18 keV. Considering the extremely long rising phase of about 2 hours for this flare, the temperature may not have significant change in 2 to 4 minutes. Blue lines in each panel represent the optimal fitting curves. Free parameters of the double thermal functions, including electron temperature (T) and emission measure (EM), are indicated in each panel. The residual distribution and the coefficient chi-square (χ^2) quantitatively evaluate the level of consistency between the fitting function and the HXR spectra. We also attempted to use a standard combination of thermal and nonthermal power-law functions to fit the HXR spectra. It turns out that their χ^2 values are at least twice as large as those in the double-thermal fittings. The thick-target spectral indices are about 8 to 10 during the flare rising phase, and the ratios of integrated nonthermal to thermal count fluxes are less than 5%. Therefore, based on these two different fitting models, the spectral fitting analysis demonstrates that the two-ribbon flare radiation is thermal dominated by over 95%. Panel a presents the spectral fitting results at 13:51 UT of the sub-peak preceding the main peak of the HXR light curves, which corresponds to the filament eruption. In Panels b-d, it can be seen that during the rising phase, the weighted mean temperature range of the flare is from 6.83 MK (14:21 UT) to 7.76 MK (15:30 UT). Then, we reconstruct the HXR maps detected by STIX and superpose them on AIA 304 Å (left column) and SDI (right column) images, as shown in Figure <ref>. Panel c additionally presents the composite image combining AIA 304 Å (red) and AIA 94 Å (cyan) at 14:58 UT. 
To generate these images, visibilities are computed by subcollimators labeled from 3 to 10, with an integration time of 6 minutes centered on the observation time of AIA, and applied to the MEM_GE algorithm <cit.> for the thermal sources in the 4-18 keV energy range, resulting in an angular resolution of ≈14.6”. The co-alignment of the STIX images was done with the Full Sun Imager (FSI) 304 Å images observed by the Extreme Ultraviolet Imager <cit.> on board the SolO. Subsequently, we reprojected the STIX images from the SolO view to Earth view, assuming that all photons originate from almost the same altitude, i.e., the middle-to-upper chromosphere or transition region. The reason for selecting a 6-minute integration time is that it can obtain the HXR images more consistent with the flare ribbons observed in AIA 304 Å and SDI compared to the 4-minute images during the initial stage of the flare, and they have similar imaging results near the peak time of the GOES flux. There were no SDI observations before 13:57 UT, and the parallel sections of the flare ribbons formed after 14:00 UT. Therefore, STIX imaging results at 13:51 UT are not presented in Figure <ref>. The gold contours in each panel indicate 45%-90% of the maximum HXR intensity, with a step of 15%. The HXR radiation mainly originates from the flare FPs and presents an asymmetric distribution, primarily concentrated along the parallel section of the northern ribbon. As the flare evolves towards the peak time of the SXR flux, a weak HXR source begins to appear at the southern FP (not shown in Figure <ref>). The white contours in each panel represent 55% of the maximum intensity of the AIA 304 Å. From comparisons, it is evident that the 304 Å and emissions present a good spatial consistency in the evolution of the flare FPs, consistent with the study of <cit.>. Note that the HXR sources are predominantly from the flare ribbon instead of the projection of the higher coronal source. From the light curves in Figure <ref>, the STIX light curve has more similarity to that of AIA 304 Å than AIA 94 Å. Moreover, if we compare the morphology details of the STIX HXR source with that of the flare ribbon observed in AIA 304 Å and that of flare loops observed in AIA 94 Å, we find that the HXR source resembles the northern ribbon more, especially at the upper-left end in Figure <ref>a and c. The spectra and imaging analysis further support the conclusion that the HXR emission from the flare FPs is dominated by the collision of thermal electrons injected downward from magnetic reconnection with the surrounding plasma in the dense flare loop. Furthermore, both the emissions of 304 Å and have similar radiation features produced by the thermal process that occurred in the two ribbons of the flare. §.§ and 304 Å Emissions To quantitatively investigate the relationship between the radiation intensities of 304 Å and , we conduct pixel-by-pixel comparisons in the two-ribbon regions and calculate their Pearson correlation coefficients (CCs). Aligning the SDI images with the AIA using the method developed by <cit.> is crucial to ensure that the calculated radiation comes from the same flare region, enabling accurate point-to-point comparison of radiation intensities between 304 Å and . We then uniformly sample points within the white contour (representing 55% of the 304 Å intensity) shown in Figure <ref> and extract their corresponding radiation intensities on a pixel-by-pixel basis. 
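After co-alignment, the pixel-by-pixel comparison just described reduces to a Pearson correlation over the pixels inside the ribbon contour. The sketch below is illustrative only; the array names are hypothetical, and the actual image alignment follows the method referenced above.

```python
import numpy as np
from scipy.stats import pearsonr

def ribbon_correlation(aia_304, sdi_lya, ribbon_mask):
    """aia_304, sdi_lya: co-aligned 2-D intensity maps on a common pixel grid;
    ribbon_mask: boolean map, True inside the 55%-of-maximum 304 A contour of
    one ribbon. Returns the Pearson CC of the masked pixel intensities."""
    x = aia_304[ribbon_mask].astype(float)
    y = sdi_lya[ribbon_mask].astype(float)
    cc, _ = pearsonr(x, y)
    return cc

# Applied separately to the northern and southern ribbon masks at each time
# step, this gives the CC evolution discussed next; the scatter plots simply
# display the same pixel pairs on logarithmic axes.
```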
Figure <ref> presents scatter diagrams of the intensity comparisons under logarithmic axes between 304 Å and in the southern (left column) and northern (right column) ribbon regions from 14:23 UT to 15:28 UT. One can see that the radiation has moderate intensity correlations with 304 Å, while the CCs of the southern ribbon (0.60-0.71) are greater than that of the northern ribbon (0.44-0.60). The calculated difference in CCs between the two ribbons may be attributed to the distinct environmental conditions in each region. Meanwhile, as the flare ribbons evolve, there is a gradual increase in the correlation between and 304 Å emissions, which may be closely associated with the heating process in the flare. §.§ NLFFF Extrapolation To understand the asymmetric distribution of the HXR emission and the morphology of the two ribbons, we conduct the NLFFF extrapolation based on the optimization method <cit.> in the flare region. The structure of the magnetic fields is presented in Figure <ref>. The Space-Weather HMI Active Region Patches <cit.> map, derived from the Helioseismic and Magnetic Imager <cit.> on board the SDO, is used as the boundary condition for the NLFFF model. The 180-degree ambiguity is corrected, and the map is projected onto the cylindrical equal-area (CEA) coordinate. Figure <ref>a presents the B_ z component of the SHARP map at 14:58 UT with a pixel scale of 0.36 Mm. The gold contour represents 30% of the maximum HXR intensity projected from the Helioprojective-Cartesian (HPC) coordinate in Figure <ref>c and d to the CEA coordinate, and red dots are uniformly sampled within it as the start points for tracing magnetic field lines (green lines). The dark green lines represent magnetic field lines, where the magnetic field strengths at the southern FPs are larger than those at the northern FPs. In Panel c, the distribution of traced field lines is displayed along with the ribbon contours of AIA 304 Å (55%, red lines) and HXR (30%, gold lines) on the HMI Line-of-Sight (LoS) magnetogram. The blue rectangle represents the Field-of-View (FoV) of the SHARP map shown in Panel a. One can see that the configuration of the magnetic field follows a horn-like shape, featuring a convergence field rooted in the southern region with negative fluxes and a divergent distribution in the northern region with positive fluxes. Based on the extrapolated magnetic field, we calculate the squashing factor Q <cit.> to quantify the magnetic connectivity using the FastQSL code developed by <cit.>. Figure <ref>b presents the distribution of the signed log Q, slog Q = sgn(B_ z)· log Q, at the photosphere. The magnetic field lines originating from the red dots marked in the high-Q region (red) in the northern region are all connected to the high-Q region (blue) in the southern region. To gain insights into the three-dimensional structure of the magnetic field, particularly the quasi-separatrix layer (QSL), we calculate the Q within the 3D box region. Panel d shows its distribution on a cross-section along the dashed line in Panel b and perpendicular to the XOY plane. From the QSL map, we can see the positions of the LT and FPs (F1, F2) of the flare loop. The LT is located at a height of ≈32 Mm above the photosphere, while F1 and F2 are rooted in the northern and southern ribbons, respectively. One can see that the distance from the flare LT to the two FPs is asymmetric, indicating that the length from the LT to F1 is shorter than that to F2. 
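To quantify this asymmetry field line by field line, each traced line can be reduced to a few scalars before aggregation. The sketch below illustrates the bookkeeping; the input format (an ordered list of sampled positions with the field vector at each point) is an assumption for illustration, not the output of any particular NLFFF code.

```python
import numpy as np

def field_line_stats(xyz, B):
    """xyz: (m, 3) ordered positions sampled along one traced field line, from
    the northern footpoint to the southern footpoint; B: (m, 3) field vectors
    at those positions. Returns the arc lengths from the loop-top (highest
    point) to each footpoint and the footpoint field strengths."""
    seg = np.linalg.norm(np.diff(xyz, axis=0), axis=1)   # segment lengths
    s = np.concatenate(([0.0], np.cumsum(seg)))          # arc-length coordinate
    apex = int(np.argmax(xyz[:, 2]))                     # loop-top index
    L_N, L_S = s[apex], s[-1] - s[apex]
    B_N, B_S = np.linalg.norm(B[0]), np.linalg.norm(B[-1])
    return L_N, L_S, B_N, B_S

# Tabulating these four numbers over all traced lines gives the fractions
# quoted below, e.g. the share of lines with L_S > L_N or with B_S > B_N.
```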
Such magnetic field configuration suggests that for the flare loops at the eastern side, the energy release of particles is located closer to F1. In addition, we conduct a statistical analysis on the relationship between the distance from LT to FPs, magnetic field strength B, and HXR intensity I for each traced field line, as shown in Figure <ref>. In Panel a, we calculate the ratio of the de-projected length L_ S/L_ N from the LT to the northern FT L_ N and the southern FT L_ S, as well as the ratio of the distance D_ S/D_ N from the projected points of the LT onto the XOY plane to the two FPs, respectively. The subscript `S' denotes the southern FP, while `N' denotes the northern FP. We find that 78% of the traced magnetic field lines have L_ S > L_ N, while 22% of the remains have L_ S < L_ N. Similarly, 79% of magnetic field lines have D_ S > D_ N, whereas 21% have D_ S < D_ N. The statistical relationship quantitatively demonstrates that the position of the flare LT is asymmetric. We then compare the relationship between the de-projected length ratio L_ S/L_ N from LT to FPs and the ratio of the magnetic field strength B_ S/B_ N at FPs, which is presented in Panel b. Based on the distribution of B_ S/B_ N, there is a slight difference in the magnetic field strength between the northern FPs and the southern FPs (62% of magnetic field lines have B_ S > B_ N at their FPs). The comparisons reveal that the asymmetry of the LT position is mainly attributed to the horn-shaped magnetic field configuration and is slightly influenced by the magnetic field strength at the FPs. Furthermore, we compare the ratio of the HXR intensity I_ N/I_ S at the FPs with the magnetic field strength B_ S/B_ N and the loop length L_ S/L_ N, as shown in Figure <ref>c and d. Due to the uniform sampling of the northern FP for each magnetic field line within the contour of 30% maximum HXR intensity, the HXR intensity ratios between the northern and southern FPs are always greater than 1, i.e., I_ N > I_ S. Similarly, one can see that there is a small difference in the magnetic field strength between the northern FP (38% field lines of B_ S < B_ N) and the southern FP (62% field lines of B_ S > B_ N), while 78% of LTs are closer to the northern FP, L_ S > L_ N. Therefore, we conclude that the asymmetric distribution of the HXR radiation is mainly attributed to the asymmetry of the flare LT, and is also slightly influenced by the magnetic mirror effect. § CONCLUSIONS AND DISCUSSIONS In this work, we investigate the asymmetry of the HXR radiation at two ribbons in a C4.4 flare by combining multi-view and multi-waveband observations from the ASO-S, SDO, and SolO spacecraft. The LDE observed on March 20, 2023, was caused by the eruption of a filament system, accompanied by a CME and an elongated CS. The light curves of both the SXR and HXR emissions present a consistent peak time, with no brightening observed in the 1600 Å and 1700 Å passbands, implying that thermal electrons dominate this event. The light curves of HXR, SDI and AIA 304 Å during the flare rising phase exhibit a consistent trend, indicating a common radiation origin primarily from flare ribbons formed in the middle-to-upper chromosphere and transition region. Thanks to the 24-hour uninterrupted full-disk observations from Earth's view provided by the SDI on board the ASO-S, we can study the spatio-temporal features of the two ribbons in waveband. 
These observations, along with those in 304 Å, reveal a similar evolution of the two ribbons, presenting rapid separation towards the northern and southern directions. The pixel-by-pixel comparisons between the SDI and AIA images further quantitatively demonstrate the moderate correlation between Lyα and 304 Å. Meanwhile, their emissions may be closely associated with the heating process in the flare. The spectral fitting of the HXR emission detected by STIX further confirms that the flare mainly releases energy through the process of thermal bremsstrahlung emission, implying that the radiations in both 304 Å and Lyα are produced by a thermal process. Additionally, the MEM_GE algorithm is applied to reconstruct the HXR maps observed by STIX. We find that the HXR radiation presents an asymmetric distribution, with the majority of the emission concentrated at the northern ribbon. Using the method of differential emission measure <cit.>, we estimate the electron density n_e inside the flare loops. The total emission measure EM and the average electron number density are deduced via
EM = ∫_T_min^T_max DEM(T) dT,
n_e = √(EM/l) (cm^-3),
where l represents the effective depth along the LoS. We further assume that the flare loops have similar extents in depth and width, and we estimate l to be ≈2.3×10^8 cm by measuring the loop width observed by AIA 94 Å. The value of EM can be directly derived from the DEM result. Therefore, we determine that the density n_e is ≈1.0×10^10 cm^-3, indicating that the flare loop is relatively dense. By combining the NLFFF extrapolation with observations, we explore the magnetic configuration and physical parameters within this active region. This approach allows us to understand the magnetic field structure and derive important physical properties within the two ribbons. The characteristic loop length L, derived from the traced field lines, is ≈5.9×10^9 cm. Based on the magnetic topology, we further demonstrate that the asymmetric HXR emission is produced by the asymmetric position of the flare LT and the magnetic mirror effect. The statistical parameters of the extrapolated field lines indicate that the de-projected length from the LT to the northern FP is shorter than that to the southern FP, and that the magnetic field strength at the southern FP is stronger than that at the northern FP, which favors thermal electrons colliding with the plasma inside the dense flare loop and reaching the northern ribbon more easily. At the same time, considering physical conditions such as the low height, ≈32 Mm, of the LT above the photosphere, as well as the denser plasma inside the flare loops, ≈1.0×10^10 cm^-3, more thermal electrons are able to reach the northern FP during the collision process with the surrounding plasma, resulting in a stronger HXR emission compared to the southern FP. Our work provides a novel approach for investigating the asymmetric HXR radiation in a two-ribbon flare dominated by over 95% thermal radiation. This approach involves combining the NLFFF extrapolation with observations to understand the magnetic field structure and physical properties within the flare region. Unfortunately, due to the absence of obvious HXR emission above ≈18 keV in this flare, the HXI on board the ASO-S did not receive sufficient counts for data analysis. 
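As an illustration of the density estimate described above, the short Python sketch below integrates a DEM profile to obtain EM and converts it to n_e with the measured effective depth l. The DEM array used here is a hypothetical placeholder, not the values actually derived from the AIA observations; only the formulas EM = ∫ DEM(T) dT and n_e = √(EM/l) and the depth l ≈ 2.3×10^8 cm come from the text.

import numpy as np

# Hypothetical DEM(T) profile on a log-spaced temperature grid [cm^-5 K^-1].
# In the actual analysis the DEM is derived from the AIA EUV channels.
T = np.logspace(5.5, 7.5, 100)                                        # temperature grid [K]
dem = 1.0e21 * np.exp(-0.5 * ((np.log10(T) - 6.9) / 0.25) ** 2)       # placeholder DEM

# Total emission measure: EM = integral of DEM(T) dT  [cm^-5]
EM = np.trapz(dem, T)

# Average electron density: n_e = sqrt(EM / l), with l the effective LoS depth.
l = 2.3e8                                   # effective depth [cm], from the AIA 94 Å loop width
n_e = np.sqrt(EM / l)

print(f"EM  ~ {EM:.2e} cm^-5")
print(f"n_e ~ {n_e:.2e} cm^-3")             # the paper reports approximately 1.0e10 cm^-3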
Future research endeavors will statistically analyze the asymmetric events of the HXR emission at FPs using the HXR data observed by the HXI and combine them with the unique Lyα waveband full-disk observations from the SDI to conduct more detailed studies of the asymmetric HXR radiation. We sincerely thank the anonymous referee for providing valuable suggestions that helped us improve the quality of the manuscript. We thank Shangbin Yang and Zhentong Li for their helpful discussions on the NLFFF model and STIX data processing. The authors thank the teams of SolO/STIX, SDO, GOES, and CHASE for their open-data use policy. The ASO-S mission is supported by the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences. SolO is a space mission of international collaboration between ESA and NASA, operated by ESA. The STIX instrument is an international collaboration between Switzerland, Poland, France, Czech Republic, Germany, Austria, Ireland, and Italy. SDO is a mission of NASA's Living with a Star (LWS) Program. The CHASE mission is supported by the China National Space Administration. G.L. Shi wrote the main manuscript, analyzed the data, and generated figures. L. Feng discovered the asymmetric HXR radiation event, conceived the study, and revised the manuscript. J. Chen, B.L. Ying, S.T. Li, and Q. Li discussed the results and provided helpful suggestions on this work. W.Q. Gan is PI of the ASO-S. H. Li and L. Feng are PI and Co-PI of the LST, respectively. K.F. Ji provided an alignment code for the AIA and SDI images. Y. Li, Y. Huang, Y.P. Li, J.W. Li, J. Zhao, L. Lu, J.C. Xue, P. Zhang, D.C. Song, Z.Y. Tian, Y.N. Su, Q.M. Zhang, Y.Y. Ge, J.H. Shan, Y. Zhou, J. Tian, G. Li, X.F. Liu, Z.C. Jing, and S.J. Lei contributed to the in-orbit testing, pipeline, and release of the LST data. All authors reviewed the manuscript. This work is supported by the National Key R&D Program of China 2022YFF0503003 (2022YFF0503000), the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB0560000, NSFC (grant Nos. 11973012, 11921003, 12103090, 12203102, 12233012, 12373115), and the mobility program (M-0068) of the Sino-German Science Center. The ASO-S data before Apr. 1, 2023 are currently being tested and not publicly available, but can be obtained from the corresponding author upon reasonable request. The ASO-S data after Apr. 1, 2023 are publicly available at <http://aso-s.pmo.ac.cn/sodc/dataArchive.jsp>. The AIA&HMI data are downloaded from the Joint Science Operations Center (JSOC) at <http://jsoc.stanford.edu>. The STIX data are publicly available at <https://datacenter.stix.i4ds.net/view/list/fits>. The GOES data are obtained from the National Oceanic and Atmospheric Administration (NOAA) at <https://www.ngdc.noaa.gov/>. The CHASE data are accessible through the Solar Science Data Center of Nanjing University at <https://ssdc.nju.edu.cn/NdchaseSatellite>. The authors declare no competing interests.
http://arxiv.org/abs/2407.12201v1
20240716215832
Dynamic Task Control Method of a Flexible Manipulator Using a Deep Recurrent Neural Network
[ "Kento Kawaharazuka", "Toru Ogawa", "Cota Nabeshima" ]
cs.RO
[ "cs.RO" ]
§ ABSTRACT The flexible body has advantages over the rigid body in terms of environmental contact thanks to its underactuation. On the other hand, when applying conventional control methods to realize dynamic tasks with the flexible body, there are two difficulties: accurate modeling of the flexible body and the derivation of intermediate postures to achieve the tasks. Learning-based methods are considered to be more effective than accurate modeling, but they require explicit intermediate postures. To solve these two difficulties at the same time, we developed a real-time task control method with a deep recurrent neural network named Dynamic Task Execution Network (DTXNET), which acquires the relationship among the control command, robot state including image information, and task state. Once the network is trained, only the target event and its timing are needed to realize a given task. To demonstrate the effectiveness of our method, we applied it to the task of Wadaiko (traditional Japanese drum) drumming as an example, and verified the best configuration of DTXNET. § INTRODUCTION Conventionally, the increase of rigidity has been emphasized to achieve precise control of the robot body <cit.>. On the other hand, in recent years, the necessity and effectiveness of the soft robot, whose body is underactuated and flexible, are being reconfirmed <cit.>. It has advantages in environmental contact, including human-robot interaction and the impact of falling down. The flexible body is difficult to control, especially when applying widely-used conventional control methods, e.g. quadratic programming <cit.> and model predictive control <cit.>. This is because they have been designed for the rigid body: they mostly handle states that are calculated from robot postures, such as the center of gravity and the contact state. As for the flexible body, the posture cannot be determined by just the output of actuators, and the modeling of its body is difficult. Furthermore, although the conventional methods can be applied, to a certain degree, to tasks such as applying force and moving objects, they are not applicable to tasks handling sparse event information such as drumming, hitting with a hammer, and playing tennis. This is because ordinary motion equations are not adequate to represent a sparse task state, and therefore, it is difficult to give successive feedback signals for the sparse task. There are many studies regarding the control of the flexible body <cit.>. Methods to control a robot with flexible links by precise modeling have been developed for over 30 years <cit.>. However, because the modeling of the flexible body is difficult, methods that do not require precise modeling have been developed, e.g. using fuzzy control <cit.>, reinforcement learning <cit.>, and neural networks <cit.>. The focus of these methods is on the control of the arm tip and the suppression of vibration, not on realizing given tasks. 
Also, they need to model the flexible body to a certain degree and search for its parameters, so it is difficult for them to handle materials with unknown characteristics or octopus-like flexible robots. Controlling the flexible body has some common points with manipulating flexible objects, and so there are many related works. Inaba et al. manipulated a rope with vision feedback <cit.>. Yamakawa et al. succeeded in dynamically manipulating flexible objects with precise modeling by high-speed robot hands <cit.>. Also, in recent years, thanks to the remarkable growth of deep learning, several methods that acquire flexible-object manipulation by trial and error, such as EMD Net <cit.> and Dynamics-Net <cit.>, have been proposed. There are also methods to learn the manipulation of flexible objects <cit.> and even how much force to apply <cit.> from teaching signals. These methods represent the state of flexible objects, which is difficult to model, by images or point clouds, and make use of them for the control of the actual robot. While the manipulation of flexible objects aims to change their posture and shape, flexible-body control aims not only to change the robot's posture but also to realize given tasks. There is also a method to learn task execution using images by a robot with flexible joints <cit.>, but it is verified only at the simulation level and needs teaching signals. In this study, we propose a method controlling the flexible body precisely and dynamically to realize given tasks, which handles not only force and position but also sparse event information (figure:motivation). Our method does not require any teaching or demonstration, and works by merely commanding the target task state. Our method consists of the Dynamic Task Execution Network (DTXNET) and a real-time backpropagation-based calculation of the control command using DTXNET. 
DTXNET is a deep neural network which represents the motion equation of the flexible body and the task using not only its joint angle, velocity, and torque but also image information. In order to make use of the dynamic characteristics of the flexible body, joint torque, which can directly control acceleration, is used as the control command. Our method is applicable to various manipulators and tasks if actuator and image sensor information is available and the task state is clearly defined, because DTXNET does not require any task-specific structure. As one example, we realize a difficult task handling sound, which outputs temporally sparse events, instead of a task of applying force or moving objects, which can be solved to a certain degree by conventional frameworks. The detailed contributions of this study are,
* Structure of DTXNET using image information to realize dynamic tasks by a flexible manipulator.
* Real-time control method to calculate the optimized torque command using DTXNET and its backpropagation through time.
* Realization of a dynamic task of handling sparse event information.
In the following sections, first, we will explain the basic structure of DTXNET, various configurations of DTXNET by changing inputs and outputs, the training phase of DTXNET, and the real-time control phase using DTXNET. Next, we will set a task of Wadaiko (traditional Japanese drum) drumming using a flexible manipulator, and compare the performance among various configurations of DTXNET by changing the input and output of the robot state. Finally, we will discuss the network structure and experiments, and state the conclusion. § DTXNET AND REAL-TIME CONTROL SYSTEM In subsec:basic-structure - subsec:optimized-torque, we will explain the overview of DTXNET and a real-time control system using it, and in subsec:whole-system, we will explain the specific implementation and parameters for experiments. §.§ Basic Structure of DTXNET We show the overview of DTXNET and its application to the dynamic control system in figure:network-structure. DTXNET is a versatile network which represents the transitions of the current robot state and task state due to control commands. This network does not depend on the kind of manipulators and tasks. The equations of the network are,
h(i=0) = f_init(s_init(t))
h(i+1) = f_update( [ h^T(i) u^T(i) ]^T )
s(i+1) = f_robot(h(i+1))
o(i+1) = f_task(h(i+1))
where s is the measured robot state, o is the task state, u is the control command, and h is the hidden state that is obtained by embedding the robot state. t is the time step for the robot, i is the time step for DTXNET, and the interval of these two variables is the same. 
f_init is a function embedding the measured robot state into the hidden state, f_update is a function that outputs the hidden state at the next time step from the current hidden state and control command, and f_robot and f_task are functions that output the robot state and task state from the hidden state, respectively. s_init(t) and s(i+1) can represent different contents; for example, the former can be position and velocity and the latter can be just position. s(i+1) should be derivable from s_init(t). We use actuator and image information as the robot state, and joint torque as the control command. If we assume the robot body is rigid, we can express s only by its joint angle and velocity. On the other hand, for the flexible body, we need to include image information and joint torque in s. When considering the flexible body as an underactuated multi-link structure, its state can be expressed by image information. Also, although the control command u is joint position or velocity for an ordinary position control robot, in this study, we use the joint torque command, which can directly control the acceleration, in order to achieve a truly dynamic control. §.§ Various Configurations of DTXNET We represent eq:init - eq:out as neural networks. We can consider six types of configurations of DTXNET as shown in figure:various-network. DTXNET must fulfill the minimum requirements of taking the measured robot state and control command as inputs and outputting the task state. In these configurations, Joint State represents a concatenated vector of the joint angle, velocity, and torque information. FC, Conv, Deconv, and LSTM represent fully connected layers, convolutional layers, deconvolutional layers, and Long Short-Term Memory <cit.>, respectively. Although there are several choices for the recurrent network structure, in this study, we used LSTM, which is one of the most representative recurrent networks. There are three options in the design of the robot state: using only Joint State (Type 1), using only Image (Type 2), or using both Joint State and Image (Type 3). In these configurations, Image means the current image and optical flow for the input s_init, and only the current image for the output s. 
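To make the recurrence of eq:init - eq:out concrete, the following is a minimal sketch of a Type 3^+-like configuration written with PyTorch-style modules. The class name, the layer sizes, and the assumption that the image has already been encoded into a feature vector are all illustrative simplifications introduced here; the authors' actual implementation uses Chainer and the exact layer dimensions given later in the Implementation Details.

import torch
import torch.nn as nn

class DTXNetSketch(nn.Module):
    """Minimal sketch of eq:init - eq:out (hypothetical sizes, not the paper's exact layers)."""
    def __init__(self, joint_dim=6, img_feat_dim=256, u_dim=2, h_dim=128, o_dim=1):
        super().__init__()
        self.f_init = nn.Linear(joint_dim + img_feat_dim, h_dim)   # eq:init (image assumed pre-encoded)
        self.u_embed = nn.Linear(u_dim, h_dim)                     # arrange u before feeding the LSTM
        self.f_update = nn.LSTMCell(h_dim, h_dim)                  # eq:update
        self.f_task = nn.Linear(h_dim, o_dim)                      # eq:out
        self.f_robot = nn.Linear(h_dim, joint_dim)                 # eq:state (Type ^+ only)

    def rollout(self, s_init, u_seq):
        # s_init: (B, joint_dim + img_feat_dim), u_seq: (T, B, u_dim)
        h = torch.tanh(self.f_init(s_init))
        c = torch.zeros_like(h)
        tasks, states = [], []
        for u in u_seq:                                            # T_train or T_control steps
            h, c = self.f_update(self.u_embed(u), (h, c))
            tasks.append(self.f_task(h))                           # o(i+1)
            states.append(self.f_robot(h))                         # s(i+1)
        return torch.stack(tasks), torch.stack(states)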
Because our preliminary experiments showed that inferring the optical flow is difficult, the optical flow is not included in the output. There are two options in the output of the network. Type ·^- does not output the robot state, and Type ·^+ outputs the robot state. While the robot state is not directly involved in the realization of the task, we can expect DTXNET to increase the prediction accuracy by using the robot state at the training phase. According to the above classifications, the six types of DTXNET are obtained by the combination of these options: 1^-, 2^-, 3^-, 1^+, 2^+, and 3^+. In this study, we not only realize a dynamic task using DTXNET, but also consider how the difference in types of DTXNET influences the task achievement. §.§ Training Phase of DTXNET Because both the robot state and the task state, which are the input and output of DTXNET, can be easily obtained, the training procedure of DTXNET does not require any manual annotations. First, we send random control commands to the robot, and store the measured robot state s, control command u, and task state o. Then, we determine the number of time steps to expand DTXNET (T_train). After inputting s_init(t) into DTXNET, {s(i=1), ⋯, s(i=T_train)} and {o(i=1), ⋯, o(i=T_train)} are predicted by inputting u(i=c) (0 ≤ c < T_train) T_train times, and DTXNET is trained by calculating the loss (L_train) between the predicted and actual values. In this training phase, T_train should be larger than the number of time steps by which DTXNET is expanded at the real-time control phase (T_control). §.§ Real-time Control Phase Using DTXNET The real-time control phase using the trained DTXNET has six steps, as shown in figure:network-structure.
* (a) Obtain the current robot state from the actual robot
* (b) Determine the initial control command sequence
* (c) Feed them into DTXNET
* (d) Calculate the loss
* (e) Optimize the control command sequence
* (f) Send the calculated control command to the robot
All steps from (a) to (f) are executed within one time step that controls the actual robot. In (a), the current sensor information of the actual robot s_init(t) is obtained, and the current hidden state h(i=0) is calculated by eq:init. In (b), the initial control command sequence is calculated before optimization. This procedure is important, because the final calculation result largely depends on the initial value. 
We define the control command sequence that was optimized at the previous time step as u^seq_pre = {u_pre(i=-1), u_pre(i=0), ⋯, u_pre(i=T_control-3), u_pre(i=T_control-2)}. By shifting it and replicating its last term u_pre(i=T_control-2), the initial control command sequence is constructed as u^seq_init, 0 = {u_pre(i=0), u_pre(i=1), ⋯, u_pre(i=T_control-2), u_pre(i=T_control-2)}. By starting from the previously optimized value, a better control command can be obtained efficiently. Also, we define the number of data per minibatch as N_1, and the minimum and maximum values of the control command as u_min and u_max, respectively. We divide [u_min, u_max] equally into N_1-1 parts and construct N_1-1 control command sequences u^seq_init, k (1 ≤ k < N_1) filled with each value. Then, a minibatch of N_1 initial control command sequences u^seq_init, k (0 ≤ k < N_1) is constructed. In (c), as stated in subsec:training, we determine T_control, which is the number of time steps to predict, and update DTXNET by eq:update T_control times. While updating with eq:update, the task state is predicted by eq:out for each h. In this real-time control phase, we can omit eq:state to reduce the computational cost, because we do not have a specific desired robot state and the prediction of the robot state cannot be used for the loss calculation. This omission is important for the real-time computation. When eq:state is omitted, the difference among the six types is only the calculation of f_init. Since eq:init is calculated only once at the beginning of the T_control-step iteration, the computational cost is almost the same among the six types. In (d), the loss L_control between the target task state sequence and the predicted task state sequence {o(i=1), ⋯, o(i=T_control)} is calculated. The implementation of L_control depends on the task, which we will explain in subsubsec:loss-calculation. In (e), first, u^seq = {u(i=0), ⋯, u(i=T_control-1)} with the lowest L_control is determined in the minibatch constructed at step (b). u^seq is optimized from L_control by using backpropagation through time <cit.>, as shown below,
g = dL_control/du^seq
u^seq_optimized = u^seq - γ g/|g|
where γ is the learning rate of the optimization, g is the gradient of L_control with respect to u^seq, and u^seq_optimized is u^seq after the optimization. We can determine γ manually, but in this study, a minibatch is constructed by varying γ, and the best γ is chosen. We define the number of data per minibatch as N_2 and the maximum value of γ as γ_max. We divide [0, γ_max] equally into N_2 parts, and a minibatch with N_2 control command sequences, consisting of u^seq_optimized computed with each γ, is constructed. Steps (c) and (d) are executed again, and finally the u^seq_optimized with the lowest loss L_control is determined. In (f), u^seq_optimized is sent to the actual robot. Because the maximum calculation time allowed by the control frequency is used, we send not u^seq_optimized(i=0) but u^seq_optimized(i=1). 
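The following is a minimal sketch of one cycle of steps (b)-(e), reusing the hypothetical DTXNetSketch above. The plain squared-error loss, the clamping of the optimized command to [u_min, u_max], the helper names, and the tensor shapes (sequences of shape (T_control, 1, u_dim)) are simplifying assumptions made here, not the authors' exact implementation; the task-dependent loss actually used is defined later in the Loss Calculation subsection.

def optimize_command(model, s_init, u_prev_seq, o_target_seq,
                     u_min=-0.3, u_max=0.3, n1=5, n2=5, gamma_max=0.3):
    """One control step: build initial candidates, roll out DTXNET, refine by BPTT."""
    # (b) shift the previously optimized sequence and replicate its last term,
    #     plus N1-1 constant sequences spanning [u_min, u_max]
    shifted = torch.cat([u_prev_seq[1:], u_prev_seq[-1:]], dim=0)
    candidates = [shifted] + [torch.full_like(shifted, float(v))
                              for v in torch.linspace(u_min, u_max, n1 - 1)]

    def loss_of(u_seq):
        o_pred, _ = model.rollout(s_init, u_seq)        # (c) expand DTXNET T_control times
        return ((o_pred - o_target_seq) ** 2).mean()    # (d) placeholder for L_control

    # pick the best candidate of the minibatch
    u_seq = min(candidates, key=lambda u: loss_of(u).item())
    u_seq = u_seq.detach().clone().requires_grad_(True)

    # (e) one backpropagation-through-time step, trying N2 learning rates in [0, gamma_max]
    g = torch.autograd.grad(loss_of(u_seq), u_seq)[0]
    step = g / (g.norm() + 1e-8)
    trials = [(u_seq - gamma * step).detach().clamp(u_min, u_max)
              for gamma in torch.linspace(0.0, gamma_max, n2)]
    u_opt = min(trials, key=lambda u: loss_of(u).item())
    return u_opt                                        # (f) send u_opt[1] to the robot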
§.§ Implementation Details We will explain the detailed implementation of our system. We will explain, in order, the sound processing for Wadaiko, the image processing, the detailed network structure, the parameters for the training and control phases, and the loss calculation. §.§.§ Sound Processing In this study, we set the target state of Wadaiko as a 1-dimensional value of the sound volume. First, the sound spectrum in each frequency is calculated by fast Fourier transform. We determine the frequency band with the largest spectrum when drumming the Wadaiko (in this study, 270 - 310 Hz), and the largest spectrum value v in the band is calculated. We determine the threshold v_thre to judge whether it is noise or not. Then, the value v (if v ≥ v_thre) or 0 (if v < v_thre) is used as the task state. §.§.§ Image Processing The image is used after resizing and binarization. The detailed procedures are, in order, resizing, background subtraction, blurring, binarization, closing, and opening. A 640 × 480 image is resized to 64 × 64. §.§.§ Detailed Network Implementation We will explain the detailed network structure of Type 3^+ as an example. Regarding the other types, the network simply does not use the irrelevant parts. First, we will explain eq:init. A 12288-dimensional Image of 3 × 64 × 64 is embedded into 256 dimensions by convolutional layers. The convolutional layers have six layers, and the numbers of channels are 3 (input), 4, 8, 16, 32, and 64, respectively. For all layers, the kernel size is 2 × 2, the stride is 2 × 2, and the padding is 1. 
We insert Batch Normalization <cit.> after all the convolutional layers. The 3 channels of the input are for the current image and the xy directions of the optical flow. The Joint State and the 256 dimensions of Image are embedded into the 128 dimensions of h(i=0) by one fully connected layer. Next, we will explain eq:update. It is expressed by one layer of LSTM in which both the input and output have 128 dimensions. In the procedure (c) stated in subsec:optimized-torque, the hidden state of the LSTM is initialized to h(i=0) calculated by eq:init and the cell state of the LSTM is initialized to 0. The control command u(i) is mapped to 128 dimensions by one fully connected layer and is input to the LSTM. eq:out is composed of one fully connected layer, and converts h(i+1) to the task state. Finally, we will explain eq:state. The Joint State is predicted from h(i+1) through one fully connected layer. The image is predicted by six deconvolutional layers which have the same structures as the convolutional layers of eq:init. Compared to the convolutional layers, the last deconvolutional layer does not include Batch Normalization, and its activation function is Sigmoid instead of ReLU. In this study, we implement DTXNET by Chainer <cit.>, and the entire system runs on just a CPU. §.§.§ Parameters of Training and Control Phase For the task of Wadaiko drumming, we set the control frequency to 15 Hz. We set T_control = T_train = 8; it means that DTXNET can predict about 0.53 sec ahead. Also, we set N_1 = N_2 = 5, and γ_max = 0.3. The parameters stated above depend on the limit of the calculation time; they should be adjusted depending on the performance of the machine learning library and the specifications of the PC. We set u_min = -0.3 [Nm] and u_max = 0.3 [Nm] for all motors. We use Adam <cit.> as the optimizer of DTXNET. §.§.§ Loss Calculation The loss L_train at the training phase is calculated as below,
L_train = L_j + L_i + L_o
where L_j is the loss of Joint State, L_i is the loss of Image, and L_o is the loss of the task state. L_j and L_i are the mean squared errors of all elements, and they are adjusted by gains of w_j = 1.0 and w_i = 30.0. Regarding L_o, we control the timing and volume of the sound to follow the target values. However, the sound information is sparse, and if the network learns it simply by mean squared error, a state of no sound output may produce the lowest loss. 
To solve this problem, we use the loss shown below,
L_o = α(o_target - o_predicted)^2 if o_target > 0,
L_o = (o_target - o_predicted)^2 otherwise,
where o_predicted is the predicted sound volume, o_target is the target sound volume (at the training phase, this is the actual value to be predicted), and α (α > 1) is a gain. Because the sound information is sparse, the loss is multiplied by α (in this study, α = 10) when a sound is produced. Also, at the training phase, in order to adjust the scales of L_i, L_j, and L_o, L_o is multiplied by a gain of w_o = 10.0. At the control phase, L_control equals L_o. § EXPERIMENTS §.§ Experimental Setup The experimental setup of Wadaiko drumming is shown in figure:experimental-setup. The number of active degrees of freedom (DOFs) of the flexible manipulator is two, and they are actuated by Dynamixel motors (XM430-W210-R). We use a D435 (Intel RealSense) as the RGB sensor, and set a microphone next to the manipulator. The links of the flexible manipulator are made of a soft cutting board. While the flexible body structure can absorb the impact of drumming, its control is difficult because the volume and timing of the sound differ depending on how much it bends. The drumstick is attached to the tip of the manipulator with a soft sponge, which makes its control more difficult with conventional methods, so we can verify the benefit of the learning control system. The entire experimental setup is covered by a black curtain to make it easier to binarize the image. §.§ Training of DTXNET We conducted experiments regarding how well DTXNET can predict the task state and robot state. First, we gave a random torque to the robot and obtained motion data for 5 minutes. In this process, we determined the maximum change of the joint torque du_max = 0.1 [Nm] in one step, and sent the sum of the previous target torque and a random value in [-du_max, du_max] to the robot. The control frequency is 15 Hz, so we obtained 4500 steps of data in 5 min. Of these data, we used 80% as training data and the rest as test data, and trained DTXNET for 100 epochs. The transitions of the losses L_i, L_j, L_o, L and the validation losses L^v_i, L^v_j, L^v_o, L^v when using the model of Type 3^+ are shown in figure:train-type3. We can see that L_i, L_j and L^v_i, L^v_j decreased in the same way, but the prediction of L^v_o was difficult. 
Because predicting the task state is the most important factor in achieving the task, we use the model with the lowest L^v_o among all epochs at the control phase. We show the lowest L^v_o of each type in figure:train-comparison. Regarding the L^v_o, we can see simple relationships between the types: 1^- > 2^- > 3^-, 1^+ > 2^+ > 3^+, 1^- > 1^+, 2^- > 2^+, 3^- > 3^+. figure:inference-type3 shows examples of inference results when using the model of Type 3^+, which has the lowest L^v_o among all types. These results were obtained in the two situations named (i) and (ii). The graphs in figure:inference-type3 show the predicted values from DTXNET (Predicted) and the actual values (Actual) regarding the task state, joint angle, joint velocity, joint torque, and image, respectively. We expanded DTXNET and obtained the predicted values up to 10 steps ahead. We can see that the joint angle, joint velocity, joint torque, and image were predicted almost correctly. Regarding the task state, the inference of (ii) was good, but the inference of (i) had some errors, although the shapes of the graphs were almost the same. §.§ Wadaiko drumming with DTXNET By using the DTXNET trained in the previous section, we conducted experiments with the Wadaiko drumming task. As the evaluation, we used L^control_o (eq:loss) with the actual sound substituted for o_predicted. We executed the proposed real-time control using DTXNET for 1 min, and evaluated the average of L^control_o. Regarding the six types of 1^-, 2^-, 3^-, 1^+, 2^+, 3^+, and Type 0 of random control without any optimization, we conducted experiments with Constant Target Generation, which sends o_target = 1 every 2 sec, and with Random Target Generation, which sends a random value of o_target in [0, 2] with 10% probability every step. We show the results in figure:average-loss. We can see that the controllers of all types with DTXNET are better than Type 0. Also, we can see the same relationship as in the result of the training phase, and Type 3^+ was the best controller. To understand the actual movements, regarding Type 1^- and Type 3^+ at Constant Target Generation, we show the comparison of the target, predicted, and actual sound volumes in the left figure of figure:control-comparison, and their movements in the right figure of figure:control-comparison. The right figure is the sequence of 10 images from 4 to 5.5 sec. 
Regarding Type 1^-, the predicted task state vibrated to a great extent, the actual task state was mostly larger than the target task state, and the actual sound was sometimes produced at a timing different from the target. As the right image sequence shows, the only timing at which the robot should beat the drum was at frame 8, but in Type 1^-, the robot beat the drum several times before the target timing. On the other hand, regarding Type 3^+, the predicted and actual values were close to the target value, and there were few unnecessary movements in the right figure of figure:control-comparison. § DISCUSSION In all experiments, the network with an input and output of Joint State and Image (Type 3^+) was the best. This means that Joint State and Image each include important information for realizing the task that is not included in the other. Joint State has the current joint torque information, but this cannot be reconstructed from the current image and optical flow. In the same manner, it is difficult to determine the posture of the flexible body from Joint State alone. Also, by including robot states in the output of DTXNET, the network can obtain losses other than that of the sparse sound information, and so a more correct training can proceed. DTXNET can use a variable T, and we can adjust T without retraining DTXNET. This is an advantage over EMD Net <cit.> and Dynamics-Net <cit.>, which predict the states after a fixed number of time steps T. However, this can also be a problem, because DTXNET is expanded sequentially and needs more calculation time. Another superior point of DTXNET is that, while it takes a large computational cost to reconstruct robot states at the training phase, it takes a smaller computational cost at the control phase, because there is no need to output robot states. In this study, although we can execute eq:opt only once at each time step to reduce the computational cost, the control command is optimized T_control times before it is sent, and so this was enough to realize the task. In our experiments, we used Joint State and Image as the robot states, the sound volume as the task state, and the joint torque as the control command. In fact, DTXNET can handle joint angle, joint velocity, air mass flow, etc., as the control command using the same network structure. Also, DTXNET has the potential to handle values other than sound, such as the position of the arm tip and the degree of object deformation, as the task state. Regarding soft robots with redundant sensors, DTXNET can handle air pressure, strain gauges, infrared sensors, etc., as the robot state. In future works, we would like to consider various tasks by various flexible bodies with redundant sensors. 
§ CONCLUSION In this study, we proposed a learning control method to realize dynamic tasks by a flexible manipulator. By constructing DTXNET with an LSTM, we can predict the task state a variable number of time steps ahead, and realize dynamic control of a flexible manipulator using image information. Six types of configurations of DTXNET can be proposed, depending on whether the network predicts robot states or not, and whether the network uses actuator or image information as robot states. Among these configurations, by using both the actuator and image information and adding the constraint that DTXNET also predicts robot states, we can control the flexible manipulator more precisely. In future works, we would like to realize various tasks by various manipulators, and develop a system to realize multiple tasks continuously using a flexible manipulator.
http://arxiv.org/abs/2407.13167v1
20240718051247
Quasi two-zero texture in Type-II seesaw at fixed points from modular $A_4$ symmetry
[ "Takaaki Nomura", "Hiroshi Okada" ]
hep-ph
[ "hep-ph" ]
http://arxiv.org/abs/2407.12689v1
20240717161047
Propagation of Interplanetary Shocks in the Heliosphere
[ "Munkhjargal Lkhagvadorj", "Gabor Facsko", "Andrea Opitz", "Peter Kovacs", "David G. Sibeck" ]
physics.space-ph
[ "physics.space-ph" ]
Munkhjargal Lkhagvadorj (0000-0003-3026-4456): Wigner Research Centre for Physics, Department of Space Physics and Space Technology, Konkoly-Thege Miklós út 29-33., H-1121 Budapest, Hungary; Eötvös Loránd University, Faculty of Science, Pázmány Péter sétány 1/A., H-1117 Budapest, Hungary; Institute of Physics and Technology, Department of Theoretical and High Energy Physics, Peace Avenue 54b, Bayanzurkh District, 13330 Ulaanbaatar, Mongolia
Gábor Facskó (0000-0001-9502-2816): Wigner Research Centre for Physics, Department of Space Physics and Space Technology, Konkoly-Thege Miklós út 29-33., H-1121 Budapest, Hungary; Milton Friedman University, Department of Informatics, Kelta utca 2., H-1039 Budapest, Hungary
Andrea Opitz (0000-0003-0845-0201): Wigner Research Centre for Physics, Department of Space Physics and Space Technology, Konkoly-Thege Miklós út 29-33., H-1121 Budapest, Hungary
Péter Kovács (0000-0001-6626-5253): Wigner Research Centre for Physics, Department of Space Physics and Space Technology, Konkoly-Thege Miklós út 29-33., H-1121 Budapest, Hungary
David G. Sibeck (0000-0003-3240-7510): NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA
Corresponding author: Gábor Facskó, facsko.gabor@wigner.hu, gabor.i.facsko@gmail.com
§ ABSTRACT Interplanetary shocks are one of the crucial dynamic phenomena in the Heliosphere. They accelerate particles to high energies, generate plasma waves, and can trigger geomagnetic storms in the terrestrial magnetosphere, significantly disturbing our technological infrastructure. In this study, two IP shock events are selected to study the temporal variations of the shock parameters using magnetometer and ion plasma measurements of the STEREO-A and B, Wind, and ACE spacecraft and the Cluster fleet. The shock normal vectors are determined using the minimum variance analysis (MVA) and the magnetic coplanarity methods. During the May 7, 2007 event, the shock parameters and the shock normal direction are consistent. The shock surface appears to be tilted by almost the same degree as the Parker spiral, and the driver could be a Co-rotating Interaction Region (CIR). During the April 23, 2007 event, the shock parameters do not change significantly except for the shock θ_Bn angle; however, the shape of the IP shock appears to be twisted along the direction perpendicular to the Sun-Earth line as well. The driver of this rippled shock is a Stream Interaction Region (SIR)/CIR as well. § INTRODUCTION The solar corona is hotter than the photosphere, the chromosphere, and the transition region beneath it. As a result, the high temperatures ionize atoms, creating a plasma of free-moving electrons and ions, the so-called solar wind. Historically, <cit.> predicted the existence of the solar wind and coined terms describing it. He deduced it based on the German astronomer Ludwig Biermann's observation that comet tails always point away from the Sun <cit.>. The existence of the solar wind was confirmed by the Mariner 2 spacecraft <cit.>. The solar wind is a collisionless plasma whose flow is both supersonic and super-Alfvénic. The latter means that its speed exceeds the Alfvén speed, which is a representative speed of magnetohydrodynamic waves in a plasma. Due to an abrupt change of the solar wind speed from supersonic to subsonic (e.g., by encountering an obstacle), a shock wave emerges. Interplanetary (IP) shock waves are common in the heliosphere, which is a bubble-like region of space that surrounds the Sun, extends far beyond the orbits of the planets, and is filled with the solar wind. 
There are many varieties of shocks, such as planetary bow shocks, shocks that arise due to stream interaction regions (SIRs), which are called co-rotating interaction regions (CIRs) when extending beyond 1 AU <cit.>, and coronal mass ejection (CME) driven shocks. IP shocks are one of the main and most efficient accelerators of energetic particles in the heliosphere <cit.>. These accelerated particles can enter the geomagnetic field and are hazardous to astronauts and satellites. IP shocks driven by CMEs can trigger large geomagnetic storms that can damage oil and gas pipelines and interfere with electrical power infrastructures <cit.>. GPS navigation and high-frequency radio communications are also affected by ionospheric irregularities brought on by geomagnetic storms <cit.>, which can even cause internet disruptions around the world <cit.>. Therefore, IP shocks are important in determining and understanding space weather. Some dayside magnetospheric transient events <cit.> are generated by the interaction of a discontinuity frozen into the solar wind with the terrestrial bow shock. This fact raises the questions of how stable the discontinuities are, how long these phenomena travel with the solar wind, and whether they evolve during their travel or show some wave dispersion <cit.>. Dayside magnetospheric transient events are generated by tangential (TDs) and rotational discontinuities (RDs). However, there are catalogues neither for TDs nor for RDs. Therefore, it is easier to study IP shocks because many were observed and listed previously (see Section <ref>). <cit.> studied the 2D curvature of two IP shock events. <cit.>'s paper showed that we do not know much about the shape of discontinuities, especially IP shocks. <cit.> extended the previous study to 3D. More events and more discontinuities (IP shocks) were needed. A special automated shock identification method was developed <cit.>; however, the appropriate events to study the temporal development and 3D shape of IP shocks were finally found in an open database developed by <cit.>. Finally, the origins of IP shocks are also very interesting. Based on our current knowledge, a shock could be CME driven, the result of SIR/CIR development, or caused by a flare. The determination of the 3D shape of IP shocks with a backward projection might explain the origin of these very important events. The shape of an IP shock front depends on the inhomogeneities of the solar wind through which the shock is propagating <cit.>. <cit.> studied IP shocks using two- and four-spacecraft measurements assuming that the IP shocks were locally planar. <cit.> observed a 15-20^∘ shock surface ripple using Geotail <cit.> and Advanced Composition Explorer <cit.> data. <cit.> reported discrepancies in IP shock normal directions when observing the same IP shock using the ACE and Wind <cit.>, or the Wind and IMP-8 <cit.> spacecraft. <cit.> determined IP shock normals using Wind and IMP-8 observations at 1 AU (Astronomical Unit, ≈ 1.5× 10^8 km). The average radius of the IP shock curvature was a few million km. <cit.> collected and analyzed 26 IP shocks between the Earth's bow shock and the L_1 Lagrangian point. The average radius of the IP shock curvatures was a bit larger than <cit.>'s result; however, it was also a few million km. This result agreed with previous simulation results <cit.>. <cit.> observed the same IP shock at 1 AU using ACE measurements and at 5.4 AU using Ulysses observations. However, <cit.> studied the solar energetic particle (SEP) events. 
In that study, the interplanetary solar wind properties changed the properties of the SEP events. IP shocks were observed to evolve and change their behaviour at various heliocentric distances in the inner heliosphere by the Helios probes <cit.> and between 1 and 5 AU by the Ulysses mission <cit.>. <cit.> studied shocklets using ACE, Wind, DSCOVR (Deep Space Climate Observatory), and Time History of Events and Macroscale Interactions during Substorms <cit.> B and C measurements. <cit.> observed a CME-driven shock at 0.07 AU and 0.7 AU using Parker Solar Probe <cit.> and Solar Orbiter <cit.> observations. The PSP crossed a thick magnetosheath at the flank, corresponding to the early evolution phase of the CME. The SolO observed high-energy (2 MeV) shock-accelerated protons and a much more disturbed shock environment than the one observed by PSP. Therefore, the local small-scale environment of the shock was very different at different locations in the heliosphere. In this study, we focus on the macroscopic structure and parameters of the observed IP shock events. The structure of this paper is as follows: we first list and briefly describe the missions, instruments, and databases that we used in this study in Section <ref>. We describe the applied analysis methods in Section <ref>. Then we introduce the datasets that we analysed in Section <ref>. We discuss and analyse the observations in Section <ref>. Finally, we summarize the results of our study in Section <ref>. § MISSIONS AND INSTRUMENTS In this paper, the Solar Terrestrial Relations Observatory <cit.> A and B, Wind <cit.>, ACE, and Cluster <cit.> spacecraft magnetic field, ion plasma, and spacecraft potential data are used. §.§ The STEREO mission NASA's twin STEREO A and B spacecraft were launched on October 26, 2006, from Kennedy Space Center. In heliospheric orbit at 1 AU, STEREO-A (Ahead) leads while STEREO-B (Behind) trails the Earth. The two spacecraft separate from each other by 44^∘ in heliographic longitude annually. Both spacecraft were equipped with two instruments and two instrument suites, with a total of 13 instruments on each spacecraft. The PLAsma and SupraThermal Ion Composition (PLASTIC) instrument measures proton and alpha particle parameters, as well as the composition of heavy ions in the solar wind plasma <cit.>. The In-situ Measurements of Particles and CME Transients (IMPACT) suite of instruments consists of seven instruments, among which three are located on a 6-meter-long boom, while the other four are installed in the body of the spacecraft <cit.>. The IMPACT measures the parameters of protons, heavy ions, and electrons, and the MAG magnetometer sensor in it measures the in situ magnetic field in a range of ± 512 nT with 0.1 nT accuracy <cit.>. We downloaded all data from NASA's Coordinated Data Analysis Web (CDAWeb, <https://cdaweb.gsfc.nasa.gov/>). We obtained the STEREO-A and B magnetic field observations with a time resolution of 100 ms from the STEREO IMPACT instrument and the ion plasma data from the STEREO PLASTIC instrument, respectively. The magnetic field and the plasma data of STEREO-A and B are in the RTN spacecraft coordinate system, where R points radially outward from the Sun, T is along the planetary orbital motion, and N is the northward direction. §.§ The Wind mission NASA's Wind spacecraft was launched on November 1, 1994 <cit.>. 
Wind was initially sent toward the L_1 Lagrange point; however, its arrival at L_1 was delayed in order to first study the terrestrial magnetosphere and the lunar environment as well. Following a sequence of orbital adjustments, the Wind spacecraft was positioned in a Lissajous orbit close to the L_1 Lagrange point in early 2004 for studying the incoming solar wind <cit.>. The spacecraft is equipped with eight instruments; however, we used only the Magnetic Field Investigation (MFI) <cit.> and the Three-Dimensional Plasma and Energetic Particle Investigation (3DP) <cit.> measurements in this study. The MFI consists of two magnetometers on a 12-meter boom and measures the magnetic field in ranges from ±4 nT to ±65536 nT, with a time resolution of 22 or 11 vectors per second in the calibrated high-resolution mode, while in the primary science mode the resolution can be three seconds, one minute, or one hour <cit.>. The 3DP instrument measures the key solar wind parameters such as velocity, density, and temperature. From CDAWeb, we obtained the Wind magnetic field and plasma data from the MFI and 3DP instruments with a time resolution of 3 s. The magnetic field and plasma data of Wind are in the Geocentric Solar Ecliptic (GSE) coordinate system, where the X-axis points from the Earth toward the Sun, the Y-axis lies in the ecliptic plane pointing opposite to the planetary motion, and the Z-axis is the northward direction. §.§ The ACE mission NASA's ACE spacecraft was launched on August 25, 1997 <cit.>. The spacecraft is located at the L_1 Lagrangian point, similarly to the Wind spacecraft. The spacecraft is equipped with nine primary scientific instruments and one engineering instrument. In this study we used the measurements of the Magnetometer (MAG) <cit.> and the Solar Wind Electron, Proton and Alpha Monitor (SWEPAM) <cit.>. The MAG consists of twin triaxial flux-gate magnetometers whose sensors provide between 3 and 6 vectors s^-1 resolution for continuous observation of the interplanetary magnetic field <cit.>. From CDAWeb, we downloaded the ACE magnetic field data from MAG with a time resolution of 1 s and the plasma data from SWEPAM with a time resolution of 64 s. The magnetic field and the plasma data are both in the RTN and GSE coordinate systems. §.§ The Cluster fleet ESA's Cluster constellation consists of four satellites, which were launched in pairs on 16 July and 9 August 2000 <cit.>. The Cluster satellites orbit in a tetrahedral formation around Earth. The perigee and apogee of the orbit are approximately four and 19.6 R_E, respectively <cit.>. Each of the four satellites is equipped with 11 identical instruments. We used the data of the Cluster Ion Composition (CIS) <cit.>, the Fluxgate Magnetometer (FGM) <cit.>, and the Electric Field and Waves (EFW) <cit.> instruments in this study. The FGM is composed of two tri-axial fluxgate magnetometers, which are installed on one of the two 5-meter radial booms. It measures in the dynamic range ± 65,536 nT. At the highest dynamic level, the resolution is ±8 nT, and the time resolution is 100 vectors per second <cit.>. The CIS instrument measures the three-dimensional ion distribution and is composed of two distinct sensors: the Composition Distribution Function (CODIF) sensor and the Hot Ion Analyzer (HIA) sensor <cit.>. The CIS experiment is not operational for Cluster-2, and the HIA sensor is switched off for Cluster-4 due to a problem with the high voltage of the electrostatic analyzer <cit.>.
Hence, for Cluster-2 and Cluster-4, the EFW measurements became indispensable for the analysis, because the electron density could be obtained in another way, independently of the ion and electron plasma measurements <cit.>. As the spacecraft travels through the plasma environment, it acquires an electric charge as a result of contact with charged particles. This charging process results in an electrical potential difference between the spacecraft and the surrounding plasma, a phenomenon referred to as the spacecraft potential. The EFW instrument measures the spacecraft potential, and the electron density of the plasma can be calculated using the empirical formula N_e=200(V_sc)^-1.85, where N_e is the calculated electron density and V_sc is the Cluster EFW spacecraft potential <cit.>. From CDAWeb, we acquired the magnetic field data from the Cluster FGM with a time resolution of 4 s for all four Cluster satellites, the ion data from the Cluster CIS with a time resolution of 4 s for the Cluster SC1 and SC3 satellites, and the spacecraft potential data from EFW with a time resolution of 4 s for the Cluster SC2 and SC4 satellites, where CIS measurements were not available. All the Cluster data were in the GSE coordinate system. § IP SHOCK EVENTS AND TRANSFORMATIONS IP shock candidates are chosen from the shock lists in the Database of Heliospheric Shock Waves maintained at the University of Helsinki <http://www.ipshocks.fi/>. For the event selection, we chose the year 2007 because STEREO-A and STEREO-B were close to each other near the Sun-Earth line. For the first event, on May 7, 2007, the selected spacecraft are STEREO-A, STEREO-B, Wind, and the four Cluster satellites. For the second event, on April 23, 2007, the spacecraft are STEREO-A and B, ACE, and Wind. After choosing the shock candidates and obtaining the data, we transformed the data in such a way that all coordinate systems are changed to the Heliocentric Earth Ecliptic (HEE) coordinate system, where the X-axis points from the Sun toward the Earth, the Z-axis is the ecliptic northward direction, and the Y-axis completes the right-handed Cartesian system. This system is fixed with respect to the Sun-Earth line. To transform from the RTN and GSE coordinate systems to the HEE coordinate system, we used the Transformation de REpères en Physique Spatiale (TREPS; <http://treps.irap.omp.eu/>) online tool. The TREPS tool, developed by the French Plasma Physics Data Centre (CDPP), the national data centre in France for solar system plasmas, is based on the SPICE (Spacecraft, Planet, Instrument, C-matrix, and Events) information system kernels created by the National Aeronautics and Space Administration (NASA)/Navigation and Ancillary Information Facility (NAIF) <cit.>. § ANALYSIS METHODS Usually, the coplanarity method is applied to find the shock normal. The coplanarity method is based on the magnetic coplanarity theorem, which states that the magnetic field vectors on both sides of the shock and the shock normal lie in the same plane. Similarly, the velocity vectors on both sides or, in other words, the velocity jump across the shock also lie in this plane. The method of magnetic coplanarity is straightforward to implement, and only magnetic field data are needed to apply it <cit.>. We select two intervals, one upstream and one downstream of the shock, calculate the average upstream and downstream magnetic field vectors over these intervals, and determine the shock normal vector using the coplanarity method.
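To make the two quantities just introduced concrete, the short sketch below shows how they could be computed from averaged measurements: the electron density from the EFW spacecraft potential via the empirical formula above, and a magnetic coplanarity normal from the mean upstream and downstream field vectors via the standard expression 𝐧 ∝ (𝐁_d × 𝐁_u) × (𝐁_d − 𝐁_u). This is only an illustrative sketch of ours (Python with NumPy); the function names and the example numbers are not taken from the actual analysis pipeline.

```python
import numpy as np

def efw_electron_density(v_sc):
    """Empirical electron density [cm^-3] from the Cluster EFW spacecraft
    potential [V]: N_e = 200 * V_sc**(-1.85) (formula quoted above)."""
    return 200.0 * np.asarray(v_sc, dtype=float) ** (-1.85)

def coplanarity_normal(b_up, b_down):
    """Magnetic coplanarity shock normal from averaged upstream/downstream
    magnetic field vectors: n is parallel to (B_d x B_u) x (B_d - B_u);
    the overall sign of n is not fixed by this construction."""
    b_up, b_down = np.asarray(b_up, float), np.asarray(b_down, float)
    n = np.cross(np.cross(b_down, b_up), b_down - b_up)
    return n / np.linalg.norm(n)

# Purely illustrative (made-up) averaged field vectors in nT and potentials in V:
b_u = np.array([2.1, -3.4, 1.0])
b_d = np.array([4.0, -6.1, 2.3])
print(coplanarity_normal(b_u, b_d))
print(efw_electron_density([7.5, 8.0]))
```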
However, in this case the magnetic field varies strongly; therefore, the calculated shock normal vector is very sensitive to the selected intervals if one uses only the coplanarity method. Hence, the shock normals calculated for the individual spacecraft pointed in different directions. Such a result cannot be explained physically; therefore, applying the coplanarity method alone is pointless in this case, and we need the support of another method. We therefore use the Minimum Variance Analysis (MVA) <cit.> and magnetic coplanarity (CP) methods together to determine the IP shock normals. The MVA technique is based on the assumption that variations in the magnetic field are observed when a single spacecraft passes through a 1D current layer or wavefront (in reality, a 2D or 3D transition layer). Because of the divergence-free constraint on the magnetic field (∇·𝐁 = 0), the normal component of the magnetic field must remain constant. If such a normal direction can be found, then the variations of the magnetic field along it are zero or at least have a minimum variance <cit.>. MVA can only work if there are fewer fluctuations in the direction normal to the shock than in the directions perpendicular to the magnetic fields upstream and downstream of the shock. By comparing the two methods, the upstream and downstream time intervals of the magnetic field measurements are set. We set up the following requirements to accept the results: * The angle between the shock normal vectors obtained by the minimum variance analysis (MVA) and the magnetic coplanarity method must be less than 15^∘. * The ratio between the intermediate eigenvalue and the smallest eigenvalue should be greater than 2. The above methods have been used in previous studies for shocks <cit.>. Similar conditions have been used in other studies for tangential discontinuities <cit.>. We apply both methods to every single spacecraft. The shock normal vectors calculated independently point in similar directions. This result is acceptable and can be explained physically. We suppose that the same shock passes all the spacecraft at different times and locations. The shock front is expected to be slightly curved and perhaps rippled. Therefore, applying both methods leads to a valuable and acceptable result. § OBSERVATIONS By visual inspection of the magnetic field and plasma data, we identified jumps in the magnetic field (𝐁), speed (𝐕), density (N), and temperature (T). Using these quantities, we determined the times at which the shock events occurred in each spacecraft data set. The common origin of a shock event detected by different spacecraft can be inferred from the similar time profiles of the magnetic field and plasma data recorded at each spacecraft position near the shock. From the positions of the spacecraft and the shock arrival times at the different positions, the shock geometry and the propagation parameters (velocity, direction) can be deduced. The following two case studies are not presented in chronological order: we first analyze the shock event of May 7, 2007, and then the event of April 23, 2007. §.§ Event May 7, 2007 On May 7, 2007, the Wind spacecraft first detected the shock at 07:02:30 (UTC). After that, the STEREO-A spacecraft detected the shock at 08:11:30 (UTC), and then the STEREO-B spacecraft detected the shock at 09:42:00 (UTC). The four Cluster satellites detected the shock as well. Cluster SC1 detected the shock at 08:27:55 (UTC), and Cluster SC3 detected the shock at 08:28:00 (UTC).
The magnetic field and the plasma parameters are shown in Figure <ref> for Wind (a), STEREO-A (b), and STEREO-B (c), and in Figure <ref> for Cluster SC1 (a) and SC3 (b). There is no indication in the shock database that Cluster SC2 and SC4 detected a shock in the analysed period. However, since the four Cluster spacecraft are located close to each other, they all had to encounter the given shock. Without plasma data from the Cluster SC2 and SC4 satellites, a shock cannot be confirmed based solely on the magnetic field data. However, using the empirical formula described in Section <ref>, the electron density can be estimated from the EFW spacecraft potential data. Therefore, by comparing the magnetic field and the electron density profiles, the shock can be identified. Cluster SC2 detected the shock at 08:28:10 (UTC), according to the magnetic field and density plots (Figure <ref>c). Cluster SC4 detected the shock at the same time as SC2 (Figure <ref>d). In both cases, the electron density is obtained from the empirical formula (Eq. <ref>). Here we list all the magnetic field observations of the spacecraft for the event and apply the MVA and the CP methods to them. The upstream and downstream time intervals that give the best agreement between the MVA and CP results for all the spacecraft are shown in Table <ref>, and the corresponding magnetic field plots are shown in Figure <ref> for Wind (a), STEREO-A (b), and STEREO-B (c), and in Figure <ref> for the Cluster spacecraft. Table <ref> also shows the ratio between the intermediate eigenvalue λ_2 and the smallest eigenvalue λ_3, as well as the angle between the MVA and CP normals in the determined upstream and downstream time intervals for each spacecraft. The accepted upstream and downstream time intervals were selected such that the angle difference between the results obtained by the two methods is minimal and the eigenvalue criterion is satisfied for each spacecraft data set. Using the determined time intervals, we calculated the ratios of the downstream to upstream total magnetic field, ion density, and ion temperature, as well as the bulk speed difference and the shock θ_Bn angle. These parameters, the MVA normals, and the magnetic CP normals are shown in Table <ref>. The shock parameters fulfil the necessary shock criteria <cit.>. The results of the additional parameter calculations are shown in Table <ref>. Using these results, 2D sketches of the IP shock as detected by the spacecraft are shown in Figure <ref>. In these 2D sketches, the shock propagation and the normal vector orientations are shown as a temporal development. Since the four Cluster satellites are relatively close to one another, their averaged position and normal vector are shown in the general 2D sketches (and also in the 3D sketches, see later). To explicitly show the normal vector directions and positions of all the Cluster satellites, it is convenient to convert their coordinates and normal vectors into the GSE coordinate system with positions in units of Earth radii (R_E), see Figure <ref>. The 3D sketch of the shock normals is shown in Figure <ref>a; in this sketch, the STEREO-A and B and the averaged Cluster positions are time-shifted to the position of Wind to reveal the overall shape of this IP shock. In order to assess the shape of the shock, we shifted the shock normals so that they appear as if they had been detected simultaneously with the shock observation at Wind.
For this, we used the average solar wind speeds and the time differences between the observations at Wind and at the three other missions. The shifted shock normals are shown in Figure <ref>b. It turns out that the starting points of all the normal vectors lie approximately in a plane, meaning that the underlying shock exhibits a planar structure on the spatial scale of the spacecraft separations. The plane fitted to the time-shifted normals is shown in Figure <ref>c. §.§ Event April 23, 2007 In this event, the STEREO-A spacecraft detected the shock first, at 06:53:35 (UTC); then the ACE spacecraft detected the shock at 08:57:00 (UTC), and the Wind spacecraft detected the shock at 09:12:00 (UTC). The magnetic field and the plasma data are shown in Figure <ref> for STEREO-A (a), ACE (b), and Wind (c). STEREO-A and B carry identical instruments; therefore, STEREO-B must have detected the shock on that day as well, even though no IP shock is listed for STEREO-B in the shock database mentioned in Section <ref>. Using the average solar wind speed of 400 km/s, we concluded that the shock detection time at STEREO-B should be roughly between 13:00 and 15:00. Indeed, a possible, although faint, shock signature is found in the STEREO-B data around 13:21:30. The magnetic field and plasma plots are shown in Figure <ref>d. We use all the magnetic field observations of the spacecraft for this event and apply the MVA and the CP methods to them. In this event, the order of upstream and downstream is swapped because it is a fast-reverse (FR) shock, which means that the shock propagates toward the Sun in the plasma frame but is carried outward from the Sun by the bulk solar wind flow <cit.>. The upstream and downstream time intervals that give the best agreement between the results of the MVA and CP shock normal determination for all spacecraft are shown in Table <ref>. Figure <ref> shows the corresponding magnetic field observations for the STEREO-A (a), ACE (b), Wind (c), and STEREO-B (d) missions. Table <ref> also shows the ratio between the intermediate eigenvalue λ_2 and the smallest eigenvalue λ_3, as well as the angle between the MVA and CP normal vector determinations of the shock in the upstream and downstream time intervals for each spacecraft. Similarly to the May 7 event, the upstream and downstream time intervals are properly defined, considering that the angle difference between the two methods is minimal and the eigenvalue criterion is fulfilled for each spacecraft data set. Also, using the determined time intervals, in Table <ref> we summarize, for each mission, the ratios between the upstream and downstream total magnetic fields, densities, and temperatures, as well as the differences between the upstream and downstream solar wind bulk speeds. From the obtained upstream/downstream ratios and bulk speed differences we conclude that the criteria for a shock event in the considered period are fulfilled <cit.> for each spacecraft. Even for STEREO-B, for which the shock was not indicated in the shock catalogue (see above), the numbers unambiguously prove the existence of a shock event during the studied interval. However, based on the shock θ_Bn angle, the shock appears to have become almost quasi-parallel by the time STEREO-B detected it, a few hours after the shock detection times of STEREO-A, ACE, and Wind; see Table <ref>, where the calculated results of the additional parameters are shown.
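The time-shifting and plane-fitting procedure used above for the May 7 event (and the analogous shifting applied below to the April 23 event) can be prototyped in a few lines: each crossing point is moved along the X axis of the HEE frame by the average solar wind speed multiplied by the delay relative to the reference spacecraft, and a plane is then fitted to the shifted points by a least-squares (SVD) fit. The sketch below is our own schematic reconstruction under simplifying assumptions (purely radial, anti-sunward propagation at a single average speed); it is not the authors' code, and the helper names are ours.

```python
import numpy as np

def shift_to_reference(positions, t_obs, t_ref, v_sw):
    """Shift crossing points along +X (HEE, anti-sunward) so that all shock
    crossings appear simultaneous with the reference observation at t_ref.
    positions: (N, 3) array [km]; t_obs: (N,) times [s]; v_sw: speed [km/s]."""
    pos = np.asarray(positions, float).copy()
    dt = np.asarray(t_obs, float) - float(t_ref)
    pos[:, 0] -= v_sw * dt          # undo the radial travel since/until t_ref
    return pos

def fit_plane(points):
    """Least-squares plane through the points; returns (centroid, unit normal)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - centroid)
    return centroid, vh[-1]         # direction of least variance = plane normal

def tilt_from_sun_earth_line(plane_normal):
    """Angle [deg] between the fitted plane and the Sun-Earth (X) axis
    (one possible convention; the paper's convention is assumed, not known)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return 90.0 - np.degrees(np.arccos(abs(n[0])))
```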
The sketches of the IP shock geometries detected by the spacecraft are shown in Figure <ref>. In these 2D sketches, the shock propagation and the normal vector orientations are shown as a temporal development. The 3D sketch of the IP shock geometry is shown in Figure <ref>; in this sketch, the spacecraft positions are time-shifted to the position of STEREO-A to reveal the overall shape of this IP shock. In order to assess the shape of the shock, we shifted the shock normals so that they appear as if they had been detected simultaneously with the shock observation at STEREO-A. For this, we used the average solar wind speeds and the time differences between the observations at STEREO-A and at the other spacecraft. The shifted shock normals are shown in the top right panel of Figure <ref>. The top view of the time-shifted normals is shown in the bottom part of Figure <ref>. § DISCUSSION The IP shock parameters did not change significantly between the most widely separated pair of probes, STEREO-A and B, in either case (Tables <ref>, <ref>, <ref>, <ref>). There is no sign of wave dispersion on the scale of 40 million km, which is the separation of the STEREO spacecraft. §.§ Event May 7, 2007 The plane fitted to the IP shock is tilted by 56.42^∘ from the Sun-Earth line according to Figure <ref>c. The tilt is almost the same as expected for the Parker spiral impacting the Earth from the dawn side. This is the reason why Wind detected the shock first, even though STEREO-A's position is relatively closer to the direction of the Sun. This period, 2007, was during the solar minimum phase, when stream interaction regions (SIRs) or co-rotating interaction regions (CIRs) were dominant <cit.>, which further supports this result, see Figure <ref>. Furthermore, to see a correlation between the fast-forward shock and geomagnetic activity, we investigated the K_p-index <cit.> on May 7, 2007, as shown in Figure <ref>. It appears that this fast-forward shock event disturbed the magnetosphere, causing a G1-minor geomagnetic storm with the K_p-index peaking around 15:00 (UTC). The geomagnetic storm occurred about 6 hours after the detection of the shock by the STEREO-B spacecraft. §.§ Event April 23, 2007 In Figure <ref>, it seems that the shape of the shock is not uniform and is twisted in the Z direction from the STEREO-A spacecraft to the other three spacecraft along the direction (Y-axis) transverse to the Sun-Earth line, see Figure <ref>b. As seen from Figure <ref>c, STEREO-A alone is on the dusk (left) side of the Sun-Earth line, while Wind, ACE, and STEREO-B are on the dawn (right) side. In this XY-plane view (Figure <ref>), the shock normal vectors appear to rotate slightly about the Z-axis from STEREO-A to Wind, ACE, and STEREO-B, and the shock changes from quasi-perpendicular to quasi-parallel based on the shock θ_Bn angle. This IP shock is a fast reverse shock, meaning that it travels toward the Sun in a frame fixed to the solar wind even though it propagates away from the Sun with the solar wind. The shock detection time difference between STEREO-A and Wind/ACE is about two hours, while that between STEREO-A and B is almost six hours; yet the shock orientation changes significantly already from STEREO-A to the other three spacecraft, indicating that the change is spatial, not temporal. Therefore, the observed structure could be a local ripple on the IP shock surface.
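The θ_Bn-based classification used in this discussion is easy to reproduce: θ_Bn is the acute angle between the shock normal and the mean upstream magnetic field, and the split between quasi-parallel and quasi-perpendicular geometry at 45^∘ is the usual convention (an assumption on our part; the text only quotes the angles themselves). A minimal sketch with names of our own choosing:

```python
import numpy as np

def theta_bn(normal, b_upstream):
    """Acute angle [deg] between the shock normal and the mean upstream field."""
    n = np.asarray(normal, float)
    b = np.asarray(b_upstream, float)
    cosang = abs(np.dot(n, b)) / (np.linalg.norm(n) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def classify(theta_deg):
    """Conventional 45-degree split between the two shock geometries."""
    return "quasi-perpendicular" if theta_deg > 45.0 else "quasi-parallel"

# Illustrative (made-up) normal and upstream field vectors:
print(classify(theta_bn([0.89, 0.44, 0.14], [2.0, -3.5, 1.2])))
```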
Ripples on the shock surface are known to be caused by ICME (interplanetary coronal mass ejection) shock drivers, as they do not propagate into a homogeneous interplanetary medium <cit.>. However, for this particular event the source is suggested to be a stream interaction region (SIR), according to the catalogue given in <https://stereo-ssc.nascom.nasa.gov/pub/ins_data/impact/level3/STEREO_Level3_Shock.pdf>. We investigated the K_p-index of this event as well, as seen in Figure <ref>. Similarly to the fast-forward shock event of May 7, 2007, a G1-minor geomagnetic storm occurred on this day (see the space weather scales of the National Oceanic and Atmospheric Administration (NOAA), <https://www.swpc.noaa.gov/noaa-scales-explanation>), but the geomagnetic storm began three hours before the shock detection time of STEREO-A, while the end of the storm partially coincided with the STEREO-A detection. As stated before, SIRs/CIRs form when fast-moving solar wind catches up with slow-moving solar wind <cit.>. This sometimes forms a pair of shocks, a leading fast-forward shock and a trailing fast-reverse shock, but oftentimes only a sole fast-forward or fast-reverse shock develops <cit.>. Nevertheless, since this process always involves fast-moving solar wind, it is understandable that the G1-minor geomagnetic storm with a K_p index of 5 happened before the detection of the shock: the fast-moving solar wind apparently caused the minor storm before the fast-reverse shock finally arrived at the spacecraft positions. The inconsistency of the normal orientations can also be explained by the fact that this shock is a fast-reverse shock, because SIRs have characteristic tilts such that forward waves are directed towards the solar equatorial plane while reverse waves tend to move in the direction of the solar poles <cit.>. Therefore, this tilting nature may explain the tilted orientation of the XZ components between STEREO-A and the other three spacecraft – Wind, ACE, and STEREO-B. § SUMMARY AND CONCLUSIONS In this paper, two combined multi-spacecraft case studies of interplanetary (IP) shock propagation are presented. The aim is to determine how IP shocks develop and evolve spatially and temporally during their propagation. The spatial separation of the observations is 40 million km. On this scale, we do not see any significant development of the IP shocks. The shock parameters are similar, within their estimated errors, at each location. No sign of wave dispersion was observed. The May 7, 2007 event is a fast-forward (FF) shock; it appears planar and is tilted by 56.42^∘ to the Sun-Earth line. The tilt of this planar shock surface is almost identical to the usual orientation of the Parker spiral impacting the Earth. Therefore, the origin of the shock is a co-rotating interaction region (CIR), which agrees with the CIR detected on that day. There is no sign of temporal change on this scale; in the spatial range of 40 million km, no change was observable. The G1-minor geomagnetic storm happened after this shock was detected, indicating that this shock event caused the geomagnetic storm on that day. The April 23, 2007 event is a fast-reverse (FR) shock, and it appears that the shape of the shock is not uniform but twisted from the STEREO-A spacecraft to the other spacecraft along the direction transverse to the Sun-Earth line. The shock normal vectors were also observed to change along the Y-axis.
A G1-minor geomagnetic storm occurred on this day as well, similar to the fast-forward shock event of May 7, 2007. Intriguingly, the onset of this storm took place three hours before the shock detection time recorded by STEREO-A, and the termination of the storm partially coincided with the STEREO-A detection time. The source of this shock is an SIR/CIR. The G1-minor geomagnetic storm, with a K_p index of 5, began before the shock detection; it seems that the fast-moving solar wind instigated the minor storm before the arrival of the fast-reverse shock at the spacecraft positions. The orientation irregularity can potentially be accounted for by the characteristic tilts of SIRs. No multipoint observation using so many different probes had been made before. The 3D spatial shape and the origin of the IP shocks are determined. Previously, these macroscopic features were visible only in heliospheric magnetohydrodynamic simulations. However, the known IP shock catalogues of the STEREO, Wind, ACE, and Cluster missions provided older observations. A natural continuation of this study is to use PSP, SolO, and BepiColombo <cit.> measurements for joint observations. The larger spatial separation of the IP shock observations might allow us to observe the temporal development and dispersion of these shocks. Acknowledgments This paper uses data from the Heliospheric Shock Database, generated and maintained at the University of Helsinki (<https://www.ipshocks.fi>). We would also like to acknowledge NASA CDAWeb, the STEREO-A and B IMPACT, PLASTIC, the Wind MFI, 3DP, ACE MAG, SWEPAM, and the Cluster FGM, CIS, EFW teams for providing data, and TREPS, designed and developed by the French Plasma Physics Data Centre (CDPP), for the coordinate transformations used in this study. This work was partially financed by the National Research, Development, and Innovation Office (NKFIH) FK128548 grant. ML was supported by the Stipendium Hungaricum Scholarship. The authors thank Oleksiy Agapitov, Géza Erdős and Zoltán Németh for the useful discussions.

Table: The parameters and main results of the minimum variance (MVA) and magnetic coplanarity (CP) analyses of the magnetic field records of the considered missions for the May 7, 2007 shock event. Here, Δt_up denotes the defined upstream and Δt_down the defined downstream time intervals used with the MVA and CP analysis methods. 𝐧_MVA and 𝐧_CP are the normal vectors given by the MVA and CP methods, respectively. λ_2/λ_3 is the ratio between the intermediate eigenvalue λ_2 and the smallest eigenvalue λ_3, and Δθ_MVA-CP is the angle between the MVA and CP normals on May 7, 2007.
Spacecraft | Δt_up [hh:mm:ss] | Δt_down [hh:mm:ss] | 𝐧_MVA | 𝐧_CP | λ_2/λ_3 | Δθ_MVA-CP [^∘]
Wind | 06:59:00 - 07:01:47 | 07:03:50 - 07:05:50 | [-0.76, -0.63, -0.14] | [-0.76, -0.63, -0.15] | 14.82 | 0.38
STEREO-A | 08:08:00 - 08:10:05 | 08:12:30 - 08:13:50 | [0.89, 0.44, 0.14] | [0.89, 0.44, 0.144] | 3.16 | 0.37
STEREO-B | 09:37:00 - 09:40:35 | 09:42:57 - 09:43:38 | [-0.85, -0.51, -0.09] | [-0.86, -0.50, -0.08] | 9.51 | 1.10
Cluster-1 | 08:26:10 - 08:27:00 | 08:28:00 - 08:29:00 | [0.85, 0.53, -0.03] | [-0.85, -0.53, -0.03] | 167.90 | 0.21
Cluster-3 | 08:26:30 - 08:27:10 | 08:29:00 - 08:29:47 | [0.75, 0.64, 0.15] | [-0.76, -0.63, -0.12] | 22.43 | 1.92
Cluster-2 | 08:26:56 - 08:27:40 | 08:28:32 - 08:29:37 | [0.84, 0.54, 0.01] | [-0.84, -0.54, -0.01] | 91.56 | 0.10
Cluster-4 | 08:26:25 - 08:27:10 | 08:28:45 - 08:29:40 | [0.86, 0.51, -0.08] | [0.84, 0.53, 0.11] | 238.63 | 2.49

Table: Resulting core parameters from the Wind, STEREO-A, STEREO-B, and Cluster data on May 7, 2007. B_d/B_u, N_d/N_u, and T_d/T_u are the ratios of the downstream to upstream magnetic field, density, and temperature, respectively. ΔV is the velocity difference between the two sides of the shock, and θ_Bn is the shock θ angle. Because the CIS experiment is not operational on Cluster SC2 and the CIS HIA sub-instrument is switched off on Cluster SC4, the plasma parameters are not available for these satellites. The Cluster CIS HIA instruments on SC1 and SC3 provide parallel (T_∥) and perpendicular (T_⊥) temperatures, respectively.

Spacecraft | B_d/B_u | N_d/N_u | T_d/T_u | ΔV [km/s] | θ_Bn [^∘]
Wind | 1.86 ± 0.10 | 1.70 ± 0.06 | 1.95 ± 0.12 | 26.8 ± 1.7 | 70.80
STEREO-A | 1.89 ± 0.02 | 2.18 ± 0.90 | 3.21 ± 2.4 | 36.00 ± 6.00 | 81.83
STEREO-B | 1.90 ± 0.07 | 1.90 ± 0.6 | 2.90 ± 6.00 | 40.25 ± 11.00 | 59.45
Cluster 1 | 1.53 ± 0.04 | 1.63 ± 0.09 | 1.10_∥ ± 1.30_∥, 1.40_⊥ ± 0.32_⊥ | 31.20 ± 3.10 | 86.62
Cluster 3 | 1.57 ± 0.05 | 1.69 ± 0.08 | 1.20_∥ ± 1.50_∥, 1.40_⊥ ± 0.40_⊥ | 30.50 ± 2.80 | 89.74
Cluster 2 | 1.53 ± 0.03 | 1.42 ± 0.18 | N/A | N/A | 88.19
Cluster 4 | 1.57 ± 0.05 | 1.49 ± 0.19 | N/A | N/A | 86.55

Table: Resulting additional parameters from the spacecraft data on May 7, 2007. V_sh is the shock speed, C^up_s is the upstream sound speed, V^up_A is the upstream Alfvén speed, and C^up_ms is the upstream magnetosonic speed. The plasma β_up based on upstream parameters, the Alfvén Mach number (M_A), and the magnetosonic Mach number (M_ms) are also shown.

Spacecraft | V_sh [km/s] | C^up_s [km/s] | V^up_A [km/s] | C^up_ms [km/s] | Plasma β_up | M_A | M_ms
Wind | 317.33 | 48.84 ± 2.73 | 29.81 ± 10.91 | 57.22 ± 6.14 | 3.22 ± 2.38 | 2.83 ± 1.20 | 1.47 ± 0.35
STEREO-A | 352.41 | 46.27 ± 3.30 | 34.89 ± 15.00 | 57.95 ± 9.41 | 2.11 ± 1.80 | 2.19 ± 1.12 | 1.32 ± 0.42
STEREO-B | 354.91 | 46.89 ± 5.85 | 31.95 ± 7.33 | 56.74 ± 6.35 | 2.58 ± 1.34 | 2.76 ± 1.07 | 1.55 ± 0.52
Cluster 1 | 342.02 | 43.88 ± 3.09 | 25.59 ± 7.70 | 51.31 ± 3.99 | 3.26 ± 1.89 | 3.84 ± 1.40 | 1.99 ± 0.46
Cluster 3 | 308.11 | 43.88 ± 2.63 | 26.80 ± 7.64 | 51.42 ± 3.98 | 3.21 ± 1.83 | 3.72 ± 1.35 | 1.93 ± 0.46

Table: The parameters and main results of the minimum variance (MVA) and coplanarity (CP) analyses of the magnetic field records of the considered missions for the April 23, 2007 shock event. Here, Δt_down denotes the defined downstream and Δt_up the defined upstream time intervals used with the MVA and CP analysis methods. 𝐧_MVA and 𝐧_CP are the normal vectors given by the MVA and CP methods, respectively. λ_2/λ_3 is the ratio between the intermediate eigenvalue λ_2 and the smallest eigenvalue λ_3, and Δθ_MVA-CP is the angle between the MVA and CP normals on April 23, 2007.
Spacecraft | Δt_down [hh:mm:ss] | Δt_up [hh:mm:ss] | 𝐧_MVA | 𝐧_CP | λ_2/λ_3 | Δθ_MVA-CP [^∘]
STEREO-A | 06:46:00 - 06:52:23 | 06:57:03 - 07:02:34 | [-0.86, 0.02, -0.50] | [-0.86, 0.01, -0.51] | 3.29 | 1.07
ACE | 08:51:30 - 08:56:30 | 09:03:00 - 09:07:00 | [0.87, -0.19, -0.45] | [0.87, -0.23, -0.43] | 4.22 | 3.43
Wind | 09:04:00 - 09:10:00 | 09:12:20 - 09:13:15 | [-0.82, 0.43, 0.37] | [-0.81, 0.44, 0.39] | 3.38 | 1.46
STEREO-B | 13:15:45 - 13:20:00 | 13:24:00 - 13:26:00 | [-0.75, 0.59, 0.29] | [-0.73, 0.61, -0.30] | 8.18 | 1.67

Table: Resulting additional parameters from the spacecraft data on April 23, 2007. V_sh is the shock speed, C^up_s is the upstream sound speed, V^up_A is the upstream Alfvén speed, and C^up_ms is the upstream magnetosonic speed. The plasma β_up, the Alfvén Mach number, and the magnetosonic Mach number are also shown.

Spacecraft | V_sh [km/s] | C^up_s [km/s] | V^up_A [km/s] | C^up_ms [km/s] | Plasma β_up | Alfvén Mach | Magnetosonic Mach
STEREO-A | 467.95 | 64.97 ± 7.74 | 68.71 ± 29.20 | 94.57 ± 21.85 | 1.07 ± 0.94 | 1.00 ± 0.67 | 0.72 ± 0.41
ACE | 412.36 | 58.28 ± 4.94 | 52.82 ± 15.90 | 78.66 ± 11.26 | 1.46 ± 0.91 | 1.49 ± 0.56 | 1.00 ± 0.27
Wind | 395.53 | 63.79 ± 5.15 | 69.16 ± 21.71 | 94.09 ± 16.34 | 1.02 ± 0.66 | 0.90 ± 0.42 | 0.66 ± 0.26
STEREO-B | 370.31 | 57.31 ± 5.38 | 29.04 ± 12.28 | 64.25 ± 07.34 | 4.67 ± 4.04 | 1.69 ± 1.08 | 0.76 ± 0.37
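For completeness, the derived upstream quantities listed in the tables above (sound, Alfvén, and magnetosonic speeds, plasma β, and the Mach numbers) can be estimated from the upstream plasma moments with the usual textbook expressions. The sketch below is our own reconstruction for a proton plasma with an assumed adiabatic index γ = 5/3; the exact definitions used for the tables (in particular for the Mach numbers) may differ in detail.

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability [Vs/Am]
KB  = 1.380649e-23        # Boltzmann constant [J/K]
MP  = 1.67262192e-27      # proton mass [kg]

def upstream_speeds(n_cm3, t_kelvin, b_nt, gamma=5.0 / 3.0):
    """Upstream sound, Alfven and magnetosonic speeds [km/s] and plasma beta
    for a proton plasma (textbook expressions; illustrative only)."""
    n = n_cm3 * 1e6                       # number density in m^-3
    b = b_nt * 1e-9                       # magnetic field in T
    rho = n * MP
    v_a = b / np.sqrt(MU0 * rho)
    c_s = np.sqrt(gamma * KB * t_kelvin / MP)
    c_ms = np.sqrt(c_s**2 + v_a**2)
    beta = n * KB * t_kelvin / (b**2 / (2.0 * MU0))
    return c_s / 1e3, v_a / 1e3, c_ms / 1e3, beta

def mach_numbers(u_n_kms, v_a_kms, c_ms_kms):
    """Alfven and magnetosonic Mach numbers from the upstream flow speed along
    the shock normal in the shock rest frame (u_n)."""
    return u_n_kms / v_a_kms, u_n_kms / c_ms_kms

# Illustrative upstream values (not taken from the tables):
print(upstream_speeds(n_cm3=5.0, t_kelvin=8e4, b_nt=5.0))
```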
http://arxiv.org/abs/2407.13619v1
20240718155558
A tree rewriting system for the Reflection Calculus
[ "Sofía Santiago-Fernández", "Joost J. Joosten", "David Fernández-Duque" ]
math.LO
[ "math.LO" ]
A tree rewriting system for the Reflection Calculus

§ ABSTRACT The Reflection Calculus (, c.f. <cit.>, <cit.>) is the fragment of the polymodal logic in the language ℒ^+ whose formulas are built up from ⊤ and propositional variables using conjunction and diamond modalities. is complete with respect to the arithmetical interpretation that associates modalities with reflection principles and has various applications in proof theory, specifically ordinal analysis. We present , a tree rewriting system (c.f. <cit.>) that is adequate and complete with respect to , designed to simulate derivations. is based on a given correspondence between formulas of ℒ^+ and modal trees . Modal trees are presented as an inductive type (c.f. <cit.>, <cit.>) allowing precise positioning and transformations, which gives rise to the formal definition of the rewriting rules and facilitates formalization in proof assistants. Furthermore, we provide a rewrite normalization theorem for systematic rule application. The normalization of the rewriting process enhances proof search efficiency and facilitates implementation (c.f. <cit.>, <cit.>, <cit.>). By providing as an efficient provability tool for , we aim to aid the study of various aspects of the logic, such as the subformula property and rule admissibility. § INTRODUCTION Modal logics provide an attractive alternative to first or higher order logic for computational applications, largely due to the fact that they often enjoy a decidable consequence relation while remaining expressive enough to describe intricate processes. However, decidability alone does not suffice for practical implementation when complexity comes at a hefty price tag; even propositional logic is NP-complete, which quickly becomes intractable as the formula size and especially the number of variables grow large. This is no longer an issue when working in strictly positive fragments (see e.g. <cit.>), which in contrast enjoy a polynomially decidable consequence relation. Strictly positive formulas do not contain negation and instead are built from atoms and ⊤ using conjunction and (or, more generally, a family of modalities ⟨ i⟩ indexed by i in some set I). Strictly positive formulas tend to be contingent, so validity and satisfiability are no longer the most central problems, but the consequence relation is indeed useful, for example, for reasoning about ontologies and is the basis for some description logics <cit.>. One remarkable success story for strictly positive logics comes from the reflection calculus (RC) <cit.>. Beklemishev has shown how Japaridze's polymodal provability logic GLP <cit.> can be used to perform a proof-theoretic analysis of Peano arithmetic and its fragments <cit.>; however, the logic GLP is notoriously difficult to work with, especially as it is not Kripke-complete. In contrast, its strictly positive fragment is rather tame from both a theoretical and computational point of view, yet suffices for the intended proof-theoretic applications. The current work is inspired by two distinct ideas that have arisen in the study of strictly positive logics. The first is the tree representation of formulas, which yields a way to decide strictly positive implications. This was developed by Kikot et al. <cit.> in a general setting and by Beklemishev <cit.> in the context of RC.
In both cases, one assigns to each strictly positive formula φ a finite, tree-like Kripke model T(φ) with the crucial property that φ→ψ is valid if and only if T(φ) ψ. Thus the study of strictly positive fragments can be reduced to the study of their tree-like Kripke models. The second is the connection of strictly positive calculi to term rewrite systems. Strictly positive formulas and, particularly, those built exclusively from ⊤ and the modalities ⟨ i⟩, may be regarded as words (or `worms'). This has prompted Beklemishev <cit.> to view strictly positive fragments as term-rewriting systems <cit.>, but connections between such systems and modal logic are not new and can be traced back to Foret <cit.>. Term rewriting is a discipline that integrates elements of logic, universal algebra, automated theorem proving, and functional programming. It has applications in algebra (e.g. Boolean algebra), recursion theory (computability of rewriting rules), software engineering and programming languages (especially functional and logic programming <cit.>), with the λ-calculus perhaps being the most familiar example <cit.>. Of particular interest to us, tree rewriting systems <cit.> are term rewriting systems such that terms are trees. When terms represent formulas, rewrite rules are similar to deep inference rules, i.e. rules which may be applied to strict subformulas of the displayed formulas. This is the approach taken by Shamkanov <cit.> for developing a cut-free calculus for GLP. As is the case for other technical differences between GLP and the reflection calculus, our tree rewrite system makes up for the loss in expressive power with increased simplicity and transparent Kripke semantics. Our approach is to recast RC as an abstract rewriting system in which terms are trees. In the parlance of rewrite systems, cut-elimination can be viewed as a normalization procedure for derivations. In our setting we do not have an analogue of the cut rule, but we do obtain a rewriting normalization theorem which states that the rewriting process can be consistently and efficiently executed by grouping rewriting rules by their kinds and applying them in a designated sequence. By doing so, it enhances our comprehension of the dynamics of the tree rewriting system, offering valuable insights into the nature of the rewriting process and the interplay among rules. Moreover, it furnishes an efficient framework for proof search methodologies. Thanks to the normalization theorem, the need for exhaustive exploration is minimized by focusing on normalized rewriting sequences, which mitigates the risk of redundancy in rewriting. Consequently, when searching for a proof of a certain result, we only need to consider the normalized derivations, thereby reducing the proof search space and improving computational efficiency <cit.>. Furthermore, it serves as a practical guide for implementing the rewriting process in proof assistants <cit.>. In our presentation, we make use of the inductive type of lists within the framework of type theory (cf. <cit.>, <cit.>) to define the trees in our tree rewriting system. The use of lists allows to define inductive structures with an order, facilitating the specification of internal positions and transformations for rewriting systems, and its formalization in proof assistants. Since lists play such a central role in our work, we conclude this introduction by establishing some notation. 
A list of elements of type 𝒜 is either the empty list ∅ or [x] L for x an element of type 𝒜, a list L of elements of type 𝒜 and the operator of concatenation of lists. We write x L and L x to denote [x] L and L [x], respectively. The length of a list L of elements of type 𝒜 is denoted by |L|. § FROM ^+ TO In this section we present the basic sequent-style system ^+ for the language of strictly positive formulae, concluding by an introduction to the Reflection Calculus () as an extension of ^+. We consider the language of strictly positive formulae ℒ^+ composed from propositional variables p,q,..., in Prop, the constant ⊤, and connectives ∧ for conjunction and α for diamond modalities for each ordinal α∈ω. Formally, the strictly positive formulae φ of ℒ^+ are generated by the following grammar: φ ::= ⊤| p |αφ| (φ∧φ), α∈ω and p ∈ Prop. The modal depth of φ, denoted by (φ), is recursively defined as (⊤) := 0, (p) := 0 for p ∈ Prop, (αφ) := (φ) + 1 and (φ∧ψ) := max{(φ),(ψ)}. Sequents are expressions of the form φ⊢ψ for φ, ψ∈ℒ^+. If L is a logic, we write φ⊢_ Lψ for the statement that φ⊢ψ is provable in L. We write φ≡_ Lψ to denote φ⊢_ Lψ and ψ⊢_ Lφ. Polymodal can be readily adapted to its strictly positive variant, where most notably the necessitation rule is replaced by distribution for each diamond modality. (^+, <cit.>) The basic sequent-style system ^+ is given by the following axioms and rules: * φ⊢_^+φ; φ⊢_^+⊤; * if φ⊢_^+ψ and ψ⊢_^+ϕ then φ⊢_^+ϕ (cut); * φ∧ψ⊢_^+φ and φ∧ψ⊢_^+ψ (elimination of conjunction); * if φ⊢_^+ψ and φ⊢_^+ϕ then φ⊢_^+ψ∧ϕ (introduction to conjunction); * if φ⊢_^+ψ then ⟨α⟩φ⊢_^+⟨α⟩ψ (distribution). For Π a finite list of strictly positive formulae, ⋀Π is defined by ⊤ for Π = ∅ and φ∧⋀Π̂ for Π = φΠ̂. Note that ⋀ (Π_1 Π_2) ≡_^+⋀Π_1 ∧⋀Π_2 for Π_1 and Π_2 finite lists of strictly positive formulae. A diamond modality can be distributed over a conjunction of formulas for ^+ as follows. ⟨α⟩ (φ_1 ... φ_n) ⊢_^+⟨α⟩φ_1 ... ⟨α⟩φ_n. By an easy induction on n. We aim to define a tree rewriting system adequate and complete w.r.t. the Reflection Calculus, the strictly positive fragment of Japaridze's polymodal logic formulated as an extension of ^+. (, <cit.>, <cit.>) The Reflection Calculus () is the strictly positive modal logic extending ^+ by the following axioms: * ⟨α⟩⟨α⟩φ⊢_⟨α⟩φ (transitivity); * ⟨α⟩φ⊢_⟨β⟩φ, α > β (monotonicity); * αφ∧βψ⊢_α (φ∧βψ), α > β (J). § MODAL TREES In this section we present modal trees, a concrete set of inductively defined trees on which our rewriting system is based, and the corresponding framework for their manipulation. Modal trees are finite labeled trees with nodes labeled with lists of propositional variables and edges labeled with ordinals less than ω. Specifically, modal trees can be regarded as tree-like Kripke models of the form (𝒲, {R_α}_α∈ω, v) such that an ordinal α labels an edge if R_α relates the corresponding nodes, and a list of propositional variables labels a node if its elements are the only propositional variables being true under the valuation v in that node. However, for technical convenience, both in presenting the rewrite system and in formalizing our results in a proof assistant, it will be convenient to present modal trees as inductively defined structures. In particular, the children of a node of a modal tree are given by lists instead of sets, providing a default ordering on its children useful for unambiguously determining positions in the tree. 
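Before turning to the formal development of modal trees, readers who wish to experiment with the calculus may find a concrete datatype useful. The sketch below (ours, in Python rather than the type-theoretic setting used in the paper) mirrors the grammar of ℒ^+ and the recursive definition of modal depth given above; the class and function names are of our own choosing and carry no official status.

```python
from dataclasses import dataclass
from typing import Union

# Strictly positive formulas: top, propositional variables, <a>phi, phi /\ psi
@dataclass(frozen=True)
class Top: pass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Dia:
    alpha: int          # modality index (an ordinal below omega)
    sub: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

Formula = Union[Top, Var, Dia, And]

def md(phi: Formula) -> int:
    """Modal depth, following the recursive definition given in the text."""
    if isinstance(phi, (Top, Var)):
        return 0
    if isinstance(phi, Dia):
        return md(phi.sub) + 1
    return max(md(phi.left), md(phi.right))

# Example: the left-hand side of the transitivity axiom, <1><1>p
lhs = Dia(1, Dia(1, Var("p")))
assert md(lhs) == 2
```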
() The set of modal trees is defined recursively to be the set of pairs Δ; Γ where Δ is a finite list of propositional variables and Γ is a finite list of pairs (α, T), with α < ω and T∈. Elements of will be denoted by T and S. Note that we employ distinct notations to enhance clarity on wether a pair is a modal tree: ⟨· ; ·⟩ is used for a pair representing a modal tree, while (· , ·) denotes a pair comprising an ordinal and a modal tree. The root of a modal tree Δ ; Γ is Δ and its children is the list [ S | (α, S) ∈Γ]. Note that, in general we write [f(α, S) | (α, S) ∈Γ] to denote the list [f(α_1, S_1),...,f(α_n, S_n)] for Γ = [(α_1, S_1),...,(α_n, S_n)], n ≥ 0 and f a function of domain Ord^< ω×. For the sake of readability we write γ∈Γ and T∈Γ to denote γ∈ [α | (α, S) ∈Γ] and T∈ [ S | (α, S) ∈Γ] respectively, since the context permits a clear distinction. A modal tree is called a leaf if it has an empty list of children. The height of a modal tree T, denoted by ( T), is inductively defined as (Δ ; ∅) := 0 and (Δ ; Γ) := max[( S) | S∈Γ] + 1. The sum of modal trees is the tree obtained by concatenating their roots and children. The sum of modal trees T_1 = Δ_1; Γ_1 and T_2 = Δ_2; Γ_2 is defined as T_1 + T_2 := Δ_1 Δ_2; Γ_1 Γ_2. More generally, for Λ a finite list of modal trees, ∑Λ is defined as ∅ ; ∅ if Λ = ∅ and T + ∑Λ̂ if Λ = TΛ̂. Note that ( T_1 + T_2) = max{( T_1), ( T_2)} for T_1, T_2 ∈. A standard numbering of the nodes of the tree by strings of positive integers allows us to refer to positions in a tree. Specifically, the set of positions of a tree includes the root position, defined as the empty string, and the positions from its children which are obtained by appending the order of each child to its positions. (Set of positions) The set of positions of a modal tree T = Δ ; Γ, denoted by Pos( T) ∈𝒫(ℕ^*), is inductively defined as * Pos(Δ ; ∅) := {ϵ} for ϵ∈ℕ^* the empty string, * Pos(Δ ; Γ) := {ϵ}∪⋃_i = 1^n{i𝐤 | 𝐤∈ Pos( S_i)} for Γ = [(α_1, S_1), ... , (α_n, S_n)]. Using the precise position apparatus we can define derived notions like, for example, that of subtree. (Subtree) The subtree of T∈ at a position 𝐤∈ Pos( T), denoted by T|_𝐤, is inductively defined over the length of 𝐤 as * T|_ϵ := T, * T|_i𝐫 := S_i|_𝐫 for 1 ≤ i ≤ n such that T = Δ ; [(α_1, S_1), ... , (α_n, S_n)]. We can now define subtree replacement based on the precise positioning provided. (Replacement) Let T, S∈ and 𝐤∈ Pos( T). The tree obtained from T by replacing the subtree at position 𝐤 by S, denoted by T[ S]_𝐤, is inductively defined over the length of 𝐤 as * T[ S]_ϵ := S, * T[ S]_i𝐫 := Δ ; [(α_1, S_1),...,(α_i, S_i[ S]_𝐫),...,(α_n, S_n)] for 1 ≤ i ≤ n and T = Δ ; [(α_1, S_1),...,(α_n, S_n)]. Here below we present useful results on positioning and replacement in a modal tree. Let T, S, Ŝ∈ be modal trees. Then, for 𝐤 and 𝐫 belonging to the adequate position sets we have * ( T|_𝐤)|_𝐫 = T|_𝐤𝐫; * T [ T|_𝐤]_𝐤 = T; * ( T [ S]_𝐤)|_𝐤 = S; * ( T[Ŝ]_𝐤)[ S]_𝐤 = T[ S]_𝐤 (transitivity of replacement); * ( T[Ŝ]_𝐤)[ S]_𝐤𝐫 = T[Ŝ[ S]_𝐫]_𝐤. We proceed by induction on the tree structure of T for each statement. If T is a leaf, the results follow easily since 𝐤 = ϵ. Otherwise, we continue by cases on the length of 𝐤. For ϵ the statements trivially hold. Finally consider i𝐤̂ for 1 ≤ i ≤ n and 𝐤̂∈ Pos( S_i) such that T = Δ ; [(α_1, S_1),...,(α_n, S_n)]. Then, by definition and each statement's inductive hypothesis for S_i, we conclude 1. ( T|_i𝐤̂)|_𝐫 = ( S_i|_𝐤̂)|_𝐫 = S_i|_𝐤̂𝐫 = T|_i𝐤̂𝐫; 2. 
T [ T|_i𝐤̂]_i𝐤̂ = T [ S_i|_𝐤̂]_i𝐤̂ = Δ ; [(α_1, S_1),...,(α_i, S_i [ S_i|_𝐤̂]_𝐤̂),...,(α_n, S_n)] = T; 3. ( T [ S]_i𝐤̂)|_i𝐤̂ = (Δ ; [(α_1, S_1),...,(α_i, S_i[ S]_𝐤̂),...,(α_n, S_n)])|_i𝐤̂ = ( S_i[ S]_𝐤̂)|_𝐤̂ = S; 4. ( T[Ŝ]_i𝐤̂)[ S]_i𝐤̂ = (Δ ; [(α_1, S_1),...,(α_i, S_i[Ŝ]_𝐤̂),...,(α_n, S_n)] )[ S]_i𝐤̂ = Δ ; [(α_1, S_1),...,(α_i,( S_i[Ŝ]_𝐤̂)[ S]_𝐤̂),...,(α_n, S_n)] = Δ ; [(α_1, S_1),...,(α_i, S_i[ S]_𝐤̂),...,(α_n, S_n)] = T[ S]_i𝐤̂; 5. ( T[Ŝ]_i𝐤̂)[ S]_i𝐤̂𝐫 = (Δ ; [(α_1, S_1),...,(α_i, S_i[Ŝ]_𝐤̂),...,(α_n, S_n)] )[ S]_i𝐤̂𝐫 = Δ ; [(α_1, S_1),...,(α_i,( S_i[Ŝ]_𝐤̂)[ S]_𝐤̂𝐫),...,(α_n, S_n)] = Δ ; [(α_1, S_1),...,(α_i, S_i[Ŝ[ S]_𝐫]_𝐤̂),...,(α_n, S_n)] = T[Ŝ[ S]_𝐫]_i𝐤̂. § RELATING FORMULAS AND MODAL TREES Our tree rewriting system is based on a correspondence between the language of ℒ^+ and . Thereby we can ensure that transformations within the structure of modal trees accurately simulate derivations in a considered proof system. For this purpose, we introduce the tree embedding operator inductively defined over the set of strictly positive formulas mapping them to modal trees. This definition is inspired by the canonical tree representation of strictly positive formulae presented by Beklemishev (see <cit.>) as a combinatiorial tool for proving the polytime decidability of . Additionally, we define mapping modal trees to formulas. Ultimately, we prove that composition ∘ serves as the identity over , while ∘ acts as the identity on ℒ^+ modulo equality for +. () The modal tree embedding is the function : ℒ^+⟶ inductively defined over the structure of strictly positive formulae as * (⊤) := ∅;∅, * (p) := [p]; ∅ for p ∈ Prop, * (αφ) := ∅; [( α, (φ))] for φ∈ℒ^+, * (φ∧ψ) := (φ) + (ψ) for φ, ψ∈ℒ^+. The modal depth of a formula coincides with the height of the modal tree which it is mapped to. ((φ)) = (φ) for φ∈ℒ^+. By an easy induction on the structure of φ. We also introduce a corresponding embedding in the opposite direction. () The strictly positive formulae embedding is the function : ⟶ℒ^+ defined as (Δ ; Γ) := ⋀Δ∧⋀ [α( S) | (α, S) ∈Γ]. For the sake of readability, we will simply write Γ to denote [α( S) | (α, S) ∈Γ]. We conclude this section by providing a relation among strictly positive formulas and modal trees through the composition of the embeddings. ∘ = id_ and ∘ = id_ℒ^+ / ≡_^+. We firstly prove ∘ ( T)= T for T∈ by induction on the modal tree structure. The leaf case follows by definition since ( (Δ; ∅)) = (⋀Δ∧⊤) = Δ; ∅. Otherwise, assuming (( S)) = S for every S∈Γ, we conclude ((Δ; Γ)) = (⋀Δ∧⋀Γ) = (⋀Δ) + (⋀Γ) = Δ; ∅ + ∑ [(α( S)) | (α, S) ∈Γ] = Δ;∅ + ∑ [∅; [( α,(( S)))] | (α, S) ∈Γ] = Δ ; ∅ + ∑ [∅; [(α, S)] | (α, S) ∈Γ] = Δ ; Γ. Finally, we prove ∘ (φ) ≡_^+φ for φ∈ℒ^+ by induction on the structure of φ. * ∘ (⊤) = ⊤∧⊤≡_^+⊤; ∘ (p) = (p ∧⊤) ∧⊤≡_^+ p. * Let us assume ∘ (ψ) ≡_^+ψ. Then, ∘ (αψ) = (∅ ; [(α , (ψ))] = ⊤∧α∘ (ψ) ∧⊤≡_^+αψ. * Let (ψ) = Δ_ψ ; Γ_ψ and (ϕ) = Δ_ϕ ; Γ_ϕ. Assuming ∘ (ψ) ≡_^+ψ and ∘ (ϕ) ≡_^+ϕ, ∘ (ψ∧ϕ) ≡_^+⋀Δ_ψ∧⋀Γ_ψ∧⋀Δ_ϕ∧⋀Γ_ϕ≡_^+ψ∧ϕ. § THE TREE REWRITING SYSTEM FOR We introduce the tree rewriting system for , an abstract rewriting system for which will be proven adequate and complete w.r.t. in the next section. Additionally, we present useful results for the rewriting, along with the Inside Rewriting Property which involves transforming a subtree while preserving the remaining parts invariant. An abstract rewriting system is a pair (A, {↪^μ}_μ∈ I) consisting of a set A and a sequence of binary relations ↪^μ on A, also called rewriting rules (r.r.). 
Instead of (a,b) ∈↪^μ we write a ↪^μ b and call b is obtained by applying μ to a or b is obtained by performing one step μ to a. The composition of rewriting rules μ_1 and μ_2 is written a ↪^μ_1∘↪^μ_2 b and denotes that there is â∈ A such that a ↪^μ_1â↪^μ_2 b. In particular, the rewriting rules of the tree rewriting system for transform a tree by replacing a subtree with a predetermined tree. The rules are classified into five kinds according to the performed transformation: atomic, structural, replicative, decreasing, and modal rewriting rules. Atomic rules duplicate or eliminate propositional variables in the lists labeling the nodes; the structural rule permutes the order of a node's children; the replicative rule duplicates a child of a node; decreasing rules either eliminate a child or remove a node and its children under certain conditions; and modal rules either decrease the label of an edge or apply a transformation simulating the J axiom of . To define the rewriting rules, we introduce the following notation. Let T = Δ ; Γ be a modal tree, 0< i,j ≤ |Γ| and n ≤ |Δ| such that Γ = [(α_1, S_1),...,(α_m, S_m)]. The ith element of Γ is denoted by #_i Γ. The list obtained by erasing the ith element of Γ, i.e. [(α_1, S_1),...,(α_i-1, S_i-1),(α_i+1, S_i+1) ,...,(α_m, S_m)], is denoted by Γ^-i. The list obtained by duplicating the ith element of Γ and placing it at the beginning, i.e. (α_i, S_i) Γ, is denoted by Γ^+i. Analogously, the list obtained by erasing the nth element of Δ and the list obtained by duplicating the nth element of Δ and placing it at the beginning are denoted by Δ^-n and Δ^+n, respectively. The list obtained from Γ by replacing its ith element by (α, S) is defined by Γ[(α, S)]_i := [(α_1, S_1),...,(α_i-1, S_i-1),(α, S), (α_i+1, S_i+1), ...,(α_m, S_m)]. Note that we use the same notation for replacement in a list of pairs and replacement in a modal tree since the context allows for a clear distinction. Finally, the list obtained by interchanging the ith element with the jth element, i.e. (Γ[#_i Γ]_j)[#_j Γ]_i, is denoted by Γ^i ↔ j. We can now present the tree rewriting system for Reflection Calculus. () The tree rewriting system for , denoted by , is the abstract rewriting system (,{↪^μ}_μ∈ℛ) for ℛ = {ρ^+,ρ^-,σ, π^+,π^-, , λ, }. The rewriting rules of ℛ are defined in Table <ref>. Due to the transformations they induce, the rules are named as follows: the ρ^+-rule is called atom duplication, the ρ^--rule is called atom elimination, the σ-rule is called child permutation, the π^+-rule is called as child duplication, the π^--rule is called child elimination, the -rule is called transitivity, and the λ-rule is called monotonicity. More generally, the tree rewriting relation is the union of the rewriting rules. (↪) The tree rewriting relation ↪ is defined as ↪ := ↪^ρ^+∪↪^ρ^-∪↪^σ∪↪^π^+∪↪^π^-∪↪^∪↪^λ∪↪^. We say that the step in T↪ T' has been performed at position 𝐤 if the applied rule replaces the subtree at position 𝐤∈ Pos( T). We say that T rewrites to T', denoted by T↪^* T', if T' is the result of applying zero, one or several rewriting rules of ℛ to T. In other words, ↪^* denotes the reflexive transitive closure of ↪. The trees T and T' are -equivalent, denoted by T*↔ T', if T↪^* T' and T'↪^* T. If T rewrites to T' by applying the rewriting rule μ zero, one or several times, we write T↪^μ * T'. For Ω a list of rewriting rules, we define T↪^Ω S inductively as T↪^* T by applying no rewriting rules if Ω = ∅, and T↪^μ∘↪^Ω̂ S for Ω = μΩ̂. 
Likewise, for Ω_1 and Ω_2 lists of rewriting rules, T↪^Ω_1∘↪^Ω_2 S denotes that there is Ŝ such that T↪^Ω_1Ŝ↪^Ω_2 S. Modal trees with permuted lists labeling the nodes are -equivalent. Δ_1 Δ_2 ; Γ*↔Δ_2 Δ_1 ; Γ. It suffices to show Δ_1 Δ_2 ; Γ↪^*Δ_2 Δ_1 ; Γ by induction on the length of Δ_2. If Δ_2 is empty, the result trivially holds. Assuming ΔΔ_2 ; Γ↪^*Δ_2 Δ ; Γ for any list of propositional variables Δ, using atom duplication and atom elimination rewriting rules we conclude Δ_1 (p Δ_2) ; Γ = (Δ_1 p) Δ_2 ; Γ↪^*Δ_2 (Δ_1 p) ; Γ ↪^ρ^+ p Δ_2 Δ_1 p ; Γ↪^ρ^- (p Δ_2) Δ_1 ; Γ. Here are some useful results on rewriting a sum of modal trees. Let T_1, T_2, T_3, S_1, S_2 ∈. Then, * T_1 *↔ T_1 + T_1; * T_1 + T_2 *↔ T_2 + T_1; * T_1 + T_2 ↪^* T_1 and T_1 + T_2 ↪^* T_2; * If T_1 ↪^* S_1, then T_1 + T_2 ↪^* S_1 + T_2; * If T_1 ↪^* T_2 and T_1 ↪^* T_3 then T_1 ↪^* T_2 + T_3; * If S_1 ↪^* T_1 and S_2 ↪^* T_2, then S_1 + S_2 ↪^* T_1 + T_2. The implication from left to right of the first statement holds by atom and child duplication, and the inverse implication by atom and child elimination. The second result holds by Lemma <ref> and child permutation. The third follows by atom and child elimination. The fourth result holds by induction on the number of rewriting steps performed in T_1 ↪^* S_1 and by cases on the rewriting rules. The fifth statement holds by the fourth result using the statements one and two: T_1 ↪^* T_1 + T_1 ↪^* T_2 + T_1 ↪^* T_1 + T_2 ↪^* T_3 + T_2 ↪^* T_2 + T_3. Finally, by the fifth statement it suffices to show S_1 + S_2 ↪^* T_1 and S_1 + S_2 ↪^* T_2 to prove the sixth result, which follow by the third statement and the hypotheses. The following results state that transformations in subtrees can be extended to the entire tree, allowing for systematic and consistent modifications throughout the tree. In consequence, we can effectively manipulate and modify complex tree structures while leaving the other parts untouched. If S↪^* S', then T [ S]_𝐤↪^* T [ S']_𝐤. By induction on the number of rewriting steps that are performed in S↪^* S'. If no rewriting step is performed, the result is trivially satisfied. Now assume T[ S]_𝐤↪^* T[Ŝ]_𝐤 for S↪^*Ŝ by performing n rewriting steps. Moreover, let S↪^*Ŝ↪^μŜ[S̃]_𝐫 for S̃∈ and 𝐫∈ Pos(Ŝ) according to the rewriting rule μ. Since T[ S]_𝐤↪^* T[Ŝ]_𝐤 by the inductive hypothesis, it suffices to show T[Ŝ]_𝐤↪^μ T[Ŝ[S̃]_𝐫]_𝐤 = ( T[Ŝ]_𝐤)[S̃]_𝐤𝐫 by Lemma <ref>. We proceed by induction on the tree structure of T. For T a leaf, by the hypothesis, T[Ŝ]_ϵ = Ŝ↪^μŜ[S̃]_𝐫 = ( T[Ŝ]_ϵ)[S̃]_ϵ𝐫. Let T = Δ ; [(α_1, S_1),...,(α_m, S_m)] such that S_i[Ŝ]_𝐥↪^μ ( S_i[Ŝ]_𝐥)[S̃]_𝐥𝐫 for 𝐥∈ Pos( S_i) and 1 ≤ i ≤ m. We continue by cases on the length of 𝐤. For 𝐤 = ϵ, the result is trivially satisfied. For 𝐤 = i𝐤̂ such that 𝐤̂∈ Pos( S_i) and 1 ≤ i ≤ n, we conclude by performing μ at position 𝐤𝐫 using the inductive hypothesis for S_i as follows, T[Ŝ]_i𝐤̂ = Δ ; [(α_1, S_1),..., (α_i, S_i[Ŝ]_𝐤̂),...,(α_m, S_m)] ↪^μΔ ; [(α_1, S_1),..., (α_i,( S_i[Ŝ]_𝐤̂)[S̃]_𝐤̂𝐫),...,(α_m, S_m)] = ( T[Ŝ]_i𝐤̂)[S̃]_i𝐤̂𝐫. If T|_𝐤↪^* S, then T↪^* T[ S]_𝐤 for 𝐤∈ Pos( T). By Proposition <ref> since T = T[ T|_𝐤]_𝐤 (Lemma <ref>). § ADEQUACY AND COMPLETENESS We aim to show how the tree rewriting system faithfully simulates logical derivations in thanks to the embeddings defined in Section <ref>. Thereby, adequacy and completeness theorems are presented as key results that underscore the efficacy of tree rewriting systems in relating logical inference and structural transformation. 
Firstly, we show adequacy by proving that if a rewriting step is performed, the sequent of formulas which the trees are mapped to is provable in . If T↪ T', then ( T) ⊢_( T') for T, T'∈. By induction on the tree structure of T. For T a leaf the result follows easily by cases on the performed rewriting rule. Now consider T = Δ ; Γ for Γ = [(α_1, S_1),...,(α_m, S_m)] such that ( S_i) ⊢_( S') if S_i ↪ S' for 1 ≤ i ≤ m. Assuming Δ ; Γ↪^μ T' for μ a rewriting rule, we show ( T) ⊢_( T') by cases on the length of the position at which μ is performed. First consider a rewriting at a position i𝐤∈ Pos( T). By definition, T' is of the form Δ ; [(α_1, S_1),...,(α_i, S'),...,(α_m, S_m)] for S_i ↪^μ S' by rewriting at position 𝐤. Hence, by the inductive hypothesis, ( S_i) ⊢_( S'). Thus, ( T) ≡_^+⋀Δ∧α_i ( S_i) ∧⋀ (Γ^-i) ⊢_⋀Δ∧α_i ( S') ∧⋀ (Γ^-i) ≡_^+( T'). For rewriting performed at position ϵ, the proof concludes easily by cases on μ. Let see in some detail the cases of transitivity and rewriting rules. -rule: Consider T↪^Δ ; Γ[(β, S)]_i such that #_i Γ = (β, Δ̃; Γ̃) for 0 < i ≤ |Γ| and #_j Γ̃ = (β , S) for 0 < j ≤ |Γ̃|. By Lemma <ref> and transitivity rule for we conclude, ( T) ≡_^+⋀Δ∧β(Δ̃ ; Γ̃) ∧⋀ (Γ^-i) ≡_^+⋀Δ∧β (⋀Δ̃∧β( S) ∧⋀ (Γ̃^-j)) ∧⋀ (Γ^-i) ⊢_^+⋀Δ∧ββ( S) ∧⋀ (Γ^-i) ⊢_⋀Δ∧β( S) ∧⋀ (Γ^-i) ≡_^+(Δ ; Γ[(β, S)]_i ). -rule: Consider T↪^Δ ; (Γ[(α, Δ̃ ; Γ̃ (β , S) )]_i)^-j such that #_i Γ = (α , Δ̃ ; Γ̃) and #_j Γ = (β, S) for 0 < i,j, ≤ |Γ| satisfying i ≠ j. Let i < j without loss of generality. By the J rule for we conclude, ( T) ≡_^+⋀Δ∧α(Δ̃ ; Γ̃) ∧β( S) ∧⋀ ((Γ^-j)^-i) ⊢_⋀Δ∧α ((Δ̃ ; Γ̃) ∧β( S)) ∧⋀ ((Γ^-j)^-i) ≡_^+⋀Δ∧α ((Δ̃ ; Γ̃ (β, S) )) ∧⋀ ((Γ^-j)^-i) ≡_^+(Δ ; (Γ[(α, Δ̃ ; Γ̃ (β , S) )]_i)^-j). If T↪^* T', then ( T) ⊢_( T') for T, T'∈. By an easy induction on the number of rewriting steps that are performed using Proposition <ref>. We conclude this section by showing that is complete with respect to . If φ⊢_ψ, then (φ) ↪^*(ψ) for φ, ψ∈ℒ^+. By induction on the length of the derivation. * If φ⊢_φ, then (φ) ↪^*(φ) by applying no rewriting rule; if φ⊢_⊤, then (φ) ↪^*(⊤) by atom and child elimination. (Cut): If φ⊢_ψ and ψ⊢_ϕ such that (φ) ↪^*(ψ) and (ψ) ↪^*(ϕ), then (φ) ↪^*(ϕ) ↪^*(ϕ). (Elimination of conjunction): If φ∧ψ⊢_φ, by Lemma <ref> follows (φ∧ψ) = (φ) + (ψ) ↪^*(φ). Analogously, if φ∧ψ⊢_ψ then (φ∧ψ) ↪^*(ψ) by Lemma <ref>. (Introduction to conjunction): If φ⊢_ψ and φ⊢_ϕ such that (φ) ↪^*(ψ) and (φ) ↪^*(ϕ), by Lemma <ref> we conclude (φ) ↪^*(ψ) + (ϕ) = (ψ∧ϕ). (Distribution): If φ⊢_ψ such that (φ) ↪^*(ψ), then (αφ) ↪^*(αψ) by the inside rewriting property (Corollary <ref>). (Transitivity): If ααφ⊢_αφ, by the -rule, (ααφ) = ∅ ; [(α,∅;[(α,(φ))])]↪^∅;[(α,(φ))] = (αφ). (Monotonicity): If αφ⊢_βφ for α > β, by the λ-rule, (αφ) = ∅ ; [(α, (φ))] ↪^λ∅ ; [(β, (φ))] = (βφ). (J): Consider α > β and (φ) = Δ ; Γ. If αφ∧βψ⊢_α (φ∧βψ), by the -rule, (αφ∧βψ) = ∅;[(α, Δ ; Γ),(β,(ψ))]↪^∅;[(α,Δ ; Γ (β,(ψ)) )] = ∅;[(α,(φ) + (βψ))] = (α (φ∧βψ)). This concluding corollary states that a sequence can be shown in by identifying transformations within the trees in which the involved formulas are embedded into. Likewise, a rewriting can be proven by showing the corresponding sequence for the formulas embedding the corresponding trees. (φ) ↪^*(ψ) ⟹φ⊢_ψ and ( T) ⊢_( T') ⟹ T↪^* T'. By adequacy and completeness (Theorems <ref> and <ref>, respectively) using the embedding composition property (Proposition <ref>). 
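To make the corollary concrete, the following sketch extends the formula datatype suggested after the preliminaries with a representation of modal trees as pairs ⟨Δ; Γ⟩ and with the tree embedding of Section 4 (again our own illustrative Python, with hypothetical names). By the corollary, deciding φ ⊢ ψ reduces to finding a rewrite sequence from the tree of φ to the tree of ψ; the rewriting rules themselves are not implemented here.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Assumes the Top/Var/Dia/And classes from the earlier sketch are in scope.

@dataclass
class MTree:
    """A modal tree <Delta; Gamma>: a list of propositional variables and a
    list of (ordinal, subtree) pairs, following the definition of Section 3."""
    delta: List[str] = field(default_factory=list)
    gamma: List[Tuple[int, "MTree"]] = field(default_factory=list)

    def __add__(self, other: "MTree") -> "MTree":   # sum of modal trees
        return MTree(self.delta + other.delta, self.gamma + other.gamma)

def embed(phi) -> MTree:
    """Tree embedding of a strictly positive formula, as in Section 4."""
    if isinstance(phi, Top):
        return MTree()
    if isinstance(phi, Var):
        return MTree(delta=[phi.name])
    if isinstance(phi, Dia):
        return MTree(gamma=[(phi.alpha, embed(phi.sub))])
    return embed(phi.left) + embed(phi.right)        # conjunction

def height(t: MTree) -> int:
    return 0 if not t.gamma else 1 + max(height(s) for _, s in t.gamma)

# height(embed(phi)) equals md(phi), as stated in the text:
print(height(embed(Dia(1, And(Var("p"), Dia(0, Var("q")))))))   # -> 2
```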
§ THE REWRITE NORMALIZATION THEOREM In this section we present our rewriting normalization theorem, which allows us to perform rewriting in a designated sequence of rewriting rules according to their kinds. Specifically, we can always perform the rewrite process using normal rewriting sequences. (Normal rewrite) A list of rewriting rules Ω is a normal rewriting sequence if it is of the form Ω_π^+Ω_♢Ω_δΩ_ρΩ_σ for Ω_π^+, Ω_♢, Ω_δ, Ω_ρ and Ω_σ lists of replicative, modal, decreasing, atomic and structural rewriting rules, respectively. The order of the presented normal rewriting sequence adheres to the following principles. Firstly, performing any kind of rewriting rule before a replicative one cannot be equivalently reversed. Similarly, the order of modal and decreasing rewriting rules cannot be interchanged (for example, it may be necessary to decrease the label of an edge in order to apply -rule). Finally, atomic and structural rewriting rules, which pertain to node labels and child permutation, are placed last because nodes and children may be removed during the rewriting process. In the following, we disclose the results needed to establish the normal rewriting theorem. Their proofs delve into technical intricacies, requiring meticulous attention to the rewritten positions and the arguments determining the application of the concerned rewriting rules. If two rewriting rules are applied in different positions of the tree such that their outcome transformation is disjoint, their permutation is proven straightforward. However, if their application intersects, additional transformations may be required. To illustrate these scenarios, we provide simplified examples alongside the proofs. If T↪^σ∘↪^μ S, then T↪^μ∘↪^σ * S for μ∈ℛ∖{σ}. By cases on μ. Each case is proven by induction on the length of the position of σ and by cases on the position of μ. If T↪^σ *∘↪^Ω S, then T↪^Ω∘↪^σ * S for Ω a list of atomic, replicative, decreasing and modal rewriting rules. By induction on the length of Ω and by induction on the number of σ-steps performed using Lemma <ref>. Let ρ be an atomic rule. If T↪^ρ∘↪^μ S, then T↪^μ∘↪^ρ * S for μ∈{π^+, π^-, , λ, }. By cases on μ. Each case is shown by induction on the length of the position of ρ and by cases on the position of μ. If μ is π^+, we prove that either T↪^π^+∘↪^ρ S or T↪^π^+∘↪^ρ∘↪^ρ S holds (see Figure <ref>). If μ is a decreasing rewriting rule, then either T↪^μ S or T↪^μ∘↪^ρ S. Otherwise we show T↪^μ∘↪^ρ S. Let Ω_ρ be a list of atomic rewriting rules. If T↪^Ω_ρ∘↪^μ S, then T↪^μ∘↪^Ω'_ρ S for μ∈{π^+,π^-,,λ, } and Ω'_ρ a list of atomic rewriting rules. By an easy induction on the length of Ω_ρ using Lemma <ref>. Let δ be a decreasing rule. If T↪^δ∘↪^μ S, then T↪^μ *∘↪^δ * S for μ∈{π^+, λ, }. By induction on the length of the position of δ and by cases on the length of the position of μ. Let δ be π^-. If μ is π^+, we show that either T↪^π^+∘↪^π^- S or T↪^π^+∘↪^π^-∘↪^π^- S holds (see Figure <ref>). Otherwise we prove T↪^μ∘↪^π^- S. Let δ be . If μ is π^+ we show that either T↪^π^+∘↪^ S or T↪^π^+∘↪^∘↪^ S (see Figure <ref>). For μ being λ we prove that either T↪^λ∘↪^ S or T↪^λ∘↪^λ∘↪^ S (see Figure <ref>). Finally, for μ being we show that either T↪^J∘↪^ S or T↪^J∘↪^J∘↪^ S (see Figure <ref>). Let δ and θ be decreasing and modal rewriting rules respectively, and Ω_δ be a list of decreasing rewriting rules. * If T↪^Ω_δ∘↪^π^+ S, then T↪^π^+∘↪^Ω'_δ S for Ω'_δ a list of decreasing rules. * If T↪^δ∘↪^θ * S, then T↪^θ *∘↪^δ S. 
* If T↪^Ω_δ∘↪^μ * S for μ∈{π^+, λ, }, then T↪^μ *∘↪^Ω'_δ S for Ω'_δ a list of decreasing rewriting rules. The first statement follows by induction on the length of Ω_δ and Proposition <ref>. The second one is shown by induction on the number of modal rewriting rules applied and Proposition <ref>. The third result follows by cases on μ. For μ∈{λ, }, we conclude by induction on the length of Ω_δ and the second statement. Otherwise, we conclude by induction on the number of μ-rules applied and the first statement. Let θ be a modal rewriting rule. If T↪^θ∘↪^π^+ S, then T↪^π^+ *∘↪^θ *∘↪^σ * S. By induction on the length of the position of θ and by cases on the length of the position of π^+. If θ is λ, we show that either T↪^π^+∘↪^λ S or T↪^π^+∘↪^λ∘↪^λ S holds (see Figure <ref>). For θ being , we prove that either T↪^π^+∘↪^ S or T↪^π^+∘↪^∘↪^ S (see Figure <ref>) or T↪^π^+∘↪^π^+∘↪^∘↪^ S (see Figure <ref>) or T↪^π^+∘↪^∘↪^∘↪^σ S (see Figure <ref>). We now want to show that the application of multiple π^+-rules after multiple modal rewriting rules can be permuted. To show the permutability of π^+-rules following a -rule, we need to reorganize duplications applied at positions of increasing depth in the tree. Furthermore, we require the flexibility to choose which branch to duplicate first. For readability, we write T↪^π^+(𝐤,i) S to denote that S is obtained by applying the π^+-rule to duplicate the ith child at position 𝐤∈ Pos( T). Similarly, T↪^σ(𝐤,i,j) S denotes the application of the σ-rule to permute the ith and jth children at position 𝐤∈ Pos( T). Given this notation, the following result aids in reorganizing duplications at positions of increasing depth. * If T↪^π^+(l𝐤,i)∘↪^π^+(ϵ,j) S for j ≠ l, then T↪^π^+(ϵ,j)∘↪^π^+((l+1)𝐤,i) S. * If T↪^π^+(j𝐤,i)∘↪^π^+(ϵ,j) S, then T↪^π^+(ϵ,j)∘↪^π^+((j+1)𝐤,i)∘↪^π^+(1𝐤,i) S. * If T↪^π^+(l𝐤_l,i)∘↪^π^+ (n𝐤_n,j) S for |n𝐤_n| < |l𝐤_l|, then T↪^π^+(n𝐤_n,j)∘↪^π^+ (l𝐤_l,i) S. The first two statements are trivially satisfied by definition. The third follows by induction on the tree structure of T. Likewise, the lemma below allows us to choose which branch to duplicate first. * If T↪^π^+(ϵ,i)∘↪^π^+(ϵ,j) S for 1 < j and j ≠ i+1, then T↪^π^+ (ϵ,j-1)∘↪^π^+ (ϵ,i+1)↪^σ (ϵ,1,2) S. * If T↪^π^+(n𝐤_n,i)∘↪^π^+(l𝐤_l,j) S for n ≠ l, then T↪^π^+(l𝐤_l,j)∘↪^π^+(n𝐤_n,i) S. Therefore, we can show that the application of arbitrary π^+-rules after a -rule can be permuted. If T↪^∘↪^π^+ * S, then T↪^π^+ *∘↪^ *∘↪^σ * S. If the -rule and the π^+-rules are applied at different positions of the tree such that the performed transformations are disjoint, their permutation is straightforward to prove, as shown in Proposition <ref>. Therefore, we use Lemma <ref> and Remark <ref> to reorganise the application of the π^+-rules so as to easily permute the disjoint cases. Otherwise, in cases where the subtree affected by the -rule is duplicated, we use a unique labeling of the nodes to track the number of times those subtrees are duplicated. Specifically, we label the nodes with their positions. For T↪^↪^* S such that has been performed at position n𝐤∈ Pos( T), we call the label n the -label. Moreover, if has been performed at the empty position relating the ith and the jth children according to the notation of its definition, we call the label i the upper -label and the label j the lower -label. Therefore, the proof proceeds by induction on the length of the position at which the -rule has been applied. For the -rule applied at the empty position, we proceed by induction on the occurrences of the upper -label.
The inductive step is shown by Proposition <ref> and Corollary <ref>, using Lemma <ref> and Remark <ref> to get a suitable reorganisation of the applications of the π^+-rules. The base case, in which the upper -label occurs once in S, is shown by induction on the occurrences of the lower -label using the same strategy. Lastly, if the -rule is applied at a non-empty position, we proceed by induction on the occurrences of the -label. The base case follows by Proposition <ref> using Lemma <ref> and Remark <ref> together with the inductive hypothesis on the position at which has been performed. The inductive step is similarly proven by Proposition <ref> and Corollary <ref> using Lemma <ref> and Remark <ref>. Let Ω_♢ be a list of modal rewriting rules. * If T↪^λ∘↪^π^+ * S, then T↪^π^+ *∘↪^λ * S by applying the same number of π^+-rules. * If T↪^Ω_♢∘↪^π^+ * S, then T↪^π^+ *∘↪^Ω'_♢∘↪^σ * S for Ω'_♢ a list of modal rewriting rules. The first statement follows by induction on the number of π^+-rules applied and Proposition <ref>. The second statement follows by induction on the length of Ω_♢. The inductive step is shown by cases on the considered modal rewriting rule: the λ-rule case follows by the first statement and the -rule case is shown by Proposition <ref> and Corollary <ref>. Finally, we present the main theorem of the section. If T↪^* S, then T↪^Ω S for Ω a normal rewriting sequence. By induction on the number of rewriting steps that are performed, using Corollaries <ref>, <ref>, <ref> and <ref>. § CONCLUSIONS AND FUTURE WORK We have provided a method for designing tree rewriting systems for different strictly positive logics and, in particular, have re-cast the reflection calculus in this framework. Although not explicitly stated, it easily follows from the presented techniques that a rewriting system for K^+ can be defined by considering only atomic, structural, extensional and π-rules, and we are confident that our framework will find applications in positive fragments of other (poly)modal logics. The use of abstract rewriting systems aids in the analysis of further properties, such as the subformula property and the admissibility of rules, and provides a foundation for computationally efficient implementation. Our work is based on a type-theoretic presentation of trees, which has the dual benefit of allowing rules to be described in a precise and succinct manner and being particularly amenable to formalization in proof assistants. Many of our results have already been implemented in the proof assistant Coq, and our goal is to fully formalize our work. Since the rewriting normalization theorem, and proof-theoretic investigations in general, require checking multiple distinct cases in detail, the benefit of formalization is particularly clear in this type of proof, and indeed the community has been gravitating towards formalized proofs (see e.g. <cit.>). In the framework of RC, this has the added advantage that results often need not only to be true, but also provable in suitable systems of arithmetic, and the latter can be made no more transparent than via formalization. Last but not least, proofs implemented in Coq can be automatically extracted into fully verified algorithms, paving the way to reasoners with the highest degree of reliability attainable by current technology.
http://arxiv.org/abs/2407.12090v1
20240716180006
Impact of star formation models on the growth of galaxies at high redshifts
[ "Cheonsu Kang", "Taysun Kimm", "Daniel Han", "Harley Katz", "Julien Devriendt", "Adrianne Slyz", "Romain Teyssier" ]
astro-ph.GA
[ "astro-ph.GA" ]
Department of Astronomy, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea Department of Astronomy and Astrophysics, University of Chicago, Chicago, Illinois 60637, USA Sub-department of Astrophysics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK Department of Astrophysical Sciences, Princeton University, Peyton Hall, Princeton NJ 08544, USA To understand the impact of star formation models on galaxy evolution, we perform cosmological zoom-in radiation-hydrodynamic simulations of a dwarf dark matter halo with virial mass ∼ 10^9 at z=6. Two different star formation models are compared, a model based on a local gravo-thermo-turbulent condition and a model based on a sink particle algorithm. Using idealized tests of collapsing isothermal spheres and giant molecular clouds with different turbulent structures, we determine the optimal accretion radius to be twice the cell size and the resolution required to achieve reasonable convergence in star formation efficiency to ≲ 0.3 pc for the sink algorithm. As a first study in this series, we use cosmological zoom-in simulations with different spatial resolutions and find that star formation is more bursty in the runs with the sink algorithm, generating stronger outflows than in the runs with the gravo-thermo-turbulent model. The main reason for the increased burstiness is that the gas accretion rates on the sinks are high enough to form stars on very short timescales, leading to more clustered star formation. As a result, the star-forming clumps are disrupted more quickly in the sink run due to more coherent radiation and supernova feedback. The difference in burstiness between the two star formation models becomes even more pronounced when the supernova explosion energy is artificially increased. Our results suggest that improving the modelling of star formation on small, sub-molecular cloud scales can significantly impact the global properties of simulated galaxies. Bursty star formation with sinks Impact of star formation models on the growth of galaxies at high redshifts Cheonsu Kang1E-mail:astro.ckang@gmail.com0009-0003-1180-4979 Taysun Kimm1E-mail:tkimm@yonsei.ac.kr0000-0002-3950-3997 Daniel Han10000-0002-2624-3129 Harley Katz20000-0003-1561-3814 Julien Devriendt30000-0002-8140-0422 Adrianne Slyz3 Romain Teyssier40000-0001-7689-0933 Received XXX; accepted YYY ======================================================================================================================================================================================================================================================================================================================================================= § INTRODUCTION Star formation is the fundamental process that determines the properties of galaxies. It affects the distribution of gas by removing cold gas from star-forming dense clumps and ejecting matter into the interstellar medium (ISM) and intergalactic medium through various forms of stellar feedback. The gas ejected from the galaxy then recollapses and forms stars as it cools, balancing heating and cooling and regulating the growth of galaxies <cit.>. Star formation is known to be a slow process <cit.>. The Schechter form of galaxy mass functions, which differs from the power law predicted for dark matter halos <cit.>, already suggested that star formation is inefficient in both low-mass and high-mass galaxies <cit.>. 
Indeed, by comparing the ratio of stellar mass to DMH mass using the abundance-matching technique <cit.>, it has been shown that the conversion efficiency of gas to stars is generally low on galactic scales <cit.>. Studies based on the rotation curves of dwarf galaxies <cit.> have also reached the same conclusion, suggesting that star formation is better regulated in these regimes. In low-mass systems where the gravitational potential is shallow, the explosion of supernovae (SNe) is thought to suppress star formation <cit.>, while in massive galaxies the accretion energy of supermassive black holes may have driven galactic winds and offset the cooling flow <cit.>. However, it has been a challenge for numerical simulations to reproduce the observed properties of dwarf and/or L_⋆ galaxies <cit.>. This has proven to be difficult especially in the center of galaxies, where gas flows in from all directions, forming too many stars and making the galaxy very compact <cit.>. One culprit is excessive cooling during SN explosions, as the cooling length tends to be underresolved in galactic-scale simulations <cit.>. Several methods have been proposed to overcome this by explicitly incorporating the momentum-conserving phase <cit.>. For example, <cit.> have found that simulations with their mechanical SN feedback suppress star formation in halos with mass ≲ 10^10 better than those with thermal or kinetic SN feedback. Applying the same scheme to the arepo code, <cit.> have shown that the Kennicutt-Schmidt relation <cit.> is indeed better reproduced than with classical thermal or kinetic schemes <cit.>. However, the authors also point out that in dense regions of galaxies, SNe alone cannot regulate star formation as observed <cit.>. Instead, <cit.> argued that hot superbubbles driven by clustered young stars can produce stronger outflows if thermal conduction is properly taken into account. Radiation from massive stars is also thought to be an important source of pressure that can disrupt giant molecular clouds (GMCs) in star-forming galaxies by driving the expansion of HII bubbles <cit.>. Indeed, cosmological simulations have found that star formation can be further suppressed if the radiation feedback from massive stars is included in addition to SN explosions <cit.>. Finally, cosmic rays accelerated by SNe have recently emerged as a potential solution, as they can produce continuous galactic winds without artificially increasing the SN energy or randomly distributing the SN explosion <cit.>. Since these feedback energies come from massive stars, bursty star formation histories (SFHs) can, in principle, help explain the observational properties of galaxies. For example, <cit.> argued that massive outflows can be better explained when star formation is modelled in a bursty way. It can also flatten the central density profile in dwarf galaxies by non-adiabatically changing the orbits of dark matter particles <cit.>, whereas this does not seem to happen in a galaxy with a smooth SFH <cit.>. The detection of UV-bright galaxies at z≳ 10 in the recent JWST survey has challenged the standard cosmological model <cit.>, but using FIRE-2 cosmological zoom-in simulations, <cit.> argued that the abundance of galaxies with M_ UV<-20 is naturally explained by rapidly time-varying SFHs, highlighting the importance of accurately modelling the burstiness of star formation. 
In numerical simulations of galaxy formation, the star formation process is implemented as subgrid physics, assuming a Schmidt law <cit.>: d M_*/dt = M_ gas/, where M_* is the stellar mass, M_ gas is the gas mass, and is the free-fall time. The most uncertain parameter is the star formation efficiency (SFE) per free-fall time (), which is typically a few percent at densities relevant to GMC scales <cit.>. <cit.> showed analytically that varies with the virial parameter and the turbulent Mach number of GMCs. <cit.> and <cit.> further considered the multi-freefall collapse and magnetic fields for star formation, respectively. Comparing high-resolution magnetohydrodynamic simulations with observations, <cit.> argued that interstellar turbulence is the primary source controlling the star formation rate (SFR). More recently, <cit.> showed that the external gravity of the host galaxy can also influence the of star-forming clouds. Indeed, <cit.> simulated an isolated disk galaxy in a 10^12 DMH with both single and multi-freefall collapse models, and found that the observed relationship between SFR surface density and gas surface density is better reproduced with multi-freefall models, indicating that gas inside star-forming clouds collapses on different timescales. Identifying potential star formation sites presents its own set of challenges. Stars are known to form in regions of dense gas, and as such, a density threshold is commonly used as a prerequisite for star formation. This threshold can be used in conjunction with temperature or convergent gas flow criteria <cit.>. In addition to these criteria, <cit.> also introduced dynamical restrictions, requiring the gas to have a cooling time shorter than the dynamical time and to be Jeans unstable. Building upon the previous work, and motivated by the fact that stars form in GMCs, some studies have assumed that star formation takes place in molecular hydrogen-rich gas and have shown that the model reproduces the molecular Schmidt-Kennicutt relation <cit.> or the observed transition from atomic to molecular hydrogen <cit.>. To determine the most reliable way to model star formation, <cit.> simulated two disk galaxies using seven different criteria, including those mentioned above. They showed that the simulated SFRs converge reasonably well when stellar feedback is included, with star formation occurring in self-gravitating regions emerging as the most physically motivated criterion. In a similar vein, <cit.> investigated the impact of multi-freefall star formation on the evolution of a Milky Way-like galaxy and showed that the detailed structure of the ISM depends on the interplay between star formation and feedback models. <cit.> further argued that pressure support from magnetic fields can make the Schmidt-Kennicutt relation shallower, highlighting the importance of magnetic field dynamics in regulating star formation rates within galaxies. To mitigate the uncertainties associated with and star formation criteria, it thus seems natural to attempt to model the formation and growth of star particles directly. In simulations where the ISM structure is resolved in some detail, this can be done by seeding density peaks within gas clumps and allowing these seeds to grow by subsequent accretion of inflowing gas. 
For example, using such a model, <cit.> simulated an M51-like galaxy at sub-parsec resolution and showed that the overall star formation rate is mostly controlled by the self-regulated properties of the ISM rather than by the large-scale gas flows triggered by the interaction with a companion galaxy. More recently, <cit.> simulated nuclear rings with inflowing gas using sink particles with a uniform spatial resolution of 4 pc. They found that the depletion time, which is inversely proportional to the SFR, agrees well with the predictions of self-regulation theory <cit.>. This illustrates the usefulness of this type of approach to explore the global impact of star formation in controlled experiments. In this study, we further quantify such an impact on SFHs by simulating a dwarf galaxy at high redshift with a similar sink particle algorithm for forming stars and comparing its results to a subgrid model based on gravo-thermodynamic gas properties. This paper is organized as follows. In Section 2, we describe the initial conditions, numerical methods, input physics, and the two star formation models used in our simulations. In Section 3 we test the sink particle algorithm against singular isothermal sphere collapse as well as turbulent GMC simulations. The main results of the radiative hydrodynamics simulations of a dwarf galaxy, which focus on comparing the burstiness and feedback strength in the two star formation models, are presented in Section 4. In Section 5 we discuss the effect of Type II SN explosion energy and spatial resolution on our conclusions. Finally, we summarize our results in Section 6. § SIMULATIONS To study the relationship between burstiness of SFH and strength of SN feedback, we perform both idealized and cosmological simulations using ramses-rt <cit.>. The Euler equations are solved using the Harten-Lax-van Leer Contact (HLLC) scheme <cit.>. The Poisson equation is solved using a multigrid method <cit.>. The radiative transfer calculation is performed using a fluid of photon approach with the M1 closure <cit.> and the global Lax-Friedrich scheme. A reduced speed of light approximation is employed where c, the speed of light, is set to a hundredth of its true value. A Courant factor of 0.7 is adopted. Photoionization, direct momentum transfer from photon absorption and non-thermal pressure due to trapped infrared (IR) photons are modelled on the basis of eight photon groups ([0.1, 1.0), [1.0, 5.6), [5.6, 11.2), [11.2, 13.6), [13.6, 15.2), [15.2, 24.59), [24.59, 54.42), [54.42, ∞) in units of eV), as described in Table 2 of <cit.>. These are coupled with the non-equilibrium chemistry of seven primordial species (H_2, HI, HII, HeI, HeII, HeIII, e^-), allowing more accurate calculations of cooling rates <cit.>. A redshift-dependent uniform UV background radiation from <cit.> is included in the calculation of the photoionization and heating with a self-shielding factor exp(-/0.01 ), following <cit.>. At T>10^4 K, cooling from metal species is considered by interpolating predefined tables as a function of gas density and temperature <cit.>. For lower temperatures, metal cooling due to fine structure transitions is modelled according to <cit.>. For dust, we assume a constant dust-to-metal ratio of 0.4 at temperature below 10^5 K. 
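For reference, the radiation bookkeeping described above fits in a few configuration-style lines of Python. The energy bin edges are the eight photon groups listed in the text and the attenuation of the UV background follows the quoted self-shielding factor exp(-n_H/0.01 cm^-3); the variable names are ours and this is only a summary of the setup, not the simulation code.

import math

# Photon group boundaries in eV (eight groups, the last one open-ended), as listed above.
PHOTON_GROUP_EDGES_EV = [0.1, 1.0, 5.6, 11.2, 13.6, 15.2, 24.59, 54.42, float("inf")]

REDUCED_LIGHT_SPEED_FRACTION = 0.01   # the speed of light is reduced to c/100
COURANT_FACTOR = 0.7

def uvb_self_shielding(n_H_cm3):
    """Attenuation applied to the UV background photoionization and heating rates."""
    return math.exp(-n_H_cm3 / 0.01)

print(uvb_self_shielding(0.05))   # dense gas is strongly self-shielded (~0.7%)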
§.§ Stellar feedback In our cosmological simulations we consider several forms of stellar feedback: photo-ionization heating and direct radiation pressure from massive stars, non-thermal IR pressure, Type II SN explosions, and mass-dependent explosions of Pop III stars. Interested readers are referred to <cit.> for details, but we describe the main features of the models here for completeness sake. We inject the photon flux from each star particle as a function of its age, metallicity and mass. For Pop III stars, we take the lifetimes and photon production rates from <cit.>. For Pop II stars, we employ the Binary Population and Spectral Synthesis model <cit.> with a maximum stellar mass of 100 and a Kroupa initial mass function <cit.>. Among the eight photon groups, Lyman continuum photons (λ<912 Å) ionize neutral hydrogen, heating the gas and forming HII bubbles around the stars. The absorption of these photons by the neutral ISM, or the absorption of non-ionizing radiation by dust can also transfer momentum to their surroundings. In addition, we include the non-thermal pressure from trapped IR photons, although its effect is negligible due to the low dust optical depth in our early low-mass galaxies. Type II SN feedback is implemented using a mechanical feedback scheme <cit.>. Assuming that stars more massive than 8 explode as core-collapse SNe, a simple stellar population of mass 100 would host one SN explosion. This means that each star particle of mass 500 explodes five times, imparting radial momentum to 48 neighboring cells. The final amount of radial momentum depends on the gas properties of the SN site <cit.>, characterized as P_ SN=2.5×10^5 E^16/17_51 n^-2/17_ H Z'^-0.14, where E_51 is the Type II SN energy in units of 10^51 erg, n_ H is the hydrogen number density in units of , and Z'= max[0.01, Z/Z_⊙] is the gas phase metallicity, normalized to the solar value (Z_⊙=0.02). The fiducial value of E_51 is 1, and we run additional simulations with E_51=5 to study how more energetic explosions affect the evolution of galaxies in the simulations with two different star formation models. The chemical yield of the SN ejecta is assumed to be η_ Z=0.075. For Pop III stars, we release different energies and metal yields based on the progenitor mass. This is done by categorizing Pop III stars into four groups: normal Type II SN <cit.>, hypernova <cit.>, or pair instability SN <cit.>. We assume that neither energy nor metals are released from Pop III stars if their mass does not fall in any of these three mass ranges. §.§ Star formation models In this section we describe the differences between the two different star formation models, a subgrid scheme based on local gravo-thermo-turbulent (GTT) conditions <cit.> and a more direct model based on a sink particle algorithm <cit.>. To minimize the potential differences due to random sampling of Pop III stellar masses, we apply the same Pop III formation scheme in cosmological simulations as in <cit.>. The formation criteria for Pop III are essentially identical to those of the GTT model, but the stellar metallicity is assumed to be lower than Z_ star≤ 2×10^-8 (=10^-6 Z_⊙) <cit.>. §.§.§ A subgrid model based on gravo-thermo-turbulent conditions The GTT model assumes that the central cell in which a star particle is to form, together with its six neighbor cells, forms a spherical cloud. Based on a Schmidt law, the amount of newly formed stars is computed as in Equation <ref>. 
The main difference from previous simple models used in galaxy formation simulations is the use of the variable , which is calculated using the multi-freefall star formation theory <cit.>, as =ε_ ecc/2ϕ_t exp (3/8σ^2_ s) [2- erfc (σ^2_ s-s_ crit/√(2σ^2_ s))]. Here ε_ ecc is the maximum fraction of gas that can accrete without being blown away by protostellar feedback, which is set to 0.5. 1/ϕ_t ≈ 0.57 accounts for the uncertainty in the free-fall time measurement <cit.>, and is chosen as the best-fit value from <cit.>. The remaining parameters are σ^2_ s=ln(1+b^2ℳ^2), s_ crit= ln(0.067θ^-2α_ virℳ^2), where σ_ s is the standard deviation of the logarithmic density contrast, assuming that the probability distribution function (PDF) of the density of a star-forming cloud can be well described by a log-normal distribution, and s_ crit is the critical (minimum) density required for the gas to collapse. The turbulence parameter b accounts for the mixing ratio of different turbulence modes. We use a value of 0.4 for b, which corresponds to an approximate 3:1 mixture of solenoidal (divergence-free) and compressive (curl-free) modes. θ=0.33 is a proportionality constant between the diameter of a spherical cloud and the post-shock thickness <cit.>. then simply becomes a function of two parameters representing the local gas properties, the turbulent Mach number ℳ (≡σ_ gas/c_ s) and the local virial parameter α_ vir, as α_ vir=5(σ^2_ gas+c^2_s)/π G ρ=5(ℳ^2+1)/π^2(λ_ J/)^2, where σ_ gas is the gas turbulent velocity, c_ s, ρ and λ_ J are the sound speed, the total (gas plus star and dark matter) density and the thermal Jeans length of the central cell, respectively. The relationship between and α_ vir at seven different ℳ is shown in Figure <ref>. Note that if a cloud is highly turbulent and strongly bound, can be greater than 1. However, we find that in the runs ranges from 0.05 to 0.37, with a clear peak around 0.3, regardless of the spatial resolution. For a cloud to collapse and form stars, the gravitational force must be stronger than the thermal plus turbulent pressure gradient. Based on this argument, star formation is triggered when the local cell size is larger than the turbulent Jeans length <cit.>, as λ_ J,turb=πσ^2_ gas+√(π^2 σ^4_ gas + 36π G ρ c^2_s )/6G ρ≤. This condition can also be expressed with parameters representing local gas properties, as α_ vir≤15/π^2ℳ^2+1/ℳ^2+3, or λ_ J/≤√(3/ℳ^2+3). Note that for strong (weak) turbulence, this effectively reduces the range over which we apply the GTT subgrid model to α_ vir<1.52 (α_ vir<0.5), as indicated by the dotted lines in Figure <ref>. By construction, coarse cells cannot satisfy Equation <ref> and <ref>, because refinement is triggered when the thermal Jeans length is less than 8 and therefore star formation can only occur in maximally refined cells. We also require a convergent gas flow condition towards the central cell. In addition, we examine the Equation <ref> criterion only for cells with densities greater than ≥ 100 to reduce the computational cost of searching for neighboring cells of potential star-forming sites. In practice, stars form at much higher densities, and Equation <ref> imposes a stricter criterion than others, thereby controlling the overall star formation process. In the cosmological zoom-in simulations presented in this work (Section <ref>), the minimum mass of the star particles is chosen so that each particle hosts five SNe events, i.e., m_*, min=500. 
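To make the GTT prescription concrete, the sketch below evaluates the multi-freefall efficiency and the turbulent-Jeans star formation condition from a cell's virial parameter and Mach number, using the parameter values quoted above (ε_ecc = 0.5, 1/φ_t ≈ 0.57, b = 0.4, θ = 0.33); the Schmidt law then gives dM_*/dt = ε_ff M_gas/t_ff for a flagged cell. Function and variable names are ours, and this illustrates the published formulas rather than reproducing the simulation code.

import math

EPS_ECC = 0.5          # maximum fraction accreted despite protostellar feedback
INV_PHI_T = 0.57       # 1/phi_t, best-fit free-fall time correction
B_TURB = 0.4           # ~3:1 solenoidal-to-compressive forcing mixture
THETA = 0.33           # post-shock thickness proportionality constant

def eps_ff(alpha_vir, mach):
    """Multi-freefall star formation efficiency per free-fall time."""
    sigma_s2 = math.log(1.0 + B_TURB**2 * mach**2)            # log-density variance
    s_crit = math.log(0.067 * THETA**-2 * alpha_vir * mach**2)
    return (0.5 * EPS_ECC * INV_PHI_T * math.exp(3.0 * sigma_s2 / 8.0)
            * (2.0 - math.erfc((sigma_s2 - s_crit) / math.sqrt(2.0 * sigma_s2))))

def forms_stars(alpha_vir, mach):
    """Turbulent Jeans criterion lambda_J,turb <= dx, expressed via alpha_vir."""
    return alpha_vir <= 15.0 / math.pi**2 * (mach**2 + 1.0) / (mach**2 + 3.0)

# A strongly bound, mildly supersonic cell forms stars with eps_ff of order 0.2.
print(forms_stars(0.5, 5.0), round(eps_ff(0.5, 5.0), 2))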
In principle, it is possible to increase the stellar mass resolution even further, but we defer this improvement to a later investigation, because in this case the initial mass function must be properly sampled to account for the continuum emission from individual stars <cit.>, as done in e.g., <cit.>. We allow for the formation of more massive star particles, with masses integer multiples of m_*, min, by sampling a Poisson distribution <cit.>. The probability of creating a more massive star increases with increasing gas mass and . Some Pop II stars in our simulations with the lowest spatial resolution are as massive as 10 m_*, min, but ∼ 85% of the total stellar mass is composed of minimum-mass star particles and this fraction increases with resolution. §.§.§ Star formation model based on sink particles Sink particles were first introduced by <cit.> to study accretion onto proto-stellar systems in a Lagrangian code. The idea was to replace crowded gas particles in high-density regions, which cause time steps to shorten, with a single non-gaseous particle representing a proto-star. The low-mass seed star then accretes surrounding gas, in contrast to most subgrid models for star formation where the star particle mass is fixed at formation. The formation criteria for such sink particles are not significantly different from other subgrid star formation models, as both involve forming within dense clumps. A widely used sink formation criterion is the Truelove condition <cit.>. <cit.> suggested that the thermal Jeans length should be resolved with at least four cells in gravitationally collapsing regions, otherwise artificial fragmentation may occur. This condition gives the density threshold ρ_ Tr=π c^2_s/16G Δ x_ min^2, where is the size of the finest cell. If =1 pc, the value of ρ_ Tr for the typical ISM gas with a temperature of 30 K and a mean molecular weight of 2.273 is about 5.7×10^-22 g cm^-3, or ≈ 260. In contrast, <cit.> used the density profile of a self-gravitating isothermal sphere <cit.> ρ_ LP(r)=8.86 c^2_s/4 π G r^2 and set the threshold at r=0.5. The corresponding density threshold is ρ_ LP(0.5)=8.86 c^2_s/π G , which is ≈ 14.4 times higher than ρ_ Tr. Once a sink is formed, its growth depends on how the accretion rate is calculated. There exists an analytic solution for spherical accretion onto a point mass, first described by <cit.> and <cit.>, as[Strictly speaking there are two analytic solutions: one for the Bondi case with the accreting point mass at rest and the other for the Hoyle-Lyttleton case with the accreting mass moving highly supersonically with respect to the surrounding gas <cit.>. Equation <ref> is an educated interpolation between these two regimes which is not always accurate in the transonic regime <cit.>.] Ṁ_ acc=4 πρ_∞ r^2_ BH√(c^2_∞+v^2_∞), where ρ_∞, c_∞, v_∞ are the gas density, the sound speed, and the relative velocity to the ambient medium far from the accreting mass, respectively. The Bondi-Hoyle radius r_ BH=GM_*/(c^2_∞+v^2_∞) is the maximal impact parameter gas can have in order to be accreted. Bondi-Hoyle accretion was used in the pioneer study of <cit.> and later implemented in an AMR code by <cit.>. However, as pointed out by these authors, it is not trivial to measure ρ_∞, c_∞, and v_∞ in the simulation, which motivates the search for alternative methods to model the accretion rates. For example, <cit.> employed a scheme based on a density threshold that allows mass growth when the gas density of cells within the accretion zone exceeds ρ_ Tr. 
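The density thresholds and the Bondi-Hoyle rate introduced above can be collected into a short helper sketch (CGS units; the sound speed is passed in by the caller so that the isothermal or adiabatic choice is left open; names are ours).

import math

G = 6.674e-8   # cm^3 g^-1 s^-2

def rho_truelove(c_s, dx):
    """Truelove et al. threshold: thermal Jeans length resolved by 4 cells."""
    return math.pi * c_s**2 / (16.0 * G * dx**2)

def rho_larson_penston(c_s, r):
    """Larson-Penston density profile of a self-gravitating isothermal sphere."""
    return 8.86 * c_s**2 / (4.0 * math.pi * G * r**2)

def sink_formation_threshold(c_s, dx):
    """rho_LP evaluated at r = 0.5 dx, ~14.4 times larger than rho_truelove."""
    return rho_larson_penston(c_s, 0.5 * dx)

def bondi_hoyle_rate(M_sink, rho_inf, c_inf, v_inf):
    """Spherical accretion rate onto a point mass moving at v_inf through the gas."""
    v2 = c_inf**2 + v_inf**2
    r_BH = G * M_sink / v2
    return 4.0 * math.pi * rho_inf * r_BH**2 * math.sqrt(v2)

pc = 3.086e18
print(sink_formation_threshold(3.3e4, 1.0 * pc) / rho_truelove(3.3e4, 1.0 * pc))  # ~14.4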
The mass accreted onto a sink particle is calculated such that the remaining gas density inside the accretion zone equals ρ_ Tr. This approach ensures that the average density surrounding a sink particle remains similar to ρ_ Tr over a long period of time. While this method is simple and effective in preventing artificial fragmentation, it does not take into account the motion of the gas, which can be as important as the gas density in determining the accretion rate. <cit.> introduced a flux accretion scheme that takes into account the velocity of the infalling gas when computing the accretion rate. In this scheme, the accretion rate is set to the inflowing mass flux at the boundary of the accretion zone. <cit.> modified this scheme by replacing the gas velocity with its relative velocity with respect to the sink particle to correct for the motion of this latter. Using the divergence theorem, the accretion rate in this case can be written as Ṁ_ acc = ∮ρ_ gas (v⃗_ gas-v⃗_ sink) ·d⃗A⃗ = ∫ρ_ gas▽⃗· (v⃗_ gas-v⃗_ sink) dV, where ρ_ gas and v⃗_ gas are the density and velocity of the gas within the accretion zone, and v⃗_ sink is the velocity of the sink particle. The first and second integrals represent the closed surface and volume integrals, respectively, over the accretion zone. The three different accretion schemes are tested against the analytic solution at the center of a collapsing isothermal sphere, as given by <cit.>, and all of them are found to reproduce the analytic solution well <cit.>. In our simulations, a sink particle is formed at the density peak of a gas clump. We first identify gas clumps using isodensity contours <cit.>, based on clumpfind from <cit.>. A sink particle is then created if the following set of conditions are met: - the peak of a clump is at the maximum AMR level and has a density higher than a given threshold, - a clump is virialized, - there is a net gas inflow (▽⃗·v⃗ < 0), and - there is no pre-existing sink particle located closer than one accretion radius. For the density threshold, we adopt the <cit.> criterion, ρ_ th=ρ_ LP(0.5). We keep ρ_ th constant in isothermal turbulent GMC simulations (Section <ref>) where the typical sound speed can be estimated for a given temperature. In contrast, it is unlikely that all the clumps of gas in a galaxy have the same temperature, and so we calculate ρ_ th on a cell-by-cell basis in cosmological zoom-in simulations (Section <ref>). For the accretion radius, we take =2, as this provides a reasonable compromise between accuracy for the accretion rate on a singular isothermal sphere and the ability to capture local structures (see Section <ref>). Each sink particle is surrounded by 257 cloud particles of equal mass. They are uniformly distributed throughout the accretion zone, positioned every 0.5 along the three axes. Cells containing one or more cloud particles are used to calculate the physical quantities of a parent sink particle, including accretion rate and velocity. This ensures that there are 54 maximally refined cells within the accretion zone. Each cell has a different weight depending on the number of cloud particles it contains. For comparison, the number of cloud particles for = and =4 is 33 and 2109, respectively. To calculate the accretion rate, we use both the flux-based and the Bondi-Hoyle accretion schemes. The flux accretion scheme assumes that the infalling gas at the boundary of the accretion zone is directly absorbed by the sink particle. 
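A discrete version of the flux accretion integral above can be written as a sum over the cell faces that tile the boundary of the accretion zone. The sketch below is our own minimal discretization (per-face tuples, sign chosen so that net inflow is positive) and deliberately omits the density-dependent correction factor, matching the choice described in the text.

import numpy as np

def flux_accretion_rate(boundary_faces, v_sink):
    """
    Discrete surface-integral estimate of the accretion rate onto a sink.

    boundary_faces: iterable of (rho, v_gas, n_hat, face_area) for the cell faces
    forming the boundary of the accretion zone, with n_hat the outward unit normal.
    Gas flowing inward (v_rel anti-parallel to n_hat) contributes positively.
    """
    mdot = 0.0
    for rho, v_gas, n_hat, area in boundary_faces:
        v_rel = np.asarray(v_gas) - np.asarray(v_sink)
        flux = rho * np.dot(v_rel, n_hat) * area   # outward mass flux through the face
        mdot -= flux                               # outward flux removes mass from the sink
    return mdot

# Example: a single face of unit area with gas flowing inward at the sound speed.
print(flux_accretion_rate([(1.0e-21, [-3.0e4, 0, 0], [1.0, 0, 0], 1.0)], [0, 0, 0]))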
Thus the flux accretion scheme is activated when >, where =GM_ sink/2c^2_ s is the sonic radius, and M_ sink is the sink particle mass. <cit.> introduced a correction factor that slightly modifies the flux accretion rate depending on the mean gas density within the accretion zone in order to keep the gas density similar to the threshold density for a long time and to avoid artificial fragmentation. However, we do not use it in this work because the corrected form slightly underestimates the accretion rate in the collapsing isothermal sphere test. For <, we use the Bondi-Hoyle accretion scheme. The accuracy of the two schemes is discussed further in Section <ref>. The seed mass of the sink particle is set to 0.1, which is small enough that the sudden conversion of gas into a particle does not affect the local gas dynamics. Note that the hydrodynamic time step is also limited to prevent the sink mass from doubling in a single fine time step. To avoid having too many sink particles sharing an accretion zone, we allow sink particles to merge if their distances to one another becomes smaller than 2. The formation of a star particle is triggered when a sink becomes more massive than a given threshold <cit.>. We set this mass threshold to 100 in turbulent GMCs (Section <ref>) or 500 in cosmological zoom-in simulations (Section <ref>). A new star particle is created at the position of its parent sink particle, and the corresponding mass is subtracted from the sink. If there is a continuous supply of gas to the sink, multiple star particles can form from a single sink. § TESTING THE SINK MODEL Before running cosmological zoom-in simulations, we perform two different types of idealized simulations: gravitational collapse of singular isothermal spheres and turbulent GMCs. First, we aim to find the parameter set for the sink particle algorithm that can most accurately model the gravitational collapse. This is done by simulating a series of singular isothermal spheres, changing the accretion scheme and the accretion radius. We then run turbulent GMC simulations using the result from these former simulations to study the effect of resolution on stellar mass. §.§ Gravitational collapse of singular isothermal spheres §.§.§ Simulation set-up Assuming an initially static singular isothermal sphere, the density and radial velocity profiles at t=0 can be written as ρ_ SIS(r, t=0)=A c^2_ s/4π G r^2, u_ SIS(r, t=0)=0, where A is the overdensity constant. The sphere is in unstable hydrostatic equilibrium when A=2. When A>2, the sphere collapses globally because gravity is stronger than the thermal pressure gradient everywhere inside the sphere. As it collapses, the density and velocity profiles change with time, but the central accretion rate Ṁ (r→ 0) is a constant for a given A <cit.>. This property allows us to test the accuracy of different accretion schemes. We use ramses <cit.> to compare the mass growth of a sink particle with the analytic solution given by <cit.>. We place a sink particle at the center of the sphere in a (32 pc)^3 box, with three different A (2.2, 4, 90) and three different accretion radii (, 2, 4). The different values of A represent gas clouds with initial average densities of =10–103 or gas mass of 36–374 within a radius of 3 pc. 
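The initial configuration of the collapse test can be generated directly from the profile just quoted. The sketch below evaluates the t = 0 singular-isothermal-sphere density and the enclosed mass obtained by integrating it; the numerical value of the sound speed is an assumption for illustration, and the analytic central accretion rate (a constant proportional to c_s^3/G with an A-dependent prefactor, tabulated in the reference cited in the text) is not reproduced here.

import math

G = 6.674e-8          # cm^3 g^-1 s^-2
pc = 3.086e18         # cm

def rho_sis(r, A, c_s):
    """Density of the singular isothermal sphere at t = 0."""
    return A * c_s**2 / (4.0 * math.pi * G * r**2)

def enclosed_mass(r, A, c_s):
    """M(<r) = A c_s^2 r / G, from integrating rho_sis over the sphere."""
    return A * c_s**2 * r / G

# Illustration only: an assumed isothermal sound speed of ~0.2 km/s for 10 K gas.
c_s = 2.0e4
for A in (2.2, 4.0, 90.0):
    print(A, rho_sis(1.0 * pc, A, c_s), enclosed_mass(3.0 * pc, A, c_s))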
The simulations are carried out with two different accretion schemes: Bondi-Hoyle accretion and flux accretion[We do not test the accretion scheme based on a density threshold because the mean gas density inside the accretion zone is already less than ρ_ th= ρ_ LP(0.5) for A=2.2, 4 and thus the accretion rate in such a scheme would be zero by construction, which is incorrect.]. The gas temperature is set to 10 K. The initial density and velocity profiles are set to follow the analytic solution at t=9 Myr <cit.>. The epoch is chosen arbitrarily, and the initial sink mass is set accordingly to Ṁ_ ana t, where Ṁ_ ana is the analytic accretion rate, which depends only on A. The cells within a radius of 6.4 pc are maximally refined with 0.0625 pc, and the profiles are truncated at r=6 pc. The gas density outside the sphere is set to ρ_ SIS(r=6 pc, t=9 Myr), while its radial velocity is set to u_ SIS(r=6 pc, t=9 Myr). We stop the simulations at t_ end=M(r<3 pc)/Ṁ_ ana, so that the accretion at the center is not affected by the gas outside the sphere. Note that this timescale differs from the free-fall time in the sense that while the free-fall time of a uniform cloud is independent of radius, t_ end of an isothermal sphere increases with size, since the central accretion rate is fixed. §.§.§ Determining the accretion method Figure <ref> shows relative errors on the sink mass in the collapsing isothermal sphere simulation runs with two different accretion schemes, compared to the analytic solution of <cit.>. From top to bottom, the relative errors are computed for different values of A, while the different colors and line styles correspond to different sizes for the accretion zone and the accretion schemes. Several interesting features appear in this plot. First, the analytic accretion rates are well reproduced in all cases with different A and . Among our 18 test simulations, the one with the largest error is 97% accurate at the end of the simulation. Second, the computed accretion rates are underestimated compared to the true values. This happens because the analytic solution for the central accretion rate is determined by the asymptotic behavior at r→ 0, while the radius at which the mass flux is computed is larger. For a flux accretion scheme, the accretion rate must be constant at any radius if the product of the density and velocity profiles is proportional to r^-2. This is true for the very inner part of the sphere, where the density and velocity profiles can be approximated as ρ∝ r^-1.5, u∝ r^-0.5. However, this scaling breaks down at larger radii, where the slope of the two profiles changes to ρ∝ r^-2, u∝ r^-1. Therefore, as becomes larger, the product of the two analytic profiles diverges slightly from r^-2, leading to an underestimation of the accretion rate. For the Bondi-Hoyle accretion scheme, where the accretion rate is proportional to both the kernel-weighted mean density within <cit.> and the sink plus gas mass, the trend is somewhat different, but the accretion rates are still slightly underestimated. In this scheme, as becomes larger, the mean gas density decreases but the total mass within increases, and we find that the two effects cancel out when =1–2, while the accreted mass becomes less accurate with =4. These experiments suggest that a smaller is better for accurately modelling accretion rates. 
However, <cit.> pointed out that too small accretion radii may not be able to represent the physical properties of the gas flow, especially if local gas structures are underresolved. Following their argument, we adopt =2 as a compromise between accuracy and the ability to account for the local environment. Figure <ref> also shows that the accretion rate based on the Bondi-Hoyle scheme is more accurate than that of the flux accretion method. This is partly due to the fact that we use the total mass of the sink particle and the enclosed gas within the accretion zone, rather than the sink mass alone, to determine the Bondi-Hoyle radius. <cit.> further showed that even in the regime where ≈ 0.1, the spherical Bondi accretion is well reproduced by the Bondi-Hoyle accretion scheme. On the other hand, by running AU-scale collapsing sphere simulations with a more realistic outcome, where sink particles form inside an accretion disk, these authors also demonstrated that the density and velocity profiles at the boundary of the accretion zone change violently when the Bondi-Hoyle accretion scheme is used, while the flux accretion method yields smoother profiles. Therefore, following <cit.>, we adopt the flux accretion scheme when >, while Bondi-Hoyle accretion is used for the opposite condition. §.§ Turbulent GMCs In this section, we investigate what resolution is required for the sink-based model to reasonably predict stellar mass growth in a turbulent medium, and estimate the model uncertainty. To do this, we use isothermal GMC simulations with varying levels of turbulence and compare the formation and growth of sink particles at different resolutions. §.§.§ Simulation set-up The simulated cloud has an initial uniform density of ≈ 186, with a total mass of 2×10^5 and a radius of 20 pc, respectively. These values are chosen to mimic the observed properties of typical molecular clouds in the Milky Way <cit.>. The cloud is located at the center of a (128 pc)^3 box with outflowing boundary conditions. The cloud has a uniform temperature of 30 K and a metallicity of 0.1 Z_⊙, while the ambient gas temperature outside the cloud is set to 10^4 K to maintain pressure equilibrium with the cloud. The free-fall time and thermal Jeans length of the cloud are ≈ 3.3 Myr and λ_ J≈ 4.7 pc, respectively. The simulation box is divided into 32^3 root cells, which are further refined if the local thermal Jeans length is not resolved by 32 cells. The maximum resolution is 1 pc for the lowest resolution run and 0.0625 pc for the highest resolution case. Turbulent velocity fields are driven without gravity for the first 0.2 so that the Kolmogorov spectrum develops throughout the box. Self-gravity is then activated while the turbulence is continuously driven. No stellar feedback is included. We run a total of six simulations with three different random seeds for the turbulence driving and two different turbulence strengths. These six clouds are labeled S1, S2, S3 (strong turbulence), and W1, W2, W3 (weak turbulence), where the same number indicates the same random seed. The density threshold for sink formation is ρ_ th=ρ_ LP≈8.15×10^-21(/1 pc)^-2 g cm^-3, or ≈3730(/1 pc)^-2 in hydrogen number density. Based on the results of Section <ref>, the accretion radius is set to =2. The early evolution of the turbulent velocity and the virial parameter of these six clouds is shown in Figure <ref>. The turbulent Mach number ℳ ranges from 7–10. 
Although the virial parameter exceeds 1 and continues to increase even after gravity is turned on, stars form at the local density maxima where the gas structures are locally gravitationally bound. This is illustrated in more detail in Figure <ref> where the formation of sink and star particles is plotted against the hydrogen column density of two clouds, S1 and W1. Before gravity is turned on (t=0.2), the two clouds do not fragment and retain their initial spherical shape with some density contrast due to turbulence. Over time, clumpy structures develop locally, forming sinks and stars, while unbound gas is distributed throughout the box following the net turbulent velocity field. Due to the same random seed for the turbulence driving, the overall structure of the two clouds is similar, but the W1 cloud is more compact, gravitationally bound and thus forms more stars. By t=3, about three times more stars are created in W1 than in S1. §.§.§ Testing the resolution convergence Figure <ref> shows the total stellar mass divided by the initial cloud mass (i.e., the total SFE) in the six different GMCs for five different AMR resolutions. Note that the simulations cover a wide range of the SFE, from ≈ 4% (S3) to ≈ 60% (W2), depending on turbulence strength and random seed. We find that the total SFEs have converged reasonably well for 0.0625 ≤≤ 0.25 pc resolutions. In the weak turbulence cases (W1–W3), the self-gravity of the local clumps is stronger than the turbulence, and almost all of the infalling gas in the vicinity of each sink is eventually accreted onto the particle. As a result, a small difference in the turbulent structures and accretion rates in the runs with different resolutions does not significantly affect the overall SFE. Similarly, even when the cloud is broken up into smaller structures by strong turbulence (S1–S3), the final stellar mass at 3 remains relatively similar within ∼ 30% in the ≤ 0.25 pc runs, although it becomes somewhat more sensitive to the resolution. In some cases (W1 and W3) convergence appears to be achieved even in low resolution runs with ∼ 1 pc, but the first star particles are formed later than in the corresponding higher resolution runs by more than ∼ 1. This could have led to dramatically different SFHs if stellar feedback had been included. To understand how convergence is achieved, we first examine the formation of sink particles. In our simulations, clumps are defined at local density maxima with densities greater than 0.1ρ_ th, and sinks form within these clumps when their peak density increases by an order of magnitude, while maintaining a continuous net inflow and the entire clump remains virialized. To see how sink formation is affected by resolution, we show in Figure <ref> the cumulative distribution function of peak densities for clumps from S1–S3, at the epoch of sink formation. There is a clear trend for sink particles to form at densities closer to ρ_ th in the higher resolution runs. For example, 50% of the total sink particles form at densities lower than ≈ 1.33ρ_ th in the 0.0625 pc runs, while the clump peak density must increase to ∼ 10 ρ_ th to form half of the sink particles in the 1 pc runs. This means that resolved clumps are already gravitationally bound or close at ρ_ th, and thus there is only a small difference in the formation time of the first sink. In contrast, the gas clumps in the 1 pc runs are still super-virial even at densities several times higher than ρ_ th, delaying the formation of the sink. 
At the intermediate resolution (0.5 pc), only in S1 are the formation time of the first sink and the SFHs found to be similar to those in the corresponding higher resolution runs. In S2 and S3 (still at 0.5 pc resolution), a significant fraction (∼ 20%) of the total number of sink particles form at densities higher than ∼ 10ρ_ th. These simple experiments suggest that a resolution of 0.25 pc is desirable to resolve the virialized, small scale structures in turbulent GMCs. A lower resolution (0.5 pc) can provide a reasonable estimate of the stellar mass in some cases, but the gravitational binding of gas clumps will in general be more sensitive to the specific properties of turbulence. As sink particles form and accrete gas in local dense clumps, we also investigate how small scale structures are affected by spatial resolution using Fourier analysis. We first generate a two-dimensional gas column density map for each simulation output by projecting along the z axis, as illustrated in Figure <ref>. We then perform a Fourier transform of these gas maps to obtain the power spectrum as a function of wave number. To focus on the small scale structures relevant to star formation, we remove the signal from scales larger than λ=5 pc, which is the typical clump size in the 1 pc resolution runs, and reconstruct the map by performing an inverse Fourier transform of the small scale signals only. The ratio of the total mass of the regenerated gas map to the initial cloud mass is shown in Figure <ref>. Roughly speaking, 40% of the gas mass in these clouds is concentrated at λ<5 pc until the dense gas is dispersed by strong turbulence at t/≳ 2. Note that these fractions are at least twice as large as the total star formation efficiencies in S1–S3, and thus include some non star-forming gas. Figure <ref> shows that gas mass fractions in the three high resolution runs are in good agreement. By contrast, in all 1 pc resolution runs the gas fraction is initially smaller and then becomes too large at later times (t/∼ 2). This suggests that the clumps are slowly collapsing at lower resolutions while in higher resolution runs where the mass fraction does not increase significantly, sink particles accrete gas more rapidly, removing mass from the dense regions. Taken together with Figure <ref>, we conclude that achieving a resolution of at least 0.25 pc is necessary to resolve the small scale structures feeding sink particles. It is interesting to compare this scale length with the sonic length of the clouds. The theory of supersonic turbulence suggests that the sonic length (l_ s) can be approximated as l_ s = l_0 (σ_l_0/c_ s)^-2, where σ_l_0 is the velocity dispersion measured on the scale of the cloud diameter (l_0) <cit.>. On scales larger than l_ s, supersonic compressive turbulence leads to rapidly varying density fluctuations in space, which in principle must be resolved in simulations to accurately model star formation. To this end, we compute the sonic length in our runs, using the initial diameter of the cloud (40 pc), the mean gas temperature, and the 3D velocity dispersion within the cloud. We find that l_ s is about 0.2–0.3 pc, indicating that the star-forming structures begin to be resolved in the run with =0.25 pc, consistent with our interpretation from Figure <ref>. Note that l_ s in our runs is larger than the typical sonic length of GMCs in the Milky Way (l_ s∼0.1 pc) <cit.>, due to our clouds having a larger temperature (∼ 100 K instead of ∼ 10 K). 
We attribute this difference to both an underestimate of the cooling channels in our reduced chemical networks and the low overall metallicity we assume for our clouds. Follow-up simulations, which we plan to perform in the near future, will include more detailed chemistry <cit.> to better capture the turbulent structure of star-forming clouds. Finally, we compare our results with previous findings. <cit.> showed that the thermal Jeans length should be resolved by at least 4 cells to avoid spurious fragmentation in a collapsing cloud. <cit.> further argued, by simulating a magnetized gas cloud at different resolutions, that the thermal Jeans length should be resolved by 32 cells to obtain convergence of turbulent energies. Since the thermal Jeans length of the cloud in our simulations is λ_ J≈ 4.7 pc, these resolution requirements correspond to ≲ 1.18 pc and ≲ 0.15 pc, respectively. Our results suggest that an intermediate value of 20 cells per thermal Jeans length is enough to model star formation in turbulent clouds in hydrodynamics simulations. This is not surprising, given that our clouds are more complex than those studied in <cit.> but we ignore magnetic fields. In their MHD simulations of a mini dark matter halo of ∼ 10^6, <cit.> claimed that as many as 64 cells per thermal Jeans length are needed to accurately capture the small scale dynamo and hydrodynamic properties of Pop III star-forming clouds. We further discuss resolution effects on star formation histories in a cosmological environment in the next section. § RESULTS In this section, we present results from cosmological zoom-in simulations performed using the two star formation models previously described. We assess whether bursty star formation can enhance feedback strength and regulate star formation. We quantify burstiness for each model and compare feedback strengths by analyzing when and where SN explosions occur and their collective effect on star-forming gas clumps and galactic outflows. §.§ Cosmological initial conditions The initial conditions for the simulations are generated using MUSIC <cit.>, with cosmological parameters (Ω_ m=0.3111, Ω_Λ=0.6889, Ω_ b=0.04897, H_0=67.66 Mpc^-1, n_ s=0.9665, σ_8=0.8102) consistent with the Planck 2018 results <cit.>. We first perform dark matter-only simulations within a volume of (5 cMpc)^3 using 128^3 particles and identify halos with a virial mass ∼ 10^9 at z=6. To minimize artificial delay in star formation due to numerical diffusion <cit.>, we choose a DMH with a small bulk velocity of v≈ 8 at z=6. Dark matter particle mass within the zoom-in region is 490 . At this mass resolution, the main halo is resolved with more than 1.6×10^6 dark matter particles. We ascertain that there is no coarser resolution particle inside 2.5 r_ vir of the main halo. The simulation box is covered by root grid consisting of 128^3 cells. Cells within the zoom-in region are refined if the total mass they contain (M_ dm+M_ bΩ_ m/Ω_ b) becomes greater than 8 m_ dm where M_ dm and M_ b are the masses of dark matter particles and baryons, respectively, and m_ dm is the mass of the dark matter particle originally assigned to the level, or if the thermal Jeans length is not resolved by eight cells. We also ensure that the host cells of the sink and the associated cloud particles are always maximally refined. In addition, we use a passive scalar (with a value of 1 or 0) to track the gas initially present in the zoom-in region, and allow star formation to proceed only if this scalar is greater than 0.1. 
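The refinement strategy described above amounts to a simple per-cell test, sketched below with the Planck 2018 density parameters quoted earlier; the function signature and names are ours and merely restate the criterion, not the AMR implementation (cells hosting sink or cloud particles are additionally forced to the maximum level, as stated above).

def needs_refinement(m_dm_in_cell, m_baryon_in_cell, lambda_jeans, dx, m_dm_particle,
                     omega_m=0.3111, omega_b=0.04897):
    """Refine a zoom-in cell on a quasi-Lagrangian mass criterion or an 8-cell Jeans criterion."""
    mass_exceeded = (m_dm_in_cell + m_baryon_in_cell * omega_m / omega_b) > 8.0 * m_dm_particle
    jeans_unresolved = lambda_jeans < 8.0 * dx
    return mass_exceeded or jeans_unresolved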
To study the effect of resolution on star formation models, we examine simulations with five different resolutions, starting from 11 pc down to 0.7 pc at z=6, as shown in Table <ref>. Note that the maximum level of refinement is allowed to be reached at all redshifts. For example, in the run, the maximum resolution is =0.44 pc at z=10. DMHs are identified using the AdaptaHop algorithm <cit.>. The virial radius of the halo r_ vir is defined as the radius of a sphere within which the mean density is Δ_ critρ_ crit(z), where Δ_ crit=18π^2+82x-39x^2 is the virial overdensity <cit.>, x≡Ω_ m/(Ω_ m+a^3Ω_Λ) - 1, and ρ_ crit(z) is the critical density of the universe at a given redshift. The halo virial mass is then =4π r^3_ virΔ_ critρ_ crit(z)/3. Because simulated galaxies are often affected by strong stellar feedback and mergers, we define the center of the galaxy by calculating the center of mass of star particles within 0.2 r_ vir, where r_ vir is determined iteratively. §.§ Star formation history and burstiness We begin by describing the merging history and SFH of our simulated halo of mass 10^9 at z=6 from the fiducial run with a resolution of 4.8 cpc (or 0.7 pc at z=6). In Figure <ref> we show the projected gas distributions of the most massive progenitor at five different epochs. At z≈9 two galaxies embedded in DMHs of masses 10^8 and 3×10^7 merge and undergo a starburst. The SFR rises up to ≈ 0.04 or 5×10^-8 yr^-1 for the specific SFR, as can be seen in Figure <ref>. Subsequent SN explosions expel gas out of the galactic center and drive significant galactic outflows. As a result, star formation is quenched for ∼ 100 Myr until gas cools down and recollapses at z≈7.4. Afterwards, star formation events become more episodic (peaks at z≈ 7, 6.5 and 6), as stellar feedback tends to disrupt the star-forming clouds. Although not explicitly shown, at z=7 two small halos cross the virial sphere and merge at z∼ 6 (with a total mass ratio of 6:1), but no dramatic starburst occurs. This is because the satellite galaxy is gas deficient at the time of the merger due to its own stellar feedback, as illustrated by the giant bubble near the virial sphere (upper middle panel for in Figure <ref>). At z=6, the main simulated galaxy has a stellar mass of 2×10^6 and a mean stellar metallicity of 0.005 Z_⊙. Figure <ref> compares star formation histories in the two runs: gravo-thermo-turbulent model (magenta) and sink particle algorithm (green). This figure also shows UV absolute magnitudes (1450 < λ/Å < 1550; M_1500) for each galaxy obtained by interpolating SEDs from BPASS <cit.> to account for the age and metallicity, as well as the mass of each star particle. The AB magnitude system is used <cit.> and dust absorption is ignored as the galaxy is very metal-poor. We find that the peaks of individual star formation events generally tend to be higher in than in . In the very early phase of its evolution (z≳ 10), when the halo is less massive than ≲ 10^8 star formation is very stochastic and the reverse may happen, but as the halo becomes more massive, star formation histories become more bursty in the model than in . This can also be seen in M_1500 at lower redshifts, where the main galaxy peaks in the run are twice as bright at 1500 Å as those in the run. As we will show later, the burstier nature of the galaxies simulated with the sink algorithm for star formation is also a characteristic of the runs with different resolutions or feedback strengths. 
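For reference, the bookkeeping from star particles to M_1500 can be sketched as below. The SED grid shown is a stand-in with placeholder numbers, not the actual BPASS tables, and the interpolation details are ours; dust attenuation is ignored, as in the text:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

pc_cm = 3.0857e18
Jy    = 1.0e-23      # erg s^-1 cm^-2 Hz^-1

# Stand-in SED grid: specific luminosity near 1500 A per unit stellar mass,
# L_nu [erg s^-1 Hz^-1 Msun^-1], tabulated on (log10 age [yr], metallicity Z).
# The numbers below are placeholders, not BPASS values.
log_age_grid = np.linspace(6.0, 9.0, 31)
Z_grid       = np.array([1e-4, 1e-3, 4e-3, 8e-3, 2e-2])
Lnu_grid     = np.full((log_age_grid.size, Z_grid.size), 1.0e20)

sed = RegularGridInterpolator((log_age_grid, Z_grid), Lnu_grid,
                              bounds_error=False, fill_value=None)

def M_1500(mass_msun, age_yr, Z):
    """Absolute AB magnitude near 1500 A of a set of star particles (no dust)."""
    Lnu = np.sum(mass_msun * sed(np.column_stack([np.log10(age_yr), Z])))
    fnu_10pc = Lnu / (4.0 * np.pi * (10.0 * pc_cm) ** 2)
    return -2.5 * np.log10(fnu_10pc / (3631.0 * Jy))
```

Summing the specific luminosities of all star particles inside the virial sphere and converting to an absolute AB magnitude in this way also underlies the M_⋆/L_1500 ratios discussed below.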
To be more quantitative, we parameterize the burstiness of star formation by calculating the ratio <cit.>: B=σ/μ-1/σ/μ+1, where μ and σ are the mean and standard deviation of the offset from the star formation main sequence (Δ_ MS). <cit.> assumed a power law of the form d/dt ∝^α to define the main sequence and measure Δ_ MS. However, this method cannot be applied to our dwarf galaxies because star formation is too stochastic to fit with a power law. Instead, we calculate the offset from an average SFR of 0.005 between 6<z<12 and compute σ and μ using SFRs averaged over 1 Myr. Note that B ranges from -1 to 1, with 1 being maximally bursty. We find that B=0.37 and B=0.21 in the and runs respectively, confirming our inference from Figure <ref> that star formation is indeed more bursty in . Figure <ref> shows the stellar-mass-to-halo-mass relation for the main progenitor galaxies at different redshifts. Despite the fact that the burstiness of star formation is different, we find that the final stellar masses of the two runs at z=6 are nearly identical. For a given halo mass, the stellar masses in the run are smaller most of the time, but during the merger-driven or local star formation events the galaxy creates more stars than . At z>9, both models convert 0.1% of the total baryon into stars (≈ 0.001 f_ b M_ halo), and the fraction then increases up to one percent as the halo mass increases. A similar trend is found in the Renaissance simulation <cit.>, although our predicted stellar masses are on average smaller by a factor of two. It is also interesting to note that our simulated galaxy masses are comparable to the stellar-mass-to-halo-mass estimates from local dwarf galaxies <cit.>. In contrast, the predicted stellar masses from our simulations are larger than those from the FIRE simulation <cit.> or those inferred from the extrapolation of the local stellar mass-to-halo mass ratio obtained by the abundance matching technique <cit.>. The different amount of burstiness in the two star formation runs means that the mass-to-light ratio (M_⋆/L_1500) of the galaxies may be different. Indeed, during active star formation phases, galaxies can become UV-bright in a manner not necessarily correlated with their overall stellar mass. To obtain the M_⋆/L_1500 ratio, we calculate the total stellar mass within the virial sphere of the dark matter halo host and compare it with the total luminosity emitted at 1450 Å<λ <1550 Å. We find that in , M_⋆/L_1500 varies from 0.06 to 2.9 M_⊙/L_⊙ at 6<z<12, with an average of 0.93 M_⊙/L_⊙, while this ratio turns out to be smaller (0.71 M_⊙/L_⊙) in the run. We also measure that the minimum M_⋆/L_1500 value in , which occurs during the first starburst (z∼9), is 1.5 times smaller than in . This supports the claim of <cit.> that a recent surge in star formation provides a plausible explanation for the presence of UV-bright galaxies detected by JWST at redshifts greater than 10 <cit.>, although a top-heavy initial mass function <cit.>, increased star formation efficiency during the earliest stages of galaxy formation <cit.>, accreting black holes <cit.> or non-standard cosmological models <cit.> should be considered as possible alternatives. §.§ Feedback strength In order to understand the impact of bursty star formation on the feedback strength, we present in Figure <ref> two different metrics associated with SN explosions. The top panel shows the hydrogen number density of the host cell in which the SNe explode. 
While gas metallicity distributions are almost indistinguishable between and (not shown), we see that the number of SNe exploding at the highest densities (log≳ 2.5) is significantly reduced in the model. As we will discuss later (Section <ref>), this is partly due to the fact that stars form in less dense environments in the runs (<log>=5.6 vs 6.0). Furthermore, we find that the average number of star particles younger than 3 Myr per cell (if present) is 3.8 and 2.5 in SINK-Fid and respectively, indicating that stars form in a more clustered fashion and that radiation feedback is thus stronger in the run. As a result, SNe explode at densities that are about a factor of two lower in (log n_ H∼ -4.1) than in , and more momentum is injected into the ISM per SN explosion in (<P_ SN>≈ 1.8×10^6) than in (<P_ SN>≈ 1.3×10^6) (Equation <ref>). The lower panel of Figure <ref> shows how SN explosions are correlated in time. We compute the PDF of the time gap between two randomly selected SNe exploding in the zoom-in region. This simple calculation ignores the spatial distribution of the SNe, but we argue that time gap information is closely related to individual star formation events, since star formation in our dwarf galaxy is generally bursty and clustered. Figure <ref> shows that SNe explode together rather than independently. This feature is more pronounced in , where the stars form in a more clustered fashion. Although not dramatic, both density of SN host cell and time gap suggest that stellar feedback is stronger in , compared to . We now turn to the HI column density distribution (N_ HI), which is another way of measuring the effect of stellar feedback. Most stars form from dense clumps, but these are rapidly dispersed or ionized if feedback is strong. Figure <ref> shows the distributions of the mean N_ HI within 50 pc of each star particle, averaged over 6 directions, as a function of stellar age. In the first 3 bins, where the particles are younger than 1.3 Myr, SN explosions have not yet occurred (although they may have taken place in the neighborhood) and stars in both models are mostly embedded in a dense environment with an average density of ≳ 10^3. However, the column density gradually decreases with increasing stellar age due to radiation feedback. At t≳ 10 Myr, N_ HI decreases even further due to SN explosions. Interestingly, in the dense gas is disrupted faster than in as more clustered and coherent stellar feedback leads to a more dramatic decrease in N_ HI. The column density increases again for t≳ 30 Myr, with neutral gas in recovering more slowly. Physical properties of the outflowing gas are shown in Figure <ref>. The outflows are measured on a spherical surface at 0.3 r_ vir or 1.0 r_ vir, which is uniformly divided into 49,152 pixels (Nside=64) using the HEALPix algorithm <cit.>. We calculate the outflow rate by summing over the pixels with positive radial velocities, as Ṁ_ g,out=∫ρ_ gas v_ rad Θ(v_ rad) dA_, where v_ rad is the radial velocity of the cells, Θ is the Heaviside step function, and dA_⊥ is the area of each pixel with normal vector parallel to the direction of the radial velocity. We find that the evolution of outflow rates (or fluxes) generally follows that of the SFRs shown in Figure <ref>, with the outflows being more continuous and of larger amplitude. The flux-weighted outflow rates measured at 6<z<10 on the inner sphere at 0.3 r_ vir are 0.40 and 0.25 in the and runs, respectively, indicating that the galaxy is more effective at generating outflows. 
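A minimal sketch of this pixel sum is given below; the sample_gas routine stands in for the actual interpolation of density and radial velocity from the AMR grid onto the sphere and is not part of our pipeline:

```python
import numpy as np
import healpy as hp

Msun, yr = 1.989e33, 3.156e7

def outflow_rate(radius_cm, sample_gas, nside=64):
    """Mass outflow rate through a sphere of radius radius_cm, in Msun/yr.
    sample_gas(x, y, z) is assumed to return (rho [g cm^-3], v_rad [cm s^-1])."""
    npix = hp.nside2npix(nside)                      # 49,152 pixels for nside=64
    vx, vy, vz = hp.pix2vec(nside, np.arange(npix))  # unit vectors to pixel centres
    rho, v_rad = sample_gas(radius_cm * vx, radius_cm * vy, radius_cm * vz)
    dA = 4.0 * np.pi * radius_cm**2 / npix           # equal-area HEALPix pixels
    out = v_rad > 0.0                                # Heaviside step: outflowing gas only
    return np.sum(rho[out] * v_rad[out] * dA) * yr / Msun
```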
In the galaxy, the flux-weighted outflow measured at 0.3 r_ vir is denser (=0.55 vs. 0.36) and cooler (log T/K=4.11 vs. 4.23) than in (second and fourth panel). This means that the outflows in the carry more mass overall, although they are more intermittent, because star formation is more bursty. Outflows in the inner region of the two galaxies move at velocities comparable to the escape velocity of the DMH (flux-weighted, ∼30–40) (third panel). As such, some fraction of the ejected gas is recycled back into the ISM, rather than expelled from the DMH. Indeed, the outflow rates in the galaxy are more than halved when measured at the virial sphere (0.14 vs. 0.17 ) rather than 0.3 r_ vir. In contrast, the differences in the properties of the outflowing gas on the virial sphere between the two models are not very pronounced. We attribute this to the fact that there are some random SN explosions from the infalling satellites in the outer halo, which complicate the interpretation at the virial sphere. Nevertheless, the outflow rates measured at both radii are much larger than the SFRs, giving <Ṁ_ g,out> / < Ṁ_ star>≈ 10–25 in and . Note that these mass-loading factors are comparable, albeit slightly lower, to those of other simulated galaxies of similar stellar masses <cit.>. Finally, there is no clear difference in the metallicity of the outflows between the two star formation models (bottom panel of Figure <ref>). The flux-weighted mean metallicities are ≈ 0.01 Z_⊙ for both runs, which are similar to the metallicities of their ISM (dotted lines in the bottom panel). This is not very surprising, given that the outflows entrain ∼ 50–100 times the mass of the stellar ejecta (20% of the stellar mass). As a result, the metallicities of the outflows tend to increase with time, following the chemical evolution of the galaxies. However, these turn out to be insensitive to feedback from different star formation models, as the and galaxies produce similar amounts of stars (and thus metals) in our simulations. § DISCUSSION In this section we discuss why stars form differently in the runs with two different star formation models. We also assess how the models respond to different input SN energies. Finally, we examine how the burstiness, feedback strength and density distributions of the star-forming sites change with resolution. §.§ Differences in star formation criteria In the previous section we have seen that the two star formation models lead to different burstiness. To better understand the differences between these models, we plot in Figure <ref> the hydrogen number density of the cells in which star formation occurs (n_ H,SF). Also shown as a gray line is the minimum gas mass (556) required in a gas cell to produce a star particle of mass 500 in the runs, as the code imposes a 90% maximal conversion limit of gas into stars for numerical stability. We find that star particles in the run form at densities that are on average a factor of 5.7 higher than in (< log n_ H,SF>≈ 5.2 vs 6.0). This is partly due to the fact that in a large amount of gas has already been transferred to the sink particles from accretion over multiple time steps at the time of formation, whereas in cell gas mass is instantly converted into stars. However, even if the sink mass is included in the calculation of n_ H,SF, the average density in is still larger by a factor of 2.6 than in . 
If we restrict our analysis to the period after the strong burst at z∼9 (6<z<8.5), where 63 percent of the stars are formed, the difference becomes even more pronounced (< log n_ H,SF>≈ 4.8 vs 5.9). These results suggest that at a given minimal star particle mass, the subgrid model requires more gravitationally bound clouds to form stars than [ would produce stars in regions of lower density if a smaller star particle mass is assumed. However, it is important to note that cells with lower masses or densities often fail to satisfy Equation <ref>. Looking at Figure <ref>, only 30 out of 6,503 pop II stars are formed in cells with mass ≤ 600. This suggests that n_ H,SF in the model is primarily determined by Equation <ref> rather than by the choice of the minimum star particle mass.]. This is further illustrated in the top panel of Figure <ref> where we plot λ_ J,turb/ as a function of gas density for the run. Here each point represents a sink particle from different snapshots, color coded by accretion rate. As expected, sink particles accrete actively in dense regions (≳ 10^4, upper panel), occasionally forming stars at ∼ 10^6. Efficient accretion events occur near λ_ J,turb∼, but also in regions where the local turbulent Jeans length is resolved by a few tens of cells. This can be attributed to young star particles heating the gas or star-forming clumps exhibiting local turbulence fed by e.g., external accretion and/or galactic fountains. It is also equally possible that turbulent and thermal support in our simulated star-forming clumps may be overestimated due to finite resolution and physical ingredients (see below). Nevertheless, we find that local collapse motion can facilitate accretion onto the sink particle. If the turbulent Jeans length criterion (λ_ J,turb/ < 1) were applied to , as implemented in , it would result in a significant delay in star formation for these highly accreting sink particles until the gas densities reach a threshold for local gravitational instability. Conversely, when the accretion rates of potential star-forming sites are measured in the run (not shown), as done in Section <ref>, we see that the high accretion events with Ṁ_ acc≳ 10^-2 do not necessarily occur in a cell where λ_ J,turb/<1, but are mostly distributed over λ_ J,turb/≲ 10. A possible interpretation is that a cell may be considered locally Jeans unstable if the turbulent Jeans length is resolved by less than ∼ 10 cells. Therefore, other subgrid multi-freefall models with a more relaxed Jeans length criterion may share more similarities with our sink particle approach <cit.>. A more important feature of the runs is the predominant formation of star particles from high accretion events. In the run, ∼ 70% of star particles are formed through accretion rates of ∼ 10^-3–10^-1. The high accretion rates results in the formation of star particles with masses of 500 within a short timescale of ∼0.005–0.5 Myr. We also find that these regions exhibit a more efficient conversion of gas into sinks compared to what is predicted by the multi-freefall approach (bottom panel of Figure <ref>). Here, ε_ ff, acc≡Ṁ_ acct_ ff/M_ gas is computed by replacing dM_*/dt with Ṁ_ acc in Equation <ref>, whereas ε_ ff, cell is measured for the host cell of sink particles using Equation <ref>. The plot reveals that in regions where sink particles are actively accreting with ε_ ff, acc∼ 1, ε_ ff, cell is estimated to be significantly lower (≪ 1), with a maximum value of ≈ 0.5. 
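To make the definition concrete, ε_ff,acc can be evaluated as in the sketch below; the density, mean molecular weight, and gas mass in the example are arbitrary placeholders, chosen only to be of the right order for the highly accreting sinks discussed here:

```python
import numpy as np

G    = 6.674e-8     # cm^3 g^-1 s^-2
Msun = 1.989e33     # g
yr   = 3.156e7      # s

def free_fall_time(rho):
    """t_ff = sqrt(3 pi / (32 G rho)); rho in g cm^-3, result in s."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho))

def eps_ff_acc(mdot_msun_yr, rho, m_gas_msun):
    """eps_ff,acc = Mdot_acc * t_ff / M_gas."""
    mdot = mdot_msun_yr * Msun / yr
    return mdot * free_fall_time(rho) / (m_gas_msun * Msun)

# Placeholder example: Mdot_acc = 1e-2 Msun/yr, n_H ~ 1e4 cm^-3 (mu ~ 1.4),
# and ~1e4 Msun of gas around the sink; this gives eps_ff,acc of order unity.
print(eps_ff_acc(1e-2, 2.3e-20, 1e4))
```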
Although ε_ ff, acc does not precisely coincide with the of a turbulent box, these results indicate that more stars are likely to form for a given gas mass when using the sink algorithm compared to the multi-freefall model. We argue that the elevated accretion efficiency (ε_ ff, acc) in the run is primarily due to the rapid gravitational collapse of dense star-forming clumps. For the highly accreting events, the accretion rates are mostly calculated using the flux scheme, and we find that the inward radial velocity surrounding sink particles at ∼10^4 is ∼ 3–5, which is several times higher than the local sound speed. In such environments where dense gas converges, it is difficult to avoid efficient gas collapse[We have also calculated accretion rates using the Bondi scheme and found that, on average, the accretion rates are only a factor of 2–3 lower than those based on the flux accretion scheme.]. However, it should be noted that the rapid collapse seen in our simulations may be somewhat overestimated due to the absence of magnetic support against gravity. In addition, the turbulence support is also likely to be underestimated at the resolution scale due to the adaptive derefinement/refinement of the computational grid. Accounting for these factors could potentially suppress the gas accretion rate on to sink particles, suggesting that ε_ ff, acc in this study may be considered as an upper limit[We have tested using a GMC simulation that arbitrarily reducing the accretion rate by a factor of two has no dramatic effect on the accreted mass. As the density of the collapsing cloud increases, so does the accretion rate, mitigating the artificial reduction.]. On the other hand, there is also a possibility of underestimating ε_ ff, cell in the model. To determine ε_ ff, cell using Equation <ref>, the local Mach number and α_ vir must be known. Since the model does not explicitly model turbulence <cit.>, we derive the velocity dispersion on resolution scale (Δ x_ min) using kinematic information from six local neighbors. However, since the turbulence strength is scale-dependent <cit.>, turbulence measured from immediate neighbors (l∼ 3 Δ x_ min) may be overestimated by a factor of ∼ 1.7 (or ∼ 3 for α_ vir for supersonic turbulence). Furthermore, our sound speed is probably overestimated due to the lack of low-temperature coolants such as dust, which often results in a virial parameter of α_ vir≳10. The λ_ J,turb< condition would then inhibit star formation in these environments (see Figure <ref>). Even if the condition is relaxed, the use of Equation <ref> may still be questionable, since many of the star formation events in the runs occur in transonic regimes (ℳ≲ 2) where shock-induced multi-freefall models may not apply <cit.>. In short, simulations geared towards better capturing the cosmic turbulence cascade are needed to better estimate in different collapse environments and thus improve the accuracy of the GTT scheme. §.§ Effect of supernova energy In most of our simulations with resolutions between 0.7 and 11 pc, the galactic stellar masses in the two star formation models do not differ by more than a factor of 2, despite the fact that star formation is more bursty and the final momentum from SNe is larger in the runs. This suggests that galaxy growth in this dwarf-sized DMH regime is well regulated by SN feedback regardless of the star formation model <cit.>. 
However, it is possible that the simulations underestimate the impact of SNe, because we do not consider additional pressure from cosmic rays <cit.>, or top-heavy initial mass functions <cit.>, or hypernovae <cit.>. Motivated by these caveats, and to confirm that our main conclusion about burstiness remains the same with different feedback strengths, we re-run the and simulations with an enhanced SN explosion energy of E_ SN=5×10^51 erg. We will call these runs , as opposed to the runs with E_ SN=10^51 erg. To reduce the computational cost, we adopt a maximum refinement which is three levels lower (), corresponding to =5.5 pc at z=6, but still yields reasonably similar burstiness and stellar masses (see next section). Figure <ref> shows the growth of galactic stellar mass down to z=6 in the simulations. Although star formation in the early phase is slightly different, the two halos with the and SINK models produce a similar amount of stellar mass by z=6. As in the fiducial runs, the simulation produces greater number of stars over short periods of time compared to its counterpart, as demonstrated by the more pronounced and concentrated peaks in its star formation history (see top panel of Figure <ref>). When the SN energy is increased by a factor of 5, both models produce fewer stars due to stronger explosions than in the runs, while maintaining the same trend in burstiness (the bottom panel of Figure <ref>). However, the exact response to increased SN energy in the final stellar mass is different. In the model, stars continue to form between the two mergers at 550 Myr < t < 750 Myr, while the run completely suppresses star formation for ∼ 150 Myr. In addition, the stellar mass formed during the first merger at z∼ 9 decreases to a level similar to that of the run. Consequently, the final stellar mass in is further reduced by a factor of three compared to . The impact of strong feedback can also be seen in the HI column density on cloud scales (50 pc) measured around each star particle (N_ HI,cl, Figure <ref>). Consistent with the fiducial runs, the run is more effective at dispersing dense gas clouds, and the distribution of N_ HI,cl is skewed to lower values than in . More importantly, when the feedback strength is boosted (), the typical N_ HI,cl becomes lower for stars with ages greater than ∼ 5–10 Myr in the runs. In contrast, N_ HI,cl is already reduced around stars younger than 1 Myr in the run. This may seem counter-intuitive, given that the minimum lifetime for SN progenitors is much larger (≈ 4 Myr). However, stars form in clusters, with stars formed first pre-reducing the column density around the others. This may also have happened in , although the explosions may not have been powerful enough to disperse dense clouds in that case. Figure <ref> clearly suggests that bursty star formation via the sink particle algorithm may become increasingly efficient at disrupting GMCs and driving powerful winds when stellar feedback is intrinsically stronger. Figure <ref> also demonstrates that stars form in a more clustered fashion in the runs than in . We measure the ratio between the cumulative number of young stars (N_ star(<t)≡ M_ star(<t)/m_*, min) and the number of cells containing at least one star particle (N_ cell(<t)), as a simple proxy for the degree of clustering, as a function of stellar age. Note that this ratio increases with higher clustering, and a ratio of 1 means one star particle with m_*, min per star-forming cell. 
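A short sketch of this proxy is given below; the assumed inputs (per-particle masses, ages, and host-cell indices) reflect how the measurement is described in the text, not a specific data format:

```python
import numpy as np

def clustering_proxy(star_mass, star_age, star_cell_id, age_bins, m_min=500.0):
    """N_star(<t) / N_cell(<t): cumulative number of young star particles
    (total mass divided by the minimum particle mass) over the number of
    distinct host cells, as a function of the stellar-age threshold t."""
    ratios = []
    for t in age_bins:
        young = star_age < t
        n_star = star_mass[young].sum() / m_min
        n_cell = np.unique(star_cell_id[young]).size
        ratios.append(n_star / n_cell if n_cell > 0 else np.nan)
    return np.array(ratios)
```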
We find that the typical number of young stars per cell is about five times larger in the runs than . This is because the sink particles can accrete very efficiently and form multiple star particles in dense regions, whereas the turbulent Jeans length condition and the low SFE of the model tend to prevent a single cell from forming a large number of stars. For example, the typical velocity of the sink particles we measure from the simulation is 8–10, and it thus takes ∼ 0.5 Myr to move out a cell of size =5.5 pc. If the accretion rate remains high (Ṁ_ acc∼10^-2), it could easily yield 5,000 of stars (or 10 star particles with m_*, min) formed in the cell during that time. Of course, the exact amount of stellar mass per cell depends on the SN feedback strength as well as grid resolution, as briefly discussed in Section <ref> and as can be seen in Figure <ref>. However, the comparison between the ratios N_ star/N_ cell from simulations with different feedback energies and resolutions suggests that stars form more coherently in space and time in the runs. Thus, we argue that more clustered star formation, via collective feedback processes, may help prevent the galactic center from continuously forming stars over long periods of time and produce overly massive bulges <cit.>. §.§ Resolution effect In Section <ref> we show that a spatial resolution of ∼ 0.25 pc is necessary to achieve a reasonable convergence of star formation efficiency in turbulent clouds. However, running cosmological simulations at such a high resolution is prohibitively expensive, even with the zoom-in technique. We therefore had to settle for a lower resolution of 0.7 pc (0.4 pc) at z=6 (z=11) in our fiducial run. It is thus important to assess the extent to which our main results have converged. Figure <ref> illustrates that while quantitatively convergence has not yet been reached, our key findings exhibit the same trend regardless of resolution. First, the predicted stellar masses within the main halo at z=6 are reasonably similar (in general better than a factor of two (top panel), except for the lowest resolution run (, =11 pc)), suggesting that star formation is self-regulating. Second, the burstiness parameter (Equation <ref>) is always higher in the runs than in the runs (second panel). Note that burstiness tends to decrease with increasing resolution as gas is able to collapse further and its subsequent disruption by stellar feedback occurs in multiple clumps of smaller mass. <cit.> also showed analytically that the SFR variability increases with decreasing number of gravitationally bound clouds. Accordingly, UV luminosities of the galaxies generally vary more dramatically and are therefore, for a given stellar mass, often larger than those of their counterparts. The third panel of Figure <ref> shows the column density distribution of neutral hydrogen measured within 50 pc from each star particle. Again, the typical column densities around young stars with ages of 7.5 ≤ t/ Myr≤ 42, corresponding to the sixth and seventh bins in Figure <ref>, are lower in the runs. Together with the burstiness parameter from the second panel, we conclude that bursty star formation clears local star-forming clumps earlier, and that the subsequent expansion of SN remnants is thus less hindered by the prevailing ISM conditions. Finally, the last panel displays the hydrogen number density distribution within grid cells where star particles form (n_ H,SF). 
Unsurprisingly, the typical density of star-forming cells increases with increasing resolution, because, as previously mentioned, the gravitational collapse of star-forming clouds can be tracked further. Nevertheless, our conclusion that star particles are born in lower density environments in the runs than in still holds for most resolutions. Again, the exception is the lower resolution run which shows a pronounced tail of high density star-forming cells (n_ H,SF∼ 10^5). This occurs during the first merger-triggered starburst at z∼ 9: a single event with SFR close to 0.1 which accounts for the build-up of more than 50% of the total stellar mass at z∼ 6. This also significantly increases the burstiness (second panel), but is very likely a stochastic rather than a systemic outcome. Taken together, we conclude that the self-regulating nature of star formation and feedback in our runs allows for a reasonable amount of convergence over a wide range of resolutions. However, we caution that this may turn out to be a reflection of our simulated halo being less prone to over-cooling. For instance, using a simple density-based star formation model with a mechanical SN feedback scheme, <cit.> showed that star formation is only regulated in one of the five dwarf galaxies they simulate. In this respect, further investigation of more (massive) systems is needed, which we intend to pursue in a forthcoming paper. § SUMMARY In this study, we have investigated the impact of star formation on the growth of a dwarf galaxy hosted by a DMH with mass of 10^9 at z=6. We have compared a sub-grid model based on a local gravo-thermo-turbulent () condition <cit.> and a more direct approach which evaluates gas accretion onto sink particles, based on the algorithm developed by <cit.> (). Performing a series of simple numerical experiments with ramses <cit.>, we calibrated the parameters required by the sink particle algorithm and then applied this latter to cosmological radiation-hydrodynamic simulations. Our main results can be summarized as follows: * Regardless of the numerical accretion scheme (flux-based or Bondi-Hoyle) used, the analytic accretion rate of collapsing isothermal spheres <cit.> is better matched when setting a small accretion radius (=) (Figure <ref>). However, differences in accretion rates when using larger accretion radii (=4) are generally very small ( ≲ 3%). * With the accretion radius fixed at =2, six different giant molecular clouds (GMCs) simulated at five different resolutions from 1 to 0.0625 pc (Figure <ref>) show that the stellar mass formed during t=2 begins to converge at a resolution of ∼ 0.25 pc (Figure <ref>). Gas clumps in turbulent GMCs (S1) simulated at low resolutions (0.5–1 pc) contract more slowly than their higher resolution run counterparts (Figure <ref>–<ref>), leading to delayed formation of sink particles and lower overall stellar masses. * In cosmological zoom-in simulations of a dwarf galaxy, star formation histories in the model are more bursty than in the model, for a wide range of resolutions (0.7 < < 11 pc). The main reason for this behaviour is rapid gas accretion onto sinks in collapsing star-forming clumps. Furthermore, at moderate densities, where the thermal Jeans length is locally resolved, star formation is not allowed in the subgrid model, whilst sinks continue to accrete at significant rates and form stars. (Figure <ref>). 
* The bursty nature of the model makes SN explosions more coherent in space and time (Figure <ref>), thus increasing feedback impact for a given amount of stars formed (Figures <ref>, <ref>, and <ref>). As a result, the number of SNe exploding at ≳10^3 is dramatically reduced in the runs. Due to more coherent feedback, the runs disrupt gas clouds more efficiently (Figures <ref> and <ref>), thus generating denser and stronger outflows (Figure <ref>). * Whilst the final galaxy stellar mass at z=6 is similar in both models (Figure <ref>), star formation in requires gravitationally bound clumps, with SN explosions generally occurring in high density regions and leading to more spread out star formation. By contrast, the model allows for a large number of stars to form in a spatially clustered way over a short period of time (Figure <ref> and <ref>). * When the amount of energy released by SN is artificially increased by a factor 5 (E_ SN=5×10^51 erg), the bursty nature of the SINK model becomes more pronounced (Figure <ref>). The combined effect of clustered star formation and stronger individual SN explosions disperses star-forming clumps more easily, and star formation is thus more efficiently regulated in the model. This results in a galaxy stellar mass lower by a factor 3 in the run than in its counterpart. Using numerical experiments, we have demonstrated that modelling star formation with a sink particle algorithm, a more direct method than multi-freefall-based models, increases burstiness and, as a result, helps alleviate the over-cooling problem which has crippled (and still does) galaxy formation simulations for decades <cit.>. In a separate attempt, Han et al. (in prep) have also performed RHD simulations of an idealised disk with properties mimicking NGC 300 and find that the disruption of GMCs is considerably more efficient in their run using the sink algorithm as well. However, we draw the attention of the reader to the fact that only one halo has been studied in the work we present here and therefore that further investigation of more (massive) halos is needed, as it is well established that star formation variability depends on galaxy mass and gas fraction <cit.>. Finally, although we have covered as wide a range of resolutions as possible and found reasonable convergence in stellar mass, we caution this is not the case both for ISM structure and stellar distribution, as can be seen in Figure <ref>. Future simulations are needed to better understand the formation and evolution of ISM structures on (sub)sonic scales by resolving gas fragmentation when cooled by different metallic species, molecules and dust <cit.> in a magnetized medium, which ultimately leads to the formation of individual stars. § ACKNOWLEDGMENTS TK is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1A6A1A03053472 and 2022M3K3A1093827), and acted as the corresponding author. CK and DH are partly supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (2020R1C1C1007079). The supercomputing time for numerical simulations was kindly provided by KISTI (KSC-2023-CRE-0162), and large data transfer was supported by KREONET, which is managed and operated by KISTI. This work was also performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). 
The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure.
http://arxiv.org/abs/2407.12767v1
20240717174604
Helical Spin Dynamics in Commensurate Magnets: a Study on Brochantite, Cu$_4$SO$_4$(OH)$_6$
[ "S. E. Nikitin", "Tao Xie", "A. Gazizulina", "B. Ouladdiaf", "J. A. Rodríguez Velamazán", "I. ~F. ~Díaz-Ortega", "H. Nojiri", "L. M. Anovitz", "A. M. dos Santos", "O. Prokhnenko", "A. Podlesnyak" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
These authors contributed equally to this work Quantum Criticality and Dynamics Group, Paul Scherrer Institut, CH-5232 Villigen-PSI, Switzerland These authors contributed equally to this work Neutron Scattering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner-Platz 1, Berlin D-14109, Germany Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany Institut Laue-Langevin, 71 avenue des Martyrs, F-38000 Grenoble, France Institut Laue-Langevin, 71 avenue des Martyrs, F-38000 Grenoble, France Institute for Materials Research, Tohoku University, Sendai, 980-8577, Japan Departamento de Química y Física-CIESOL, Universidad de Almería, Ctra. Sacramento s/n, Almería, Spain Institute for Materials Research, Tohoku University, Sendai, 980-8577, Japan Chemical Sciences Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA Neutron Scattering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA Corresponding author: prokhnenko@helmholtz-berlin.de Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner-Platz 1, Berlin D-14109, Germany Corresponding author: podlesnyakaa@ornl.gov Neutron Scattering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA § ABSTRACT We report the direct observation of a commensurate-ordered antiferromagnetic (AFM) state but incommensurate helical spin dynamics in the natural mineral brochantite Cu_4SO_4(OH)_6 through neutron diffraction and neutron spectroscopy measurements. Inelastic neutron scattering measurements reveal magnon-like excitations with considerable dispersion along the c-axis and almost flat branches in other principal directions, indicating the strong one-dimensional character of the magnetic correlations. We experimentally observe the effect of the uniform Dzyaloshinskii-Moriya (DM) interaction, which elevates the degeneracy of the spin-wave modes shifting them in opposite directions in reciprocal space. The system has a commensurate AFM ground state, stabilized by the anisotropic symmetric Heisenberg exchange interactions, and quasi-one-dimensional chiral spin dynamics due to the antisymmetric DM interaction. Employing linear spin-wave theory, we were able to construct an effective Heisenberg Hamiltonian. We quantify both the symmetric exchange parameters and the DM vector components in Cu_4SO_4(OH)_6 and determine the mechanism of the magnetic frustration. Our work provides detailed insights into the complex dynamics of the spin chain in the presence of uniform DM interaction. Helical Spin Dynamics in Commensurate Magnets: a Study on Brochantite, Cu_4SO_4(OH)_6 A. Podlesnyak ===================================================================================== § INTRODUCTION Nowadays, the development of next-generation magnon-based information carriers for computing devices <cit.> boosted interest in fundamental properties of the magnon excitations in noncollinear antiferromagnets, such as spin spirals and skyrmions <cit.>. Magnon, a quantized spin wave, is expected to transmit a Joule heat-free spin current and is considered a core of spin-wave-based information nanotechnology <cit.>. Interatomic exchange of a pair of interacting spins is generally described by the exchange matrix that can be decomposed into the interaction J_ii and antisymmetric component 𝐃_ij, known as the Dzyaloshinskii-Moriya (DM) interaction <cit.>. 
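As an illustration of this decomposition (sign conventions for D_ij vary between authors; the sketch below assumes the DM term is written as S_i · J_asym · S_j = D · (S_i × S_j)):

```python
import numpy as np

def decompose_exchange(J):
    """Split a 3x3 exchange matrix J into an isotropic constant, a traceless
    symmetric (anisotropic) part, and the DM vector carried by the
    antisymmetric part, i.e. S_i . J_asym . S_j = D . (S_i x S_j)."""
    J = np.asarray(J, dtype=float)
    J_iso  = np.trace(J) / 3.0
    J_sym  = 0.5 * (J + J.T) - J_iso * np.eye(3)
    J_asym = 0.5 * (J - J.T)
    D = np.array([J_asym[1, 2], J_asym[2, 0], J_asym[0, 1]])
    return J_iso, J_sym, D
```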
Competition between the symmetric and antisymmetric interactions can give rise to chiral magnetic orders and significantly impact the emerging spin dynamics <cit.>. Dynamical spin-spiral correlations in AFM one-dimensional (1D) S=1/2 systems are of special interest, given that a spin chain is the simplest of the spin models and a promising candidate for exotic magnetic phases induced by frustration and quantum fluctuations <cit.>. The spectral properties of such a system are nontrivial because of quantum fluctuations arising from the combination of low spin value with reduced dimensionality, which can be further enhanced by frustrated next-nearest neighbor coupling. The DM interaction between two spins, 𝐒_i and 𝐒_j, reads ℋ_DM = -𝐃_ij· ( 𝐒_i ×𝐒_j ), where the direction of the DM vector 𝐃_ij = D𝐧_ij is constrained by symmetry <cit.>. Depending on the crystallographic symmetry, there are two fundamentally different patterns for the 𝐃_ij. Staggered vectors 𝐃_ij pointing to alternating antiparallel directions along a chain, induce a canting of magnetic moments in a classical AFM, resulting in a weak ferromagnetic moment <cit.>. Uniform DM interaction is characterized by parallel orientation of the 𝐃_ij for all bonds within a chain. The uniform 𝐃_ij pointing along the easy axis tends to stabilize a spiral spin structure and splits the spin-wave modes shifting the dispersion curves in opposite directions in the reciprocal space <cit.>. Inelastic neutron scattering (INS) is the leading technique for the detailed characterization of the spin-wave spectra across the entire Brillouin zone (BZ). Hence, the INS can reveal both the symmetric Heisenberg exchange and the antisymmetric DM interaction. Though, the experimental confirmations of theoretical findings remain challenging because of the difficulty in obtaining suitable model compounds. To the best of our knowledge, only a few spin-chain compounds that display spiral spin dynamics due to the uniform DM interaction have been experimentally explored <cit.>. Sometimes, nature supplies us with surprise gifts. Emerald-green crystals of natural brochantite , named after the geologist André Brochant de Villiers, attract the attention of mineral collectors around the globe. The most common form of brochanite has monoclinic symmetry P2_1/a (Nr. 14) with a=13.140, b = 9.863, c = 6.024 Å, β = 103.16^∘ as determined from X-ray measurements on natural single crystals <cit.>. One has to mention the order-disorder nature of this mineral. In layered materials such as brochantite, neighboring layers can be arranged in many different geometrically and energetically equivalent ways leading to the appearance of either disordered or fully ordered sequences <cit.>. The single crystal results agree well with the x-ray and neutron scattering data obtained on hydrothermally prepared synthetic polycrystalline forms <cit.>. This work reported also a magnetic order in brochantite below 7.5 K and proposed an antiferromagnetic structure with the moments aligned along the a-axis in the magnetic cell of the same size as the nuclear one. The scientific interest in brochantite stems from the presence of geometrically frustrated Cu^2+ (S=1/2) magnetic chains (Fig. <ref>), which turn such a system into a low-dimensional "quantum" magnet that can exhibit emergent phenomena and promotes exotic magnetic states at low temperatures. 
It is worth mentioning that synthetic Zn-brochantite, ZnCu_3SO_4(OH)_6, features well-isolated 2D kagome planes and exhibits emergent behavior of quantum spin liquids <cit.>. Note that the synthetic brochantite was obtained as a finely ground powder only <cit.>. Attempts to promote crystal growth were unsuccessful. Here, we use thermodynamic measurements, neutron diffraction, INS, and linear spin-wave theory (LSWT) calculations to show that the brochantite is a rare realization of a zigzag spin-chain system with a delicate balance between the symmetric and antisymmetric exchange interactions. Below the Néel temperature brochantite exhibits quasi-one-dimensional chiral spin dynamics due to the uniform DM interaction while having the commensurate AFM long-range ground state stabilized by anisotropic Heisenberg exchange. Our data and analysis of the magnetic excitation modes in brochantite could be a guide for further experimental work and theoretical developments on low-dimensional systems with competing symmetric and antisymmetric interactions. § EXPERIMENTAL DETAILS The high-quality single crystals of were obtained commercially. Refinement of single-crystal x-ray diffraction data, obtained using Bruker-D8 Advance diffractometer at Helmholtz-Zentrum Berlin (HZB), demonstrated a single brochantite phase. Magnetization and specific heat measurements were performed at the CoreLab Quantum Materials using Physical Properties Measurement System (PPMS) (Quantum Design) at HZB. High-field magnetization measurements were performed using a 30 T pulsed magnet and a ^4He bath cryostat at the Institute for Materials Research, Tohoku University (Sendai) <cit.>. For the characterization of the crystal and magnetic structures, we measured neutron diffraction at the Institute Laue Langevin in Grenoble (France). We used the neutron Laue single-crystal diffractometer Cyclops and two neutron four-circle diffractometers, D9 and D10 <cit.>. The crystal structure was refined using FullProf software package <cit.>. INS measurements were performed at the time-of-flight Cold Neutron Chopper Spectrometer (CNCS) <cit.> at the Spallation Neutron Source at Oak Ridge National Laboratory (ORNL). The data were collected on a single crystal sample with a mass around 0.5 g, which was aligned in three different orientations: (hk0), (h0l) and (0kl) scattering planes. We used two fixed incident neutron energies of E_i=12.0 meV (λ_i=2.61 Å) and E_i=3.32 meV (λ_i=4.96 Å) resulting in a full-width-at-half-maximum (FWHM) energy resolution at the elastic position of 0.7 meV and 0.11 meV respectively. The measurements were performed using the rotating single-crystal method. All time-of-flight datasets were combined to produce a four-dimensional scattering-intensity function I(𝐐,ħω), where 𝐐 is the momentum transfer and ħω is the energy transfer. For data reduction, analysis, and linear spin wave theory calculations, we used the Mantid <cit.>, Horace <cit.> and SpinW <cit.> software packages. § RESULTS AND ANALYSIS §.§ Bulk Properties We start the presentation of our results with a brief overview of the thermodynamic data measured on . Figure <ref>(a) displays the temperature dependence of the magnetization, M(T), measured in an external field of 100 Oe in zero- (ZFC) and field-cooled (FC) regimes. In the insert of Fig. <ref>(a), one can see a clear transition represented by an abrupt down- (up-)turn of the magnetization in ZFC and FC data, respectively. 
The transition temperature determined using the derivative corresponds to 6.3(1) and 6.0(1) K for the ZFC and FC curves. The magnetization collected within 30–150 K shows a broad bump around 60 K which can be attributed to short-range low-dimensional AFM correlations. Both low- and high-temperature features have been observed in the synthetic  sample <cit.>. However, contrary to the synthetic powder sample, the natural mineral does not show a magnetization anomaly at 18 K confirming that it was not intrinsic to the compound in agreement with the interpretation of Ref. <cit.>. The high-temperature part of the magnetization can be fitted with the Curie-Weiss law giving a Curie-Weiss temperature of Θ_CW = -79(5) K pointing to dominant AFM interactions in the sample. The temperature dependence of the specific heat is shown in Fig. <ref>(b). A sharp λ-shape peak is observed at 6.2(1) K indicating a transition into the magnetically ordered state in agreement with magnetization. Our data are in good agreement with those from Refs. <cit.>. We further note that the magnetic ordering temperature in brochantite is comparable with that reported in some other spin-chain Cu-based minerals, such as dioptase <cit.> or antelerite <cit.>. The field dependence of the magnetization measured at base temperature is depicted in Fig. <ref>(c). To get an absolute magnetization value, the pulsed-field data were calibrated by the data measured at DC field using PPMS. While for the field applied along the c^*-axis the magnetization increases linearly with the field up to 30 T, there are two anomalies observed for the field applied along the other two crystallographic directions. For the field applied along the a^*-direction, there is a metamagnetic-like transition at B_C≈ 5 T and for Bb^* a weaker jump in magnetization is detected at 13.9 T (in both cases the critical field, B_ c, is determined from the derivative, dM/dB). This suggests that the AFM easy axis lies in the ab-plane close to the a^*-direction. §.§ Neutron diffraction Detailed investigations of the crystal structure have been carried out by means of single-crystal neutron scattering experiments at the Institute Laue Langevin (ILL) in Grenoble (France). For this purpose, a cube-shaped crystal with a side size of 4 mm has been cut from a larger crystal. In the beginning, the reciprocal space survey at 15 and 1.5 K, i.e. above and below the magnetic ordering temperature, has been carried out using Laue diffraction at the Cyclops instrument. Our findings at 15 K are in agreement with the literature <cit.>. The obtained lattice parameters are a=13.122, b = 9.838, c = 6.030 Å, β = 103.39^∘. Similar to the other works we have found significant broadening and doubling of the peaks in the a^∗-direction as a result of the disordered nature of the mineral under investigation <cit.>. Further studies of the crystal structure have been performed at D9 hot neutron four-circle diffractometer at the ILL operated with a wavelength of λ = 0.837 Å. In total, 496 Bragg reflections at 2 K, 709 at 12 K, and 147 reflections at room temperature (as reference) have been recorded. In agreement with the Laue data, we found that the sample exhibits order-disorder structural transitions and twinning, as evidenced by the observation of peak broadening and doubling, respectively <cit.>. The recorded data were used for the refinement of the crystal structure. Here, an averaged structure has been treated, i.e. 
the intensity of both twins was integrated and the disorder has not been considered. Data analysis proves that the space group P2_1/a can describe the structure measured at 12 K. Turning to the magnetic order, the Laue data show no difference between the patterns measured at 15 and 2 K. The same observation is valid for the D9 data between 12 and 2 K. This means no additional (structural and/or magnetic) reflections appear below the magnetic transition at 6.3 K. This points out that the magnetic contribution is quite small as expected for the Cu^2+ ion, and located on the top of the nuclear peaks. Further investigations of the magnetic structure have been carried out using D10 neutron four-circle diffractometer at the ILL which is optimized for studying magnetic structures. This instrument provides relatively high flux at 2.36 Å and low intrinsic background due to the energy analysis option. Here, 203 Bragg reflections at 2 K and 59 at 15 K have been recorded. The first observation is that at 15 K some small forbidden reflections in the space group P2_1/a are detected as, for example, (100), (300), and (-102). Taking into account these reflections, the true symmetry of should be lower than anticipated and could be described by space group P-1. This is a completely new finding which implies that the crystal structure of brochantite should be revised for either correction or looking for a symmetry lowering between RT and 15 K <cit.>. Our data-set focused on a low momentum transfer part of the reciprocal space leaves this problem out of the scope of this work. However, because the deviations from the parent P2_1/a space group are small and for the sake of simplicity we will perform the representation analysis of possible magnetic structures using P2_1/a space group. At 2 K, in the magnetically ordered phase, the increase of the intensity due to magnetism is noticeable only on the (110) and (120) reflections, see Fig. <ref>(a). The change in intensity of the remaining reflections is within the error bars. However, the temperature dependence of the two relevant reflections shows clearly the magnetic transition at about 7 K in agreement with the bulk data. This allows us to conclude that the magnetic propagation vector is 𝐐_ m=(0,0,0). This is in agreement with the results obtained by neutron powder diffraction on the synthetic sample <cit.>. Unfortunately, the presence of 16 independent magnetic ions renders the proper refinement of the magnetic structure, in this case, impossible. However, by using all the available bulk and neutron data and testing a suite of reasonable quantitative magnetic structures, a model of the magnetic structure can be proposed. Table <ref> contains the irreducible representations (irreps) which can be used to define possible magnetic structures with 𝐐_ m=(0,0,0). According to Vilminot et al. <cit.>, the ground state magnetic structure can be described by almost linear FM Cu-chains running along the c-axis. In turn, nearest-neighbor chains couple into an AFM zig-zag double chain, which is in agreement with irrep Γ_3 (magnetic SG P2_1^'/a^') applied to all four Cu-sites. Interestingly, to describe their NPD data the authors assumed different Cu-moment sizes within the chains. However, this would lead to a substantial uncompensated FM contribution which should be detected by magnetization measurements. As the latter is not the case, the models to be considered should only be the AFM ones. 
Nevertheless, we have tested this model with equal moment sizes and found that it does not describe our experimental data. Our data indeed suggest that the moments are arranged FM in the chains along the c-axis and these chains are coupled AFM. However, contrary to the Vilminot's model, the inversion symmetry should be primed (i.e. combined with time-reversal), leaving two possible magnetic space groups P2_1/a^' and P2_1^'/a corresponding to Γ_2 and Γ_4, respectively. Given the very limited number of reflections with magnetic contribution and cross-correlation between the moment size and its orientation, the unique determination of the magnetic structure is impossible. Taking into account the magnetization data, one would assume the moment direction to be close to the a-axis, resulting in the magnetic SG P2_1^'/a (Γ_4). Restricting ourselves to a collinear magnetic structure, one gets the one shown in Fig. 3(b) with an estimated magnetic moment of about 0.30±0.15 μ_B. To obtain this value, the size of the magnetic moment was adjusted such that the respective growth of the nuclear (110) and (120) reflections corresponds to the one observed experimentally. In addition, the symmetry analysis shows that the antisymmetric DM components of the nearest (NN) and next-nearest neighbor (NNN) exchange matrix are allowed for Cu-Cu interaction. Therefore, we would rather expect an equal or very similar moment size with possible spin-spiral correlation given by the uniform DM interactions. §.§ Inelastic neutron scattering While the neutron diffraction and thermodynamic measurements in brochantite provide evidence for a spin chain system with a possible DM interaction, the details of the exchange interactions have to be determined by INS. The constant energy slices in (hk0),(h0l) and (0kl) scattering planes obtained using INS measurements on a single crystal of are summarized in Fig. <ref>. Two different incident neutron energies were used, E_ i = 12.0 meV and 3.32 meV, to combine a large accessible energy-transfer range at higher energies with an improved resolution at lower energies. As shown in Fig. <ref>(a-d), the dispersion along the l direction is strong, while little dispersion is observed in other, h and k directions. Note, that the weak dispersion along the h direction can be the result of structural disorder. The considerable dispersion along l and almost flat branches along both h and k directions indicate strong 1D character and weak interchain coupling. Another distinct feature of the measured magnetic excitations is the checkerboard-shaped pattern in (hk0) constant-energy slices of S(𝐐,ħω) [Fig. <ref>(e-h)]. A similar phenomenon was previously observed in the quasi-one-dimensional compounds with 1D zig-zag chain, CoNb_2O_6 <cit.> and YbFeO_3 <cit.>. The chain buckling leads to the BZ folding due to an increase of the magnetic unit cell that, in turn, results in the checkerboard pattern of the structure factor of the magnetic excitations. In , pairs of Cu atoms connected to each other, Cu1-Cu2 and Cu3-Cu4, give rise to nearly linear chains coupled by J_2, which in turn couple into the zig-zag double chain via J_1, as shown in Fig. <ref>(c). The experimentally observed checkerboard pattern suggests that J_1 ≫ J_2, i.e. is attributed to the formation of the zig-zag magnetic chains along the c direction. The experimental spectra along l direction consist of strong magnon-like modes merging into an extensive continuum. 
The magnetic branch could be traced up to an energy transfer of ∼7 meV due to strongly reduced intensity at higher energies. The magnons' minima are located at l = 0 ± 0.19 r.l.u. suggesting spiral spin-chain dynamics in spite of experimentally observed commensurate long-range AFM order. A finite spectral gap Δ = 1.0(1) meV was observed in zero-field spectra at temperatures below T_N, which implies the presence of anisotropic couplings. In low-dimensional systems with weak magnetic order such as weakly-coupled spin chains, the low-energy excitations can be considered as conventional magnons. The high-energy sector in turn is dominated by deconfined spinons and there is a crossover regime between the two limits <cit.>. As shown above, the low-energy part of the INS spectrum is dominated by sharp spin-wave excitations. In contrast, the high-energy spectrum of brochantite demonstrates broad continuous excitations at energies above 5 meV, which can be clearly seen in constant-energy cuts, Fig. <ref>(c), and we interpret as the presence of broad continua as a possible manifestation of deconfined spinons <cit.>. §.§ LSWT simulations and discussion In order to describe the main properties of the observed spectra we consider a S=1/2 XXZ AFM Heisenberg chain supplemented by a symmetric anisotropic exchange interaction and an antisymmetric DM interaction. The Hamiltonian is given by ℋ = J_1∑_i (S^x_iS^x_i+1 + S^y_iS^y_i+1 + δ S^z_iS^z_i+1) + J_2∑_i 𝐒_i 𝐒_i+2 - D∑_i (S^x_iS^y_i+1 - S^y_iS^x_i+1), where J_1 and J_2 are the NN and NNN exchange couplings, δ>1 is the easy-axis anisotropy, and D is the antisymmetric DM interaction pointing along the easy axis. We start our analysis with the case D=0. This is one of the most investigated models that deals with the symmetric frustrating intrachain couplings, where the NN exchange J_1 competes with the AFM NNN J_2 > 0 exchange <cit.>. The model develops a variety of exotic phases like vector-chiral, spin-nematic, or higher-order polar phases <cit.>. The ground state of the quantum spin chain undergoes a phase transition from a phase with gapless excitation for J_2 ≲ 0.24J_1 to a gapped phase for the larger J_2 <cit.>. Incommensurate spiral correlations are present for J_2 > 0.5J_1 <cit.>, which exhibit a qualitatively similar excitation spectrum, such as was found in the double-helix material FeP <cit.>. In brochantite, considering only symmetric exchange interaction J_1 ≫ J_2, we expect a standard Néel ground state stabilized by the easy-axis magnetic anisotropy. In this approximation, the spin-spiral correlations are implausible. In case D ≠ 0, the helical spin dynamics can arise due to competition between the symmetric and antisymmetric interactions. A uniform DM vector, collinear with the easy axis and with a magnitude less than some critical value, favors spin-spiral correlations in the otherwise commensurately ordered phase <cit.>. The double-degenerate spin-wave modes with opposite angular momenta are shifted symmetrically, so their excitation energy minima are centered at l = 0 ± q, where q is the wave vector of the spiral. It is well known that linear spin-wave theory (LSWT) provides only limited qualitative agreement with the measured spin dynamics of low-dimensional quantum magnets. However, LSWT can qualitatively capture the dispersion and magnon bandwidth of the ordered magnets throughout the Brillouin zone. 
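As an aside, the model Hamiltonian written above can also be probed directly on a short periodic chain by exact diagonalization. The sketch below is purely illustrative, not part of the published analysis: an 8-site chain is far too small for quantitative conclusions, and the couplings are simply the best-fit values reported later in this subsection:

```python
import numpy as np

# Spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = np.array([[1.0 + 0.0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else id2)
    return out

def chain_hamiltonian(n, J1, J2, delta, D):
    """H = J1 sum (SxSx + SySy + delta SzSz) + J2 sum S_i.S_{i+2}
         - D sum (Sx_i Sy_{i+1} - Sy_i Sx_{i+1}), periodic chain."""
    Sx = [site_op(sx, i, n) for i in range(n)]
    Sy = [site_op(sy, i, n) for i in range(n)]
    Sz = [site_op(sz, i, n) for i in range(n)]
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n):
        j, k = (i + 1) % n, (i + 2) % n
        H += J1 * (Sx[i] @ Sx[j] + Sy[i] @ Sy[j] + delta * Sz[i] @ Sz[j])
        H -= D * (Sx[i] @ Sy[j] - Sy[i] @ Sx[j])
        H += J2 * (Sx[i] @ Sx[k] + Sy[i] @ Sy[k] + Sz[i] @ Sz[k])
    return H

# Best-fit couplings reported below (meV); the 8-site length is our own choice
H = chain_hamiltonian(8, J1=10.6, J2=0.0, delta=1.05, D=3.0)
print("ground-state energy per site:", np.linalg.eigvalsh(H)[0] / 8, "meV")
```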
Besides, having low computational cost, LSWT calculations can be used to check various scenarios, including systems with multiple couplings. We use LSWT to determine the parameters of Hamiltonian (<ref>), calculating a neutron scattering cross section for the spin-ladder model, Fig. <ref>(c). Since we do not observe an upper boundary of the excitation, we use the experimental low-energy spectrum ħω < 7 meV, which exhibits a number of important features, such as branch stiffness, the anisotropy gap, and the wave vector of the spiral, that we used to determine the parameters of Hamiltonian (<ref>). We quantify the energy of the sharp magnon mode at 23 different points in reciprocal space 0 < l < 1 r.l.u. and use this experimental data set to fit the exchange couplings. The INS spectrum was calculated using an equation for the nonpolarized neutron scattering cross section as implemented in the SpinW <cit.>. The chi-square fit of the experimentally observed and calculated INS spectra yields J_1 = 10.6(2) meV, J_2 = 0(2) meV, δ = 1.05(1), and D = 3.0(2) meV. The uncertainties were estimated by comparing the exchange interactions obtained from fits with different initial parameters. One can see an excellent agreement between the calculated and experimental spectrum (Fig. <ref>), including the general shape of the dispersion curve and the spectral intensity over reciprocal space. Surprisingly, the best agreement between the simulations and the data was achieved for the NNN coupling J_2 = 0 meV and rather large D = 0.28J_1. This suggests that a single zig-zag chain model can well describe the experimentally observed spin-spiral dynamics. However, within the precision of our measurements, we cannot exclude the presence of a small NNN term. Specifically, fixing the J_2 at some non-zero value (|J_2|<2.5) and adjusting J_1 and D, we still can find a reasonably good solution with the reduced χ^2 value slightly larger (∼1-2%) when compared to the best fit, see Fig <ref>(d). Yet, any increase of the J_2 consistently worsens the fit. Clear visual disagreement between the calculated and observed spectra was found for |J_2| > 2.5 meV. It is important to mention that the magnon excitations observed in both the h and k directions exhibit weak dispersion. This indicates that interchain exchange interactions are present and stabilize long-range magnetic order at T_ N = 6.2 K, as expected for weakly-coupled spin-chain system <cit.>. The interchain exchange can cause a splitting of the degenerate modes, even in the absence of an applied magnetic field. However, this splitting could not be unambiguously resolved in our experiment due to its small magnitude. § CONCLUSIONS Summarizing, we have investigated the magnetic properties of the natural mineral brochantite using magnetic neutron diffraction, thermodynamic measurements, and inelastic neutron scattering. The key feature of the spin subsystem of brochantite is the uniform DM chiral coupling that leads to spin-spiral dynamics. Nonetheless, as the magnitude of antisymmetric interaction is below some critical value, the ground state of is commensurate AFM, stabilized by easy-axis exchange anisotropy. The LSWT calculation shows good agreement with the observed low-energy magnon excitation, enabling us to quantify the proposed model's symmetric and antisymmetric exchange interactions. While LSWT correctly predicts the ground state and single magnon excitation, it fails to reproduce the observed high-energy continuum. 
The quantum effects due to the low dimensionality and low spin value must be considered to describe the full experimental spectra. We note that the quantum effects also cause a bandwidth renormalization, so that the exchange interaction deduced by LSWT should be divided by a factor of π/2 to obtain the true interaction strength <cit.> in the case of a simple Heisenberg model. The DM interaction qualitatively modifies the spinon behavior and produces new phenomena such as a backscattering interaction between the spinons <cit.> and unusual spin excitations <cit.>. This calls for further experimental and theoretical studies, including polarized INS in applied fields and density matrix renormalization group calculations; this work is in progress. § ACKNOWLEDGMENTS The authors thank Daniel Pajerowski for helpful discussions and Jong Keum for assistance with x-ray Laue measurements. Work at ORNL was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Science and Engineering Division. X-ray Laue alignment and magnetization measurements were conducted at the Center for Nanophase Materials Sciences (CNMS) at ORNL, which is a DOE Office of Science User Facility. This research used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by ORNL. Work by LMA was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division. SEN acknowledges the European Union Horizon 2020 research and innovation program (Marie Sklodowska-Curie Grant No. 884104) for financial support. OP acknowledges the GIMRT Program of the Institute for Materials Research, Tohoku University (Proposal No. 20K0507).
http://arxiv.org/abs/2407.13465v1
20240718123922
A note on Hilbert 16th Problem
[ "Armengol Gasull", "Paulo Santana" ]
math.DS
[ "math.DS" ]
A note on Hilbert 16th Problem. Armengol Gasull^1 and Paulo Santana^2. ^1 Departament de Matemàtiques, Facultat de Ciències, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Spain; and Centre de Recerca Matemàtica, Edifici Cc, Campus de Bellaterra, 08193 Cerdanyola del Vallès (Barcelona), Spain armengol.gasull@uab.cat ^2 IBILCE–UNESP, CEP 15054–000, S. J. Rio Preto, São Paulo, Brazil paulo.santana@unesp.br MSC 2020: Primary 34C07. § ABSTRACT Let ℋ(n) be the maximum number of limit cycles that a planar polynomial vector field of degree n can have. In this paper we prove that ℋ(n) is realizable by structurally stable vector fields with only hyperbolic limit cycles and that it is a strictly increasing function whenever it is finite. § INTRODUCTION AND STATEMENT OF THE MAIN RESULTS Consider the planar polynomial system of differential equations X=(P,Q) given by ẋ= P(x,y), ẏ=Q(x,y), where the dot means the derivative with respect to the independent variable t and P, Q: ℝ^2→ℝ are polynomials. To system (<ref>) corresponds a polynomial vector field X=P∂/∂ x+Q∂/∂ y in the phase plane of the variables x and y. In this paper we make no distinction between system (<ref>) and its respective vector field. The degree of X is the maximum of the degrees of P and Q. Given n∈ℕ, let 𝒳^n be the set of the planar polynomial systems (<ref>) of degree n, endowed with the coefficients topology. Given X∈𝒳^n, let π(X)∈ℤ_⩾0∪{∞} be its number of limit cycles (i.e. isolated periodic orbits). In his celebrated address to the International Congress of Mathematicians in Paris in 1900, David Hilbert presented his famous list of problems for the 20th century <cit.>, with the second part of the 16th problem being about the limit cycles of planar polynomial vector fields. Hilbert asked whether there is a uniform upper bound for the number of limit cycles of polynomial vector fields of degree n. More precisely, given n∈ℕ let ℋ(n)∈ℤ_⩾0∪{∞} be given by ℋ(n)=sup{π(X) : X∈𝒳^n}. Under this notation, the second part of Hilbert's 16th problem consists in obtaining an upper bound for ℋ(n), and it is still an open problem. Even for the quadratic case, it is not known whether ℋ(2)<∞. However, advances have been made and lower bounds for ℋ(n) have been found. For small values of n, the best lower bounds so far are ℋ(2)⩾ 4 <cit.>, ℋ(3)⩾ 13 <cit.> and ℋ(4)⩾ 28 <cit.>. In general, it is known that ℋ(n) increases at least as fast as O(n^2ln n) <cit.>. However, although the known lower bounds are given by strictly increasing functions, this does not imply that ℋ(n) itself is strictly increasing. In our first main result we prove this fact. Given n∈ℕ, it holds that ℋ(n+1)⩾ℋ(n)+1. In particular, it follows from Theorem <ref> that if ℋ(n_0)=∞ for some n_0∈ℕ, then ℋ(n)=∞ for every n⩾ n_0. The proof of Theorem <ref> is essentially a consequence of the fact that, given X∈𝒳^n, we can embed X into 𝒳^n+1 and bifurcate one more limit cycle, while the others persist. This persistence follows from our second main result. To state it properly we recall the notion of structural stability and comment on its particularities when it is restricted to the polynomial case. Roughly speaking, a smooth vector field is structurally stable if small perturbations do not change the topological character of its orbits.
The hallmark work in this area is due to Peixoto <cit.> and his characterization theorem, which states that a C^1-vector field on a closed (i.e. compact and without boundary) two-dimensional manifold is structurally stable if, and only if, the following statements hold. * It has at most a finite number of singularities, all hyperbolic. * It has at most a finite number of periodic orbits, all hyperbolic. * It does not have saddle connections. Moreover, the family of structurally stable vector fields is open and dense in the set of all C^1-vector fields. For the structural stability of polynomial vector fields endowed with the coefficients topology there are two main characterizations, given by Sotomayor <cit.> and Shafer <cit.>. The former defines structural stability of X∈𝒳^n as the structural stability of its Poincaré compactification. The latter does not make use of this embedding and thus deals with new objects, such as saddles at infinity. Hence, they obtained different sets of necessary and sufficient conditions for structural stability. Yet, there are many similarities. Let X∈𝒳^n. In both cases, for X to be structurally stable, statements (a) and (c) above are necessary, and so is the following weak version of statement (b). (b') It has at most a finite number of periodic orbits, none of even multiplicity. So far it is not known whether non-hyperbolic limit cycles of odd multiplicity are possible for a structurally stable vector field in the polynomial setting. More precisely, there is the following open question. [<cit.>] If X∈𝒳^n has a non-hyperbolic limit cycle of odd multiplicity, then is X structurally unstable in 𝒳^n? Question <ref> was explicitly raised by Sotomayor <cit.> and Shafer <cit.> and kept both of them from obtaining necessary and sufficient conditions for structural stability in 𝒳^n. For more details, we refer to <cit.>. Another important similarity between both works is the fact that structural stability is a generic property. That is, if we let Σ^n⊂𝒳^n be the family of the structurally stable elements, then Σ^n is open and dense, independently of the two approaches. Therefore, from now on we denote by Σ^n the set of structurally stable vector fields of degree n under either one of these two definitions. Let Σ^n_h⊂Σ^n be the family of structurally stable vector fields such that all their limit cycles are hyperbolic. In our second main result we prove that ℋ(n) is realizable by the elements of this family. For n∈ℕ, the following statements hold. * If ℋ(n)<∞, then there is X∈Σ_h^n such that π(X)=ℋ(n). * If ℋ(n)=∞, then for each k∈ℕ there is X_k∈Σ_h^n such that π(X_k)⩾ k. Finally, due to its relation with the possible case of ℋ(n)=∞, we also include at the end of this note a proof of the following folklore result: a planar analytic vector field has an enumerable number of limit cycles. The paper is organized as follows. In Section <ref> we recall some properties of rotated vector fields and show how they can be used to transform non-hyperbolic limit cycles into hyperbolic ones. The main theorems are proved in Section <ref>. In Section <ref> we prove the folklore result and provide some further remarks. § ROTATED VECTOR FIELDS Given a planar polynomial vector field X=(P,Q), let X_α=(P_α,Q_α) be the one-parameter family given by P_α=Pcosα-Qsinα, Q_α= Qcosα+Psinα, with α∈ℝ. Observe that X_0=X and that X_α defines a complete family of rotated vector fields, see Duff <cit.>. Throughout this paper, X_α will always denote the family given by (<ref>).
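To illustrate the family (<ref>) numerically, the sketch below (our own toy example, not taken from this note) rotates the cubic field ẋ=-y+x(1-x^2-y^2), ẏ=x+y(1-x^2-y^2), whose only limit cycle is the hyperbolic circle x^2+y^2=1, and tracks how that cycle moves as α grows. The monotone displacement it exhibits is precisely the behavior described by Duff's theorem stated next.

# Illustrative sketch (our own example): rotate the field as in (<ref>) and locate the
# limit cycle of X_alpha by finding the zero of its radial component.
import numpy as np
from scipy.optimize import brentq

def X(x, y):
    f = 1.0 - x**2 - y**2
    return -y + x * f, x + y * f

def X_alpha(x, y, alpha):
    P, Q = X(x, y)
    c, s = np.cos(alpha), np.sin(alpha)
    return P * c - Q * s, Q * c + P * s

def radial_component(r, alpha):
    # d(r)/dt along X_alpha evaluated at (r, 0); by the rotational symmetry of this
    # particular example it takes the same value on the whole circle of radius r.
    Pa, Qa = X_alpha(r, 0.0, alpha)
    return Pa            # at (r, 0) the radial direction is the x-direction

for alpha in [0.0, 0.1, 0.2, 0.3]:
    r_cycle = brentq(radial_component, 0.2, 1.5, args=(alpha,))
    print(f"alpha = {alpha:.1f}  ->  limit cycle radius ~ {r_cycle:.4f}")

Running it shows the cycle shrinking monotonically from radius 1 as α increases, in line with statement (a) of the theorem below for a hyperbolic (hence odd-multiplicity) limit cycle.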
In his seminal work, Duff <cit.> studied the properties of X_α. In particular, he proved the following result, which we state only for family (<ref>) but which holds for more general one-parameter families of 𝒞^1 vector fields. Let X_α be the family of rotated vector fields (<ref>) and suppose that X_α_0 has a limit cycle γ_α_0. Then: * If γ_α_0 has odd multiplicity, then it is persistent for |α-α_0| small and it either contracts or expands monotonically as α varies in a certain sense. * If γ_α_0 has even multiplicity, then for |α-α_0| small it splits into two limit cycles, one stable and the other unstable, as α varies in a certain sense. If α varies in the opposite sense, then γ_α_0 disappears and no other limit cycles appear in its neighborhood. We observe that Theorem <ref> does not provide information about the hyperbolicity of the limit cycles involved. In the following preliminary result we can provide this information because we are in the polynomial setting. In fact, it is not difficult to see that it holds even for the case in which X is analytic. Let X_α be the family of rotated vector fields (<ref>) and suppose that X_α_0 has a limit cycle γ_α_0. Then, for |α-α_0|>0 small enough, all the limit cycles detailed in Theorem <ref> that bifurcate from γ_α_0 are hyperbolic. For simplicity, let us assume α_0=0. If γ_0 is hyperbolic, then there is nothing to prove. Hence, suppose that γ_0 is not hyperbolic. Let I⊂ℝ be a small neighborhood of α_0=0 and Σ be a small normal section of γ_0, endowed with a coordinate system s∈ℝ such that s=0 at p, where {p}=γ_0∩Σ. Let D: I×Σ→ℝ be its associated displacement map. Since X_α is analytic in (x,y;α), it follows that D is well defined and analytic. Let T>0 be the period of γ_0 and let γ_0(t) be the parametrization of γ_0 given by the flow of X_0 and such that γ_0(0)=p. It follows from Perko <cit.> that, for some C∈ℝ\{0}, ∂ D/∂α(0,0) =C∫_0^T(e^-∫_0^tdiv(γ_0(τ)) dτ)(X_α∧∂ X_α/∂α)(γ_0(t);0) dt =C∫_0^T(e^-∫_0^tdiv(γ_0(τ)) dτ)(P^2+Q^2)(γ_0(t);0) dt ≠ 0. Therefore, from the Implicit Function Theorem we have that there is a unique function α=α(s), with α(0)=0, such that D(α(s),s)=0. Moreover, since D is analytic, it follows that α(s) is also analytic. Differentiating (<ref>) with respect to s we obtain ∂ D/∂α(α(s),s)α'(s)+∂ D/∂ s(α(s),s)=0. From (<ref>) we have that ∂ D/∂α(α(s),s)≠0 for |s| small. Hence, it follows from (<ref>) that α'(s)=-(∂ D/∂ s)/(∂ D/∂α)(α(s),s). Since γ_0 is not hyperbolic, it follows that ∂ D/∂ s(0,0)=0 and thus from (<ref>) we have α'(0)=0. Since α' is an analytic function, either 0 is an isolated zero of α' or α'(s)≡0 (and in particular α(s)≡0) in a neighborhood of s=0. Let us discard this second possibility: in that case, from (<ref>), we would have D(0,s)≡0 for |s| small and thus γ_0 would belong to a continuous band of periodic orbits, contradicting the definition of limit cycle. Therefore, it follows from (<ref>) that ∂ D/∂ s(α(s),s)=-∂ D/∂α(α(s),s)α'(s)≠0, for |s|>0 small. Hence, any limit cycle of X_α near γ_0 is hyperbolic, for |α|>0 small, as we wanted to prove. We observe that Perko <cit.> provided a similar result about the hyperbolicity of the limit cycles considered in Theorem <ref>(b). For more details about the theory of rotated vector fields and its generalizations, we refer to Han <cit.>, Perko <cit.> and the references therein. § PROOF OF THE MAIN THEOREMS Given X=(P,Q)∈𝒳^n, let π_h(X) be its number of hyperbolic limit cycles. Observe that in general we have π_h(X)⩽π(X). In this paper we also work with the possibility of π(X)=∞ for some X∈𝒳^n.
We choose to do this because, although Il’yashenko <cit.> and Écalle <cit.> independently claimed to have proved that this is impossible, it seems that some of their results have recently come under discussion. For instance, in the recent work <cit.> a possible gap was found in Il’yashenko's proof. Our results are not based on these finiteness results. Let X∈𝒳^n. Then the following statements hold. * If π(X)<∞, then there is Y∈𝒳^n such that π_h(Y)⩾π(X). * If π(X)=∞, then for each k∈ℕ there is Y_k∈𝒳^n such that π_h(Y_k)⩾ k. Let X∈𝒳^n and X_α be its respective family of rotated vector fields, given by (<ref>). Let also: * h∈ℤ_⩾0∪∞ be the number of hyperbolic limit cycles of X; * m∈ℤ_⩾0∪∞ be the number of non-hyperbolic limit cycles of X of odd multiplicity; * m^±∈ℤ_⩾0∪∞ be the number of non-hyperbolic limit cycles γ of X of even multiplicity and such that γ bifurcates into two hyperbolic limit cycles for ±α>0 small. Observe that π(X)=h+m+m^++m^-. Suppose first π(X)<∞. Without loss of generality, suppose m^+⩾ m^-. It follows from Proposition <ref> that X_α has at least h+m+2m^+ hyperbolic limit cycles for α>0 small enough. Hence, if we take Y=X_α, then Y∈𝒳^n and π_h(Y)⩾ h+m+2m^+⩾ h+m+m^++m^-=π(X). If π(X)=∞, then at least one of h, m, m^+ or m^- is infinite. In any case we apply the same reasoning to a sequence of vector fields having an increasing number of limit cycles, obtaining the final desired sequence of vector fields. Suppose first ℋ(n)<∞ and let Z∈𝒳^n be such that π(Z)=ℋ(n). It follows from Proposition <ref> that there is Y∈𝒳^n such that π_h(Y)⩾π(Z). Hence, it follows from the definition of ℋ(n) that π(Y)=π_h(Y)=π(Z)=ℋ(n). Hence, every limit cycle of Y is hyperbolic and any vector field in 𝒳^n, close enough to Y, also has exactly ℋ(n) limit cycles, all of them hyperbolic. In particular, there is an arbitrarily small perturbation X∈Σ^n_h of Y such that π(X)=ℋ(n). Suppose now ℋ(n)=∞. Observe that there is a sequence (Z_j), with Z_j∈𝒳^n, such that π(Z_j)→∞ and π(Z_j)<∞ for every j∈ℕ, or there is Z∈𝒳^n such that π(Z)=∞. In either case it follows from statement (a) or (b) of Proposition <ref>, respectively, that for each k∈ℕ there is Y_k∈𝒳^n such that π_h(Y_k)⩾ k. Therefore, for each k∈ℕ we can take a small enough perturbation W_k∈Σ^n of Y_k such that π_h(W_k)⩾ k. It follows from the definition of Σ^n that π(W_k)<∞. Moreover, some of these limit cycles may be non-hyperbolic and with odd multiplicity. Thus, it follows, similarly to the proof of Proposition <ref>, from the structural stability of W_k and from the fact that Σ^n is open and dense in 𝒳^n, that we can take a small enough rotation X_k∈Σ^n of W_k such that the following statements hold. * The hyperbolic limit cycles persist. * The non-hyperbolic limit cycles become hyperbolic. * X_k and W_k are topologically equivalent. In particular, it follows from (iii) that we do not have the bifurcation of new limit cycles and thus we conclude that X_k∈Σ_h^n and π(X_k)⩾ k. We now prove a technical lemma that we will need to prove Theorem <ref>. Let X∈𝒳^n and B⊂ℝ^2 be a closed ball centered at the origin. Then there is an arbitrarily small perturbation Y of X having a regular point p∈ℝ^2\ B such that ℓ∩ B=∅, where ℓ is the straight line p+sY(p), s∈ℝ. It follows from Shafer <cit.> that we can take an arbitrarily small perturbation Y∈𝒳^n of X such that Y has at most a finite number of singularities. Let Y=(P,Q) and let P_i and Q_i, i∈{0,…,n}, be homogeneous polynomials of degree i such that P=P_0+…+P_n and Q=Q_0+…+Q_n.
Replacing Y by an arbitrarily small perturbation if necessary, we can also suppose P_n(1,0)Q_n(1,0)≠0. Let p=(x,0), x>0. Since Y has at most a finite number of singularities, there is x_0>0 such that if x>x_0, then p is a regular point of Y. Let ℓ^+ and ℓ^- be the two straight lines tangent to B and passing through p. Let θ=θ(x) be the angle between ℓ^± and the x-axis and observe that lim_x→∞θ(x)=0. Let also φ=φ(x) be the angle between ℓ and the x-axis, which is given by φ(x)=arctan(Q(x,0)/P(x,0)), see Figure <ref>. Since P_n(1,0)Q_n(1,0)≠0, it follows that lim_x→∞φ(x)=arctan(Q_n(1,0)/P_n(1,0))≠0. As a consequence, |φ(x)|>|θ(x)| for x>0 large enough and thus ℓ∩ B=∅. Suppose first ℋ(n)<∞. It follows from Theorem <ref>(a) that there is Z∈Σ_h^n such that π(Z)=ℋ(n). Let B⊂ℝ^2 be a closed ball centered at the origin and such that all the limit cycles of Z are in the interior of B. From Lemma <ref> and the structural stability of Z, we can suppose that Z has a regular point p∈ℝ^2\ B such that p+sZ(p)∉B for every s∈ℝ. Let Y=(P,Q)∈Σ_h^n be the vector field obtained from Z by translating p to the origin. Let X=(R,S)∈𝒳^n+1 be given by R(x,y)=(ax+by)P(x,y), S(x,y)=(ax+by)Q(x,y), with a=-Q(0,0) and b=P(0,0). Let ℓ⊂ℝ^2 be the line given by ax+by=0 and observe that X and Y are equal on each connected component of ℝ^2\ℓ, except for the rescaling of time characterized by dt/dτ=ax+by. It follows from Lemma <ref> that B∩ℓ=∅ and thus π_h(X)=π(X)=π(Y). Observe that ℓ is a line of singularities of X. In particular, the origin is a singularity of X and its Jacobian matrix is given by DX(0,0)=([ aP(0,0) bP(0,0); aQ(0,0) bQ(0,0) ])=([ ab b^2; -a^2 -ab ]). Hence, det DX(0,0)=0 and Tr DX(0,0)=0. Let X_ε,δ=(R_ε,S_δ) be given by R_ε(x,y)=(ax+(b+ε)y)P(x,y), S_δ(x,y)=((a+δ)x+by)Q(x,y), and observe that we can take |ε|>0 and |δ|>0 small enough such that the following statements hold. * All the hyperbolic limit cycles inside B persist. * The origin is an isolated singularity. * det DX_ε,δ(0,0)>0 and Tr DX_ε,δ(0,0)=0. Hence, the origin is a monodromic singularity of X_ε,δ. Let L_1 be its first Lyapunov constant (see Andronov et al. <cit.>). Except perhaps for an arbitrarily small perturbation of the nonlinear terms of X_ε,δ, we can suppose L_1≠0. Therefore, we can take another small enough perturbation W∈𝒳^n+1 of X_ε,δ such that a limit cycle bifurcates from the origin, while the others persist. Hence we obtain π(W)⩾π(Y)+1=ℋ(n)+1, and thus ℋ(n+1)⩾ℋ(n)+1. Suppose now ℋ(n)=∞. It follows from Theorem <ref>(b) that there is a sequence (Z_k), with Z_k∈Σ_h^n, such that π(Z_k)→∞. Since π(Z_k)<∞, we can apply the above reasoning to each Z_k, obtaining a sequence (W_k), with W_k∈𝒳^n+1, such that π(W_k)→∞ and thus proving that ℋ(n+1)=∞. § FINAL REMARKS AND A FOLKLORE RESULT Theorem <ref> is not the first known result about recurrence properties of ℋ(n). It follows from the proof of Christopher and Lloyd <cit.> that ℋ(2n+1)⩾4ℋ(n). Roughly speaking, given X∈𝒳^n, the authors translate all the limit cycles of X to the first quadrant and then apply the non-invertible transformation (x,y)↦(u^2,v^2), followed by the rescaling of time dt/dτ=2uv. In this way one obtains Y∈𝒳^2n+1 with a diffeomorphic copy of X in each open quadrant. The challenge of Theorem <ref> has been to relate ℋ(n+1) with ℋ(n). It is much easier, for example, to prove that ℋ(n+2)⩾ℋ(n)+1. Indeed, given X∈𝒳^n let Y=(x^2+y^2)X∈𝒳^n+2 and observe that Y is equivalent to X except at the origin, where it has an extra degenerate singularity.
Hence, similarly to the end of the proof of Theorem <ref>, we can take a small perturbation of Y creating an extra limit cycle. We end this note with the following folklore result. Let X be a planar analytic vector field. Then X has an enumerable number of limit cycles. In particular, ℋ(n)⩽ℵ_0 for every n∈ℕ. If X has no limit cycles, then there is nothing to prove. Suppose therefore that X has at least one limit cycle and let Γ={γ_a}_a∈ A be an indexation of all its limit cycles, A≠∅. For each a∈ A, set δ_a=inf{d(γ_a,γ_b) b∈ A, b≠ a}, where d(γ_a,γ_b) is the usual distance between the compact sets γ_a and γ_b, d(γ_a,γ_b)=min{||q_a-q_b|| q_a∈γ_a, q_b∈γ_b}. Since X is analytic, it follows that γ_a must be isolated (see <cit.>) and thus δ_a>0 for every a∈ A. Let N_a⊂ℝ^2 be the open δ_a/2-neighborhood of γ_a, a∈ A. Observe that if a≠ b, then N_a∩ N_b=∅ (for otherwise d(γ_a,γ_b)<max{δ_a,δ_b}). For each a∈ A, choose r_a∈ N_a∩ℚ^2 and define i(a)=r_a. Observe that r_a≠ r_b if a≠ b. Hence, we have an injective map i A→ℚ^2 and thus A is enumerable. Notice that Proposition <ref> is optimal for the analytic case. For instance, the planar analytic vector field ẋ=-y+xsin(x^2+y^2), ẏ=x+ysin(x^2+y^2), has infinitely many limit cycles, given by x^2+y^2=kπ, with k∈ℤ_>0. § ACKNOWLEDGMENTS This work is supported by the Spanish State Research Agency, through the projects PID2022-136613NB-I00 grant and the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D (CEX2020-001084-M), grant 2021-SGR-00113 from AGAUR, Generalitat de Ca­ta­lu­nya, and by São Paulo Research Foundation (FAPESP), grants 2019/10269-3, 2021/01799-9 and 2022/14353-1. 99 Andronov A. A. Andronov et al Theory of Bifurcations of Dynamic Systems on a Plane, Wiley, New York & Toronto (1973). Browder F. E. Browder, Mathematical Developments Arising from Hilbert Problems, Proc. Sympos. Pure Math., volume XXVIII, part I (1976). ChenWang1979 L. Chen and M. Wang, The relative position, and the number, of limit cycles of a quadratic differential system, Acta Math. Sinica (Chin. Ser.) 22, 751–758 (1979). ChrLlo1995 C. Christopher and N. G. Lloyd, Polynomial Systems: A Lower Bound for the Hilbert Numbers, Proc. R. Soc. Lond., Ser. A 450, No. 1938, 219–224 (1995). Duff G. F. D. Duff, Limit-cycles and rotated vector fields, Ann. Math. (2) 57, 15–31 (1953). Ecalle J. Écalle, Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac, Actualités Mathématiques. Paris: Hermann, Éditeurs des Sciences et des Arts. Han1999 M. Han, Global behavior of limit cycles in rotated vector fields, J. Differ. Equations 151, No. 1, 20–35 (1999). HanLi2012 M. Han and J. Li, Lower bounds for the Hilbert number of polynomial systems, J. Differ. Equations 252, No. 4, 3278–3304 (2012). Shenko2 Y. S. Il’yashenko, Finiteness theorems for limit cycles, Translations of Mathematical Monographs, American Mathematical Society (1991). LiLiuYang2009 C. Li, C. Liu and J. Yang, A cubic system with thirteen limit cycles, J. Differ. Equations 246, No. 9, 3609–3619 (2009). Lijibin2003 J. Li, Hilbert's 16th Problem and bifurcations of Planar Polynomial Vector Fields, Int. J. Bifurc. Chaos 13, 47–106 (2003). Pei1962 M. M. Peixoto, Structural stability on two-dimensional manifolds, Topology 1, 101–120 (1962). Perko1992 L. M. Perko, Bifurcation of limit cycles: Geometric theory, Proc. Am. Math. Soc. 114, No. 1, 225–236 (1992). Perko2001 L. M. Perko, Differential equations and dynamical systems, vol. 
7 of Texts in Applied Mathematics, Springer-Verlag, New York, third ed, 2001. ProTor2019 R. Prohens and J. Torregrosa, New lower bounds for the Hilbert numbers using reversible centers, Nonlinearity 32, No. 1, 331–355 (2019). Sha1987 D. Shafer, Structural stability and generic properties of planar polynomial vector fields, Rev. Mat. Iberoam. 3, No. 3-4, 337–355 (1987). Santana P. Santana, On the structural instability of non-hyperbolic limit cycles on planar polynomial vector fields, to appear in São Paulo J. Math. Sci. (2024). Son1980 S. Songling, A concrete example of the existence of four limit cycles for plane quadratic systems, Sci. Sin. 23, 153–158 (1980). Soto1985 J. Sotomayor, Stable planar polynomial vector fields, Rev. Mat. Iberoam. 1, No. 2, 15–23 (1985). Yeung M. Yeung, On the monograph “Finiteness Theorems for limit cycles” and a special case of alternant cycles, Preprint, arXiv:2402.12506 (2024).
http://arxiv.org/abs/2407.12519v1
20240717121644
Causality-inspired Discriminative Feature Learning in Triple Domains for Gait Recognition
[ "Haijun Xiong", "Bin Feng", "Xinggang Wang", "Wenyu Liu" ]
cs.CV
[ "cs.CV" ]
Hubei Key Lab of Smart Internet, School of EIC, Huazhong University of Science and Technology, Wuhan, China {xionghj,fengbin,xgwang,liuwy}@hust.edu.cn Causality-inspired Discriminative Feature Learning in Triple Domains for Gait Recognition Haijun Xiong 0000-0003-0856-8250, Bin Feng 0000-0003-2166-751X, Xinggang Wang 0000-0001-6732-7823, Wenyu Liu 0000-0002-4582-7488 July 22, 2024 § ABSTRACT Gait recognition is a biometric technology that distinguishes individuals by their walking patterns. However, previous methods face challenges in accurately extracting identity features because these features often become entangled with non-identity clues. To address this challenge, we propose CLTD, a causality-inspired discriminative feature learning module designed to effectively eliminate the influence of confounders in triple domains, i.e., spatial, temporal, and spectral. Specifically, we utilize the Cross Pixel-wise Attention Generator (CPAG) to generate attention distributions for factual and counterfactual features in the spatial and temporal domains. Then, we introduce the Fourier Projection Head (FPH) to project spatial features into the spectral space, which preserves essential information while reducing computational costs. Additionally, we employ an optimization method with contrastive learning to enforce semantic consistency constraints across sequences from the same subject. Our approach has demonstrated significant performance improvements on challenging datasets, proving its effectiveness. Moreover, it can be seamlessly integrated into existing gait recognition methods. § INTRODUCTION Gait recognition is a long-distance biometric identification technology that uses unique walking patterns for individual recognition. It has garnered significant attention due to its wide applications in surveillance and healthcare <cit.>. However, accurately identifying individuals presents a challenge due to various factors influencing performance. There are two categories of gait recognition methods: appearance-based <cit.> and model-based <cit.>. Appearance-based approaches have gained more attention recently because of their superior performance compared to model-based approaches. However, the entanglement of discriminative identity (ID) features with non-identity (non-ID) clues makes it difficult to effectively and efficiently extract features. Here, non-ID clues such as noise, clothing, and carrying conditions act as extrinsic confounders to identities. As depicted in <ref>, most of the current approaches focus on extracting ID-intrinsic features directly from the entangled space, which is challenging. This entanglement biases gait recognition models towards non-ID clues, ultimately compromising performance <cit.>. The core idea of this work is to establish a consistent latent space where non-ID clues can be effectively eliminated. The Structural Causal Model (SCM) provides a fundamental framework for describing causal mechanisms within a system <cit.>, with applications spanning diverse fields such as computer vision <cit.>, natural language processing <cit.>, and recommender systems <cit.>. Recently, causal analysis has been explored in gait recognition, with notable contributions from the Generative Counterfactual Intervention framework (GaitGCI) <cit.>.
GaitGCI focuses on mitigating the influence of confounders during the feature learning process, but it has several limitations. Firstly, it only addresses confounders at the output stage of the backbone network, overlooking the intricate formation and effects of confounders across different stages. Secondly, its causal intervention is confined to spatial features, neglecting potential confounders in other domains such as temporal and spectral. Lastly, the absence of explicit semantic consistency constraints may compromise the consistency of learned identity representations for the same subject, hindering overall performance. In contrast, we propose to leverage the relationship between spatial and spectral domain signals. We believe that if confounders exist in spatial signals, they are also likely present in the spectral domain. We can effectively describe and separate signals by exploiting spectral representation, thereby eliminating confounders. This notion has been validated in various computer vision tasks <cit.>, underscoring its potential efficacy in gait recognition. Based on these analyses, we propose a novel module called , which is designed to mitigate the influence of confounders within spatial, temporal, and spectral domains. First, we perform the Cross Pixel-wise Attention Generator (CPAG) to capture contextual information for each pixel in the time, height, and width dimensions, thereby generating high-quality attention distributions. This ensures the effective processing of spatial and temporal information. Next, we present the Fourier Projection Head (FPH), which utilizes the Fast Fourier Transform (FFT) to project the spatial feature map into the Fourier spectral domain. This transformation preserves essential information while significantly reducing computational costs, enabling efficient processing in the spectral domain. Furthermore, we introduce contrastive learning to supervise the generated factual and counterfactual features, thus ensuring consistent semantic representations of ID-intrinsic clues across sequences from the same subject. Moreover, we propose to utilize at multiple stages of the network, not just in the final stage, to comprehensively eliminate the effects of confounders. Importantly, serves as a plug-and-play training paradigm, compatible with various gait recognition methods. We validate the effectiveness and versatility of through experiments conducted on multiple datasets, including OU-MVLP <cit.>, CASIA-B <cit.>, GREW <cit.>, and Gait3D <cit.>. The results demonstrate significant performance improvements when is integrated with various gait recognition methods, leading to state-of-the-art results on multiple datasets, with a maximum improvement of 11.1% (from 60.2% to 71.3%, refer to <ref>). The main contributions of our work can be summarized as follows: * We solve the gait recognition problem from a new perspective, , causality-based contrastive learning. This way aims to establish a consistent latent space, reducing the bias introduced by non-ID clues. * We propose , which leverages the Cross Pixel-wise Attention Generator and Fourier Projection Head to eliminate the impact of confounders in triple domains effectively. In addition, we incorporate contrastive learning to ensure the consistency of ID-intrinsic semantics information across sequences from the same subject. 
* Demonstrating the effectiveness and versatility of through extensive experiments, achieving state-of-the-art results on various datasets and significantly improving performance on multiple baseline methods. § RELATED WORK Gait Recognition. Gait recognition methods can be generally categorized into two groups: model-based and appearance-based. Model-based approaches <cit.> use a priori model representing human shape and dynamics, extracting gait features by fitting these models. For instance, methods in <cit.> utilize ResGCN for spatial-temporal feature extraction from skeleton sequences. In contrast, GPGait <cit.> focuses on efficient fine-grained feature extraction. On the other hand, appearance-based methods <cit.> directly extract gait features from silhouette sequences. Han  <cit.> introduced Gait Energy Images (GEIs) obtained by averaging multiple silhouettes from a gait cycle sequence. GaitSet<cit.> is the first method to treat gait as a set for learning discriminative feature representations. Huang  <cit.> presented fine-grained feature representations and multi-scale temporal motion modeling methods. SMPLGait <cit.> introduces a multimodal method, which uses 3D human meshes <cit.> to enhance silhouette information. Chai  <cit.> implemented a second-order motion extraction module and view embedding based on 3D convolution to learn motion information. Wang  <cit.> presented DyGait focusing on extracting feature representations of dynamic body parts. Additionally, some other methods <cit.> explored the influence of part-based multi-scale motion information. Causality in Computer Vision. Causality has emerged as a crucial concept in various computer vision tasks, aiding in the representation and reasoning of uncertain knowledge. Several recent works have leveraged causal models to address different challenges. Yang  <cit.> proposed a causal attention model to eliminate confounding effects in vision-language attention models, enhancing the interpretability and robustness of the model. Chen  <cit.> presented a meta-causal learning method that learns to infer the causes of domain shift between the auxiliary and source domains during training, facilitating domain adaptation and transfer learning tasks. Miao  <cit.> developed CauSSL a framework for semi-supervised medical image segmentation, which enhances algorithmic independence between two branches using causal mechanisms. Dou  <cit.> proposed GaitGCI, first introducing causality into gait recognition. GaitGCI focuses on intervening causally on the final spatial feature to mitigate confounding effects. In contrast to GaitGCI, our proposed approach eliminates the impact of confounders in triple domains, namely spatial, temporal, and spectral domains. This allows for a more comprehensive elimination of confounders and improves the discriminative ability of the gait recognition model. Contrastive Learning. Contrastive learning is a powerful technique, aiming to decrease the distance between similar samples while increasing the gap between dissimilar ones. Guo  <cit.> proposed AimCLR, emphasizing extreme augmentations to improve the universality of learned representations. Rao and Miao <cit.> presented TranSG to capture skeleton relationships and spatial-temporal semantic information. Quan  <cit.> introduced SemCL, specifically targeting object extraction, significantly improving the spatial comprehension of pre-trained models. 
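The contrastive objectives surveyed above are typically instantiated as an InfoNCE-style loss. The following minimal sketch is generic: the tensor names, the use of cosine similarity, and the temperature value are our own assumptions for illustration, and it is not the exact formulation adopted later in this paper.

# Generic InfoNCE-style contrastive loss: pulls an anchor towards its positives and
# pushes it away from negatives. Shapes, temperature, and cosine similarity are
# illustrative assumptions, not this paper's exact loss.
import torch
import torch.nn.functional as F

def info_nce(anchor, positives, negatives, tau=0.1):
    # anchor: (d,), positives: (P, d), negatives: (N, d) feature vectors.
    a = F.normalize(anchor, dim=-1)
    pos = F.normalize(positives, dim=-1)
    neg = F.normalize(negatives, dim=-1)
    sim_pos = pos @ a / tau          # (P,)
    sim_neg = neg @ a / tau          # (N,)
    denom = torch.logsumexp(torch.cat([sim_pos, sim_neg]), dim=0)
    # average the per-positive terms, as is common for multi-positive InfoNCE
    return -(sim_pos - denom).mean()

# toy usage
torch.manual_seed(0)
loss = info_nce(torch.randn(128), torch.randn(4, 128), torch.randn(32, 128))
print(float(loss))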
Inspired by these, ensures semantic consistency between sequences of the same subject by incorporating contrastive learning. § METHOD §.§ Gait Recognition from Causal View [0,r, < g r a p h i c s > , Causal Graph illustration of our approach. ] Causal inference <cit.> refers to the process of determining the cause-and-effect relationship between different factors or events. Enlightened by previous excellent works <cit.>, we introduce causality into gait recognition to separate various confounders and ID-intrinsic clues. The causal graph in <ref> is constructed using SCM <cit.> to understand causality in gait recognition. In this representation, X denotes features generated by the gait model, A denotes ID-intrinsic clues, C stands for confounders (non-ID clues) entangled with A, and Y represents the ground truth, , the identity. The solid line indicates a direct causal relationship, , X→Y denotes Y is caused by X, while the dotted line indicates there exists statistical dependence between two factors. In the ideal scenario, X should be learned solely from ID-intrinsic clues, denoted as A→X→Y. However, in practical cases, the existence of confounders often entangles A with non-ID clues C, represented as (C, A) →X→Y, where (C, A) denotes that the two factors are entangled. It is difficult to distill X from confounders C due to the severe entanglement of various influencing factors with A. Performing intervention on confounders C, such as do∼operation denoted as do(C) for specifying the exact value of C and isolating its cause, allows us to eliminate the corresponding effects. In <ref>, || denotes the do∼operation. §.§ Architecture Based on the analysis of causality, we introduce to enhance the learning of discriminative features by eliminating the influence of confounders, while preserving ID-intrinsic information. The overall architecture of our approach is illustrated in <ref>. For clarity, we adopt DyGait <cit.> as the backbone to construct our approach, and the versatility of will be demonstrated in <ref>. DyGait primarily consists of several fundamental DAM blocks. Given the wide adoption of multi-scale feature representation <cit.> in various computer vision tasks, it is reasonable to infer that confounding information is distributed across multiple feature scales. Therefore, we propose incorporating at multiple stages of the network, not solely in the final stage, to effectively and comprehensively eliminate the effect of confounders. Specifically, we integrate a for each DAM to learn more discriminative feature representations across multiple feature grains. Each is responsible for eliminating the impact of confounders in triple domains by generating factual and counterfactual features simultaneously, supervised with a Factual and Counterfactual Loss. This ensures the gait recognition model remains free from confounders throughout the backbone. The experimental results in <ref> will demonstrate the effectiveness of this approach. It is crucial to clarify that is only employed during training, which incurs no additional computational cost or memory consumption during inference. §.§ The proposed is a plug-and-play module designed to eliminate the impact of confounders in the spatial, temporal, and spectral domains. It consists of two primary components: the Cross Pixel-wise Attention Generator and the Fourier Projection Head. Cross Pixel-wise Attention Generator. 
The Cross Pixel-wise Attention Generator, inspired by CCNet <cit.>, is introduced to establish high-quality factual and counterfactual attention. This is achieved by employing a separation operation in time, height, and width dimensions. This operation calculates correlations only between pixels that cross a specific pixel, effectively reducing computational complexity in time and space. [0,r, < g r a p h i c s > , The detailed structure of CPAG. ] The detailed structure of CPAG depicted in <ref>, involves several key steps. Given the input feature map f_i ∈ℝ^T× C× H× W, CPAG starts with a Spatial Pooling operation and two 1D CNNs applied to f_i, generating distinct feature maps Q and K, where Q, K∈ℝ^T× C. Subsequently, the temporal correlation matrix M_c ∈ℝ^T× T is generated through a matrix multiplication operation and a softmax layer. The process is formulated as: Q = Conv1D(SP(f_i)), K = Conv1D(SP(f_i)), M_c = Softmax(QK^𝖳 / γ), where SP(·) denotes the Spatial Pooling operation, and γ is a learnable scaling parameter to balance the learning magnitude of the key-query dot product. Simultaneously, CPAG performs an average pooling operation on f_i along the channel dimension. It is then horizontally mapped into V∈ℝ^T× H× W' via the Horizontal separate FC layer (H-FC) W_H ∈ℝ^H× W× W', where W' denotes the width of the i-th DAM's output f_i+1. After obtaining the temporal correlation matrix M_c and feature map V, CPAG utilizes matrix multiplication, Vertical separate FC layer (V-FC), and channel repeat operations to produce the output a_i, denoted as: a_i = Repeat((M_c V)^𝖳W_V ), where a_i ∈ℝ^T× C'× H' × W', and C' and H' refer to the channel and height of f_i+1, respectively. W_V ∈ℝ^W'× H× H' represents the V-FC operation like H-FC, and Repeat(·) represents the channel repeat operation. In naive self-attention <cit.>, the computational complexity of the matrix multiplication is quadratic to the spatial-temporal resolution of the input, i.e., 𝒪(2t^2h^2w^2c) for a feature map with t× c× h× w pixels. In contrast, our approach, which involves the separation operations of time, height, and width, significantly reduces the computational complexity to 𝒪(t^2(c+hw)). Therefore, CPAG dramatically lowers the computational cost compared to Self-Attention. Furthermore, horizontal and vertical decoupling enables CPAG to aggregate pixels at different locations along the horizontal and vertical directions, facilitating enhanced spatial interaction at the pixel level. This, in turn, generates high-quality distributions of factual and counterfactual attention. Fourier Projection Head. The distribution of information in the spatial domain is often scattered, posing challenges for effective extraction. In contrast, the Fourier spectrum typically concentrates most of the energy in the low-frequency region, making it more amenable to information extraction. Motivated by this observation, we introduce the Fourier Projection Head to transform the spatial feature map into the Fourier spectral domain through FFT. The subsequent step involves a Low-frequency Selector (LFS) to reduce dimensionality while preserving essential information. As depicted in <ref>, FPH takes x_i = a_i ⊙f_i+1∈ℝ^T× C× H× W as input and first employs FFT to convert x_i into the Fourier spectrum. Subsequently, the LFS is applied to the Fourier spectrum with a window of k× k to reduce dimensionality. 
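A minimal sketch of this transform-and-crop step is given below. The (T, C, H, W) layout, the use of fftshift to centre the low frequencies before the k×k window, and the sigmoid placement reflect our reading of the LFS defined just below; treat them as assumptions rather than the authors' reference implementation.

# Sketch of the FFT + low-frequency selection step: FFT the spatial feature map,
# concatenate real/imaginary parts along channels, and keep a central k x k window.
import torch

def low_freq_select(x, k=7):
    # x: (T, C, H, W) spatial feature map -> (T, 2C, k, k) cropped spectrum.
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    spec = torch.cat([spec.real, spec.imag], dim=1)          # concat real/imag on channels
    _, _, h, w = spec.shape
    top, left = h // 2 - k // 2, w // 2 - k // 2
    crop = spec[..., top:top + k, left:left + k]
    return torch.sigmoid(crop)

x = torch.randn(30, 64, 16, 11)       # e.g. 30 frames of a 64-channel feature map
print(low_freq_select(x).shape)       # torch.Size([30, 128, 7, 7])

The crop keeps only k^2 spectral positions per channel, which is what makes the subsequent projection inexpensive.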
The information is then projected into a vector x^F_i through an FC layer W_P ∈ℝ^2C× 2C_o and a Temporal Pooling (TP) operation, where C_o is the output dimension. The overall FPH process can be formulated as: x^real, x^imag = FFT(x_i), x^k = LFS_k× k(x^realcx^imag), x^F_i = TP(x^k W_P), where x^F_i ∈ℝ^C_o× k^2, c denotes concatenating the real and imaginary components, and LFS_k× k represents the low-frequency selector with a window of k× k, defined as: LFS_k× k(x) = Sigmoid(x[...,h-k/2:h-k/2+k, w-k/2:w-k/2+k]). §.§ Loss Function We introduce the Factual and Counterfactual Loss as a supervision mechanism for each in our approach. For the i-th , we obtain the factual feature x_f, and counterfactual feature x_cf. They are used along with the anchor feature x_a, the output of FPH taking f_i+1 as input, to formulate contrastive learning. Here we have omitted the symbol i for simplicity. Specifically, we employ InfoNCE <cit.> to minimize the gap between entangled and ID-intrinsic features while increasing the gap between entangled and confounder features, ensuring consistent semantic representations of ID-intrinsic clues across sequences from the same subject. This is formulated as: ℒ_NCE^fcf = -∑_x^+ ∈S_flogsim(x_a, x^+)/sim(x_a, x^+) + ∑_x∈S_cfsim(x_a, x), where sim(a,b) = a·b/‖a‖‖b‖ represents the similarity between two feature vectors a and b. S_f and S_cf denote the sets of factual features and counterfactual features, respectively, from the subject that x_a belongs to. Additionally, inspired by the work of Tang  <cit.> in alleviating context bias through the total direct effect (TDE) in causal inference for unbiased scene graph generation, we eliminate the impact of confounders by maximizing TDE and the factual probability. This is defined as: Y_f = P(Y|X = x_f) = W_c x_f, Y_cf = P(Y| do(X = x_cf)) =W_c x_cf, TDE = Y_f - Y_cf, ℒ_ce^fcf = ℒ_ce(TDE, y) + ℒ_ce(Y_f, y), where W_c and y denote the ID classifier and the ground truth, respectively. The minimization of ℒ_ce(Y_f, y) and ℒ_ce(TDE, y) enforces the factual feature x_f comprising all ID-intrinsic information, and the counterfactual feature x_cf only comprising non-ID clues, respectively. Thus, the influence of confounders can be eliminated by the TDE operation. The Factual and Counterfactual Loss of the i-th is expressed as: ℒ_fcf^i = ℒ_NCE^fcf + ℒ_ce^fcf. The overall loss function is the combination of the triplet loss ℒ_tri <cit.>, cross-entropy loss ℒ_ce, and multi-stage Factual and Counterfactual Loss ℒ_fcf: ℒ = ℒ_tri + ℒ_ce + ∑_i=1^3 λ_i ·ℒ_fcf^i, where λ_i is a hyper-parameter to balance the differences among features at various stages. § EXPERIMENTS We conduct extensive experiments on four popular datasets, , OU-MVLP <cit.>, CASIA-B <cit.>, GREW <cit.>, and Gait3D <cit.>, to demonstrate the effectiveness of the proposed approach. §.§ Datasets OU-MVLP <cit.> encompasses a significant number of indoor gait samples, totaling 10307 subjects. Each subject is represented by 14 view angles, evenly distributed between 0^∘ to 90^∘ and 180^∘ to 270^∘, with each angle containing two sequences (Seq#00-01). According to the protocol outlined in <cit.>, a total of 5153 subjects are employed as the training set, while the remaining 5154 are allocated for testing. CASIA-B <cit.> comprises 124 subjects, each captured from 11 distinct view angles evenly spread across a range from 0^∘ to 180^∘. 
Within each view angle, there are three different walking conditions: normal walking (NM#1-6), walking with bags (BG#1-2), and walking with coats (CL#1-2). Following the established protocol detailed in <cit.>, the initial 74 sequences are used for training. GREW <cit.> is a large-scale wild dataset containing a vast array of 26345 subjects and 128671 sequences. GREW is divided into three distinct parts: the training set (20000 subjects), the validation set (345 subjects), and the test set (6000 subjects). Gait3D <cit.> is a wild dataset comprising 4000 subjects and 25309 sequences. Following its protocol, 3000 subjects are designated for the training set, while the remaining subjects are allocated for the test set. §.§ Implementation Details Our approach is implemented using the PyTorch framework <cit.>, and all experiments are conducted on NVIDIA GeForce RTX3090 GPUs. For all experiments, the default input silhouette size is 64× 44. Baseline Models. To demonstrate the effectiveness of , we employ DyGait <cit.> as a backbone for comparison with some advanced methods. Additionally, to showcase the versatility of , we also use GaitSet <cit.>, GaitPart <cit.>, GaitGL <cit.>, GaitGCI <cit.>, and GaitBase <cit.> as baselines, which illustrates the improvements achieves across various frameworks on diverse datasets. Training Strategies. The experimental settings align with the established training strategies of baseline models, including the choice of the optimizer, scheduler, iteration count, and batch size. This way ensures a fair and consistent evaluation framework across all models. More training strategies are provided in the supplementary material. Hyper Parameters. To adapt our approach to the unique characteristics of each dataset, we tailor certain parameters accordingly. Specifically, for CASIA-B, we set the output dimension of FPH, denoted as C_o to 128, and the weights (λ_1, λ_2, λ_3) in <ref> to (0.05, 0.10, 0.15), respectively. For OU-MVLP, GREW, and Gait3D, we increase C_o and the weights (λ_1, λ_2, λ_3) to 256 and (0.1, 0.2, 0.3), respectively. Additionally, for all datasets, we set the number of s to 3, and the window size k× k of LFS in FPH to 7×7. §.§ Comparison with State-of-the-art Methods In this section, we compare our approach with several methods <cit.>. More experimental results are provided in the supplementary material. [0,r, 0.5! Method Venue NM BG CL Mean GaitSet <cit.> AAAI19 95.0 87.2 70.4 84.2 GaitPart <cit.> CVPR20 96.2 91.5 78.7 88.8 GLN <cit.> ECCV20 96.9 94.0 77.5 89.5 GaitGL <cit.> ICCV21 97.4 94.5 83.6 91.8 3DLocal <cit.> ICCV21 97.5 94.4 83.7 91.8 CSTL <cit.> ICCV21 97.8 93.6 84.2 91.9 LagrangeGait <cit.> CVPR22 96.9 93.5 86.5 92.3 MetaGait <cit.> ECCV22 98.1 95.2 86.9 93.4 DANet <cit.> CVPR23 98.0 95.9 89.9 94.6 GaitBase <cit.> CVPR23 97.6 94.0 77.4 89.6 GaitGCI <cit.> CVPR23 98.4 96.6 88.5 94.5 HSTL <cit.> ICCV23 98.1 95.9 88.9 94.3 DyGait <cit.> ICCV23 98.4 96.2 87.8 94.1 Ours - 98.6 96.4 89.3 94.8 , Comparison results of Rank-1 (%) with SOTA methods on CASIA-B, excluding identical-view case. ] Evaluation on OU-MVLP. As indicated in <ref>, we compare with previous methods on OU-MVLP. Our proposed approach outperforms the existing methods across most view angles (9 out of 14), which demonstrates its effectiveness. Specifically, it surpasses GaitGCI <cit.> and DyGait <cit.> by 0.2% and 0.3%, respectively, establishing SOTA performance and affirming its effectiveness. Evaluation on CASIA-B. 
As illustrated in <ref>, our approach undergoes a comparative analysis with published methods on three walking conditions under identical evaluation settings. Observing the table, our approach achieves an average accuracy of 94.8%, which is 0.2% and 0.3% higher than DANet <cit.> and GaitGCI <cit.>, respectively, establishing itself as the SOTA. Furthermore, compared to DyGait <cit.>, the integration of yields remarkable performance improvement, with an average improvement of 0.7%, particularly evident under the CL condition, where it increases by 1.5%. This underscores the strong discriminative feature learning capability of our approach, attributed to the effective elimination of confounders. It is important to note that the performance improvements on OU-MVLP and CASIA-B may not appear particularly significant, with Rank-1 accuracy improvement of 0.3% and 0.7% respectively. This is likely due to the relative simplicity of these datasets, where performance tends to reach saturation. However, the benefits of will become more pronounced when applied to more challenging datasets, as demonstrated in the following experiments. Evaluation on GREW and Gait3D. Validation of the effectiveness of our approach on two wild datasets, namely GREW and Gait3D, is presented in <ref>. Compared with DyGait <cit.> and GaitCSV <cit.>, our approach outperforms by 6.6% and 13.1% on GREW, and by 3.4% and 0.6% on Gait3D, respectively. This highlights the capability of our approach to eliminate the influence of confounders effectively, capture more discriminative features, and contribute to better performance. When considering the results collectively from <ref>, <ref>, and <ref>, it is evident that other methods experience a significant performance drop on wild datasets due to increased confounders. However, our approach still achieves 78.0% Rank-1 accuracy on GREW, establishing itself as the SOTA, and demonstrating strong robustness across diverse scenarios. §.§ Ablation Study In this section, we conduct ablation experiments on GREW and Gait3D to verify the design of . Versatility of . In <ref>, we investigate the versatility of with different gait recognition models, including GaitSet <cit.>, GaitPart <cit.>, GaitGL <cit.>, GaitBase <cit.>, GaitGCI <cit.>, and DyGait <cit.>. From <ref>, we can summarize the following valuable findings: (1) When integrated with our approach, all methods exhibit performance improvements on four datasets, indicating the efficacy of . (2) Particularly noteworthy is the significant performance improvement observed on GREW, with a maximum improvement of 11.1% (from 60.2% to 71.3%) when using GaitGCI without its causal module CIL as the backbone. [0,r, 0.45! Method GREW Gait3D Baseline 71.4 66.3 Baseline + CPAG 73.3 67.1 Baseline + CPAG + FPH 76.0 68.6 Ours 78.0 69.7 ,Study the effectiveness of proposed modules in on GREW and Gait3D, including CPAG, FPH and Factual and Counterfactual Loss. ] Effectiveness of proposed components. To assess the effectiveness of the proposed components, we conduct ablation experiments on GREW and Gait3D. As shown in <ref>, it can be observed that the integration of proposed modules leads to consistent performance improvements. The CPAG contributes to the improvement of recognition performance, with an average improvement of 1.9% on GREW compared to the baseline. Additionally, employing FPH and Factual and Counterfactual Loss together yields 5.7% performance improvement on GREW, underscoring the effectiveness of these components. [0,r, 0.45! 
Method 1-st 2-nd 3-rd GREW Gait3D Baseline 71.4 66.3 a 73.6 67.0 b 74.2 67.5 c 75.1 68.4 d 75.4 68.6 e 76.5 69.1 f 77.1 69.3 g 78.0 69.7 ,Impact of multiple stages. ] Impact of multiple stages. In <ref>, we analyze the impact of the number and position of s in terms of accuracy. From the results, we note that: (1) The impact of using at different positions on performance varies. The deeper layer it is used at, the more significant performance improvement is achieved. This phenomenon indicates the importance of eliminating confounders closer to the output layer. (2) The best performance is achieved when s are used simultaneously at multiple positions, resulting in a 6.6% improvement compared to the baseline on GREW. This highlights the complementary nature of when integrated at different positions within the network. [0,r,0.4! Method Cross-Entropy InfoNCE GREW Gait3D Baseline 71.4 66.3 a 76.0 68.6 b 77.3 69.2 c 78.0 69.7 , Impact of loss function. ] Effectiveness of factual cross-entropy loss and InfoNCE loss. As reported in <ref>, we conduct experiments to investigate the effectiveness of factual cross-entropy loss and InfoNCE loss. We find that both losses contribute to improved recognition performance, and using them together results in even higher performance, demonstrating their complementary properties. [0,r,0.4! 2*k× k 2c|GREW 2cGait3D 2-5 Rank-1 Rank-5 Rank-1 Rank-5 3× 3 76.4 85.2 66.7 81.6 5× 5 77.0 86.4 68.3 83.9 7× 7 78.0 87.8 69.7 85.2 9× 9 77.2 86.6 68.9 84.7 11× 11 76.3 84.9 67.4 82.5 , Impact of the window size k× k in FPH. ] Impact of the window size k× k in FPH. To study the impact of the value k, we conduct five experiments detailed in <ref> with k of 3, 5, 7, 9, and 11, respectively. The average accuracy initially increases as k rises, reaching a peak, and then degrades with larger k. This trend indicates that very small k may not adequately explore discriminative features, while very large k may distract discriminative features with less discriminative features. Based on the results in <ref>, k=7 is chosen for the best performance. §.§ Qualitative results Visualization of feature distributions. We employ t-SNE <cit.> to visualize the feature distributions between the baseline and our approach on CASIA-B, OU-MVLP, GREW, and Gait3D. As shown in <ref>, we observe that the feature distributions produced by our approach are more compact for the same subject compared to the baseline. Consequently, identities become easier to distinguish. These visualizations confirm that our approach effectively eliminates the interference of confounders and extracts the discriminative features that are genuinely relevant for identification. This provides additional support for the effectiveness of our approach. Visualization of heatmaps. As illustrated in <ref>, we present visualizations of the feature heatmaps processed after . The objective is to illustrate the superiority of our approach qualitatively. A notable observation is that counterfactual attention tends to focus more on confounders such as clothing and bagging, which are unrelated to identity. In contrast, factual attention concentrates on ID-intrinsic information. This observation demonstrates that can effectively eliminate the influence of confounders. § CONCLUSION In this work, we present a discriminative feature learning module based on causality. effectively eliminates the impact of confounders in spatial, temporal, and spectral domains. 
Thorough quantitative and qualitative experimental analyses demonstrate the effectiveness and versatility of our approach. Future work will explore the application of causality to other computer vision tasks, such as action recognition, person re-identification, and image restoration. Limitation. FPH improves performance while reducing time cost, but its feature selection is fixed; in future work we will consider an attention mechanism for adaptive selection. § ACKNOWLEDGEMENTS This work was supported by the National Key R&D Program of China under project 2023YFF0905401.
http://arxiv.org/abs/2407.12309v1
20240717041709
MEDFuse: Multimodal EHR Data Fusion with Masked Lab-Test Modeling and Large Language Models
[ "Thao Minh Nguyen Phan", "Cong-Tinh Dao", "Chenwei Wu", "Jian-Zhe Wang", "Shun Liu", "Jun-En Ding", "David Restrepo", "Feng Liu", "Fang-Ming Hung", "Wen-Chih Peng" ]
cs.CL
[ "cs.CL" ]
Both authors contributed equally to this research. [1] National Yang Ming Chiao Tung University Hsinchu Taiwan pnmthaoct, dcongtinh@gmail.com University of Michigan Michigan USA chenweiw@umich.edu National Yang Ming Chiao Tung University Hsinchu Taiwan jzwang.cs09@nycu.edu.tw Shanghai University of Finance and Ecomonics Shanghai China kevinliuleo@gmail.com Stevens Institute of Technology Hoboken New Jersey USA jding17@stevens.edu Massachusetts Institute of Technology Massachusetts USA davidres@mit.edu Stevens Institute of Technology Hoboken New Jersey USA fliu22@stevens.edu Far Eastern Memorial Hospital New Taipei Taiwan philip@mail.femh.org.tw National Yang Ming Chiao Tung University Hsinchu Taiwan wcpengcs@nycu.edu.tw § ABSTRACT Electronic health records (EHRs) are multimodal by nature, consisting of structured tabular features like lab tests and unstructured clinical notes. In real-life clinical practice, doctors use complementary multimodal EHR data sources to get a clearer picture of patients' health and support clinical decision-making. However, most EHR predictive models do not reflect these procedures, as they either focus on a single modality or overlook the inter-modality interactions/redundancy. In this work, we propose MEDFuse, a Multimodal EHR Data Fusion framework that incorporates masked lab-test modeling and large language models (LLMs) to effectively integrate structured and unstructured medical data. MEDFuse leverages multimodal embeddings extracted from two sources: LLMs fine-tuned on free clinical text and masked tabular transformers trained on structured lab test results. We design a disentangled transformer module, optimized by a mutual information loss to 1) decouple modality-specific and modality-shared information and 2) extract useful joint representation from the noise and redundancy present in clinical notes. Through comprehensive validation on the public MIMIC-III dataset and the in-house FEMH dataset, MEDFuse demonstrates great potential in advancing clinical predictions, achieving over 90% F1 score in the 10-disease multi-label classification task. 
[500]Applied computing Health care information systems [300]Computing methodologies Artificial intelligence Information systems Data mining Received 20 February 2024; revised 12 March 2024; accepted 5 June 2024 MEDFuse: Multimodal EHR Data Fusion with Masked Lab-Test Modeling and Large Language Models Wen-Chih Peng =========================================================================================== § INTRODUCTION Electronic Health Records (EHRs) are widely adopted in healthcare, documenting a wealth of heterogeneous patient data comprising tabular records and unstructured clinical notes. Tabular records encompass essential medical concepts such as diagnoses, medications, and laboratory test results, providing a structured overview of a patient's health. In contrast, clinical notes are extensive, free-text documents written by healthcare providers, offering a more detailed and nuanced account of the patient's history, clinical findings, and progress. The vast volume and diversity of multimodal data within EHRs present a unique opportunity for deep learning technologies to improve the prediction and management of diseases <cit.>. Nevertheless, the heterogeneous nature and the large amount of redundancy in multimodal EHR inputs pose significant challenges for medical AI practitioners seeking to distill and fuse clinically meaningful information for disease prediction. The primary question at hand is: can we effectively obtain and integrate useful representations of different EHR modalities to improve clinical predictions? Current research in deep EHR modeling <cit.> tends to focus on single data modalities, neglecting the integration of significant insights from unstructured medical notes and lab tests. This oversight can prevent a model from learning a more comprehensive view of a patient's health condition. Lab tests consist of high-dimensional, usually discrete tabular data; however, the conventional approach models structured EHR data as numerical vectors, overlooking the complex interactions between individual variables <cit.>. More recent work has moved towards deep learning architectures such as BERT and LLMs. LLMs fine-tuned on clinical data have shown promise on unstructured clinical notes in understanding tasks such as answering medical questions and making few-shot predictions <cit.>. However, there is a large body of evidence showing that LLMs still struggle to capture the nuances of numerical lab-test data and underperform on tabular prediction tasks <cit.>.
Another significant challenge in fusing information from different types of EHR data is How do we distill the overlapping clinically important features from both modalities? The information contained within different modalities can be categorized as either modality-specific or modality-shared <cit.>. For example, a patient's dietary habits would be considered information specific to the clinical notes modality; hypertension record and lab test value would be regarded as modality-shared information. Existing efforts like multimodal EHR contrastive learning <cit.> have primarily focused on integrating the modality-shared information by emphasizing the inherent consistency through alignment techniques. However, this approach often leads to the common information dominating the alignment and integration process, resulting in the distinctive perspectives offered by each modality being disregarded. Lab tests and clinical notes also possess highly different noise-to-information ratios, making it hard to distill useful joint representation from the noises and redundancy present in EHR. Therefore, there is an urgent need for methods to extract the diverse yet collaborative perspectives both modalities offer for informing therapeutic decision-making. In this work, we propose MEDFuse, a novel Multimodal EHR Data Fusion diagnostic model consisting of modality-specific embedding extractors followed by a disentangled transformer for multimodal fusion. Our model integrates embeddings between fine-tuned LLMs on unstructured clinical text and masked lab-test modeling models pre-trained on structured laboratory results. We further utilize a disentangled transformer optimized by mutual information loss to decouple modality-specific and modality-common information and learn meaningful joint representations for downstream prediction tasks. The key contributions of our work are as follows: * We propose a novel diagnostic model integrating structured lab test data and unstructured clinical notes, utilizing embeddings from fine-tuned LLMs and Masked Lab-Test Modeling, enhancing understanding of diverse clinical information. * We improved joint patient representation by incorporating a disentangled transformer module to effectively separate and integrate modality-specific and shared information, leading to better prediction outcomes across multiple diseases. * We conducted empirical evaluations to illustrate our model’s effectiveness through EHR datasets on various metrics. § RELATED WORK §.§ EHR For Multi-Label Disease Prediction Most recent works in medical Multi-Label Text Classification (MLTC) entirely rely on medical texts. For instance, Kim et al. <cit.> introduced a convolutional attention network designed to extract meaningful document representations across varying text lengths. Recent developments in LLMs, such as those discussed by Luo et al. (2022) and Elliot et al. <cit.>, utilize extensive data from medical literature for domain-specific tasks such as natural language inferencing. Additionally, some studies have employed graph neural networks (GNNs) to organize sequences from Electronic Medical Records (EMR) into hierarchical graphs <cit.>, or to integrate entity relationships from text using attention mechanisms in neural networks <cit.>. Nevertheless, many of these studies overlook the potential advantages of integrating medical expert knowledge from official guidelines and critical blood tests. 
A combined approach that harnesses both unstructured and structured data could offer extra help to offset issues like label and data scarcity in the medical domain. §.§ Extraction of Clinical Relevant Information from Multimodal EHR Recent work has leveraged self-supervised learning methods, like contrastive pretraining of clinical notes <cit.> and prompt-based large language modeling <cit.>, to facilitate multimodal learning of EHR data. The former encourages the alignment between paired patient data via contrastive loss, and the latter usually directly converts the structured data into text by prompt templates and feeds it into LLMs. However, if the data fusion process focuses solely on aligning the common information, such as diabetes history (text) and blood glucose levels (lab), the rich, modality-specific insights like exercise habits may be overlooked. This can limit understanding of the patient's health and impact predictive models and clinical decisions. Therefore, it is essential to develop multimodal EHR data fusion techniques that can effectively capture and integrate both modality-specific and modality-shared information. § METHOD §.§ Overview Given the clinical notes and lab tests of the patient’s current and historical visits, MEDFuse integrates clinical notes and lab test data to create a comprehensive patient representation for accurate multi-disease prediction. Firstly, as illustrated in Figure <ref>, in the Multimodal Embedding Extraction stage, textual data from clinical notes, including detailed patient information and medical history, are filtered and structured. Simultaneously, abnormal numerical data from various lab tests, such as triglyceride levels, HDL cholesterol, ALT (SGPT), and glucose levels, are extracted and formatted into textual data by prompt templates. The filtered clinical text is then processed by fine-tuned LLMs to generate embeddings that capture its semantic meaning. In parallel, the raw structured tabular data are processed using a domain-specific masked lab-test model to create embeddings representing the quantitative lab data. The two embeddings are then passed through the disentangled transformer module for multimodal fusion and final disease prediction. §.§ Multimodal Embedding Extraction §.§.§ Fine-tuning LLMs on Unstructured Text Clinical notes, comprising diverse fields derived from physicians' diagnoses, form the textual component of the dataset. We filtered the text by Chief Complaint, Present Illness, Medical History, and Medication on Admission. These specific fields are crucial for accurately predicting a patient's disease. To integrate this tabular data with the textual clinical notes, we converted the tabular data into a textual format, a process referred to as tabular feature extraction. This method involves extracting abnormal lab test results and formatting them into a text template — “These are abnormal results recorded: ITEMID <ITEMID>: <VALUE> <VALUEUOM>; ITEMID <ITEMID>: <VALUE> <VALUEUOM>; ...;”. Here, <ITEMID> refers to the specific lab test names, <VALUE> indicates the test values, and <VALUEUOM> denotes the units of measure for the test values. Inspired by the recent success of fine-tuning the language models <cit.> for classification purposes, we fine-tuned various LLMs for disease prediction. Our best-performing backbone is the publicly accessible Medical-Llama3-8B model <cit.>, which is fine-tuned from Meta-Llama-3-8B <cit.>. 
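As a concrete illustration of the tabular feature extraction described above, the short Python sketch below serializes abnormal lab rows into the quoted template. The field names (ITEMID, VALUE, VALUEUOM, FLAG) are assumptions modeled on the MIMIC-III LABEVENTS schema rather than the authors' actual code, and the resulting string is what gets appended to the filtered note fields before fine-tuning the Medical-Llama3-8B backbone.

def labs_to_text(lab_rows):
    """Serialize abnormal lab-test rows into the prompt template described above."""
    abnormal = [r for r in lab_rows if r.get("FLAG") == "abnormal"]
    if not abnormal:
        return "No abnormal results recorded."
    parts = ["ITEMID {}: {} {}".format(r["ITEMID"], r["VALUE"], r["VALUEUOM"]) for r in abnormal]
    return "These are abnormal results recorded: " + "; ".join(parts) + ";"

# Hypothetical example rows (item IDs and values are illustrative only).
example = [
    {"ITEMID": 50931, "VALUE": 182, "VALUEUOM": "mg/dL", "FLAG": "abnormal"},  # glucose
    {"ITEMID": 50852, "VALUE": 7.2, "VALUEUOM": "%", "FLAG": "abnormal"},      # HbA1c
]
print(labs_to_text(example))
# These are abnormal results recorded: ITEMID 50931: 182 mg/dL; ITEMID 50852: 7.2 %;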
It is trained on a comprehensive medical chatbot dataset and optimized for addressing health-related inquiries. We extracted latent vector representations from the final layer of the Llama decoder, which was originally engineered for autoregressive prediction of subsequent tokens. These extracted vectors were subsequently processed through feed-forward neural layers, effectively transforming them into a label space. The output from these transformations, in the form of logits, was then utilized to perform discriminative classification based on labels. This method aims to harness the latent embedding of LLMs to achieve targeted, efficient task adaptation. §.§.§ Masked Lab-Test Modeling The Masked Lab-Test Modeling (MLTM) module extends the Masked Autoencoders (MAE) <cit.> framework to reconstruct masked components based on observed components in EHR data. MLTM consists of a encoder that maps observed values to their representations and a decoder that reconstructs the masked values from the latent representations. To account for the inherent incompleteness in the imputation task, MLTM employs an additional masking approach in the training to make sure a uniform 75% value is masked out. The encoder applies a learnable linear encoding function wx + b to each unmasked x and passes through a transformer architecture, while the decoder operates on the embeddings of both observed and masked values. Positional encoding is added to the embeddings to preserve the lab test positions. The reconstruction loss is defined as the mean square error between the reconstructed and original values on the re-masked and unmasked sets. MLTM is designed with an asymmetric architecture, using a deep encoder and a shallow decoder to extract useful lab-test representations. §.§ Disentangled Transformer Module Initially, features from each modality are multiplied by the Kronecker product to approximate a joint distribution, C = A ⊗ B ∈ℝ^(m×(a)×(b)), effectively capturing the pairwise interactions. Self-attention is applied to Z_a and Z_b to obtain S_a and S_b, controlling the expressivity of each modality and preventing noisy features. Subsequently, the common information of the joint distribution is extracted via cross attention of Q_c, K_c+K_a+K_b, and V_c+V_a+V_b to model modality-common features S_c. To preserve modality-specific information, we minimize the Mutual Information (MI) loss between concatenated S_a+S_b and S_c. As the computation of mutual information is intractable, we calculate a variational upper bound called contrastive log-ratio upper bound (vCLUB) as an MI estimator to achieve MI minimization. Given two variables a and b, the L_v^CLUB(a,b) is calculated as follows <cit.>: L_v^CLUB(a, b) = 𝔼_p(a, b)[log q_θ (b|a)] - 𝔼_p(a)𝔼_p(b)[log q_θ (b|a)] = 1/N^2∑_i=1^N∑_j=1^N[ log q_θ (b_i|a_i) - log q_θ (b_j|a_i) ] We employ an MLP q_θ(b|a) to provide a variational approximation of q_θ(b|a), which can be optimized by maximizing the log-likelihood <cit.>: L_estimator(a, b) = 1/N∑_i=1^Nlog q_θ(b_i | a_i). The MI Loss is then calculated as: MI Loss = L_v^CLUB(S_a + S_b) + L_estimator(S_a + S_b, S_c). After optimizing the mutual information between the modality-specific information and the modality-common information, we utilize dense fusion <cit.> to enable denser interaction between modalities. 
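To make the vCLUB estimator above concrete, the PyTorch-style sketch below parameterizes the variational approximation q_θ(b|a) as a diagonal Gaussian MLP; this is a common choice for CLUB but an assumption here, since the paper does not spell out the exact parameterization. The network is fitted by maximizing the log-likelihood (L_estimator), while the sample-based upper bound is added to the task objective to minimize the mutual information between the concatenated modality-specific features and the modality-common features.

import torch
import torch.nn as nn

class CLUB(nn.Module):
    def __init__(self, dim_a, dim_b, hidden=256):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU(), nn.Linear(hidden, dim_b))
        self.logvar = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU(), nn.Linear(hidden, dim_b))

    def loglik(self, a, b):
        # L_estimator: average log q_theta(b_i | a_i); maximize this to fit the approximation.
        mu, logvar = self.mu(a), self.logvar(a)
        return (-(b - mu) ** 2 / logvar.exp() - logvar).sum(-1).mean()

    def mi_upper_bound(self, a, b):
        # Sample-based vCLUB estimate: mean positive log-density minus mean negative log-density.
        mu, logvar = self.mu(a), self.logvar(a)
        pos = (-(b - mu) ** 2 / logvar.exp()).sum(-1)                                        # log q(b_i | a_i)
        neg = (-(b.unsqueeze(0) - mu.unsqueeze(1)) ** 2 / logvar.exp().unsqueeze(1)).sum(-1)  # log q(b_j | a_i)
        return pos.mean() - neg.mean()

# Usage sketch: S_ab = torch.cat([S_a, S_b], dim=-1); the estimator is updated with -club.loglik(S_ab, S_c),
# while the main model adds club.mi_upper_bound(S_ab, S_c) to its objective.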
Instead of directly connecting a prediction classifier on top of the fused representation S_c, we learn deeper representations of the clinical notes and lab test features and add skip connections to concatenate with the fused representation, forming a final fused embedding: h_a=f_a (S_a) and h_b=f_b(S_b) where f_a and f_b are fully-connected layers. This final representation not only aggregates the modality-specific features but also incorporates the modality-common representation from the previous stage of the network: h_final=concat(h_a,S_c,h_b). Finally, a dense block g is used to generate y=g(h_final), and the model is trained by optimizing the prediction loss (focal loss for multilabel prediction). This allows for dense interaction of features from each modality, aggregating information across different stages of the network. The final loss optimizes a combination of the prediction objective and the mutual information loss, controlled by a hyperparameter λ with a value range of [0,1]. In this case, we choose a value of 0.1. Loss_final = L_objective(g(h_final)) + λ * MI(concat(S_a, S_b), S_c) § EXPERIMENTS §.§ Datasets and Metrics To evaluate the performance of the methods under comparison, we employed two real-world EHR datasets: MIMIC-III <cit.> and FEMH. We collected five years of EHRs from the Far Eastern Memorial Hospital (FEMH) in Taiwan from 2017 to 2021. The dataset includes 1,420,596 clinical notes, 387,392 lab results, and over 1,505 lab test items. The FEMH Research Ethics Review Committee [https://www.femhirb.org/] approved the study, and all data were de-identified. We selected patients with at least two recorded visits from each dataset. For the multi-label classification task in MIMIC-III, we identified the top 10 most prevalent conditions: “Hypertension, uncomplicated”, “Cardiac arrhythmias”, “Fluid and electrolyte disorders”, “Congestive heart failure”, “Diabetes w/o chronic complications”, “Chronic pulmonary disease”, “Valvular disease”, “Renal failure”, “Hypertension, complicated”, and “Other neurological disorders”. In the FEMH dataset, the top 10 most common diseases include “Hypertension”, “Diabetes”, “Heart disease”, “Cancer”, “Cerebrovascular Disease”, “Kidney Disease”, “Liver Disease”, “Asthma”, “Hyperlipidemia”, and “Lung Disease”. We applied several established multi-label classification metrics to assess model performance such as Macro-average and weighted-average F1-Scores, precision, recall, and accuracy on the test dataset <cit.>. §.§ Experimental Results Table <ref> and Table <ref> illustrate the training and validation performance of various models, highlighting the effectiveness of our proposed method on the MIMIC-III and FEMH datasets, respectively. In Table <ref>, our approach outperforms baseline models such as Bert <cit.>, Mistral-7B-v0.1 <cit.>, Llama-2-7B-hf <cit.>, Meta-Llama2-13B <cit.>, Meta-Llama3-8B <cit.>, and Medical-Llama3-8B <cit.> across all key metrics. Specifically, MEDFuse shows significant improvements over the best-performing LoRA fine-tuned LLM, Medical-Llama3-8B. On the test set, our model performs 1.49% better in macro F1 score, and similar trends are observed in other metrics. Table <ref> shows MEDFuse consistently outperforms Medical-Llama3-8B on the FEMH dataset. For example, training and validation in precision is a 1.53% increase, in the recall is a 2.07% increase, the training accuracy is 0.9311 (0.55% increase), and validation accuracy is 0.9296 (0.41% increase). 
These results validate the robustness and generalizability of our approach, underscoring its potential for accurate and reliable clinical predictions across diverse datasets. §.§ Ablation Study We conducted an ablation study to examine the contributions of various components in our proposed method, which integrates Medical-Llama3 with a transformer module, utilizing lab tests (LABTEXT) and clinical notes (TEXT). The results show a clear performance drop when any component is omitted. Removing both the transformer and LABTEXT results in a 4.81% drop in training precision and a 4.40% decrease in validation precision. The most substantial performance reduction occurs when both the transformer and TEXT are excluded, leading to a 29.76% decrease in training precision and a 30.66% decrease in validation precision. This underscores the indispensable role of TEXT and the transformer in our method. Even when only TEXT is removed, performance significantly deteriorates, with a 17.14% decline in training precision and a 14.60% decline in validation precision. These findings illustrate that each component contributes significantly to the model's overall efficacy. Our full model, combining LLMs and MLTM, demonstrates the highest performance, with a training accuracy of 0.9535 and a validation accuracy of 0.9122. § CONCLUSION In conclusion, we have presented a novel multi-disease diagnostic model that integrates multimodal data, closely mirroring real-life clinical decision-making. By combining fine-tuned LLMs with domain-specific transformers, we achieved enhanced synthesis of structured and unstructured medical data. Using a disentangled transformer further refined this integration, significantly improving disease prediction accuracy. Our experimental results across two practical EHR datasets demonstrated the proposed model's robustness and effectiveness. In future work, we aim to extend our model to cover more complex and rare diseases, enhance its interpretability for clinical use, and evaluate its performance on larger, more varied datasets. We will also explore the integration of real-time and other data modalities <cit.> to further align our model with dynamic clinical environments.
http://arxiv.org/abs/2407.12941v1
20240717181240
Robotic Arm Manipulation with Inverse Reinforcement Learning & TD-MPC
[ "Md Shoyib Hassan", "Sabir Md Sanaullah" ]
cs.RO
[ "cs.RO", "I.2.9" ]
§ ABSTRACT One unresolved issue is how to scale model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unpredictable dynamics. The main obstacles are the ability to learn from both visual and proprioceptive examples, creating algorithms that scale to high-dimensional state spaces, and learning strong dynamics models. In this work, we provide a gradient-based inverse reinforcement learning framework that learns cost functions purely from visual human demonstrations. The demonstrated behavior and trajectory are then optimized using TD visual model predictive control (MPC) and the learned cost functions. We test our system on fundamental object manipulation tasks on hardware. Keywords: inverse RL, LfD, TD-MPC, visual dynamics models, keypoint representations § INTRODUCTION Research on learning from demonstrations is booming because it allows robots to quickly acquire new skills. In inverse reinforcement learning (IRL), for example, demonstrations can assist in a number of ways by having the robot attempt to deduce the objectives or reward of the human demonstrator. The majority of IRL techniques call for expensive-to-obtain demonstrations that link action and state measurements. With the use of visual examples, we move closer to model-based inverse reinforcement learning for basic object manipulation tasks. Model-based IRL techniques are believed to be more sample-efficient and have the potential to facilitate generalization <cit.>. However, their model-free equivalents have had greater success so far in real-world robotics applications with unknown dynamics <cit.>. Model-based IRL still faces the following significant obstacles. Model-based inverse reinforcement learning consists of two nested optimization problems, an inner and an outer optimization step. Given a cost function and a transition model, the inner optimization problem optimizes a policy. The majority of earlier research <cit.> presumes that this robot-environment transition model is known; in reality, the robot usually lacks access to such a model. The outer optimization step adjusts the cost function so that the policy optimized in the inner step closely aligns with the observed demonstrations. Measuring the impact of changes in the cost function parameters on the resulting policy parameters makes this step very difficult. Previous work <cit.> approximates this optimization step by minimizing a manually created distance metric between policy rollouts and demonstrations. Although this approximation makes the outer optimization step feasible, learning the cost function may become unstable as a result. Our work addresses these issues and makes model-based IRL from visual demonstrations possible: we pre-train a dynamics model so that the robot can anticipate how its actions will alter a low-dimensional visual feature representation.
1) We train keypoint detectors <cit.> that extract low-dimensional vision features from both the robot and human demonstrations. Once the robot has observed a latent-state trajectory from a human demonstration, it can utilize its own dynamics model to optimize its actions to attain the same (relative) latent-state trajectory. 2) We employ an inverse reinforcement learning technique that makes learning cost functions possible by differentiating through the inner optimization step. The IRL algorithm is based on the latest developments in gradient-based bi-level optimization <cit.>. This technique enables us to calculate the gradients of the cost function parameters with respect to the inner-loop policy optimization step, resulting in an optimization process that is more stable and efficient. We assess our method by gathering human examples of fundamental object manipulation tasks, learning the cost functions involved, and replicating comparable actions on a Franka Panda. § LITERATURE REVIEW In recent years, the study of cost function learning from demonstrations provided by experts has generated significant interest. As a result, several approaches have been proposed to tackle the challenge of learning effective cost functions for robot tasks. §.§ Foundational Approaches in IRL <cit.> introduced apprenticeship learning via inverse reinforcement learning (IRL), which forms the foundation for many subsequent works. Their method focuses on learning a policy that performs as well as the demonstrator by matching the feature expectations between the expert and the learner. This method has been built upon in many ways, for example, by incorporating deep learning techniques for more complex tasks <cit.>. §.§ Probabilistic Frameworks <cit.> proposed using relative entropy in IRL, which provides a probabilistic framework for the learning process. Such a method helps to regularize the learning process and improves the stability of the learned cost functions. In a similar fashion, the KKT approach introduced by <cit.> learns cost functions by leveraging the Karush-Kuhn-Tucker conditions from optimization theory. §.§ Deep Learning Techniques <cit.> provided a significant contribution to this field by applying deep learning techniques to large-scale cost function learning for path planning. Their methodology exemplifies the effectiveness of deep networks at capturing complex cost structures from high-dimensional input spaces. §.§ Visual Learning <cit.> and <cit.> have made noteworthy contributions in the domain of visual learning by developing methods that learn from visual data to predict future frames and plan robot motion accordingly. These techniques use convolutional neural networks to process visual information and then learn predictive models that are used for control. §.§ Meta-Learning Algorithms Another recent advancement is work on meta-learning algorithms, which aim to improve the generalization and efficiency of learning algorithms. <cit.> introduced a generalized inner-loop meta-learning algorithm that adapts quickly to new tasks by learning an initialization which can be fine-tuned with minimal data. §.§ Visual Model Predictive Control The capability of optimizing action sequences to minimize the task cost of a given visual dynamics model is fundamental to many robot applications.
Many approaches have been developed to either learn a transition model directly in pixel space or learn it jointly with a latent-space encoding and a dynamics model in that space. For instance, methods such as those proposed by <cit.> have concentrated on pixel-level transition models and the design of cost functions that evaluate progress towards goal positions and success classifiers, while others have mapped visual observations to a learned pose space and utilized deep dynamics models for optimizing actions <cit.>. §.§ Inverse Reinforcement Learning Scaling IRL to real-world manipulation tasks has proven challenging. Prior research has explored model-free approaches, which have shown success in certain applications. <cit.> used proprioceptive state measurements without considering visual feature spaces, limiting their applicability to tasks requiring visual inputs. Newer advances introduce gradient-based bi-level optimization, which allows the gradients of the cost function parameters to be computed through the inner-loop policy optimization step, leading to more stable and effective optimization <cit.>. §.§ Gradient-Based Visual Model Predictive Control Framework A recent approach trains keypoint detectors to extract low-dimensional vision features from expert demonstrations and pre-trains a dynamics model that allows the robot to predict action outcomes in that feature space. This helps the robot optimize its actions to achieve the same latent-state trajectory observed in the demonstrations. By differentiating through the inner optimization step, this gradient-based IRL algorithm offers significant improvements over traditional feature-matching IRL methods <cit.>. §.§ Applications and Experimental Validation Experimental validations on robotic platforms such as the Kuka iiwa demonstrate the effectiveness of these approaches in real-world tasks. The combination of self-supervised data collection and expert controller data significantly enhances the training of keypoint detectors and dynamics models, which leads to more accurate and reliable cost function learning <cit.>. Cost function learning from demonstrations thus spans a wide range of techniques, from traditional IRL methods to advanced deep learning and meta-learning approaches, all of which contribute to the development of more autonomous and capable robotic systems. § TEMPORAL-DIFFERENCE VISUAL MODEL PREDICTIVE CONTROL FRAMEWORK In this section, we describe our temporal-difference visual model predictive control approach, which combines recent advances in unsupervised keypoint representations and model-based planning. The IRL framework, with a simplified illustration in Figure 1, consists of the following components: 1) a keypoint detector that produces low-dimensional visual representations, in the form of keypoints, from RGB image inputs (for both robot observations and human demonstrations); 2) a dynamics model that takes the current joint state q, previous joint state q̇, and actions u to predict the keypoints and joint state at the next time step; and 3) a gradient-based visual model-predictive planner that optimizes actions for a given task using the dynamics model and a cost function. We will elaborate on each of these modules next.
§.§ Keypoints as Visual Latent State and Dynamics Model We employ an autoencoder with a structural bottleneck to detect 2D keypoints that correspond to pixel positions or areas with maximum variability in the input data. The keypoint detector's architecture closely follows the implementation in <cit.>. For training the keypoint detector, we collect visual data D_key-train for self-supervised keypoint training (refer to Appendix A.2). After this training phase, the keypoint detector predicts keypoints z = g_key(o_im) of dimensionality K × 3. Here, K is the number of keypoints, and each keypoint is given by z_k = (z_x,k, z_y,k, z_m,k), where z_x,k and z_y,k are pixel locations of the k-th keypoint, and z_m,k is its intensity, which corresponds roughly to the probability that the keypoint exists in the image. Given a trained keypoint detector, we collect dynamics data D_dyn-train to train the dynamics model. The dynamics model learns to predict the next keypoints ẑ_t+1 and the next joint state q_t+1 based on the current joint state q_t, previous joint state q̇_t, and the action u_t. §.§ Temporal-Difference Learning in Model Predictive Control Our framework incorporates temporal-difference learning to refine the predicted trajectories. The IRL loss L_IRL is defined as the squared distance between the demonstrated trajectory τ_demo and the predicted trajectory τ̂: L_IRL(τ_demo, τ̂) = ∑_t (z_t,demo - ẑ_t)^2 The optimization problem for the IRL is expressed as: ∇_y L_IRL(τ_demo, τ̂_y) = ∇_y L_IRL(τ_demo, τ̂_y) ∇_yτ̂_y = ∇_y L_IRL(τ_demo, τ̂_y) ∇_y f_dyn(s; u_opt) = ∇_y L_IRL(τ_demo, τ̂_y) ∇_y f_dyn(s; u_init) ∇_u C_y(s_demo, f_dyn(s; u)) This optimization involves tracking gradients through the inner loop and differentiating the optimization trace with respect to outer parameters using a gradient-based optimizer such as <cit.>. The algorithm extends to multiple time steps by adapting the equations to the predicted trajectory over T time steps. § GRADIENT-BASED IRL FROM VISUAL DEMONSTRATIONS In this section, we employ a gradient-based inverse reinforcement learning (IRL) algorithm to learn the cost functions directly from visual demonstrations. This approach leverages the new advances made in gradient-based bi-level optimization in order to facilitate the stable and efficient learning of the cost function. The main components of this method involves a pre-trained visual dynamics model and a differentiable action optimization process. Algorithm <ref> outlines the steps involved in our gradient-based IRL for a single demonstration. It begins by first initializing the action sequence and rolling out the initial trajectory using the pre-trained dynamics model. Then it optimizes the action sequence through minimizing the cost function and rolls out the optimized trajectory and to update the state. §.§ Learning Cost Functions for Action Optimization In the case of any IRL, learning the cost function that can accurately reflect the demonstrated behavior is crucial. The approach below utilizes a gradient-based bi-level optimization in order to differentiate through the inner optimization step, allowing the algorithm to update the cost function parameters effectively. The inner optimization problem can be formulated as follows: u_opt = min_u C(τ, z_goal) where τ is the trajectory generated by the dynamics model f_dyn and the action sequence u. 
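A minimal PyTorch-style sketch of this inner action-optimization step is given below; it treats f_dyn and the cost C as differentiable modules and descends the task cost with respect to the action sequence. The function names and the use of Adam are illustrative assumptions rather than the authors' implementation; in the bi-level setting described next, the loop would be unrolled with a differentiable optimizer (e.g., the higher library) so that gradients can also flow back to the cost parameters.

import torch

def optimize_actions(f_dyn, cost_fn, s0, u_init, z_goal, steps=50, lr=1e-2):
    """Inner loop: find an action sequence u that minimizes C(tau, z_goal) under the learned dynamics."""
    u = u_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        s, traj = s0, []
        for t in range(u.shape[0]):      # roll out the T-step trajectory
            s = f_dyn(s, u[t])           # predicts next keypoints and joint state
            traj.append(s)
        loss = cost_fn(torch.stack(traj), z_goal)
        loss.backward()
        opt.step()
    return u.detach()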
The outer optimization step aims to minimize the IRL loss with respect to the cost function parameters ψ: min_ψ L_IRL(τ_demo, τ̂_ψ) where τ_demo is the demonstration trajectory and τ̂_ψ is the predicted trajectory using the current cost function parameters. By applying the chain rule, we decompose the gradient of the IRL loss with respect to the cost function parameters and it is as follows: ∇_ψ L_IRL(τ_demo, τ̂_C_ψ) = ∇_τ̂_ψ L_IRL(τ_demo, τ̂_C_ψ) ·∇_ψ f_dyn(s, u_init - η∇_u C_ψ(s_demo, f_dyn(s,u))) The formulation allows the update of the cost function parameters iteratively, leading to an effective optimization process. §.§ Cost functions and IRL Loss for learning from visual demonstrations The specification of the IRL loss L_IRL and the cost function parameterization C_ψ are both necessary for our algorithm to work. The L_IRL calculates the difference between the latent trajectory that is displayed (τ_demo) and the one that is expected (τ̂). To keep the L_IRL as basic as possible, we use the squared distance as follows: L_IRL(τ_demo, τ̂) = ∑_t (z_t, demo - ẑ_t)^2 This equation represents the difference between the anticipated and demonstrated keypoints at each time step. As with <cit.>, we contrast three different cost function C_ψ parameterizations: C_ψ(τ, z_goal) = ∑_k( ϕ_x, k∑_t (ẑ_t, k^x - z_goal, k^x)^2 + ϕ_y, k∑_t (ẑ_t, k^y - z_goal, k^y)^2 ) where ẑ_t, k^x and ẑ_t, k^y denote the kth predicted keypoint at time-step t in the x and y dimensions, respectively. § EXPERIMENTAL EVALUATION §.§ Setup We conducted our experiments using a simulated Franka Emika Panda arm in the PyBullet environment. The task involves picking up a small cube, which is spawned at a random position on the ground, and placing it on a larger stationary cube. This setup tests the arm's ability to accurately locate, grasp, and place objects using the learned policy. §.§ Data Collection We used a collected set of 20 demonstrations by an expert. who manually controlled the Franka Panda arm to complete the task. The demonstrations were recorded at 30Hz and were downsampled to 5Hz. This was done to match the temporal resolution used in the training phase. The keypoints of the arm and the cubes were utilized from each frame to create the dataset. §.§ Training We trained the model using the dataset to learn the cost function that optimizes the action policy the best, for the placement task. Gradient-based methods were used for 5000 iterations to update the model parameters. §.§ Evaluation The performance of the trained model was tested on 10 different scenarios where the small cube was placed at random locations on the floor. The evaluation metrics are the loss and reward with respect to episodes during the training of the model. §.§.§ Quantitative Results Figure 2(l) shows the performance of the model's loss and reward with respect to the number of episodes it trains through. As can be seen the loss gradually goes down and stabilizes after a certain point and the reward threshold gradually keeps increasing as the training progresses. §.§.§ Qualitative Results Figure 2(a) to (k) illustrates the placement task at different timesteps using the learned cost functions. As can be seen the robotic franka panda arm is able to perform the task with well suited generalizability and accuracy. Thus, it can be concluded that the model is quite decently effective at performing the given task based on the expert's demonstrations. 
§ CONCLUSION AND FUTURE WORK In this paper we proposed a gradient-based IRL framework which learns cost functions from visual demonstrations. Our methodology utilized a compact keypoint-based image representation and trains the visual dynamics model in the latent space. The extracted keypoint trajectories from both the user demos and our learned dynamics model, we've successfully been able to learn different cost functions using the proposed gradient-based IRL algorithm. The experiment still faces a few challenges. Learning a good visual predictive model is difficult and was a major challenge in this work. One workaround could be to robustify the keypoint detector using methods such as the Florence et al. one, rendering it invariant to different points of view. Moreover, the current approach assumes that demonstrations are from the robot's perspective. And so we addressed the different starting configuarations by learning on relative demos instead of absolute ones. More methods need to be explored in the future so that demonstrations can be better mapped from one context to another, like the case with Liu et al. And finally, though our experiments show improved convergence behavior for the gradient-based IRL algorithm compared to feature-matching baselines, further investigation is required. An exciting direction for future work is the incorporation of neural network processing (NLP) instructions. By integrating NLP, we could allow users to give commands in natural language which the robot would be able to understand and execute. This incorporation would make the system more user-friendly and more generalizable to a wider range of tasks, enhancing significantly the application of our framework. 99 Abbeel2004 P. Abbeel and A. Y. Ng, "Apprenticeship learning via inverse reinforcement learning," in Proceedings of the twenty-first international conference on Machine learning, 2004, p. 1. Finn2016 C. Finn, S. Levine, and P. Abbeel, "Guided cost learning: Deep inverse optimal control via policy optimization," in International conference on machine learning, 2016, pp. 49–58. Boularias2011 A. Boularias, J. Kober, and J. Peters, "Relative entropy inverse reinforcement learning," in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, pp. 182–189. Englert2017 P. Englert, N. A. Vien, and M. Toussaint, "Inverse KKT: Learning cost functions of manipulation tasks from demonstrations," The International Journal of Robotics Research, vol. 36, no. 13-14, pp. 1474–1488, 2017. Wulfmeier2017 M. Wulfmeier, D. Rao, D. Z. Wang, P. Ondruska, and I. Posner, "Large-scale cost function learning for path planning using deep inverse reinforcement learning," The International Journal of Robotics Research, vol. 36, no. 10, pp. 1073–1087, 2017. Finn2017 C. Finn and S. Levine, "Deep visual foresight for planning robot motion," in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 2786–2793. Ebert2018 F. Ebert, C. Finn, S. Dasari, A. Xie, A. X. Lee, and S. Levine, "Visual foresight: Model-based deep reinforcement learning for vision-based robotic control," arXiv preprint arXiv:1812.00568, 2018. Grefenstette2019 E. Grefenstette, B. Amos, D. Yarats, P. M. Htut, A. Molchanov, F. Meier, D. Kiela, K. Cho, and S. Chintala, "Generalized inner loop meta-learning," arXiv preprint arXiv:1910.01727, 2019. osa2018algorithmic T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbeel, and J. 
Peters, "An algorithmic perspective on imitation learning," arXiv preprint arXiv:1811.06711, 2018. kalakrishnan2013learning M. Kalakrishnan, P. Pastor, L. Righetti, and S. Schaal, "Learning objective functions for manipulation," in 2013 IEEE International Conference on Robotics and Automation, 2013, pp. 1331–1336. finn2016guided C. Finn, S. Levine, and P. Abbeel, "Guided cost learning: Deep inverse optimal control via policy optimization," in International conference on machine learning, 2016, pp. 49–58. wulfmeier2017large M. Wulfmeier, D. Rao, D. Wang, P. Ondruska, and I. Posner, "Large-scale cost function learning for path planning using deep inverse reinforcement learning," The International Journal of Robotics Research, vol. 36, no. 10, pp. 1073–1087, 2017. abbeel2004apprenticeship P. Abbeel and A. Y. Ng, "Apprenticeship learning via inverse reinforcement learning," in Proceedings of the twenty-first international conference on Machine learning, 2004, p. 1. higher E. Grefenstette et al., "Higher: A library for differentiable higher-order optimization," in Neural Information Processing Systems, 2020. keypoint_paper T. Jakab, A. Gupta, H. Bilen, and A. Vedaldi, "Unsupervised Learning of Object Keypoints for Perception and Control," Advances in Neural Information Processing Systems, 2018.
http://arxiv.org/abs/2407.12340v1
20240717063237
Teaching Quantum Informatics at School: Computer Science Principles and Standards
[ "Giulia Paparo", "Regina Finsterhoelzl", "Bettina Waldvogel", "Mareen Grillenberger" ]
physics.ed-ph
[ "physics.ed-ph", "quant-ph", "K.3.2" ]
Quantum Informatics at School G. Paparo et al. Schwyz University of Teacher Education, Goldau, Switzerland {giulia.paparo,bettina.waldvogel,mareen.grillenberger}@phsz.ch University of Konstanz, Konstanz, Germany regina.finsterhoelzl@uni-konstanz.de Lucerne University of Teacher Education, Lucerne, Switzerland Lucerne University of Applied Sciences and Arts, Rotkreuz, Switzerland Teaching Quantum Informatics at School: Computer Science Principles and Standards Giulia Paparo10000-0002-4782-8337 Regina Finsterhoelzl 20000-0002-0899-4957 Bettina Waldvogel10000-0002-0658-1032 Mareen Grillenberger1,3,40000-0002-8477-1464 July 22, 2024 ================================================================================================================================================================== § ABSTRACT The application of knowledge from quantum physics to computer science, which we call quantum informatics, is driving the development of new technologies, such as quantum computing and quantum key distribution. Researchers in physics education have recognized the promise and significance of teaching quantum informatics in schools, and various teaching methods are being developed, researched and applied. Although quantum informatics is equally relevant to computer science education, little research has been done on how to teach it with a focus on computer science concepts and knowledge. In this study, we position quantum informatics within Denning's Great Principles of Computing and propose Quantum Informatics Standards for secondary schools. § INTRODUCTION The application of quantum physics to computer science has the potential to open new avenues of thought and endeavors within computer science. From quantum computers that could outperform classical supercomputers at solving computationally hard problems, to quantum key distribution that could make it physically impossible for a third party to eavesdrop unnoticed, quantum informatics has the potential to have a major impact on computer science and society as a whole <cit.>. Most of these technologies are still in development, and the timeframe and applications for their large-scale implementation are not yet predictable. However, this does not affect the theoretical change in thinking and skills that quantum technologies require. It is important to start thinking about how to introduce students to this new way of thinking at an early stage, for several reasons. First, teaching quantum informatics in schools means laying the foundation for an informed society that can consciously discuss the future of these technologies and their applications. Second, learning about quantum information can help students develop new perspectives, challenge what they think they already know, and develop a curious approach to the nature of computation. Finally, from an educational perspective, it is a wonderful example to explore how to teach complex and inherently multidisciplinary topics in school. The aim of this paper is to support the teaching of quantum informatics in lower and upper secondary schools by providing an overview of the subject and defining principles and standards, as an orientation in the development of educational material. To this end, we ask the following questions: How can quantum informatics be positioned from the perspective of computer science education? and What are the most important learning outcomes for secondary school students learning about quantum informatics from a computer science education perspective? 
To answer these questions, we first suggest how quantum informatics could be positioned within Peter Denning's Great Principles of Computing <cit.>, and then propose Quantum Informatics Standards for secondary schools. § RELATED WORKS In the past years, there has been a rapidly growing interest in the teaching of quantum informatics. This interest sparked especially from physics educators, who saw in quantum technologies the possibility to access quantum mechanics, a notorious hard to teach subject at school, in a more tangible and direct way <cit.>. Many interesting approaches for teaching quantum informatics at school were developed and evaluated, <cit.>, <cit.>, <cit.>. In addition, a variety of serious games <cit.>, visualization tools <cit.>, zines (self-published DIY magazines) <cit.>, and role-playing <cit.> approaches were developed that could be used to engage high school students along with the broader public. We will present in more depth some of these approaches in section <ref>. It is important to note that these were mostly small-scale interventions carried out by a group of experts, and that more extensive research, as well as teaching guidelines, training, or evaluation criteria for teachers are still needed. An important step towards establishing a common language as well as an orientation for educational programs was the formulation of the European Framework for Quantum Technologies <cit.>. However, this framework aims to map competencies and skills for quantum technologies, and therefore includes topics of little relevance from a computer science perspective, such as quantum sensing. At the same time, it does not give much space to relevant topics, such as quantum information theory, and focuses mainly on higher education. On the contrary, Key Concepts for Future Quantum Information Science Learners <cit.> and more directly its expansion for computer science <cit.> target more clearly quantum informatics from a computer science perspective and at high school level. The first <cit.> offers a good overview and a basis for educators, and it is expanded and evaluated further for Computer Science learning outcomes and activities <cit.>. The latter, however, is aimed at pre-college and beyond, and the level of knowledge and skills required is not suitable to younger students. Seegerer <cit.> discussed quantum computing as a topic in computer science education and made a proposal for its central concepts, ideas, and explanatory approaches. In our previous work, we systematically analyzed the key concepts of quantum informatics further and applied them for the formulation of competencies within guidelines of the German Informatics Society <cit.>. Although the results are in German, the presented mapping of the key concepts of quantum informatics should also be understandable for English speakers, as it consists mostly of language-independent technical terms. The above-mentioned works constitute a first good step for supporting the teaching of quantum informatics. Standards for quantum informatics education at the lower and upper secondary level are still needed, and this work aims to fill this gap, building upon some of the previously defined key concepts and competencies (<cit.>, <cit.>, <cit.>, <cit.>). § TERMINOLOGICAL REMARKS Generally, it is distinguished between quantum technologies of the first generation and second generation. The first generation is defined by technologies that could be built thanks to our understanding of quantum physics. 
This definition includes technologies, such as lasers, transistors and global positioning systems (GPS). The second generation of quantum technologies builds on the capability to control and manipulate the properties of quantum systems, for example, to build quantum computers and quantum sensors. Naturally, as computer science educators, we are looking further than the technology alone. As Michaeli <cit.> pointed out, the best term to define the field of research describing the knowledge and applications related to these new technologies from the perspective of computer science is in German Quanteninformatik. We translated that as quantum informatics instead of quantum computer science to indicate a broader view on information processing not limited to computers alone. Other commonly used terms to describe the field are Quantum Information Science (QIS), Quantum Information Science and Technologies (QIST), and Quantum Computing. All of these terms intersect in some aspects and at the same time suggest a different focus (on information theory or computation). The term quantum informatics, although less commonly used, describes more directly what we are interested in: We want students to understand, use, and think about the changes in computer science (informatics) brought about by the application of our knowledge of quantum theory. For these reasons, we prefer the term quantum informatics in this article and hope it will be broadly used in the future. § METHODOLOGICAL APPROACH In order to position quantum informatics from the perspective of computer science, we used the Great Principles framework of Peter Denning <cit.> as a way to categorize a new technology within overarching principles that could then serve as a guide for educators and curriculum developers. As described in more detail in section <ref>, we first categorized the key concepts of quantum informatics <cit.> within the Great Principles. The categorization was carried out by the first author, while the second and third authors, with expertise in quantum information theory and computer science education respectively, provided critical review. Re-categorization possibilities were discussed, as well as the addition of further concepts. The categorization was then reviewed by an external computer science expert. The newly formulated orientation within the Great Principles, as well as relevant aspects of the previously described frameworks (<cit.>, <cit.>, <cit.>), then served as a basis for the further formulation of the standards. The previously defined learning goals <cit.> were analyzed within the framework of the CSTA K-12 Computer Science Standards <cit.>. These describe a general core set of learning objectives for the K-12 computer science curriculum and are widely used for curriculum development and assessment in the USA and around the world, and thus provided an excellent orientation for an initial formulation of international quantum informatics standards. The formulated standards were iterated following the principles of the Standards Developer's Guide <cit.>. Details are explained in section <ref>. Finally, to make the learning outcomes more tangible to the reader, exemplary teaching approaches from the literature were assigned to the standards. 
The teaching approaches were chosen based on whether they were considered to fit the newly defined Quantum Informatics Standards, specifically whether they were designed for high school students, address one or more of the defined standards, and were suitable from a computer science perspective (see section <ref> for more details). § GREAT PRINCIPLES OF QUANTUM INFORMATICS Peter Denning defined in a compact and coherent way, the overarching, fundamental principles of computer science, what he called Great Principles of Computing <cit.>. The Great Principles framework aimed to stimulate deep structural thinking about computer science as a discipline and to provide a common language that encourages connections within computer science and across disciplines, providing existing stories while inspiring and structuring new didactic approaches. To do so, he defined seven categories (windows) based on the fundamental questions underlying computing technologies. For each window, Denning proposed principal stories to depict the richness of each principle, make it more tangible, and to indicate examples of its historical development. We carried out a categorization of quantum informatics within the Great Principles framework, in order to provide an orientation of quantum informatics within computer science, to help in structuring and seeing the principles of computer science within quantum informatics. To do so, quantum informatics concepts <cit.> were first categorized within this framework. The categorization was then critically reviewed, and re-categorization and the addition of new concepts were discussed. There was almost no need for re-categorization. As in Denning’s framework, some concepts could be assigned to more than one window, as these are regarded as overlapping rather than exclusive. However, in order to achieve a comparable balance with the principal stories of classical computing, it was necessary to add several concepts that were not originally part of our selection. This was especially the case for the windows Design and Evaluation, which were also added later to the Great Principles by Peter Denning <cit.>. As Denning states, this framework is evolving with time and is not an exhaustive representation of the field, which is even more true for a relatively young discipline such as quantum informatics. Our goal was to use the framework as a didactic tool to provide orientation, not to revise the Great Principles. While it has been possible to position the main concepts of quantum informatics within this framework, this should not be seen as an exhaustive or conclusive statement, but rather as an exploratory approach to provide guidance to computer science teachers and educators. The results are illustrated in Table <ref>. § LEARNING STANDARDS FOR QUANTUM INFORMATICS The K-12 Computer Science Framework <cit.> is divided into practices and concepts that form the basis of our Quantum Informatics Standards. We left the practices unchanged because they are overarching practices that apply to computing in general as well as to its quantum applications. The standards were then formulated following the principles of the Standards Developer's Guide <cit.> highlighted in italics. Each standard was formulated by integrating one or more practices with a quantum concept and one or more CS concepts. We formulated the standards with a high degree of rigor, meaning that the standards should represent an appropriate cognitive and content demand for students. 
However, these are only theoretically formulated standards that need to be tested empirically. In light of future empirical evaluation, when unsure, we rather included a topic or chose a higher standard. For reasons of manageability, we focused on fundamental concepts like quantum information and quantum algorithms, while left out topics such as quantum simulation and hardware for quantum information processing, as these were either less relevant for computer science standards or too complex for secondary schools. We wrote the standards to be specific enough to convey and measure the knowledge expected of students, and to be as jargon-free and accessible as possible for a topic as new and complex as quantum informatics (for the sake of clarity and consistency, we could not avoid using some technical terms in the standards definitions, but we have accompanied them with comprehensive explanations). Equity and diversity is a particularly important principle for our topic, since what we want to contribute to by teaching quantum informatics early is indeed a more diverse and inclusive science. In formulating the standards, we have been careful to ensure that they can be learned and demonstrated in a variety of ways, using different visualizations and representations, allowing learners to use and further develop individual learning strategies and approaches. However, this is an aspect that needs further and deeper attention in the future as more materials and approaches are developed. Last, the connection to other disciplines is intrinsic to physics and mathematics, but has not been made explicit in this first formulation of the standards. Overall, it should be considered that these standards are a first theoretical formulation, and we wish to see them used, tested and critically evaluated in order to formulate more accurate, inclusive and adequate descriptions in the future. The resulting standards are summarized in Table <ref>. For each quantum concept, we have indicated the corresponding CSTA concept. We did not establish a one-to-one correspondence with the CSTA concepts, as topics such as quantum error correction and quantum cryptography are very relevant within quantum informatics, while quantum networks and quantum internet are not yet developed enough to be suitable for secondary schools. In the following, each quantum informatics content area is described in more detail, based largely on some of the authors' previous work <cit.> and on the description of the key concepts in the literature (<cit.>, <cit.>). Quantum information: A qubit is the basic unit of quantum information. Qubits, like bits, always have a well-defined value when measured, but unlike bits, they also exhibit quantum mechanical properties such as superposition and entanglement. Superposition means that a qubit can be in a state that is a combination of 1 and 0, and only when it is measured will it be either 0 or 1 with a probability determined by its previous state of superposition. Qubits can also be entangled, that is, connected in such a way that none of the entangled qubits can be described independently of the others. If one qubit (in a maximally entangled pure two-qubit state) is measured, the state of the other entangled qubit is also instantaneously determined, no matter how far apart the qubits are. These special properties can provide computational advantages, since it is possible, for example, to influence the state of two or more different, physically distant qubits by a single operation. 
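To make these properties concrete for readers with a programming background, the amplitudes and measurement probabilities can be illustrated in a few lines of NumPy. This is only an illustrative sketch accompanying the standards, not part of them, and the variable names are ours:

```python
import numpy as np

# Computational basis states |0> and |1> as vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Equal superposition (the state a Hadamard gate produces from |0>).
plus = (ket0 + ket1) / np.sqrt(2)
probs_single = np.abs(plus) ** 2          # Born rule -> [0.5, 0.5]

# Maximally entangled two-qubit (Bell) state: (|00> + |11>) / sqrt(2).
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
probs_pair = np.abs(bell) ** 2            # only 00 and 11 occur, each with 0.5

print(probs_single, probs_pair)
```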
This understanding of qubits and the laws that they follow is the basis for further understanding of quantum informatics. In the classroom, students should learn what a qubit is and what makes it similar to bits and what makes it different. A deeper understanding of the special properties of qubits would allow students to grasp the possibilities as well as the limitations of quantum information and computation. Students should also be able to read and evaluate the accuracy of a popular article on quantum computing. Quantum error correction: The current phase of quantum computer development is called the Noisy Intermediate-Scale Quantum (NISQ) era. These processors are very error-prone and are yet too small for implementing large-scale quantum error correction codes on them. Quantum error correction and fault-tolerant quantum computation are currently a very actively researched area and fundamental for the development of large scale quantum computers. Although classical error detection and correction requires students to apply important computer science concepts such as data decomposition and representation, it is not traditionally taught in school. By learning about quantum error correction, students will learn what error correction is, why it is relevant to computer science, and why classical error correction is not applicable to quantum computing. Quantum computing systems: Different physical methods of building quantum computers are currently being explored, such as superconducting qubits or ion traps. Just as there are different ways of developing quantum computers, there are also different languages for programming them. By using one of these platforms, students can experience the principles of quantum computing and interact with a quantum computer themselves. Quantum algorithms: Calculations on quantum computers are described by quantum algorithms, and one model to describe them is that of quantum circuits. In a quantum circuit, the steps of the algorithm are described by quantum gates performed on one or more qubits and a measurement operation for readout. So far, there are only a limited number of useful algorithms that could give quantum computers a significant speed advantage in basic computing problems such as factoring large numbers. However, for many other types of computation, there are no easy ways to implement them on a quantum computer, and there is no advantage to doing so. A current challenge in quantum computing is to develop efficient as well as useful algorithms for quantum computers. In fact, students should not be expected to develop new algorithms themselves. However, they can learn about the effects of gates on qubits and how they can be put together to create an algorithm. Students can replicate some well-known algorithms and reason about their logic and properties. Quantum cryptography: Shor's algorithm showed that fully functional quantum computers (with a large enough number of qubits and a sufficiently small error rate) would be able to factor large numbers efficiently, threatening today's most widely used encryption methods. However, quantum effects can also make communication more secure based on physical principles. Quantum cryptography takes advantage of the fact that due to the principles of quantum mechanics, it is impossible for a third party to eavesdrop on the system without disrupting it and thus (probably) being detected. Cryptography is fundamental to today's digital society, and many frameworks and curricula have recognized it <cit.>. 
Students can learn about quantum cryptography in a course focused solely on cryptography, or while learning about quantum informatics. Students should learn why today's most widely used encryption method, RSA, is threatened by quantum computers. Also, by trying out a quantum key distribution protocol on their own, they can consider how it differs from classical cryptography and where its potential and risks lie. Impact of quantum informatics: If fully functional quantum computers are realized, they could crack the most commonly used encryption methods. At the same time, they could be used to perform process optimizations that could lead to significant efficiency gains and bring energy savings. In addition, quantum simulation has the potential to contribute to a better understanding of quantum mechanical systems, which could allow us, for example, to develop new drugs and materials. Societal implications lend themselves well to discussion with students, and one could discuss, for example, how access to quantum computing, if limited to individual governments or a few private companies, could alter power relations in society (cf. <cit.>). § EXEMPLARY TEACHING APPROACHES Lastly, we provide concrete examples of how the standards could be implemented in practice. We selected teaching approaches that fit with the newly defined Quantum Informatics Standards (selecting for content, age group and computer science applicability). In order to show how it is possible to develop different approaches to the Quantum Standards, appealing to younger students and different levels of knowledge and abstract thinking, we then classified the selected approaches based on Bruner’s modes of representation <cit.>. These approaches are meant to be examples, and as such we did not undertake a systematic review of all existing approaches, as it was not the focus of this work. Action based approaches: The role-playing approach to introduce to qubits and their properties developed by López-Incera and Dür <cit.> is a good example for an action based introduction to quantum information. The authors developed a role-playing game where some students are the qubits and others are the scientists. The qubits have to follow rules about how to position arms and legs and what to do when the scientists throw a ball at them (measure them), while the scientist have to figure out these rules. This way it is possible to explain superposition and entanglement, as well as the complexity of scientific hypotheses, in a tangible and playful way. The authors developed a similar role playing approach also for quantum cryptography <cit.> and there are several other unplugged examples to teach to quantum cryptography. Perry <cit.> developed a pen&paper way to experience the BB84 protocol, one of the main quantum key distribution protocols, which relies on qubit properties instead of mathematical complexity. Another promising enactive way to learn about quantum key distribution is to build and use the Qeygen machine <cit.>. With this analog machine, students can exchange a key using the BB84 protocol, simulate an eavesdropper, and experience the possibilities and limitations of quantum key distribution. Image based approaches: The states of a qubit are generally represented as vectors in a 2-dimensional complex space (Hilbert space), and most textbooks use the Bloch sphere (or a simplified unit circle) as a geometrical representation of qubits (<cit.>, <cit.>). 
However, since most high school students lack knowledge of complex numbers and linear algebra, this might not be the most accessible visualization. Often metaphors are used, such as flipping coins, balls of different colors, or a couple in love ordering wine in a restaurant. Although the metaphors have the advantage of making a connection to the everyday life of the students, as also pointed out by Seegerer <cit.>, they run the risk of being taken too literally on the one hand, and of not making the peculiarity of quantum principles sufficiently obvious on the other. Therefore, also other approaches that are closer to quantum mechanical formalism have been developed, such as the QI4Q formalism <cit.>. Here black and white marbles are used to represent qubits and the boxes the marbles pass through represent quantum gates. This approach has the advantage of explaining all necessary quantum gates correctly and in an accessible, tangible way, without using any mathematics, but also the disadvantage of having to learn somewhat arbitrary rules and of being suitable only for relatively simple algorithms (cf. <cit.>). Economou <cit.> showed how to use the QI4Q formalism to model the Deutsch algorithm, disguised as a game. This allows students not only to build a simple quantum algorithm using the properties of quantum gates, but also to observe the advantages of quantum over classical information processing. As with classical algorithms, it is valuable for students to discuss them with pseudocode or different approaches before being confronted with a formal language. Language based approaches: Many platforms offer free access (after registration) to their quantum computing power via the cloud, such as Qutech's Quantum Inspire <cit.> or IBM Quantum <cit.>. This last one is widely used for educational purposes as it provides an attractive graphical interface where you can drag and drop to simulate (and execute) the effect of different gates on one or more qubits, as well as qiskit, a Python-based software development kit. Students can build their qubit circuits and run them on a real quantum computer, experiencing quantum computing and its limitations, they can compare the simulation with the real computer and understand the need for error mitigation as well as error correcting techniques. The IBM website offers a comprehensive textbook with integrated exercises <cit.>, but it is aimed at university students with a high motivation and interest in mathematics. This could be reduced and simplified according to the students' knowledge, needs and the time available (as in <cit.>). On a symbolic, language-based level, students can also discuss risks and potentials in the development of quantum informatics. Although most learning resources mention this aspect, it is never the main focus of the teaching material on quantum informatics. Interesting approaches to thematize this can however be borrowed by other disciplines such as future studies <cit.> and integrated in a quantum informatics curriculum. The presented approaches show how an action based access to quantum information and quantum cryptography is possible for younger students (e.g. lower secondary school), while more complex and abstract topics such as quantum computer systems, quantum algorithms, and the implications of quantum informatics might be more suitable for higher secondary school. 
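To give a flavour of such a language-based approach, the following minimal sketch builds the two-qubit Bell-state circuit that students typically start with. It assumes a recent Qiskit version and uses only the local statevector simulation, so no account or access to quantum hardware is needed for this step:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Hadamard on qubit 0, then CNOT: an entangling Bell-state circuit.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

state = Statevector.from_instruction(qc)   # ideal, noise-free simulation
print(state.probabilities_dict())          # {'00': 0.5, '11': 0.5}
print(qc.draw())                           # text drawing of the circuit
```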
In general, we have provided one or more teaching examples for the defined quantum concepts, although not all defined learning outcomes, such as the ability to describe the concept of fault tolerance or the discussion of responses to the opportunities and risks of quantum computing, are directly addressed by existing approaches. We are convinced that it is possible to teach all the defined standards in a way that is appropriate for school, and intend to show this in the future. § CONCLUSION With this paper, we offer a contribution to the introduction of quantum informatics in schools: Its technologies and practices have been mapped and viewed from the perspective of the Great Principles framework, and a first proposal for learning standards at secondary school level has been presented. It is important to note that this is only preliminary theoretical work and that the standards need to be tested and evaluated empirically. We are in the process of developing teaching materials and approaches based on the proposed standards and the analysis of existing teaching approaches, and hope that others will do the same. It is our goal that the standards and analysis proposed here will be useful to computer science teachers and educators who are designing new materials, planning lessons, or seeking a first orientation for teaching quantum informatics.
http://arxiv.org/abs/2407.12621v1
20240717144908
Distinguishing Isotropic and Anisotropic Signals for X-ray Total Scattering using Machine Learning
[ "Danielle N. Alverson", "Daniel Olds", "Megan M. Butala" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
§ ABSTRACT Understanding structure-property relationships is essential for advancing technologies based on thin film materials. X-ray pair distribution function (PDF) analysis can access relevant atomic structure details spanning local-, mid-, and long-range order. While X-ray PDF has been adapted for thin films on amorphous substrates, measurements of films on single crystal substrates are necessary for accurately determining structure origins for some thin film materials, especially those for which the substrate changes the accessible structure and properties. However, when measuring thin films on single crystal substrates, high intensity anisotropic Bragg spots saturate 2D detector images, overshadowing the thin films' isotropic scattering signal. This renders previous data processing methods, developed for films on amorphous substrates, unsuitable for films on single crystal substrates. To address this measurement need, we developed IsoDAT2D, an innovative data processing approach using unsupervised machine learning algorithms. The program combines non-negative matrix factorization and hierarchical agglomerative clustering to effectively separate thin film and single crystal substrate X-ray scattering signals. We used SimDAT2D, a program we developed to generate synthetic thin film data, to validate IsoDAT2D, and also successfully use the program to isolate X-ray total scattering signal from a thin film on a single crystal Si substrate. The resulting PDF data are compared to similar data processed using previous methods, demonstrating superior performance relative to substrate subtraction with a single crystal substrate and similar performance to substrate subtraction from a film on an amorphous substrate. With IsoDAT2D, there are new opportunities to expand the use of PDF to a wider variety of thin films, including those on single crystal substrates, with which new structure-property relationships can be elucidated to enable fundamental understanding and technological advances. § INTRODUCTION Amorphous, nanocrystalline, and otherwise disordered thin film materials have distinct properties from their bulk states that can be beneficial in applications such as spintronics,<cit.> phase change memory,<cit.> photonics,<cit.> and broadly in semiconductor devices.<cit.> The ever-increasing demand for high-performance computers and advanced electronic technologies necessitates the continual improvement of the materials that drive them.<cit.> Atomic structure models of these materials are essential to enhance their properties, optimize their performance, and advance our fundamental understanding of the origins of compelling functional properties. However, current atomic structure characterization is limited for these types of thin films, specifically for local- and mid-range atomic structure features. Part of the challenge arises from the relative volume of a thin film and its substrate.
Thin films are typically deposited or grown on a substrate via chemical or physical vapor deposition processes,<cit.> resulting in films with thicknesses between 1 nm and several microns.<cit.> The thin films intended for practical applications are typically deposited on single crystal substrates, such as Si, perovskites, and sapphire. This material selection for the substrate can affect the structure and properties of the films. These structure-property relationships can be tuned directly for implementation into semiconductor manufacturing processes.<cit.> Substrates are typically ≈500 μm thick single crystal wafers.<cit.> As substrates are the majority of the volume of thin film samples, isolating the properties and structure of the relatively small volume of the film is complex. X-ray diffraction (XRD) is a primary technique for determining the atomic structure of crystalline films, which is often used alone or in combination with transmission electron microscopy, especially for epitaxial films.<cit.> However, for amorphous or disordered materials, traditional crystallographic methods, such as XRD, can miss local structure motifs that are uncorrelated at long ranges and not captured in an average structure model.<cit.> By using only Bragg scattering, XRD analysis excludes information from diffuse scattering and is unsuitable for probing structural features of amorphous materials.<cit.> The pair distribution function (PDF), a sine Fourier transform of total scattering data, uniquely provides local-, mid-, and long-range atomic structural information, even for disordered materials.<cit.> During PDF measurements of powdered samples, scattering data are collected over a wide range of momentum transfer, Q, in transmission geometry. This requires short sample-to-detector distances and high-energy radiation, typically using synchrotron X-ray or spallation neutron sources.<cit.> Billinge and collaborators demonstrated that powder methods of X-ray total scattering, i.e., acquiring data in transmission geometry, could also be applied to thin film materials on amorphous substrates.<cit.> This involves collecting total scattering data from a thin film sample on a substrate and a substrate without a film and subtracting the latter from the former to isolate the thin film signal.<cit.> The total scattering data for the film is then Fourier transformed to yield the PDF. The smooth, directionally-independent scattering signal of amorphous substrates is advantageous for thin film PDF measurements, which makes the scaling and subtraction of substrate scattering relatively straight forward (Fig. <ref>a, b).<cit.> In contrast, scattering data for a similar thin film deposited on a (100)-oriented single crystal Si substrate has high intensity Si reflections that appear as spots in the two-dimensional (2D) image (Fig. <ref>c). With a single crystal substrate, subtraction of the scattering pattern from a blank substrate is ineffective, resulting in both residual signal and oversubtraction of the high intensity reflections. The resulting 1D scattered intensity, I, as a function of momentum transfer, Q, [I(Q)]has contributions from the substrate in some places, and unphysical negative scattered intensity elsewhere (Fig. <ref>d). This is due to differences in the relative intensities of diffuse and Bragg scattering of the substrate with and without a thin film.<cit.> The resulting subtracted signal would thus be inappropriate for generating PDF data. 
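For reference, the substrate-subtraction procedure used for amorphous substrates can be sketched as follows. The array names are placeholders, and the scaling rule shown (raising the substrate intensity as high as possible without exceeding the sample intensity anywhere) is the one described in the Experiments section:

```python
import numpy as np

def subtract_substrate(I_sample, I_blank):
    """I_sample, I_blank: 1D I(Q) patterns of the film-on-substrate sample and
    of a bare substrate, on a common Q grid (placeholder arrays)."""
    # Scale the blank-substrate pattern to its maximum allowable level,
    # then subtract it from the sample pattern.
    scale = np.min(I_sample / np.maximum(I_blank, 1e-12))
    return I_sample - scale * I_blank

# For amorphous substrates the difference is the smooth film signal; for
# single crystal substrates, mismatched Bragg-spot intensities leave residual
# substrate peaks and unphysical negative intensities instead.
```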
Therefore, an alternative to the subtraction process is needed to elucidate accurate structure-property relationships for thin film samples on single crystal substrates, which alter their structure and properties. To more effectively separate the X-ray total scattering signal of the thin film from that of its substrate, we use NMF to differentiate between the directionally-dependent and directionally-independent features. NMF and other machine learning algorithms have been previously applied in materials measurement science to classify and analyze data.<cit.> Of particular relevance to this work, unsupervised machine learning algorithms excel in treating unlabeled input datasets and, in turn, generating organized or sorted datasets of interest.<cit.> Among these algorithms, non-negative matrix factorization (NMF) is a dimensionality reduction algorithm that accepts inputs from a non-negative matrix (composed solely of positive values) and produces feature and weight matrices that reconstruct the original data.<cit.> NMF is particularly useful for X-ray scattering data, which inherently only have positive values, especially for in situ experiments that generate 20 or more patterns.<cit.> We report a novel data processing method for X-ray total scattering data from thin films on single crystal substrates, IsoDAT2D. For an individual sample at ambient conditions, X-ray total scattering data are collected as a single 2D image, which is azimuthally integrated to a single, one-dimensional (1D) I(Q) dataset.<cit.> A single dataset is incompatible with NMF, which identifies features across a series of individual datasets that occur with different weightings.<cit.> The developed method uses a novel integration process that not only provides the data necessary for NMF, but also takes advantage of the differences of scattering features from the film and substrate. Using these input datasets for NMF results in many outputted `feature' components. We use unsupervised machine learning clustering algorithms, specifically hierarchical agglomerative clustering (HAC), to group similar outputted components based on their features, which are expected to reflect their origin (i.e., film vs. substrate).<cit.> After clustering, components from one or more clusters are averaged and smoothed to thin film X-ray total scattering data of sufficiently quality for PDF analysis. We developed and validated IsoDAT2D using synthetic data generated by our novel thin film X-ray scattering data creation program, SimDAT2D. We demonstrate IsoDAT2D on experimental X-ray total scattering thin film data, successfully isolating the isotropic scattering signal of a thin film on a single crystal substrate. Using the isolated signal, PDF data quality is significantly improved relative to substrate subtraction using a single crystal substrate. Further, PDF data from IsoDAT2D are of comparable quality to PDF data from amorphous substrate subtraction. § DEVELOPED PROGRAMS §.§ SimDAT2D: Generating Synthetic 2D Scattering Data A key aspect of SimDAT2D is its use to generate synthetic X-ray scattering data from two or more distinct scattering contributions (e.g., isotropic, diffuse, anisotropic; Fig. <ref>). Each scattering contribution is generated individually and combined as a weighted linear combination into a single 2D detector image. This allows for the relative intensities to be varied and for each signal to be independently known. 
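As a sketch of this construction (not the actual SimDAT2D implementation, whose interfaces may differ), the weighted linear combination and a multiplicative noise map can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

# iso and aniso: 2048 x 2048 synthetic detector images for the isotropic
# (film-like) and anisotropic (single-crystal-like) contributions; random
# placeholders here, built from calibrant rings and spot grids in practice.
iso = rng.random((2048, 2048))
aniso = rng.random((2048, 2048))

w_film, w_substrate = 1.0, 20.0            # user-chosen relative weights
image = w_film * iso + w_substrate * aniso

# Optional noise: multiply by a map of normally distributed coefficients.
image *= rng.normal(loc=1.0, scale=0.05, size=image.shape)
```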
We created SimDAT2D to enable the generation of synthetic 2D X-ray scattering data and masks, as well as using those to perform azimuthal and rotational integrations of 2D data using the PyFAI Python library.<cit.> Isotropic signals consisting of radially symmetric rings at discrete positions over a 2D detector image are created using PyFAI's calibrant materials (30 unique options), taking into account user-defined wavelength, sample-to-detector distance, and detector type (e.g., PerkinElmer Silicon)(Fig. <ref>a). Anisotropic scattering signals, with diffraction spots at discrete positions on the 2D image, e.g., the scattering of a single crystal substrate, can be created manually as a grid of spots (Fig. <ref>b); for this, the user specifies the number of spots per line, the distance between spots, the diameter of spots (pixel width), and the Gaussian distribution of spot intensity. Alternatively, scattering patterns of any kind can be incorporated, for example, drawing upon existing tools to simulate patterns, such as SingleCrystalMaker®, which calculates single crystal diffraction patterns.<cit.> Likewise, experimentally measured 2D detector images can be used. Individual images are combined in a linear combination in with user defined relative weightings of each contribution (Fig. <ref>c). For example, to create an image that resembles a thin film on a single crystal substrate, increasing the relative weighting of the generated substrate signal results in intensity differences that qualitatively resemble experimental data. The images to be combined must have the same pixel dimensions; specifically, the program uses a 2048 × 2048 pixel grid, which is standard for many X-ray total scattering detectors. In addition to combining multiple scattering patterns, SimDAT2D scattering patterns can incorporate noise of various levels, allowing for synthetic data that more closely resembles experimental data. Noise is generated as a 2D map as a randomized normal distribution of coefficients by which synthetic data are multiplied (Fig. S6). §.§ IsoDAT2D: Identifying Isotropic Scattering Signals We developed IsoDAT2D to identify isotropic scattering data from 2D detector images comprising isotropic and anisotropic contributions. From this, thin film PDF data can be acquired. The program processes thin film X-ray total scattering data using NMF and HAC (Fig. <ref>). This combination of machine learning algorithms eliminates the need for substrate subtraction, which is effective for films on amorphous substrates but not single crystal substrates.<cit.> The first step of IsoDAT2D is a `rotation and integration' (Fig. <ref>-1), which generates tens to thousands of 1D datasets from a single 2D image, as is needed for machine learning algorithms. These 1D datasets are inputs for NMF, (Fig. <ref>-2), which identifies recurring components over the series of datasets. Next, HAC is used to sort the components generated by NMF (Fig. <ref>-3), such that one or more of the resulting clusters contains components associated with the scattering pattern of the thin film. Of those clusters, one or more comprise components that resemble isotropic scattering (Fig. S4). The components in the cluster(s) are then averaged to a single I(Q) X-ray total scattering dataset. The dataset is smoothed to reduce noise introduced through the data processing (Fig. <ref>-4), ultimately returning an output of X-ray total scattering data from which PDF data can be derived. 
These steps and key parameters are further discussed following, with example inputs and outputs based on data generated using SimDAT2D. Details on data generation for the demonstrative example are described in Supplementary Information (Section S1.1). §.§.§ Rotation and Integration 2D X-ray total scattering data are typically azimuthally integrated into a single 1D I(Q) dataset. In the current data processing workflow, multiple narrow integrations are used instead. These allow us to exploit the inherent difference between radially-dependent (anisotropic) patterns of Bragg spots from a single crystal substrate and radially-symmetric (isotropic) scattering from an amorphous or polycrystalline thin film (Fig. <ref>). These many discrete integrations also enable the use of machine learning methods for data processing, which require multiple input datasets.<cit.> Bragg diffraction from single crystals occurs at discrete points in space,<cit.> resulting in high-intensity spots in patterns that reflect the symmetry of the crystal structure.<cit.> In addition, high-quality single crystal substrates, such as single crystal Si, have phonon-phonon interactions that contribute weak–but detectable–diffuse scattering (Fig. <ref>c).<cit.> Powder samples can be approximated as a collection of small single crystals with random orientations relative to one another. This gives discrete points across all positions of scattering cones, creating isotropic rings in detector images.<cit.> Likewise, untextured thin films (e.g., polycrystalline, nanocrystalline, amorphous) have radially-symmetric scattering. X-ray total scattering data for PDF analysis are typically acquired at synchrotron facilities in transmission geometries over a wide Q range using 2D detectors.<cit.> Our rotational integration uses these 2D scattering images with other standard integration inputs, such as detector calibration parameters (e.g., PONI in PyFAI), sample-to-detector distance, and X-ray wavelength.<cit.> For standard X-ray total scattering measurements, masks are used to exclude features from the integration, such as signals from a beam stop and dead detector pixels, covering no more than ≈20% of the image area. In contrast, we use masks to discretize the detector area to generate many 1D linescans. For this, masks leave only a small fraction of the detector area exposed, ≈1%; thus only small image areas are integrated at a time (Fig. <ref>, right). After an integration, the image is rotated by a user-defined angle and an integration is taken over the new exposed area. This is repeated until the whole 2D area is integrated. Figure <ref> shows an example of line integrations taken at 30^∘ rotational intervals. In the resulting integrated data, a set of peaks occurs at the same position and intensity in each dataset; this is the isotropic signal from the thin film. Other features change position and intensity between datasets; these are from the anisotropic Bragg spots from the single crystal substrate. The shape and size of the integrated area can be modified by selecting the mask(s) and the number of integrations can be varied by the degree of rotation (see Supplementary Information, Section S1.2). For the NMF input, having a sufficient number of datasets is essential for identifying the isotropic scattering signal, which is enabled by the redundancy of those features across integrated input 1D datasets. 
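A minimal sketch of this rotation-and-integration step is given below. The actual program rotates masks generated alongside the synthetic data; here the same idea is approximated with pyFAI's azimuth_range argument, and the PONI file name and image array are placeholders:

```python
import numpy as np
import pyFAI

ai = pyFAI.load("calibration.poni")   # detector geometry from a pyFAI calibration

def rotational_integrations(image, step_deg=1.0, wedge_deg=1.0, npt=2500):
    """Integrate a narrow azimuthal wedge, rotate it, and repeat over the full
    circle, instead of performing a single full azimuthal integration."""
    linescans = []
    for start in np.arange(-180.0, 180.0, step_deg):
        q, intensity = ai.integrate1d(image, npt, unit="q_A^-1",
                                      azimuth_range=(start, start + wedge_deg))
        linescans.append(intensity)
    return q, np.asarray(linescans)    # shape: (n_rotations, npt)
```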
To generate larger data series with various contributions of isotropic and anisotropic signals, 1D datasets from a combination of mask geometries and rotational angles can also be used. For example, masks with various numbers of sectors and differences in their widths and offsets can be used. We investigated this `multi-mask' approach in application to SimDAT2D-generated synthetic data, the results of which are described in the Supplementary Information (Section S1.3). The frequent occurrence of features enhances the likelihood of the desired signal being successfully identified by NMF.<cit.> To balance redundancy and computational expense, we find that a minimum of 360 integrated datasets per 2D image should be used (Fig. S3). However, with access to computational resources, the use of more input datasets improves the efficacy of signal identification, as described in the Supplementary Information (Section S1.3). §.§.§ Non-Negative Matrix Factorization NMF was originally developed to reduce the amount of data required to describe a system or phenomenon. Its ability to do this can be used to identify meaningful components in a series of data. The `non-negative' in NMF reflects that inputted data, identified components, and the weights of those components contain only positive values.<cit.> The mathematical process behind NMF is designed to decrease the Euclidean distances between separate non-negative matrices. To achieve this, a Frobenius norm loss function, ‖A - WH‖_loss, is implemented as a metric for the magnitude of the difference between the inputted and feature matrices. The features matrix, H, contains components that, when weighted according to matrix W, reproduce the original inputted matrix, A.<cit.> In the context of this work, the H matrix contains components that correspond to scattering from specific sample elements, i.e., the single crystal substrate and the thin film. In our use of NMF, we employ a process that minimizes the Euclidean distance using beta-divergence as the metric to iterate through different numbers of output components in the feature matrix.<cit.> To illustrate, when the original input matrix contains 360 datasets, our application runs NMF with the number of component values from 2 to 360, and then selects the smallest number of components that produce the lowest beta-divergence (Fig. <ref>). We implement Scikit-learn's NMF algorithm, which is initialized with randomized parameter variables, including initializer, solver, beta-loss, and tolerance.<cit.> In our implementation of NMF, the series of 1D I(Q) integrations generated from rotation and integration are used as the original matrix, A. We use the data as integrated, without normalization, which enables us to later sort the data based on variance that would be lost with normalization.<cit.> The initializer defines the method selected to run the program, the solver determines which numerical optimizer is used, beta-loss specifies the beta-divergence minimization function, and the tolerance indicates the proximity to a specific value of beta-divergence that must be reached to end the process.<cit.> The described NMF process is repeated for a user-defined number of iterations and returns the feature matrix, H, with the minimum beta-divergence. The number of components at which the beta-divergence is minimized appears as a discontinuity in the beta-loss as a function of the number of components (Fig. <ref>).
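A schematic of this component scan using Scikit-learn (with fixed rather than randomized hyperparameters, for brevity) is:

```python
import numpy as np
from sklearn.decomposition import NMF

# A: non-negative matrix of stacked 1D integrations, shape (n_scans, npt).
def scan_components(A, n_min=2, n_max=None, seed=0):
    n_max = n_max or A.shape[0]
    losses, models = [], []
    for n in range(n_min, n_max + 1):
        model = NMF(n_components=n, init="random", solver="mu",
                    beta_loss="frobenius", tol=1e-4, max_iter=500,
                    random_state=seed)
        W = model.fit_transform(A)                 # weights
        H = model.components_                      # features
        losses.append(model.reconstruction_err_)   # ||A - WH||_F
        models.append((W, H))
    return np.array(losses), models

# The smallest n beyond which the loss stops dropping sharply (the kink in
# the loss-versus-n curve) is taken as the number of components.
```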
This `elbow' indicates when the amount of components required to recreate the dataset has been identified. Depending on the nature of the inputted data, the identified feature matrix may contain thousands of components. A challenge of NMF is appropriately sorting the identified components. To manage this, we implement data science tools to help identify the component(s) that correspond to the thin film X-ray total scattering signal. §.§.§ Hierarchical Agglomerative Clustering We use HAC to efficiently sort components identified by NMF (Fig. <ref>). The process groups NMF components based on similarities, creating `clusters' of like components. As a `bottom-up' approach, agglomerative clustering initiates many small clusters that are merged to create larger ones, unlike a `top-down' approach, in which a single large cluster is progressively divided into smaller structures.<cit.> Using agglomerative clustering, we are able to assign the number of initial clusters into which outputted NMF components must converge. For the implementation of agglomerative clustering in IsoDAT2D, the number of clusters is user-defined and informed by the number of scattering phenomena expected to contribute to the measured total scattering signal. The algorithm should produce separate clusters associated with each type of scattering, for example, at least two clusters for single crystal substrates, which have diffuse scattering and Bragg spots. We use variance as the cluster sorting parameter, which supports the assignment of components to appropriate clusters in our unnormalized data. This leverages high variance of scattering intensity from features associated with the thin film and the substrate, which results from their relative volumes. Specifically, our clustering approach employs Ward's method, a widely-used linkage criterion that minimizes the Euclidean distance between points in the parameter space.<cit.> Ward's method is particularly effective for our data as it efficiently accounts for the variations in scattering intensities to ensure robust clustering. This allows us to identify and separate signals of interest from the NMF output matrix components, enhancing the quality of the resulting data and providing valuable insights into the scattering phenomena. After HAC, the 1D datasets in each cluster are plotted and visually evaluated by the user to assess if the data in the cluster resemble isotropic scattering. We found there to typically be multiple clusters with these contributions (Fig. S3 & 4). Further, the identified components within the clusters contain moderate noise from previous data processing steps. To reduce processing noise and have a singular 1D dataset, an approach has been created to average and smooth the identified components. §.§.§ Averaging and Smoothing In typical total scattering data processing, azimuthally integrating 2D images improves counting statistics and the resulting data quality. Here, the individual line integrations do not have this benefit, but statistics are improved by merging clustered components, which, despite being in one cluster, are not identical. This averaging is reminiscent of the angular averaging inherent to the higher data quality achieved with azimuthal integration. <cit.> Components of the selected cluster(s) are normalized to uniform scaling and then averaged to a single 1D dataset. However, the narrow integrations and NMF results in more noise than an azimuthal integration. 
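The clustering and averaging steps can be sketched as follows; the number of clusters and the subsequent visual selection of the isotropic cluster(s) remain user choices, as described above:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# H: NMF feature matrix, shape (n_components, npt); n_clusters reflects the
# number of expected scattering contributions (e.g., film, diffuse, Bragg).
def cluster_and_average(H, n_clusters=3):
    labels = AgglomerativeClustering(n_clusters=n_clusters,
                                     linkage="ward").fit_predict(H)
    averaged = {}
    for k in range(n_clusters):
        members = H[labels == k]
        # Normalize each component to a common scale before averaging.
        members = members / np.maximum(members.max(axis=1, keepdims=True), 1e-12)
        averaged[k] = members.mean(axis=0)
    return labels, averaged

# The cluster(s) resembling isotropic scattering are picked by inspection;
# the averaged curve still carries noise from the narrow line integrations
# and the NMF step.
```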
To address this, we implement a Savitzky-Golay filter from the SciPy library, which uses a polynomial fit to smooth high-variance regions in the data.<cit.> This reduces noise while preserving key features of the data. Additional considerations for working with noise are described in the Supplementary Information (Section S1.4). § EXPERIMENTS AND RESULTS We validated the capabilities of IsoDAT2D to separate signals of interest using synthetic data generated by our SimDAT2D program. Using these synthetic data, in which the `answer' data could be simulated directly and compared to the identified signal, we assessed the effects of parameters such as masking, noise, and integration resolution (Supplementary Information, Section S1).<cit.> Experiments using synthetic data, described in the Supplementary Information (Section S1.1) were used to qualitatively evaluate IsoDAT2D's performance and inform parameters to minimize noise and optimize isotropic signal identification. Building on the insights gained from synthetic data experiments, we applied IsoDAT2D to experimental X-ray total scattering data collected from a 300 nm thick polycrystalline film of Ge_2Sb_2Te_5 (GST) on a 500 μm single crystal Si substrate (synthesis and data acquisition information provided in Supplementary Information, Section S2.1). We qualitatively compare the I(Q) and G(r) data for this film processed using IsoDAT2D and substrate subtraction. We also compare data from a film of the same composition and thickness on an amorphous SiO_2 substrate processed using substrate subtraction. §.§ Substrate Subtraction Following previous methods, the X-ray total scattering data for GST on SiO_2 and Si substrates were isolated by integrating 2D data collected of the thin film samples and of the substrates without films, then scaling and subtracting the latter from the former.<cit.> For this process, 2D detector images were azimuthally integrated to 1D linescans with identical masks using PyFAI<cit.>, resulting in 1D linescans. The I(Q) of the substrates were scaled to maximize their intensity without exceeding the intensity of the corresponding thin film sample data. The scaled data from the substrate were then subtracted from the film-on-substrate data, leaving a signal comprising data from only the thin film (Fig. S9, Fig. <ref>). Using PDFgetX3,<cit.> the substrate-subtracted 1D linescan was Fourier transformed to PDF data with Q_max = 14 Å^-1 (Fig. <ref>). §.§ IsoDAT2D Component Identification X-ray total scattering data from the GST thin film on a single crystal Si substrate were also processed using IsoDAT2D. A single 2D detector image was sequentially rotated and integrated using several combinations of parameters (e.g., mask geometry, number of datasets, resolution, etc.). The resulting 1800 datasets with 2500 data points resulting from the use of five separate masks balanced the clarity of signal features and noise effects. For rotation and integration, five iterations of single and multi-sector masks were used with 1, 2, 3, 4, and 5 slices of 0.5 px width rotated every 1^∘ to produce 1800 1D datasets (Supplementary Information, Section S1.3). Similar to experiments with synthetic data, using these experimental data we found that this number of datasets balanced computational efficiency and data quality, especially the signal-to-noise ratio (Fig. S5). We selected a data point resolution for integration of 2500 points, which balanced signal identification and noise effects (Supplementary Information, Section S1.4). 
With higher data point resolutions, the program's output had significant noise. Using lower data point resolutions, the program identified signal features that were unphysical, especially peak widths and isolated signals that were not suitable for PDF analysis. From the generated 1D datasets, NMF components that resemble the GST signal of interest were identified in several clusters. Data from these clusters were averaged and smoothed to a single I(Q) dataset. These data required some further processing for the intensity to approach zero with increasing Q due to processing effects. For this, the value of the lowest intensity point was subtracted from the data. Subsequently, the dataset was multiplied by a Gaussian function to mitigate unphysical high-Q features, resulting in I(Q) data suitable for PDF analysis. The I(Q) data were transformed to G(r) with PDFgetX3 using a Q_max = 14 Å^-1.<cit.> §.§ Comparison of Data from Each Processing Method Identified X-ray total scattering data from amorphous substrate subtraction and IsoDAT2D are qualitatively similar (Fig. <ref>). In contrast, single crystal Si substrate subtraction data has unphysical features due to the oversubtraction of high intensity single crystal Bragg spots. Data for amorphous substrate subtraction are aligned with our expectations of X-ray total scattering data, especially the decrease of intensity of features with increasing Q. The IsoDAT2D-identified I(Q) signal is similar in its overall variation of intensity with Q, affirming the efficacy of the processing method. The positions of peaks in these two I(Q) signals are also consistent, with the primary difference between the identified signals being peak geometry, especially peak widths and the relative intensities of reflections in each pattern. Overall, this comparison highlights that the I(Q) data from IsoDAT2D are comparable with those from the previous method used for thin film PDF, highlighting the promise of accessing atomic structure information from thin films deposited on single crystal substrates using X-ray total scattering. <cit.> The I(Q) data from each processing method were Fourier transformed to generate PDF data (Fig. <ref> bottom) using PDFGetX3.<cit.> The PDF, G(r), were compared to one another, as well as to calculated PDFs and Te-Te and Ge-Sb partial PDFs for GST. Calculated data were based on a GST model from neutron PDF analysis<cit.> and were calculated using PDFgui (Fig. <ref>) <cit.>. Similarities and differences between the I(Q) signals are reflected in the resulting PDF data. For amorphous substrate subtraction and IsoDAT2D, PDF data are qualitatively similar, with correlations consistent with our expectations for a polycrystalline material. In contrast, PDF data from the single crystal substrate subtraction show termination ripples and high-frequency noise, making it entirely unsuitable for deriving even qualitative information about atomic structure. In the low r range, PDF data from amorphous substrate subtraction and IsoDAT2D processes miss the first correlation centered at 3 Å (Fig. <ref>a & c). For the second and third correlations, the amorphous substrate-subtracted data match calculated correlations between 4 Å and 6 Å (Fig. <ref>c). Local structure correlations in IsoDAT2D-identified data are less consistent with calculated data (Fig. <ref>a), exhibiting a single broad correlation at ≈5 Å rather than the two correlations near 4.2 Å and 5.2 Å. 
At higher r ranges, representing mid-range and average structure features, both the amorphous substrate-subtracted and IsoDAT2D-identified signals exhibit strong similarities with the calculated signal, accounting for most of the correlations (Fig. <ref> b, d). Overall, the PDF data from the IsoDAT2D process is of similar quality to the data from amorphous substrate subtraction. § DISCUSSION While there are established methods for identifying the scattered signal from thin films on amorphous substrates, the same approach cannot be applied to films on single crystal substrates (Fig. S9). To address this mismatch between measurement needs and capabilities, we developed IsoDAT2D to enable the measurement of atomic structures for amorphous and nanocrystalline thin films on single crystal substrates, a class of materials that have not been previously measurable with transmission geometry X-ray total scattering. IsoDAT2D effectively isolates the total scattering signal from a thin film on a single crystal substrate, from which PDF data for the film can be generated. With this approach, we enable PDF analysis of polycrystalline, nanocrystalline, and amorphous thin films on single crystal substrates without compromising Q_max, as in grazing incidence experiments.<cit.> The method we report leverages inherent differences in the 2D symmetry of the scattering patterns of the film and substrate, using hundreds to thousands of line integrations rather than a single azimuthal integration over the 2D detector image. The resulting 1D integrations are inputs for NMF, from which identified components are sorted using HAC, ultimately returning an output that is the X-ray total scattering of the isotropic component (i.e., the thin film). Using synthetic data, we found that increasing the number and variety of features in 1D integrations by using a variety of mask geometries and orientations increases the likelihood of identifying the signal of interest and decreases the noise associated with these narrow integrations. However, increasing the amount of input data also increases computational expense, especially for the currently implemented NMF algorithm, which requires both significant memory and computational power. In addition to high-fidelity identification from synthetic data, IsoDAT2D identified the thin film scattering signal from experimental data. The thin film scattering signal returned by the algorithm was consistent with that identified using conventional substrate subtraction methods from an analogous thin film on an amorphous substrate. Thin film PDF data are challenging to collect, and we find that both the amorphous substrate-subtracted and IsoDAT2D-identified scattering signals deviate from expectations in the local structure. However, their mid-range structures were consistent with one another and simulated PDF data of the published structure.<cit.> While the amorphous substrate subtraction and IsoDAT2D approaches effectively capture mid-range correlations, the local structure features from both methods differ from the calculated PDF. Since both methods have discrepancies between measured and calculated local structure, this could be a common limitation, or at least challenge, of applying PDF to thin film samples. 
Similarly, substrate subtraction and IsoDAT2D are so far limited to relatively thick films, with successful demonstration previously and here on films ≥ 300 nm thick.<cit.> However, having validated the capabilities of this machine learning data processing approach, improvements can be made to further push the boundaries of the types of samples to which PDF analysis can be applied, such as thinner films. Due to the similar strengths and shortcomings of both processing methods, we conclude that IsoDAT2D enables PDF data of similar quality to those from the substrate subtraction method, but can uniquely be applied to films on single crystal substrates. In the illustrative example shown here, to probe the crystalline structure of a phase change material, differences in structure and, accordingly, properties are not expected to depend on the substrate. However, IsoDAT2D enables PDF data from thin film samples for which structure and function are directly affected by the substrate.<cit.> § CONCLUSIONS We developed IsoDAT2D to identify thin film X-ray total scattering signals from single crystal anisotropic signals. Combining a novel approach for integrating 2D scattering images with NMF and HAC machine learning algorithms, IsoDAT2D effectively identifies isotropic scattering signals. To validate the algorithm's performance, we created SimDAT2D, which generates synthetic 2D images representative of scattering from thin films on single crystal substrates, allowing for the comparison of algorithm-identified signals with a known synthetic signal. Further, SimDAT2D is a valuable tool for exploring the effects of various parameters on algorithm performance, offering insights into IsoDAT2D's capabilities and informing data processing approaches specific to the nature of the film and substrate. Beyond thin film characterization, IsoDAT2D and similar approaches may be applicable to other materials systems and data in which there are constant and dynamic signals of interest, i.e., from in situ and operando scattering experiments. By advancing our understanding of structure-property relationships in application-relevant thin film materials, these tools pave the way for fundamental and technological advancements. § CONFLICTS OF INTEREST None to report. § ACKNOWLEDGMENTS The authors acknowledge the University of Florida Research Computing for providing computational resources and support that have contributed to the research results reported in this publication. This research used the Pair Distribution Function Beamline (PDF) of the National Synchrotron Light Source II, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Brookhaven National Laboratory under Contract No. DE-SC0012704. The authors would also like to acknowledge Dr. David Adams at Sandia National Laboratory for providing thin film samples.
http://arxiv.org/abs/2407.13507v1
20240718134203
Strength of 2D Glasses Explored by Machine-Learning Force Fields
[ "Pengjie Shi", "Zhiping Xu" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.stat-mech" ]
Strength of 2D Glasses Explored by Machine-Learning Force Fields xuzp@tsinghua.edu.cn Applied Mechanics Laboratory, Department of Engineering Mechanics, Tsinghua University, Beijing 100084, China § ABSTRACT The strengths of glasses are intricately linked to their atomic-level heterogeneity. Atomistic simulations are frequently used to investigate the statistical physics of this relationship, compensating for the limited spatiotemporal resolution in experimental studies. However, theoretical insights are limited by the complexity of glass structures and the accuracy of the interatomic potentials used in simulations. Here, we investigate the strengths and fracture mechanisms of 2D silica, with all structural units accessible to direct experimental observation. We develop a neural network force field for fracture (NN-F^3) based on the deep potential-smooth edition (DeepPot-SE) framework. Representative atomic structures across crystals, nanocrystalline, paracrystalline, and continuous random network glasses are studied. We find that the virials or bond lengths control the initialization of bond-breaking events, creating nanoscale voids in the vitreous network. However, the voids do not necessarily lead to crack propagation due to a disorder-trapping effect, which is stronger than the lattice-trapping effect in a crystalline lattice and occurs over larger length and time scales. Fracture initiation proceeds with void growth and coalescence, and advances through a bridging mechanism. The fracture patterns are shaped by subsequent trapping and cleavage steps, often guided by voids forming ahead of the crack tip. These heterogeneous processes result in atomically smooth facets in crystalline regions and rough, amorphous edges in the glassy phase. These insights into 2D crystals and glasses, both sharing SiO_2 chemistry, highlight the pivotal role of atomic-level structures in determining fracture kinetics and path selection in materials. Zhiping Xu July 22, 2024 ================= § INTRODUCTION The strengths of crystals and glasses have garnered significant attention in statistical physics for their non-equilibrium nature<cit.>, and are highly sensitive to the local arrangement of atoms<cit.>. In contrast to edge cleavage in crystals, failure of glasses often involves nucleation of voids in regions not necessarily at the crack tip. The cracks can be trapped by the amorphous network, adding kinetic contributions to the energy cost of cracking, and the voids ahead of the crack tip could guide its advancement<cit.>. Recent studies show that nanostructured glasses in the form of nanocrystals<cit.> and paracrystals<cit.> can markedly enhance their fracture resistance. Introducing nanocrystalline domains<cit.> increases fracture toughness by 80%, from 0.6 to 1.1 MPa m^1/2, while paracrystalline glasses exhibit a threefold greater enhancement<cit.>. It is thus interesting to explore the failure process of nanostructured glasses by considering their atomic-level structures, beyond the continuum framework<cit.>. Direct characterization of glass structures remains challenging<cit.>. Structural description of glasses is limited to the short-range order (SRO)<cit.> and medium-range order (MRO) parameters<cit.>. Notably, two-dimensional (2D) silica has recently been discussed in the literature since its successful synthesis. The crystalline and amorphous structures can be directly visualized by electron microscope (EM) <cit.>.
Exploring 2D silica thus allows direct characterization of their failure behaviors at the atomic level from both theoretical exploration and experimental observations, which remain largely unexplored. Atomistic simulations are powerful tools to reveal atomic-scale material kinetics<cit.>. The accuracy and efficiency of their prediction are limited by the fidelity of interatomic interaction models, from first-principles calculations at the electronic-structure level to empirical force fields. Empirical models such as Sundarararaman-Huang-Ispas-Kob (SHIK)<cit.>, Du<cit.>, Bertani-Menziani-Pedone (BMP)<cit.> were developed for oxide glasses including silica and successfully applied to problems such as material discovery and mechanistic analysis<cit.>. However, the capability of modeling fracture is under question due to the presence of strong lattice distortion and dangling bonds at the crack tip and cleaved edges<cit.>, factors typically not account for force field parametrization. Recent advances in developing machine-learning force fields (MLFFs)<cit.> allow us to revisit the non-equilibrium processes of material failure<cit.> by using chemically-accurate force fields trained in the Deep Potential-Smooth Edition (DeepPot-SE)<cit.>, Neural Equivariant Interatomic Potential (NequIP)<cit.>, Neuroevolution Potential (NEP) frameworks<cit.>. The MLFF approach has recently been applied to fracture problems by expanding the training dataset to include intermediate structures during fracture, which are usually not well modeled by empirical force fields<cit.>. In this work, we develop a neural network force field for fracture (NN-F^3) tailored for 2D silica across various nanostructures, including crystals and nanocrystalline (NCG), paracrystalline (Para), continuous random network (CRN) glasses. We investigate their mechanical responses at the atomic level to gain insights into the kinetics of material failure. § METHODS §.§ Density functional theory (DFT) calculations To obtain the quantitative relations between energies, forces on atoms, and the atomic-level structures, spin-polarized DFT calculations are performed using the Spanish Initiative for Electronic Simulations with Thousands of Atoms (SIESTA) package <cit.> with numerical atomic orbitals (NAOs) at the double-ζ-plus-polarization (DZP) level <cit.>. Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA) is used for the exchange-correlation functional <cit.>. Troulliere-Martins-type norm-conserving pseudopotentials are chosen for the ion-electron interactions <cit.>. The cut-off energy for electron wave functions is 500 Ry. The 𝐤-space is sampled by a Gamma-centered, 4×3×1 Monkhorst-Pack grid for the 144-atom model. For structures with open edges, sampling at the same 𝐤-point density is used. These settings assure a convergence threshold of 1 meV/atom. §.§ NN-F^3 Development To develop the NN-F^3 model for 2D silica, we first extend our approach from crystalline to glassy silica. Building upon our previous work on 2D crystals<cit.>, we first pre-sample the 3D space of basal-plane strain states using a low-accuracy empirical force field (Tersoff<cit.>). The strain sweeping is conducted at a rate of 1×10^-5 ps^-1 under controlled conditions of 10 K using a Nosé-Hoover thermostat with a damping constant of 1 ps. Structures generated from this pre-sampling process undergo subsequent DFT calculations. 
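A schematic of the geometry side of this strain pre-sampling, written with ASE, is given below. The file names are placeholders, and the short finite-temperature MD segments and force-field evaluations at each strain increment used in the actual workflow are omitted:

```python
import numpy as np
from ase.io import read, write

# Scale the in-plane cell in small increments to generate strained
# configurations; labeling by DFT follows in a separate step.
atoms = read("silica_2d.xyz")              # placeholder input structure
cell0 = np.array(atoms.get_cell())

for step in range(1, 201):
    strain = 1e-3 * step                   # uniaxial strain along x, as an example
    cell = cell0.copy()
    cell[0] = cell0[0] * (1.0 + strain)
    atoms.set_cell(cell, scale_atoms=True)
    write(f"strained_{step:03d}.xyz", atoms)
```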
We employ the end-to-end, symmetry-preserving DeepPot-SE framework<cit.> and DeePMD-kit<cit.> for training the model. Despite these efforts, the NN-F^3 model trained on this initial dataset shows some inaccuracies. Notably, stress predictions near peak strain and Poisson ratios at high strain deviate from the reference DFT calculations<cit.>, highlighting the constraints inherent in sampling atomic-level structures generated by the Tersoff force field. To enhance predictions of atomic forces in highly distorted structures at crack tips and cleaved, undercoordinated edges, we expand the initial dataset. Implementing an active learning strategy (Training-Exploration-Labeling<cit.>), we iteratively explore the most relevant structures based on predefined criteria related to deviations in atomic forces. Initially, four NN-F^3 models are trained using different seeds for random number generation during parameter initialization,<cit.>. MD simulations using these models are conducted in the Atomic Simulation Environment (ASE)<cit.>, generating trajectories and computing atomic forces under identical loading conditions as the pre-sampling process. The Query by Committee<cit.> algorithm is employed to screen atomic-level structures, selecting those with maximum standard deviations (SD) of atomic forces exceeding 0.1 eV/Å among the four NN-F^3 models. Structures containing crack tips and open edges were identified by atoms with coordination numbers below 2 for O atoms. Additionally, structures with an SD of atomic forces greater than 0.2 eV/Å are selected. These screened structures undergo DFT calculations for labeling, enriching the dataset. Consequently, atomic-level structures containing crack tips are incorporated into the dataset following the MD exploration process. For 2D glassy silica, we employ an active-learning strategy based on the NN-F^3 model previously trained for 2D crystalline silica. Initially, we generate all 144-atom inequivalent structures, each with non-isomorphic bonding networks, through bond rotation<cit.>. This process results in 257 structures distributed across 34 distinct ring distributions. From these, we select 45 structures for active learning across the full strain space. Structures exhibiting a maximum SD of atomic forces exceeding 0.2 eV/Å (from the four NN-F^3 models) are chosen for subsequent DFT calculations. We conduct 30 iterations of active learning to achieve convergence, resulting in a reduction of the maximum SD of atomic forces to below 0.2 eV/Å. The final dataset comprises 264,420 data frames, represented in a sketch map illustrating the diversity of atomic-level structures in the training dataset, encompassing lattices under varied strain states, cleaved edges, and structures containing crack tips (Fig. S1a). While these 144-atom structures are too small to distinguish NCG, Para, and CRN structures, they effectively capture the essential structural characteristics of both crystalline and glassy silica. In training the DeepPot-SE framework, we set the sizes of the embedding and fitting networks to (25, 50, 100) and (240, 240, 240), respectively, with a cutoff radius of 8.0 Å and a smoothing parameter of 1.0 Å. The batch size is 8, and the hyperparameters pref_e and pref_f, which determine the weights of energy and force losses in the total loss function, are set to 1.0 and 10.0, respectively. 
We employ Adaptive Moment Estimation (Adam) optimization over 3× 10^6 batch steps, starting with a learning rate of 0.001 that exponentially decays to 1.0 × 10^-8 by the end of training. Ninety-five percent of our dataset is used for training, with the remaining five percent reserved for validation. §.§ MD simulations To simulate the fracture of 2D silica, we use the large-scale atomic/molecular massively parallel simulator (LAMMPS) <cit.>. The structures (NCG, Para, CRN) are simulated with a cubic box of 200^3 Å, which contains ∼15,000 atoms, ∼4 times larger than previous studies <cit.>. The fracture tests were conducted using an athermal quasistatic (AQS) protocol. In AQS, the structure is deformed at a strain rate of 5 × 10^-5 per step, followed by damped dynamics with a viscous rate of 1 ps^-1 to relax the structure under a force-on-atom threshold of 10^-4 eV/Å. § RESULTS AND DISCUSSION §.§ Performance of NN-F^3 for 2D Silica The performance of NN-F^3 is validated for 2D silica crystals and glasses through the predicted energies, forces, and stress-stain relations. The root mean square error (RMSE) of the energy per atom, the interatomic forces, and the in-plane stress are below 0.79 meV/atom, 37.6 meV/Å, and 24 N/m, respectively, for the validation dataset (Fig. S2a-c). The RMSE of the phonon spectrum measured from the DFT results is below 0.2 meV (Fig. S2d). The uniaxial stress-strain relations of 2D silica crystals and glasses with structures not in the 45 structures of 2D silica exhibit excellent agreement with the reference DFT calculation results (Figs. S2 and S3). These results validate the predictions of equilibrium properties. To verify the accuracy of NN-F^3 in describing the non-equilibrium fracture process, we study fracture of crystalline silica using both NN-F^3 and DFT-based Born-Oppenheimer MD simulations. Runs of 0.3 ps and 0.4 ps are conducted for the zigzag and armchair crack edges, respectively, revealing almost identical structural evolution in the NN-F^3 and DFT results (Fig. S4). §.§ Glassy structures A critical issue in the study of non-crystalline materials is whether the structural models accurately reflect those observed in experiments, particularly concerning their non-equilibrium properties. The complexities in the chemistry and atomic-level structures of glasses present a challenge in verifying whether structures generated by methods like bond swapping<cit.> or simulated annealing accurately capture the essential structural features. Theoretical models were proposed with a CRN of Si-O tetrahedra in silica and by asserting that alkali or alkaline earth metals are glass modifiers that disrupt the network<cit.>. However, a direct comparison with experimental evidence at the atomic level is difficult. Consequently, the agreement is often assessed by the physical properties of the generated structures such as the density, modulus, and radial distribution functions (RDFs)<cit.>. 2D amorphous materials such as glassy silica<cit.> and monolayer amorphous carbon (MAC)<cit.> were observed under an EM, which allow for direct determination of the atomic positions. Experimental evidence shows that compared to MAC featuring a sub-crystalline 2D structure and significant out-of-plane displacements<cit.>, amorphous silica exhibits a CRN structure and lacks out-of-plane fluctuation for its symmetric 3-atom-layer structure<cit.>. 
2D glasses in a CRN structure can be generated by consecutive Stone-Wales (SW) rotations, where the topological disorder is controlled and measured through the ring statistics<cit.>. A Monte Carlo dual-switch procedure is deployed to the network of Si atoms<cit.>, the position of which is adjusted by minimizing a spring-like potential for ring-to-ring interaction in the dual space after each Monte Carlo move. We use a fictitious temperature of 10^-4 and 10^4 Monte Carlo steps and a value of α=1/3 for the Aboav-Weaire law<cit.>, which aligns with experimental evidence and suggests that small rings tend to sit around larger ones<cit.>. Structures of NCG and Para are constructed by preserving crystallites with sizes of 2.8 nm and 1.4 nm during structural transformation, respectively. Computer-generated NCG, Para and CRN structures (Fig. <ref>a-c) exhibit identical ring distributions (Fig. <ref>d). The glassy structures are rich in pentagons and heptagons, with rare quadrilaterals, which agrees well with experimental characterization using scanning tunneling microscopy (STM)<cit.>. Their RDFs (Fig. <ref>e-f) and angle distribution functions (ADFs, Fig. <ref>h, i) are also similar and align well with the experimental data. Minor discrepancies in ADF compared to experimental data can be attributed to two factors: the inaccuracy in determining atomic positions and the presence of structural defects within the materials. Pinpointing exact atomic locations, particularly for oxygen atoms, remains challenging, often resulting in uncertainties observed in STM images<cit.>. Furthermore, defects are commonly overlooked in silica model construction, yet they significantly influence residual stress distribution and O-Si-O bond angles. The RDFs and ADFs of Para, CRN, and NCG structures show minor differences (Fig. S5). The RDF comparisons between CRN and Para structures align with previous findings<cit.>. Para structures notably feature a narrower angle distribution (SD = 5.73^∘), compared to CRN (SD = 6.32^∘) and NCG (SD = 6.40^∘) structures, despite similar mean angles (108.90^∘ for Para, 108.91^∘ for NCG, and 108.99^∘ for CRN, respectively). These results suggest a higher degree of structural order in the Para structures, possibly indicative of medium-range order (MRO). To quantify the effect of nanostructuring on the MRO, a similarity order parameter s between a cluster (A) and its reference (B) is defined as<cit.> s^B = max_l,αN_B∑_i^N_A∑_j^N_Bexp[-|𝐓· r^A_i-r_j^B|^2/σ^2]/N_A∑_i^N_B∑_j^N_Bexp[-|r_i^B-r_j^B|^2/σ^2]. where N_A and N_B are the numbers of atoms in clusters A and B, respectively. Here σ=0.2 Å is a smearing parameter. The transformation matrix 𝐓 incorporates both affine scaling transformation (l) and cluster rotation (α). A crystalline cluster is chosen here as the reference (Fig. <ref>g, inset). A distinct difference in the distributions of s is identified, even though their RDFs, ADFs, and ring statistics are similar (Fig. <ref>g), suggesting the difference in lattice distortion. §.§ Void nucleation by bond breakage Crack nucleation in crystals can be driven by phonon stability, which defines their strengths<cit.>. In contrast, for glassy materials, the fate of embryo cracks is significantly influenced by the heterogeneity in both the chemistry and structures. Understanding the link between material failure via bond breakage and structural features in the equilibrium state is a fundamental inquiry in the theory of glasses<cit.>. 
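For readers who wish to reproduce the structural-similarity analysis, the similarity order parameter s introduced in the previous subsection can be estimated by a brute-force search over the scaling l and rotation α entering the transformation 𝐓. The sketch below is a simplified 2D implementation with the stated σ = 0.2 Å; the search grids and the centering of the clusters are assumptions made here for illustration only.

```python
import numpy as np

def similarity(cluster_a, cluster_b, sigma=0.2,
               scales=np.linspace(0.9, 1.1, 21),
               angles=np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)):
    """Brute-force estimate of the similarity order parameter s between a
    cluster A and a reference cluster B (2D coordinates in angstrom)."""
    a = cluster_a - cluster_a.mean(axis=0)   # center both clusters (assumption)
    b = cluster_b - cluster_b.mean(axis=0)
    n_a, n_b = len(a), len(b)
    # denominator: self-overlap of the reference cluster, weighted by N_A
    d2_bb = np.sum((b[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    denom = n_a * np.exp(-d2_bb / sigma**2).sum()
    best = 0.0
    for l in scales:
        for alpha in angles:
            c, s_ = np.cos(alpha), np.sin(alpha)
            T = l * np.array([[c, -s_], [s_, c]])   # scaling + rotation
            ta = a @ T.T
            d2_ab = np.sum((ta[:, None, :] - b[None, :, :]) ** 2, axis=-1)
            best = max(best, n_b * np.exp(-d2_ab / sigma**2).sum() / denom)
    return best
```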
We conduct AQS simulations under uniaxial tension along various directions by enforcing strain in the tensor form of 𝐄 = ε[ cos^2θ  1/2 sin 2θ; 1/2 sin 2θ  sin^2θ ], where θ represents the tensile direction, and ε is the amplitude of strain applied to the sample. Void nucleation occurs when the bond strength is exceeded<cit.>, typically detected by the Si-O bond length surpassing 2.2 Å. The nucleation strain is defined as the value at which the first bond breaks. The size effect is examined by studying representative models with 144 and 3,456 atoms, corresponding to lateral sizes of ∼ 2 nm and ∼ 10 nm, respectively <cit.>. Virial coefficients (V_θ = 𝐝_θ^T𝐕·𝐝_θ) for equilibrium, undeformed samples are computed along the tensile directions (𝐝_θ), following the definition of normal stresses (Fig. <ref>a). The virial tensor, which relates to the stress tensor as σ = -𝐕/a (a is the average area of atoms), characterizes the residual stress induced by material heterogeneity. The spatial distribution of atomic sites exhibiting large |V_θ| values reveals the presence of `force chains' within the amorphous bonding network of the equilibrium or undeformed state. There is a strong correlation between nucleation events and the value of V_θ (Fig. <ref>b), enabling the prediction of 2D glass strength from the equilibrium structures. Voids typically nucleate at atomic sites with high residual stress. Fig. <ref>c shows the cumulative probability density function (CDF) of V_θ at the sites of void nucleation, ranging from the smallest (the most stretched) to the largest (the most compressed) values. Unlike a uniform distribution, the shape of the CDF indicates that a few highly stretched states of atomic stress in equilibrium determine the strength of 2D glasses under tension. On the other hand, the Si-O-Si bond lengths (distances between the silicon atoms), which measure the local strain, also exhibit a strong correlation with the strain to void nucleation (Fig. <ref>d). The lengths of critical bonds that break in the damped dynamics follow a narrow distribution with a mean of 3.6 Å and an SD of 0.07 Å (Fig. <ref>e). The reduction in the values of the virial and bond-length descriptors with the size of the models indicates the statistical nature of these strength predictors (Figs. <ref>b-d). Heptagonal and octagonal rings are more prone to breakage than hexagonal rings, leading to more frequent void nucleation around larger rings (Fig. S6). These arguments, made for the CRN models, apply to NCG and Para as well.
§.§ Fracture by void coalescence and bridging
The elastic responses of materials are lost after void nucleation. Catastrophic fracture may be triggered by further increasing the loading amplitude. The strength of the glasses is much reduced compared to the crystals, from 26.2 N/m to 10 N/m for CRN. The stiffness decreases from 154.5 N/m to 79.9 N/m. For the glassy phases, although the SROs and MROs (e.g., RDFs, ADFs, ring statistics) are close, the strengths of the Para and NCG structures increase slightly, by 5% and 4% relative to CRN, respectively (Fig. <ref>a). Significantly, the locations of fracture nucleation differ from those of void nucleation (Fig. <ref>c-k). The CDF of the minimum distances between the sites of void nucleation and the crack suggests a weak correlation (Fig. S7). This finding contrasts with crystals, where nucleation typically leads directly to fracture, and indicates a more pronounced disorder-trapping effect over lattice trapping, which operates on larger length and time scales.
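As a side note on reproducing the quantities used in the void-nucleation analysis above, the snippet below constructs the applied strain tensor 𝐄(θ, ε) and projects a per-atom virial tensor onto the tensile direction, V_θ = 𝐝_θ^T 𝐕 𝐝_θ. It is an illustrative sketch; the numerical values are placeholders, not data from the simulations.

```python
import numpy as np

def strain_tensor(theta, eps):
    """Uniaxial strain of amplitude eps applied along direction theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    half_sin2t = 0.5 * np.sin(2.0 * theta)
    return eps * np.array([[c * c, half_sin2t],
                           [half_sin2t, s * s]])

def directional_virial(V, theta):
    """Project a 2x2 per-atom virial tensor onto the tensile direction d_theta."""
    d = np.array([np.cos(theta), np.sin(theta)])
    return d @ V @ d

theta = np.deg2rad(30.0)
E = strain_tensor(theta, 0.02)                 # 2% strain along 30 degrees
V_atom = np.array([[-1.2, 0.3], [0.3, 0.4]])   # placeholder per-atom virial (eV)
print(E)
print(directional_virial(V_atom, theta))
```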
The edges cleaved by fracture display distinct characteristics across the three representative glassy structures. In NCG, cracks typically propagate through large crystalline grains ahead of the crack tip and cleave crystalline facets, as the energy cost of crack deflection is relatively high. In contrast, the smaller and more distorted crystalline grains in Para hinder straight crack propagation, resulting in a more tortuous path. Conversely, the highly disordered CRN structures facilitate a relatively smooth fracture path. However, the contrast in the strengths (Fig. <ref>a) and edge roughness (Fig. <ref>b) of NCG, Para, and CRN structures are not significant in our simulations, suggesting a weak nanostructuring effect. The reportedly strong toughening effects observed in experiments may thus be attributed to the chemical heterogeneity in the crystalline and glassy phases and an enhanced 3D nanostructuring effect compared to 2D<cit.>, which awaits further exploration. §.§ Fracture kinetics and dynamical effects There is a signature difference in the fracture kinetics and patterns in crystals and glasses. In crystals, an advancing crack leaves atomistically smooth crystalline facets behind, although lattice kinks joining these facets in chiral edges can lead to roughening at a length scale larger than the lattice constants. In contrast, in all three amorphous structures, the cracks become ensnared within the disordered phase, where the cleaved edges show roughness at the length scale of a single Si-O-Si bond (Fig. <ref>b). Crack propagation is facilitated by void nucleation and coalescence in the amorphous phases. In our AQS simulations, the number of voids increases first by their nucleation and then declines by aggregation (Fig. <ref>c). Propagating cracks are predominantly characterized by the relatively large voids within the samples (Fig. <ref>d, e). Following the Griffith theory, we find that the critical crack size is c_ cr = 4γ/π E ε^2 = 15.08 Å, where the values of γ and E are the edge energy densities and Young's modulus. The value of E is extracted from the stress-strain relation of the CRN glass (Fig. <ref>a), and γ is the edge energy density calculated for 2D crystalline silica that is the lower bound of the value for glasses. This critical value well separates the propagating and non-propagating voids, validating the linear elastic fracture mechanics theory at the Ångstrom scale. It should be noted that using γ instead of fracture toughness in the estimation excludes the non-equilibrium process during fracture<cit.>, thus undervaluing c_ cr. The stress field ahead of the crack tip, averaged over several samples, follows the r^-1/2 singularity. However, significant structural heterogeneity in each sample dominates the K field, and may ultimately determine the fracture strength<cit.> (Fig. <ref>f). Voids can thus nucleate at the weak spots that are not necessarily at the crack tip, and the advancing of cracks can be guided by voids nucleated in front of the crack tip and along its path. Dynamic fracture simulations are conducted to explore the additional inertial effects. A large sample of 100×50 nm^2 is stretched at strain rates ranging from 10^-5 to 10^-3 ps^-1 at 0.1 K. The Nosé-Hoover thermostat with a damping constant of 1 ps is used. At a low strain rate of 10^-5 ps^-1, fracture proceeds by edge cleavage and lattice trapping and edge cleavage, which are guided by voids nucleated ahead of the crack tip, defining the crack paths (Fig. <ref>g, Movie S1). 
At higher strain rates, fragmentation intensifies through void nucleation across the sample (Fig. S8, Movie S2). These phenomena have been supported by numerous simulations<cit.> and experimental studies<cit.> of glassy structures. The kinetic and dynamical effects elucidated here through atomistic simulations provide further insights into the mechanisms of failure.
§ CONCLUSION
We discussed the strength and fracture behaviors of 2D silica by developing and using a neural network force field for fracture (NN-F^3) to simulate the failure process, which features chemical accuracy and the capability to model problems of relatively large size, such as fracture. We find that the strength of 2D glasses is determined by the heterogeneous residual stress, and can be reasonably measured by the local virials or bond lengths in the equilibrium state. Beyond this strength, voids are nucleated in the amorphous network but do not always trigger fracture propagation. Only those beyond a critical size defined by the Griffith theory can advance and proceed with continuous coalescence and bridging events. The kinetics of fracture propagation is renormalized by a disorder-trapping effect in addition to lattice trapping and is often guided by voids nucleated ahead of the crack tip. These findings enrich our understanding of material failure at the atomic level, where the effects of disorder and structural heterogeneity are highlighted, while chemical complexity is excluded by the choice of 2D silica. Further experimental studies, potentially conducted in situ, are expected to add more insights into the problem<cit.>.
§ SUPPLEMENTARY MATERIAL
The Supplementary Material includes the sketch map of the dataset used to train the neural network force field for fracture (NN-F^3), technical details and validation of NN-F^3, and additional simulation results supporting the discussion.
This study was supported by the National Natural Science Foundation of China through grants 12425201 and 52090032. The computation was performed on the Explorer 1000 cluster system of the Tsinghua National Laboratory for Information Science and Technology.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available in figshare at http://doi.org/10.6084/m9.figshare.25688664.
§ FIGURE CAPTIONS
Fig. 1 a-c, Atomic-level structures of 2D silica nanocrystalline (NCG, a), paracrystalline (Para, b) and continuous random network (CRN, c) glasses. The inset in panel c shows the CRN structures reported in experimental studies<cit.>. The rings are colored by their number of edges, as illustrated in panel a. The inset in panel c is reprinted from Büchner et al., Chemistry: A European Journal 20, 9176–9183 (2014) with the permission of John Wiley & Sons. d, Ring distributions in NCG, Para, and CRN. e-f, Radial distribution functions (RDFs) of computer-generated CRN (e) and experimentally-identified<cit.> (f) structures. g, The probability distribution function of structural similarity, s, which is defined in the text. The reference structure is shown in the inset. h, i, Angle distribution functions (ADFs) of computer-generated CRN (h) and experimentally-identified<cit.> (i) structures.
Fig. 2 a, Virial coefficients (V_θ) along the tensile directions (the arrows) in the undeformed, equilibrium structures. The positions of fracture initiation are annotated by the circles.
b, Relation between the smallest value of V_θ (the most stretched state) and the strain to void nucleation for the 144-atom and 3456-atom samples. c, Cumulative probability density function (CDF) of V_θ at the sites of void nucleation, from the smallest to the largest. d, Relation between Si-O-Si bond lengths (the distance between two silicon atoms) in equilibrium structures and the strain to void nucleation. Data in panels (b) and (d) are fitted by exponential functions. e, The length distribution of bonds at the breaking event. Fig. 3 a, Stress-strain relations of 2D silica NCG, Para, CRN glasses, and crystals (stretched along the armchair direction) under uniaxial tension. The peak strain values are marked by vertical dashed lines. b, Roughness of cleaved edges, which is defined as the root mean square (RMS) of edge profiles measured from their average values. The large standard deviation (SD) in roughness reflects the nanostructuring effect as indicated in the simulation snapshots (panels e, h, and k). c-k, Simulation snapshots of the undeformed (equilibrium) (c, f, i), void-nucleation (d, g, j), post-fracture (e, h, k) states of NCG (c-e), Para (f-h), and CRN (i-k) structures. Fig. 4 a, Crack size measured in the unit of sample size, which evolves with the simulation time and strain. b, Relation between strain and the reduced crack size. c, Relation between strain and the number of voids, which is collected from 100 simulations of each model. The shadowed regions are bounded by the SDs. d, The size of propagating and non-propagating voids, measured in the unit of sample size. The critical crack size, c_ cr = 0.092 (the dashed line), is calculated from the Griffith theory (c_ cr = 4γ/π E ε^2), where the values of γ and E are the edge energy densities and Young's modulus extracted from our NN-F^3 simulations, respectively. ϵ is the applied strain. e, Snapshots of crack propagation across a simulated sample showing coalescence and bridging events. f, r^-1/2-singular K field at the crack tip and stress heterogeneity measured stress σ_yy for the CRN glass. g, Dynamical fracture patterns in the CRN structures showing the disorder-trapping effect and void-guided crack paths. The black dots represent the cleaved edges and nucleated voids during the fracture process, where the void-guided processes are highlighted in the insets.
http://arxiv.org/abs/2407.12946v1
20240717182412
On the conformal group of a globally hyperbolic spacetime
[ "Ali Bleybel" ]
gr-qc
[ "gr-qc", "math-ph", "math.MP" ]
On the conformal group of a globally hyperbolic spacetime
Ali Bleybel
Faculty of Sciences (I), Lebanese University, Beirut, Lebanon
July 22, 2024
§ ABSTRACT
The groups of causal and conformal automorphisms of globally hyperbolic spacetimes are studied. In two dimensions, we prove that all globally hyperbolic spacetimes that are directed and connected are causally isomorphic. We work out the consequences of this fact and obtain a (partial) classification of the causal automorphism and conformal groups of a two-dimensional globally hyperbolic spacetime. Finally, we present a generalization for locally conformally flat globally hyperbolic spacetimes.
§ INTRODUCTION
In General Relativity, conformal mappings are important because they preserve the causal structure (up to time orientation) and light-like geodesics up to parametrization. Zeeman <cit.> showed that the group of causal automorphisms of Minkowski spacetime (of dimension >2) coincides with the group G generated by orthochronous Lorentz transformations, space-time translations, and dilatations. Later, in <cit.>, Levichev generalized these results to smooth time-orientable Lorentzian manifolds M of dimension >2. More precisely, he showed that any causal automorphism of M is automatically conformal, provided that M is past and future distinguishing. Although there are many results concerning the conformal group of a compact Lorentzian manifold (e.g., <cit.>), results concerning the group of conformal automorphisms of a spacetime are difficult to find in the literature (especially the physics literature). This study addresses this problem using the back-and-forth method (reminiscent of model theory). We obtain a partial result, namely for two-dimensional globally hyperbolic spacetimes. In particular, we isolate an order-theoretic condition satisfied by spacetimes causally isomorphic to two-dimensional Minkowski spacetime. These results might be of interest especially in the light of the celestial holography program. While the results obtained using our method can probably be shown via more standard techniques (especially in the 2-dimensional case), the importance of this work lies in its potential for generalization to higher-dimensional spacetimes. Furthermore, in the 2-dimensional case, this paper is largely self-contained. The rest of this paper is organized as follows. Section 2 outlines the background material, notation, and introductory results needed for the rest of the paper. In Section 3, we introduce and prove the main theorem. Section 4 provides a partial classification of the causal and conformal automorphism groups. Section 5 generalizes the results to locally conformally flat globally hyperbolic spacetimes.
§ PRELIMINARIES
We assume the reader is familiar with elementary first-order logic and model theory.
§.§ Order theory
Let (X, ≺) be a partially ordered set, with the order relation denoted by ≺. Recall that ≺, being a binary relation, is a subset of the Cartesian product X × X. Being an order relation, ≺ is a reflexive, transitive, and antisymmetric relation on X. If A is a subset of X, the induced order on A is the binary relation ≺|_A := (A × A) ∩ ≺. Then ≺|_A is an order relation on A. Given (partially) ordered sets X, Y, a partial isomorphism X →̇ Y is an order isomorphism A → B, where A ⊂ X and B ⊂ Y are equipped with the induced orders from X and Y, respectively.
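As a purely illustrative rendering of these order-theoretic definitions (not part of the paper's argument), the snippet below represents a finite order as a set of pairs, computes the induced order on a subset, and checks whether a given bijection between subsets is a partial isomorphism. All example orders and mappings are chosen arbitrarily here.

```python
from itertools import product

def induced_order(order, subset):
    """Restrict an order relation (a set of (x, y) pairs) to a subset."""
    return {(x, y) for (x, y) in order if x in subset and y in subset}

def is_partial_isomorphism(order_x, order_y, mapping):
    """Check that `mapping` (a dict A -> B, assumed bijective onto its image)
    both preserves and reflects the induced orders."""
    a, b = set(mapping), set(mapping.values())
    ox, oy = induced_order(order_x, a), induced_order(order_y, b)
    return all(((x1, x2) in ox) == ((mapping[x1], mapping[x2]) in oy)
               for x1, x2 in product(a, repeat=2))

# Example: divisibility order restricted to {1, 2, 4} versus the usual
# order restricted to {0, 1, 2}.
div = {(x, y) for x, y in product(range(1, 9), repeat=2) if y % x == 0}
leq = {(x, y) for x, y in product(range(5), repeat=2) if x <= y}
print(is_partial_isomorphism(div, leq, {1: 0, 2: 1, 4: 2}))  # True
```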
The set { x ∈ ℭ | a ≺ x ≺ b} will be denoted by [a,b] and is called the Alexandrov interval with endpoints a, b. For a spacetime, an interval [a,b] is also called a (closed) causal diamond. By a pattern, we mean a finite set equipped with a strict order relation.
§.§ The setting
We will be using the following signature L := {↠, ≪}, where ↠, ≪ are two binary relation symbols.
§.§.§ Theory
We consider the theory T, which consists of all statements that are logical consequences of the following axioms:
* ∀ x .not.( x ≪ x )
* ∀ x ∀ y ∀ z (( x ≪ y & y ≪ z) → x ≪ z)
* ∀ x (x ↠ x)
* ∀ x ∀ y ((x ↠ y & y ↠ x) → x=y)
* ∀ x ∀ y .not.(x ≪ y & x ↠ y).
And for each natural number n > 0, the following axiom:
(6)_n ∀ x_1 ∀ x_2 … ∀ x_n [(⋀_i=1^n-1 x_i ↠ x_i+1 & x_1 ↠ x_n ) ⟶ ⋀_j, k, 1 ≤ j ≤ k ≤ n x_j ↠ x_k ].
§.§.§ Types and quantifier-free types
Let n > 0 and let x̅ := (x_1, …, x_n) be a tuple of n variables. An n-type t = t(x̅) in T is a set of formulas ϕ(x̅) such that for each formula ψ(x̅) ∈ t(x̅), there exists a model M of T and a tuple of elements (a_1, …, a_n) ∈ M^n such that ψ(a_1, …, a_n) holds in M. The case where n=1 corresponds to 1-types. We say that t(x̅) is a quantifier-free type if all formulas in t are quantifier-free. For a tuple a̅ ∈ M^n, n > 0, we denote by tp(a̅) the set of all formulas ϕ(x̅) such that ϕ(a̅) holds in M. Let X be a set of elements of M, and let b̅ be a listing of the elements of X (so b̅ can be of infinite length). The type of a̅ over X, denoted by tp(a̅/b̅) (or, equivalently, tp(a̅/X)), is the set of all formulas ϕ(x̅, y̅) (where y̅ is a tuple of variables) such that ϕ(a̅, b̅') holds in M. Here b̅' is some tuple of elements of X of the same length as y̅. The quantifier-free type of a̅ over X, denoted by qftp(a̅/X), is the set of all quantifier-free formulas ϕ(x̅, y̅) such that ϕ(a̅, b̅') holds in M.
§.§ Corresponding notions in the context of spacetime physics
The causality relation in the context of spacetime physics is denoted by ≺: x ≺ y iff x ∈ J^-(y), where J^-(y) is the past causal cone of y; note that y ∈ J^-(y). We will also make use of the relation ≪: x ≪ y (chronology) iff x ∈ I^-(y) (x belongs to the open past lightcone of y). For a globally hyperbolic spacetime, the relation ≪ is transitive and irreflexive; it follows that ≪ is antisymmetric. The relation ≪ is then a strict order. Let X ⊂ ℳ be a finite set. Then X is a strict pattern if for all x, y ∈ X, x ≠ y, x ≪ y or x ⊀ y. The point of this definition is to exclude non-open relations. The statements x ≪ y and I^-(y) ∩ I^+(x) ≠ ∅ are equivalent. Other equivalent statements are I^+(y) ⊊ I^+(x) and I^-(x) ⊊ I^-(y). The expression x ↠ y (used earlier in the context of the theory T) is defined here as x ≺ y & .not.(x ≪ y). We denote by x^⊥ the set of elements causally unrelated to x, i.e. { y ∈ ℳ | .not.(x ≺ y) & .not.(y ≺ x)}. The relation y ∈ x^⊥ is also denoted by x || y. Assume a ↠ b. Then, for any c, we have: (i) a ≺ c & c ↠ b → a ↠ c; (ii) b ≪ c → a ≪ c; (iii) b || c → c ⊀ a. The relation x ≺ y can be expressed as (modulo T): x ≺ y ≡ x ≪ y .or. x ↠ y, where we used axiom (5): .not.(x ≪ y & x ↠ y). To see this, observe that x ↠ y ↔ (x ≺ y & .not.(x ≪ y)), and the required equivalence follows. We have: Let ℳ be a globally hyperbolic spacetime. Then ℳ is a model of the theory T in the language L. It is a standard fact that the relations ↠ and ≪, defined as in subsection <ref>, satisfy axioms (1) through (5) and (6)_n of <ref> for all n > 0. For details, please see <cit.>.
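As an informal illustration of the statement just proved (not a proof), one can spot-check axioms (1)-(5) of T on finite random samples of the Minkowski plane, where chronology and horismos are computable directly from the coordinates. The coordinate conventions below, points written as (t, x), are the obvious ones for 𝕄^2 and are stated here as an assumption.

```python
import random

def chron(p, q):
    """p << q: chronological precedence in 2D Minkowski, points p = (t, x)."""
    return q[0] - p[0] > abs(q[1] - p[1])

def horismos(p, q):
    """p ->> q: causal but not chronological (null separation, or p = q).
    With generic random coordinates this holds essentially only for p = q."""
    return q[0] - p[0] == abs(q[1] - p[1])

random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(80)]
assert not any(chron(p, p) for p in pts)              # axiom (1): irreflexivity
assert all(horismos(p, p) for p in pts)               # axiom (3): reflexivity
for p in pts:
    for q in pts:
        assert not (chron(p, q) and horismos(p, q))   # axiom (5)
        if horismos(p, q) and horismos(q, p):
            assert p == q                             # axiom (4): antisymmetry
        for r in pts:
            if chron(p, q) and chron(q, r):
                assert chron(p, r)                    # axiom (2): transitivity
print("axioms (1)-(5) hold on this sample")
```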
§.§ Conformal transformations
A conformal diffeomorphism φ: ℳ → 𝒩, where (ℳ, g_ℳ) and (𝒩, g_𝒩) are two n-dimensional pseudo-Riemannian manifolds of signature (p,q), 0 ≤ p ≤ q, p+q = n, is a C^∞-diffeomorphism such that φ^*(g_𝒩) = e^ψ g_ℳ, for some arbitrary C^∞-function ψ: ℳ → ℝ.
§ CAUSAL ISOMORPHISMS OF GLOBALLY HYPERBOLIC SPACETIMES
§.§ General considerations
Let 𝒞, 𝒟 be models of T. A causal isomorphism f: 𝒞 → 𝒟 is a bijective map satisfying ∀ x, y ∈ 𝒞, x ≺ y ↔ f(x) ≺ f(y). The above definition applies to globally hyperbolic spacetimes ℳ, 𝒩. Let a, b be points in ℳ, 𝒩 respectively. We have: Let ℳ and 𝒩 be globally hyperbolic spacetimes. Assume that ℳ and 𝒩 are upward and downward directed. A partial map {a} → {b} can be extended to a causal isomorphism ℳ → 𝒩 if and only if for any finite tuples (a_0=a, a_1, …, a_n) and (b_0=b, b_1, …, b_n) with qftp(a_1, …, a_n) = qftp(b_1, …, b_n) (in the language L := {≪, ↠}) the following conditions hold: 𝒪_0 ∩ 𝒪_1 ∩ … ∩ 𝒪_n ≠ ∅ ⟷ 𝒪^'_0 ∩ 𝒪^'_1 ∩ … ∩ 𝒪^'_n ≠ ∅, where 𝒪_i is one of I^+(a_i), I^-(a_i), a_i^⊥ (respectively, 𝒪^'_i is one of I^+(b_i), I^-(b_i), b_i^⊥), and, for all I, J ⊂ {0,1, …, n}, ∃ x [⋀_i ∈ I a_i ↠ x & ⋀_j ∈ J x ↠ a_j] ⟷ ∃ y [⋀_i ∈ I b_i ↠ y & ⋀_j ∈ J y ↠ b_j]. Before proceeding to the proof of this Theorem, let us show the following: Keep the above notation and hypotheses (conditions <ref> and <ref>). Assume that c ∈ ℳ satisfies [⋀_i ∈ I a_i ↠ c & ⋀_j ∈ J c ↠ a_j] for some I, J ⊂ {0,1, …, n}, and that, furthermore, c ∈ 𝒪_i_0 ∩ 𝒪_i_1 ∩ … ∩ 𝒪_i_k for some open sets 𝒪_i_ℓ, where 𝒪_i_ℓ is one of I^+(a_i_ℓ), I^-(a_i_ℓ), a_i_ℓ^⊥, ℓ=0, 1, …, k, i_ℓ ∈ {0, 1, …, n}. Then, we have: ∃ d [⋀_i ∈ I b_i ↠ d & ⋀_j ∈ J d ↠ b_j] & d ∈ 𝒪^'_i_0 ∩ 𝒪^'_i_1 ∩ … ∩ 𝒪^'_i_k, where the 𝒪^'_i_ℓ, ℓ=0, 1, …, k, are such that 𝒪_i_ℓ = I^±(a_i_ℓ) ↔ 𝒪^'_i_ℓ = I^±(b_i_ℓ) & 𝒪_i_ℓ = a_i_ℓ^⊥ ↔ 𝒪^'_i_ℓ = b_i_ℓ^⊥. We may suppose that I ∩ J = ∅ (otherwise c = a_m for some m ∈ I ∩ J and it suffices to take d = b_m). Observe then that I, J and {i_0, i_1, …, i_k} are disjoint. Recall that {x ∈ ℳ | x ↠ e or e ↠ x } = ℳ ∖ (I^+(e) ∪ I^-(e) ∪ e^⊥). Let us consider the case |I|=1, |J|=0, I = {s} for simplicity; the other case can be treated similarly. Then, as a_s ↠ c, c ∈ ℳ ∖ (I^+(a_s) ∪ I^-(a_s) ∪ a_s^⊥) =: ℱ; it follows that c ∈ ℱ ∩ 𝒪_i_0 ∩ 𝒪_i_1 ∩ … ∩ 𝒪_i_k. On the other hand, if ℱ^' ∩ 𝒪^'_i_0 ∩ 𝒪^'_i_1 ∩ … ∩ 𝒪^'_i_k = ∅, where ℱ^' := 𝒩 ∖ (I^+(b_s) ∪ I^-(b_s) ∪ b_s^⊥), we get 𝒪^'_i_0 ∩ 𝒪^'_i_1 ∩ … ∩ 𝒪^'_i_k ⊂ I^+(b_s) ∪ I^-(b_s) ∪ b_s^⊥. It can now be seen that the latter implies that there are relations among {b_i_ℓ, b_s, ℓ = 0, 1, …, k} which are nonexistent among {a_i_ℓ, a_s, ℓ = 0, 1, …, k}, contradicting the hypotheses, hence the claim.
§.§.§ Proof of Lemma
Let ℭ, 𝔇 be dense subsets of ℳ, 𝒩 respectively. These sets are equipped with the induced relations ≪ (chronology) and ↠ (horismos). Back & Forth: A finite partial isomorphism ℳ →̇ 𝒩 is an order isomorphism p: A → B, with A, B finite subsets of ℳ and 𝒩 respectively, equipped with the corresponding induced order relations. We will construct, inductively, a family of partial isomorphisms p_i: A_i → B_i, for i ∈ ℕ, where A_i, B_i are finite subsets of ℭ^', 𝔇^' (with ℭ ⊂ ℭ^' ⊂ ℳ, 𝔇 ⊂ 𝔇^' ⊂ 𝒩, and ℭ^', 𝔇^' countable), such that A_i ⊂ A_i+1, B_i ⊂ B_i+1, ⋃_i ∈ ℕ A_i = ℭ^', ⋃_i ∈ ℕ B_i = 𝔇^', and qftp(a_n/â) = qftp(b_n/b̂). The union ⋃_i ∈ ℕ p_i will then be an order isomorphism ℭ^' → 𝔇^'.
§.§ Back & forth
Let p_n be a partial isomorphism mapping a_i to b_i for i=0, 1, …, n. Forth: (n is even) Let c be an element of ℭ ∖ A. Let c_↓ be the set {x ∈ A | x ≪ c}. Similarly, c_↑ := {x ∈ A | c ≪ x}, and c_|| := { x ∈ A | x || c}.
The sets c_↠ and c_↞ are defined as c_↠ := { x ∈ A | c ↠ x}, c_↞ := { x ∈ A | x ↠ c}. We will assume that c_↠ = c_↞ = ∅. It follows that ⋂_x ∈ c_↓ I^+(x) ∩ ⋂_x ∈ c_↑ I^-(x) ∩ ⋂_x ∈ c_|| x^⊥ ≠ ∅. By the assumptions of the Lemma, we have ⋂_x ∈ c_↓ I^+(p_n(x)) ∩ ⋂_x ∈ c_↑ I^-(p_n(x)) ∩ ⋂_x ∈ c_|| p_n(x)^⊥ ≠ ∅, and it suffices to choose a d in the above intersection. Denote a_n+1 := c and b_n+1 := d. Back: (n is odd) Let d be an element of 𝔇 ∖ B. Let d_↓ be the set {x ∈ B | x ≪ d}. Similarly, d_↑ := {x ∈ B | d ≪ x}, and d_|| := { x ∈ B | x || d }. The sets d_↠ and d_↞ are defined as d_↠ := { x ∈ B | d ↠ x}, d_↞ := { x ∈ B | x ↠ d}. We will assume that d_↠ = d_↞ = ∅. It follows that ⋂_x ∈ d_↓ I^+(x) ∩ ⋂_x ∈ d_↑ I^-(x) ∩ ⋂_x ∈ d_|| x^⊥ ≠ ∅. By the assumptions of the Lemma, we have ⋂_x ∈ d_↓ I^+(p^-1_n(x)) ∩ ⋂_x ∈ d_↑ I^-(p^-1_n(x)) ∩ ⋂_x ∈ d_|| p^-1_n(x)^⊥ ≠ ∅, and it suffices to choose a c in the above intersection. Denote a_n+1 := c and b_n+1 := d. The case where c_↠ (say) is non-empty is handled similarly by using condition (<ref>). More precisely, assume that c satisfies [⋀_i ∈ I a_i ↠ c & ⋀_j ∈ J c ↠ a_j] for some I, J ⊂ {0,1, …, n}. Then by condition <ref>, there exists some d ∈ 𝒩 satisfying [⋀_i ∈ I b_i ↠ d & ⋀_j ∈ J d ↠ b_j]. Using Lemma <ref>, we can assume that tp(d/b̅) = tp(c/a̅). However, it might be the case that d ∉ 𝔇, in which case we let 𝔇_n := 𝔇_n-1 ∪ {d}, where 𝔇_n-1 was defined at the earlier stages. If, in the back stage, d_↠ ≠ ∅ (say), then a similar reasoning as above can be applied, and we define a set ℭ_n := ℭ_n-1 ∪ {c}, where c satisfies tp(c/a̅) = tp(d/b̅). Define ℭ^' := ⋃_n ∈ ℕ ℭ_n, 𝔇^' := ⋃_n ∈ ℕ 𝔇_n and p := ⋃_k ∈ ℕ p_k. Then: The map p is a total causal isomorphism ℭ^' → 𝔇^'. The map p is total: let x be an element of ℭ^'. Then x = a_ℓ for some ℓ ∈ ℕ. Hence, x ∈ A_ℓ. It follows that p|_A_ℓ = p_ℓ is defined on x = a_ℓ. To show that p is bijective, we argue similarly: let, for the sake of a contradiction, x, y ∈ ℭ^' be distinct elements such that p(x) = p(y). By the above reasoning, there exists some i such that p_i(x) = p(x) = p(y) = p_i(y), contradicting the injectivity of p_i. Similarly, let z ∈ 𝔇^'. Then z = b_ℓ for some sufficiently large ℓ; then z = b_ℓ ∈ B_k (for some natural number k). We have p_k(A_k) = B_k. Let us define Φ: ℳ → 𝒩, x ↦ lim_k p(x_k), where x_k → x as k → ∞, x_k ∈ dom(p). This is well defined, as otherwise p ceases to be a causal morphism: The map Φ is a well-defined non-trivial causal isomorphism ℳ → 𝒩. Proof of Lemma: First we show that Φ is a well-defined map: let y, z ∈ 𝒩 be two spacetime points such that lim_k p(x_k) = y and lim_ℓ p(x^'_ℓ) = z, where (x_k)_k, (x^'_ℓ)_ℓ both converge to x ∈ ℳ. Let 𝒱 be a neighborhood 𝒱 = [a,b] of x, with a, b ∈ ℭ; then x_k, x^'_ℓ ∈ 𝒱 for all k, ℓ > N_1 (for some N_1 ∈ ℕ), and p(𝒱 ∩ ℭ^') ⊂ [p(a), p(b)]. In particular p(x_k), p(x^'_ℓ) ∈ [p(a), p(b)] for all k, ℓ > N_1. Considering a sequence 𝒱_n := [a_n,b_n], a_n, b_n ∈ ℭ, of nested causal diamonds 𝒱_n+1 ⊂ 𝒱_n, such that a_n, b_n → x as n → ∞, we obtain lim_k→∞ p(x_k) = lim_ℓ→∞ p(x'_ℓ); thus y = z and Φ is well defined. Similarly, Φ is a causal morphism. Finally, to see that Φ is bijective, we observe that p is bijective, and that each point z ∈ 𝒩 is the limit of some sequence (z_k)_k: the sequence (p^-1(z_k))_k, being contained in a compact subset of ℳ, converges to some limit Φ^-1(z). While back & forth can be used (in general) to show that two countable structures are isomorphic, the above result extends the isomorphism ℭ^' → 𝔇^' to Φ: ℳ → 𝒩.
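As an aside for experimentation, the toy sketch below runs a few "forth" steps of the construction between samples of the Minkowski plane written in null coordinates u = t + x, v = t - x, where chronology is just the product of the coordinate orders. It ignores horismos relations and works with generic points only, so it is a simplification of the argument above; all numerical choices are illustrative.

```python
import random

def extend_coordinate(dom_vals, img_vals, new_val):
    """Pick an image coordinate whose order relative to img_vals matches the
    order of new_val relative to dom_vals (a dense-linear-order step)."""
    below = [iv for dv, iv in zip(dom_vals, img_vals) if dv < new_val]
    above = [iv for dv, iv in zip(dom_vals, img_vals) if dv > new_val]
    lo = max(below) if below else min(img_vals, default=0.0) - 1.0
    hi = min(above) if above else max(img_vals, default=0.0) + 1.0
    return 0.5 * (lo + hi)

def extend(pmap, c):
    """One 'forth' step: extend a partial map on null coordinates (u, v)."""
    dom = list(pmap)
    u = extend_coordinate([p[0] for p in dom], [pmap[p][0] for p in dom], c[0])
    v = extend_coordinate([p[1] for p in dom], [pmap[p][1] for p in dom], c[1])
    pmap[c] = (u, v)

random.seed(1)
pmap = {}  # partial causal isomorphism: domain points -> image points
for _ in range(10):
    extend(pmap, (random.random(), random.random()))
# check order preservation: p << q iff u_p < u_q and v_p < v_q
for p in pmap:
    for q in pmap:
        assert ((p[0] < q[0] and p[1] < q[1]) ==
                (pmap[p][0] < pmap[q][0] and pmap[p][1] < pmap[q][1]))
print("partial map preserves the chronological order on the sample")
```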
In reality, the two countable structures ℭ^', 𝔇^' that were shown to be isomorphic by p are both dense in ℳ and 𝒩, respectively; furthermore, ℳ and 𝒩 are standard pseudo-Riemannian manifolds (hence ℳ is rigid over ℭ^', and similarly 𝒩 over 𝔇^').
§ PARTIAL CLASSIFICATION OF AUT(ℳ) FOR A TWO-DIMENSIONAL GLOBALLY HYPERBOLIC SPACETIME
This section considers the case of a two-dimensional globally hyperbolic spacetime. We have: Let ℳ be a two-dimensional globally hyperbolic spacetime having non-compact Cauchy surfaces. Assume furthermore that ℳ is upward and downward directed, i.e. ℳ satisfies the following: (†) For all x, y ∈ ℳ, there exist u, v ∈ ℳ such that x, y ≺ u & v ≺ x, y. Then there exists a causal isomorphism i: ℳ → 𝕄^2. Consequently, the group of causal automorphisms of ℳ is Aut(𝕄^2). Let 𝒞 be a countable dense subset of ℳ. Let A, B ⊂ 𝒞, A ∩ B = ∅, be finite. Assume that A ≪ B; then there exists some z ∈ 𝒞 such that A ≪ {z} ≪ B. This is done by induction on |A| and |B|.
* If A = B = ∅, then the statement holds since ℳ is non-empty.
* If |A| = 0, then by induction on |B|:
* |B| = 1: the statement follows since ℳ is globally hyperbolic with no boundary;
* |B| = 2: the statement follows by the assumption that ℳ is directed.
* |B| > 2: the statement follows easily by induction.
* The case |B| = 0 is treated similarly.
* |A| = |B| = 1: follows by the denseness of 𝒞.
* |A| = 1, |B| = 2: write A = {a}, B = {b_1, b_2} with a ≪ b_1, b_2. Then b_1, b_2 are contained in the forward light cone with vertex a. Similarly, a is contained in the past light cones of b_1, b_2. The past light cones of vertices b_1, b_2 have nonempty intersection I_12 := I^-(b_1) ∩ I^-(b_2) (since a belongs to this intersection). Let z ∈ I_12 ∩ I^+(a). Then z satisfies the required property.
* A similar argument applies when |A| = 2, |B| = 1.
* |A| = 2, |B| = 2: This is the base step for the induction. Let A = {a_1, a_2}, B := {b_1, b_2}. Then a_1, a_2 ≪ b_1, b_2. As a_1, a_2 ≪ b_1, two null curves emanating from a_1, a_2 intersect at a point p. Then p ≪ b_1, b_2. Similarly, two null curves containing b_1, b_2 respectively intersect at some point q. We have p ≪ q; for some z ∈ [p,q], p ≪ z ≪ q. Hence, z satisfies the requirements of the Lemma.
Assume the induction hypothesis holds for all sets A, B satisfying |A| = m, |B| = n. For |A| = m+1, |B| = n: let A^' ⊂ A, |A^'| = m, and let {a} := A ∖ A^'. Then, by the induction hypothesis, there exists some z such that A^' ≪ {z} ≪ B. If a ≪ z, we are done. Otherwise, to conclude, apply the result for the sets {a, z}, B. For |A| = m, |B| = n+1 a similar reasoning applies. It follows immediately: Keep the above notation. Let {a_0, a_1, …, a_n} ⊂ ℳ and {b_0, b_1, …, b_n} ⊂ 𝕄^2 with qftp(a̅) = qftp(b̅). Then 𝒪_1 ∩ 𝒪_2 ∩ … ∩ 𝒪_n ≠ ∅ ↔ 𝒪^'_1 ∩ 𝒪^'_2 ∩ … ∩ 𝒪^'_n ≠ ∅.
§.§ Proof of Theorem
The result follows from the above Lemmas, together with the observation that the condition qftp(a̅) = qftp(b̅) holds for the tuples a̅ = a_0, b̅ = b_0 with arbitrary a_0 ∈ ℳ and b_0 ∈ 𝕄^2.
§.§ Two-dimensional spacetimes with compact Cauchy surfaces
To handle the case where a spacetime has compact Cauchy surfaces, we use the following result: Let M be a Lorentzian manifold (or a semi-Riemannian manifold) with universal covering π: M̃ → M. Denote by Γ the group π_1(M). Then Γ ⊂ Aut(M̃) =: G (respectively, Γ ⊂ Conf(M̃) =: G_1). Also, Aut(M) (or Conf(M)) is isomorphic to N(Γ)/Γ, in which N(Γ) is the normalizer of Γ in Aut(M̃) (or Conf(M̃)). We obtain the following: Let ℳ be a two-dimensional globally hyperbolic spacetime having compact Cauchy surfaces.
Assume furthermore that ℳ̃ (the universal covering space of ℳ) is directed. Then the group Aut(ℳ) of causal automorphisms of ℳ is given by N(ℤ)/ℤ, where N(ℤ) is the normalizer of ℤ in Aut(𝕄^2).
§.§ Non-directed two-dimensional globally hyperbolic spacetime
Let us define 𝒟 := { (x,t) ∈ 𝕄^2 | |x| + |t| < 1 }. This is the diamond I^+((0,-1)) ∩ I^-((0,+1)). Note that 𝒟 is causally isomorphic to 𝕄^2. Any two-dimensional spacetime with non-compact Cauchy surfaces can be causally isomorphically embedded into 𝒟. We denote the above embedding by ι. Let us recall the statement for ℳ = 𝕄^2: The group Aut(𝕄^2) of causal automorphisms of the Minkowski plane is given by (Homeo_≤(ℝ))^2 ⋊ S_2. We may characterize causal automorphisms of non-directed globally hyperbolic spacetimes by considering the embedding ι. Let ℳ be a two-dimensional globally hyperbolic spacetime. If ℳ is not directed, then any causal automorphism of ℳ induces a homeomorphism ∂ιℳ → ∂ιℳ, where ∂ιℳ is equipped with the standard topology. By analyzing the structure of non-directed spacetimes, we obtain a more precise classification of the automorphism group: Let ℳ be a two-dimensional globally hyperbolic spacetime. Then Aut(ℳ) is one of the following groups:
* Γ := (Homeo_≤(ℝ))^2 ⋊ S_2;
* N_Γ(ℤ)/ℤ;
* Γ^' := a subgroup of Γ stabilizing a countable set;
* N_Γ'(ℤ)/ℤ;
* Homeo_≤(ℝ) ⋊ S_2;
* N_Homeo_≤(ℝ) ⋊ S_2(ℤ)/ℤ.
Items (1) and (2) have already been investigated (they correspond to directed spacetimes with non-compact and compact Cauchy surfaces, respectively). Assume that ℳ has non-compact Cauchy surfaces. Assume furthermore that ℳ is upward but not downward directed, and let a_1, a_2 ∈ ∂ιℳ be such that (⋆) ∃ b (a_1, a_2 ≺ b & ∄ x (x ∈ ιℳ & x ≺ a_1, a_2)). Then any automorphism of ℳ must preserve the type tp(a_1, a_2). We have three possible cases:
* There exist timelike curves γ_1, γ_2 containing a_1, a_2 respectively such that for any b_1 ∈ γ_1, b_2 ∈ γ_2 satisfying a_1 ≪ b_1 ≪ b, a_2 ≪ b_2 ≪ b, (⋆) holds for b_1, b_2.
* There exist null curves γ_1, γ_2 containing a_1, a_2 respectively such that for any b_1 ∈ γ_1, b_2 ∈ γ_2 satisfying a_1 ↠ b_1 ↠ b, a_2 ↠ b_2 ↠ b, (⋆) holds for b_1, b_2.
* For all causal curves γ_1, γ_2 with a_1 ∈ γ_1, a_2 ∈ γ_2, there exist some b_1 ∈ γ_1, b_2 ∈ γ_2 satisfying a_1 ≺ b_1 ≺ b, a_2 ≺ b_2 ≺ b such that (⋆) does not hold for b_1, b_2.
Cases 1 and 2 imply the existence of a small interval in ∂ιℳ (i.e., the image of an interval of ℝ by a homeomorphism ℝ → ∂ιℳ). By paracompactness of ∂ιℳ, we conclude the existence of a countable set of points b (as in (⋆) above), and hence Aut(ℳ) stabilizes a countable subset of ℳ. In case 3, the group of causal automorphisms stabilizing ∂ιℳ is seen to be Homeo_≤(ℝ) ⋊ S_2: let ∂ιℳ be defined by x^- = φ(x^+), and let Φ: ℳ → ℳ, (x^+, x^-) ↦ (ϕ(x^+), ψ(x^-)), be a causal automorphism of ℳ, where ϕ, ψ are increasing homeomorphisms of ℝ. Denoting the extension of Φ by Φ^', let Φ^': (x^+, x^-) ↦ (ϕ(x^+), ψ(x^-)) (with ϕ, ψ now defined over all of ℝ). Then, on ∂ιℳ: ψ(φ(x^+)) = φ(ϕ(x^+)); let u be a parameter for ∂ιℳ, x^+ = f_+(u), x^- = f_-(u); then ϕ(f_+(u)) = f_+(φ(u)), ψ(f_-(u)) = f_-(φ(u)). We may assume that f_+ or f_- is monotone (otherwise, it can be seen that for some x ∈ ∂ιℳ, we have I^±(x) ∩ ιℳ ≠ ∅, impossible by Theorem 4.1 of <cit.>). It follows that φ is monotone, as required. The case where ℳ is downward but not upward directed is treated similarly. If ℳ is neither upward nor downward directed, then the same analysis still applies (with minor changes), and we also get that Aut(ℳ) is one of the groups listed in the Theorem.
The other cases (i.e., ℳ has compact Cauchy surfaces) follow by the application of Theorem <ref>.
§.§ Group of conformal automorphisms of ℳ
Let us denote by Diffeo_≤(ℝ) the group of increasing C^∞-diffeomorphisms of the real line. In <cit.>, a characterization of the conformal automorphism group of a two-dimensional globally hyperbolic spacetime with a non-compact Cauchy surface is given. We obtain a simpler characterization of C^∞-conformal diffeomorphisms of the Minkowski plane using null coordinates x^+ = t+x, x^- = t-x (the standard metric on 𝕄^2 written in terms of null coordinates is given by -dx^+dx^-): The group Conf(𝕄^2) of conformal diffeomorphisms of the plane is given by Diffeo_≤(ℝ)^2 ⋊ D_2, where D_2 is the dihedral group, D_2 ≃ C_2 × C_2. Let Φ : 𝕄^2 → 𝕄^2, (x^+, x^-) ↦ (X^+, X^-), be a conformal diffeomorphism of the plane. We have -dX^+dX^- = -e^φ(x^+,x^-) dx^+dx^-, as Φ is a conformal map, with φ(x^+,x^-) being an arbitrary function. Writing X^+ = X^+(x^+,x^-) and X^- = X^-(x^+,x^-), we obtain -(∂ X^+/∂ x^+ dx^+ + ∂ X^+/∂ x^- dx^-)(∂ X^-/∂ x^+ dx^+ + ∂ X^-/∂ x^- dx^-) = -e^φ(x^+,x^-) dx^+dx^-, hence X^+_+ X^-_+ = 0, X^+_- X^-_- = 0, and X^+_+ X^-_- + X^+_- X^-_+ > 0. Finally, either X^+ = X^+(x^+), X^- = X^-(x^-), and X^+_+ X^-_- > 0, or X^+ = X^+(x^-), X^- = X^-(x^+), and X^+_- X^-_+ > 0. The rest of the proof now follows <cit.>: any conformal diffeomorphism of 𝕄^2 is of one of the forms (x^+,x^-) ↦ (f(x^+), g(x^-)) or (x^+,x^-) ↦ (f(x^-), g(x^+)), where f, g are diffeomorphisms of the real line, both increasing or both decreasing. It follows that the group Conf(𝕄^2) is generated by elements in Diffeo_≤(ℝ)^2, space reflections (x^+, x^-) ↦ (x^-, x^+), and time reflections (x^+,x^-) ↦ (-x^-, -x^+). We still need to see that the subgroup Diffeo_≤(ℝ)^2 is normal in Conf(𝕄^2). Let p_1 denote the transposition (x^+, x^-) ↦ (x^-, x^+); for any n ∈ Diffeo_≤(ℝ)^2, n: (x^+,x^-) ↦ (f(x^+), g(x^-)), we have p_1np_1: (x^+,x^-) ↦ (g(x^+), f(x^-)), where g, f are increasing diffeomorphisms of ℝ. Hence p_1np_1 ∈ Diffeo_≤(ℝ)^2 as required. Let p_2 denote the map (x^+, x^-) ↦ (-x^-, -x^+); then we have p_2np_2: (x^+,x^-) ↦ (-x^-, -x^+) ↦ (f(-x^-), g(-x^+)) ↦ (-g(-x^+), -f(-x^-)). Defining f_1, g_1 by f_1(x^+) = -g(-x^+), g_1(x^-) = -f(-x^-), it can be seen that both f_1 and g_1 are increasing diffeomorphisms of ℝ. Let now p := p_1p_2: (x^+,x^-) ↦ (-x^+,-x^-); a similar reasoning shows that pnp is in Diffeo_≤(ℝ)^2, whence the result follows. This proves the Theorem. Applying the above considerations allows us to obtain the following: Let ℳ be a two-dimensional globally hyperbolic spacetime. Then Conf(ℳ) is one of the following groups:
* Γ_1 := (Diffeo_≤(ℝ))^2 ⋊ D_2;
* N_Γ_1(ℤ)/ℤ;
* Γ'_1 := a subgroup of Γ_1 stabilizing a countable set;
* N_Γ'_1(ℤ)/ℤ;
* Γ_2 := Diffeo_≤(ℝ) ⋊ D_2;
* N_Γ_2(ℤ)/ℤ;
* Γ'_2 := a subgroup of Γ_2 stabilizing a countable set;
* N_Γ'_2(ℤ)/ℤ.
The same proof method as for Theorem <ref> still applies after making the necessary changes at the appropriate places (extra care should be taken for conformal anti-causal diffeomorphisms).
§ LOCALLY CONFORMALLY FLAT GLOBALLY HYPERBOLIC SPACETIMES
While a generic spacetime of dimension ≥ 3 has very few causal (or conformal) automorphisms, the locally conformally flat case is more interesting. Let ℳ be a globally hyperbolic spacetime. Assume that ℳ is
* simply connected;
* locally conformally flat;
* directed;
* has non-compact Cauchy surfaces;
* of dimension dim(ℳ) ≥ 3.
Then ℳ is globally conformally flat. It suffices to check that the conditions (<ref>, <ref>) are met.
Let 𝒟 = [p,q] be an interval in ℳ. By compactness of 𝒟, there exists a finite cover 𝒟_i, i = 1, …, ℓ, by conformally flat open sets. The metric restricted to 𝒟_i has the form g_i := g|_𝒟_i = Ω_i (-dt^2 + ∑_j=1^n-1 dx_j^2). Then g|_𝒟 = Ω g_𝕄^n, where Ω is obtained from the Ω_i using a partition of unity. For any finite set A ⊂ 𝒟, we can then use the causal embedding 𝒟 ↪ 𝕄^n to deduce the required result. For a given spacetime ℳ, we denote by ℳ̃ its universal covering space. Let ℳ be a globally hyperbolic spacetime. Assume that ℳ is
* locally conformally flat;
* directed;
* of dimension dim(ℳ) ≥ 3.
Let Γ := π_1(ℳ). Then Conf(ℳ) = N_Conf(𝕄^n)(Γ)/Γ. Combining this with the result by A. V. Levichev (<cit.>), we obtain: Let 𝔐 be a globally hyperbolic spacetime having dimension ≥ 3. Assume that 𝔐 is locally conformally flat and that (𝔐, ≪) is directed. Then 𝔐 is conformally homogeneous.
§ REFERENCES
[CL] Peter J. Cameron, Deborah C. Lockett, Posets, homomorphisms and homogeneity, Discrete Mathematics 310 (2010) 604-613.
[CN] P. J. Cameron, J. Nešetřil, Homomorphism-homogeneous relational structures, Combinatorics, Probability and Computing 15 (2006) 91-103.
[GM] A. García-Parrado Gómez-Lobo and E. Minguzzi, Product posets and causal automorphisms of the plane, Class. Quantum Grav. 28, no. 14 (2011).
[H] Hawking, S. W., King, A. R. and McCarthy, P. J., J. Math. Phys. 17, 174 (1976).
[K] Kim, D.-H., J. Geom. Phys. 96 (2015) 146-154.
[K2] Kim, D.-H., A classification of two-dimensional spacetimes with non-compact Cauchy surfaces, J. Geom. Phys. 73 (2013) 252-259.
[K3] Kim, D.-H., A note on non-compact Cauchy surfaces, Class. Quantum Grav. 25 (2008) 238002.
[KP] E. H. Kronheimer and R. Penrose, On the structure of causal spaces, Math. Proc. Cambridge Philos. Soc. 63 (1967) 481-501. doi:10.1017/S030500410004144X
[L] A. V. Levichev, Prescribing the conformal geometry of a Lorentz manifold by means of its causal structure, Sov. Math. Dokl. 35 (1987) 452.
[MP2] Melnick, K. and Pecastaing, V., The conformal group of a compact simply connected Lorentzian manifold, arXiv:1911.06251v2.
[Z] Zeeman, E. C., J. Math. Phys. 5, 490 (1964). doi:10.1063/1.1704140
http://arxiv.org/abs/2407.12890v1
20240717080345
Modeling reflection spectra of super-Eddington X-ray sources
[ "Swarnim Shashank", "Honghui Liu", "Askar B. Abdikamalov", "Jiachen Jiang", "Cosimo Bambi", "Fergus Baker", "Andrew Young" ]
astro-ph.HE
[ "astro-ph.HE", "gr-qc" ]
Swarnim Shashank (ORCID 0000-0003-3402-7212): Center for Astronomy and Astrophysics, Center for Field Theory and Particle Physics and Department of Physics, Fudan University, Shanghai 200438, People's Republic of China; swarnim@fudan.edu.cn
Honghui Liu (ORCID 0000-0003-2845-1009): Center for Astronomy and Astrophysics, Center for Field Theory and Particle Physics and Department of Physics, Fudan University, Shanghai 200438, People's Republic of China; Institut für Astronomie und Astrophysik, Eberhard-Karls Universität Tübingen, D-72076 Tübingen, Germany
Askar B. Abdikamalov (ORCID 0000-0002-7671-6457): School of Humanities and Natural Sciences, New Uzbekistan University, Tashkent 100001, Uzbekistan; Center for Astronomy and Astrophysics, Center for Field Theory and Particle Physics and Department of Physics, Fudan University, Shanghai 200438, People's Republic of China; Ulugh Beg Astronomical Institute, Tashkent 100052, Uzbekistan
Jiachen Jiang (ORCID 0000-0002-9639-4352): Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK; Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, UK
Cosimo Bambi (ORCID 0000-0002-3180-9502): Center for Astronomy and Astrophysics, Center for Field Theory and Particle Physics and Department of Physics, Fudan University, Shanghai 200438, People's Republic of China; School of Humanities and Natural Sciences, New Uzbekistan University, Tashkent 100001, Uzbekistan
Fergus Baker: School of Physics, University of Bristol, Tyndall Avenue, Bristol BS8 1TH, UK
Andrew Young: School of Physics, University of Bristol, Tyndall Avenue, Bristol BS8 1TH, UK
Corresponding author: Cosimo Bambi, bambi@fudan.edu.cn
§ ABSTRACT
We present a relativistic disk reflection model based on the geometry calculated using analytical formulae for super-Eddington accretion flows. This model features a slim disk geometry where the inner disk thickness is proportional to the radius, becoming thicker as the mass accretion rate increases. The slim disk profile reduces the brightness of the blue horn in the Fe K emission line for a fixed emissivity and significantly changes the intensity profile for a lamppost geometry. The model is constructed assuming a spherically symmetric spacetime. It can be used for any kind of source showing fluorescent reflection features and predicted to have a slim accretion disk, such as slowly rotating black holes in X-ray binaries, active galactic nuclei, tidal disruption events, and neutron star X-ray binaries. To show the capability of the model, we use the 2017 NICER and NuSTAR data of the ultraluminous X-ray transient Swift J0243.6+6124.
§ INTRODUCTION
Super-Eddington accretion has become an important phenomenon to study in recent years in sources where the luminosity exceeds the Eddington luminosity (L_ Edd ≈ 1.26 × 10^38 M/M_⊙ erg s^-1). It has been observed in ultraluminous X-ray sources (ULXs) <cit.>, which are compact objects observed in nearby galaxies, and even in one Galactic pulsar, Swift J0243.6+6124 <cit.>. Super-Eddington accretion has been observed in tidal disruption events (TDEs), which are transient events occurring when a star is disrupted <cit.> and accreted by a massive black hole (see <cit.>). There are also observations of super-Eddington accretion in active galactic nuclei (AGNs) <cit.>, where the accretion takes place onto supermassive black holes at the centres of galaxies. Accretion beyond the critical Eddington limit is also considered an important factor in the growth of massive black holes discovered in the early Universe <cit.>.
Fluorescent iron lines are commonly observed in the X-ray spectra of a number of astrophysical sources, such as neutron star <cit.> and black hole <cit.> X-ray binaries (XRBs), AGNs <cit.>, and TDEs <cit.>. The source of this emission line is predicted to be the “reflection" taking place at the accretion disk (or some optically thick medium). The Comptonized photons from hot plasma around the central compact object interact with the material in the disk or outflow. This imprints the atomic data of the gas as emission and absorption lines onto the spectrum. However, due to the compact object there is strong gravity which blurs this reflection spectrum arising from gravitational redshift and Doppler boosting <cit.>. The relativistic reflection spectrum is characterized by the Compton hump and blurred elemental emission lines, out of which Fe K emission line at 6.4-6.9 keV is often the strongest feature <cit.>. Studying these reflection features is termed as X-ray reflection spectroscopy <cit.>: it provides a useful tool to probe the accretion dynamics and gravitational effects near to the compact object and remains one of the leading techniques to measure black hole spins <cit.> and even performing tests of fundamental physics <cit.>. However, the systematic effects introduced by the common assumption of a razor-thin accretion disk geometry <cit.> have not been thoroughly investigated. Some studies (e.g., <cit.>) have explored this topic, but they remain within the geometrically thin regime suitable for sub-Eddington accretion disks (H/r << 1). <cit.> found that the systematic uncertainty is small compared to the statistical uncertainty of current data. <cit.> studied these systematic uncertainties by fitting the data with the current reflection models to spectra obtained from numerically simulated disks and found that for current missions (NuSTAR) the models work well. Effects of geometrically thick disks were explored in <cit.>. Future high-resolution data from missions like AXIS, HEX-P, or Athena will require more precise models. This is particularly important in the super-Eddington regime, as more objects such as TDEs and ULXs exhibit relativistic reflection spectra in their X-ray observations (e.g., <cit.>). A full model which takes into account slim disks for super-Eddington scenarios have not been developed for studying reflection spectra (as per the authors' knowledge). We note that there are indeed existing models for the thermal emissions arising from slim disk geometries <cit.>. Recently, <cit.> studied ironline emissions from the optically thick outflows in a super-Eddington regime. In this work, we introduce a new relativistic reflection model based on a slim disk profile for super-Eddington accretion <cit.>. The article is organised as follows. In Section <ref>, we describe the model REFLUX. Section <ref> shows working of the model by fitting the NICER and NuSTAR data from the 2017 outburst of the galactic ULX pulsar Swift J0243.6+6124. Finally, we discuss the results and future directions in Section <ref>. Units used in this article are G=c=1. § MODEL DESCRIPTION To model the expected slim disk profile for a supercritical accreting source, we use the analytical disk model introduced by <cit.>. The model assumes an optically thick disk by introducing a radially dependent ratio f = 𝒬^-_ adv/𝒬^+_ vis. 𝒬^-_ adv is the advective cooling rate and 𝒬^+_ vis is the viscous heating rate <cit.>. 
f also depends on the accretion rate, which then controls the height of the disk in our implementation. The scale height of the disk is defined as H = [ (2N+3) B Γ_ΩΩ_0^2/ξ]^1/2 f^1/2r̃ , where Ω_0 is a multiplier to the Keplerian angular velocity (Ω = Ω_0Ω_ K), which we set to unity for having a disk with Keplerian velocity (Ω_ K), and Γ_Ω = -d logΩ/d log r is the linear approximation form of the angular velocity. We take B=1, N=3, ξ=1.5, and Γ_Ω=1.5 as detailed in <cit.>. f is defined as f(x) = 1/2[D^2 x^2 + 2 - D x (D^2 x^2 + 4)^1/2] , where D = [64 Ω_0^2/(2N+3) ξ B Γ_Ω]^1/2, x = r̃/ṁ , ṁ = Ṁ/Ṁ_ crit is the accretion rate in units of Eddington accretion rate (Ṁ_ crit = L_ Edd, where L_ Edd is the Eddington luminosity of the source), and r̃ is the disk radius. In our implementation of the model, we only vary ṁ, which gives us a family of disks whose scale heights are controlled by the value of ṁ. Also, we keep the inner edge of the disk as a free parameter, hence the r̃ represented above is the disk radius starting from the inner edge. Fig. <ref> shows the disk profile obtained by changing the value of the accretion rate. To construct a full relativistic reflection model, we assume Schwarzschild spacetime around the central object. Further we use the same methods of ray-tracing <cit.> and Cunningham's transfer function <cit.> as demonstrated in detail in <cit.> (specifically, exactly the same methods have been used to calculate the transfer functions as discussed in Sec. 3 of <cit.> for a thin disk with finite thickness). The free parameters of the model are: the inner edge of the disk, which in the current version of REFLUX varies from 6 M to 500 M, the accretion rate, which is varied from 1 Ṁ_ crit to 100 Ṁ_ crit, and the inclination of the disk, varying from 3^∘ to 70^∘. While constructing the model, we also have the outer radius of the disk up to 1000 M. For the lamppost versions, we have the height of the lamppost corona ranging from 2.2 M to 100 M. With higher values of inclination, the photons from the inner regions of the disk do not reach the distant observer, so we do not consider inclinations higher than 70^∘. In Fig. <ref>, we show the variations in iron line profiles for different parameter values, we see that the thickness of the disk produces a change in the shape of the ironlines, reducing the blue horn from thinner to a thicker disk. Fig. <ref> shows the ironline profiles for a lamppost emissivity. In the lamppost corona geometry, the emissivity profile of the disk significantly changes as inner region of the disk is more illuminated due to the thicker disk. For low corona heights and high accretion rates, the outer regions of the disk are shadowed due to the thickness of the disk. REFLUX works along with the <cit.> tables for calculating the reflection spectra in the rest frame of the gas. Tab. <ref> guides the full set of parameters available across the different sub-models (flavors) of REFLUX [With the exception of ṁ which is the accretion rate, the parameters listed in Tab. <ref> are same as that of the relxill model listed here: <https://www.sternwarte.uni-erlangen.de/ dauser/research/relxill/index.html>.]. Each of the sub-model mainly arises from using different tables. Sub-models with suffix mean they use a lamp-post emissivity. REFLUX is the base model using the angle-resolved cutoff powerlaw table. uses the table with <cit.> Comptonization for incident spectrum. 
uses the table, which has electron density as a free parameter and cutoff power-law with energy cutoff of 300 keV. is the relativistic line model. § SPECTRAL FITTING OF A ULX PULSAR In this section, we discuss the results of our fitting we obtained for the Galactic ULX pulsar Swift J0243.6+6124 <cit.>. The source went into a bright outburst in 2017, with its peak luminosity >10^39 erg s^-1, far exceeding the Eddington limit for a neutron star <cit.>. We analyze the quasi-simultaneous NICER (ObsID 1050390113, 1 November 2017) and NuSTAR (ObsID 90302319004, 30 October 2017) observations that have been fitted with the standard thin-disk reflection model in <cit.> (see their Fig. 7). We follow the procedure in <cit.> to process the NuSTAR (FPMA and FPMB) data. The NICER data are first processed with nicerl2 (CALDB v20240206 and geomagnetic data updated to March 2024). The source and background spectral files are then extracted using with the default background model. Spectra are grouped to ensure a minimal counts of 100 for each bin before fitting with v12.13.0c <cit.>. Best-fit parameter values and uncertainties (90% confidence level unless otherwise stated) are obtained using χ^2 statistics. As in <cit.>, we first fit the NICER (1–10 keV) and NuSTAR (3–79 keV) data with the model: . In this model, the and <cit.> component represent relativistic reflection from a razor-thin accretion disk, assuming fixed emissivity (q=3) and the corona as a point source above the black hole at a certain height (h), respectively. We fix the spin parameter at a_*=0 and the outer radius of the accretion disk at 1000 M. The best-fit parameters and the corresponding uncertainties are shown in Tab. <ref> for fixed emissivity and in Tab. <ref> with lamp-post corona geometry. At the time of this observation, the mass accretion rate of the source was at super-Eddington level. The absorption corrected X-ray flux in the 0.01-100 keV band is around 10 times of the Eddington limit. Therefore, a thick disk, instead of thin one, is expected. We replace the component with our REFLUX/REFLUXLP model, with the mass accretion rate fixed at 10 times of the Eddington accretion rate. As shown in Tabs. <ref> and <ref>, the models provide a good fit to the data, with the χ^2 close to that of . The best-fit model components and residuals are shown in Fig. <ref>. § DISCUSSIONS The quality of the fits with the model with relxill and the model with REFLUX is similar. We fit the same data and the two models have the same number of free parameters, so we can directly compare their χ^2 and the difference is marginal (Δχ^2 = 1.98 for fixed emissivity and 9.86 for lamppost). The χ^2 of the best-fit with relxill is slightly lower – a possible reason for this could be the smaller parameter grid of REFLUX than relxill in the FITS file. However, we know that the source was accreting around 10 times of its Eddington limit, so the fit with REFLUX appears to be more appropriate for the spectra analyzed. In Tab. <ref>, the two models provide consistent estimates of the parameters with the exception of the inclination angle of the disk and the iron abundance. The measurements of the inclination angle are i = 15.1_-2.4^+4 deg (relxill) and 25.5_-1.7^+3 (REFLUX). However, the estimates of disk inclination angles inferred from the analysis of reflection features are normally not very accurate and both models suggest a low value of i. Estimates of the iron abundance using X-ray reflection spectroscopy are even more problematic. 
It is currently unclear why reflection measurements often suggest super-solar iron abundances (but there are also sources for which reflection measurements suggest iron abundances significantly lower than the solar one); see, for instance, <cit.> for a short review on the problem. It may be related to a deficiency of current reflection models (e.g., underestimate of the disk electron density <cit.>, presence of two or more coronae <cit.>, etc.) or a true physical phenomenon, like radiative levitation of metal ions in the inner part of the accretion disk <cit.>. The estimate of the iron abundance with REFLUX would go to the right direction, decreasing the value of A_ Fe with respect to that inferred with relxill, but we would need to analyze more sources and more observations to see if this is a feature of our model and not an accidental result for the spectra analyzed in the previous section. In Tab. <ref>, we see that for a lamppost geometry, the inclination angle and the iron abundance for both the models closely match the result of REFLUX in the analysis with fixed emissivity. However, we see a significant difference in the corona height and the inner edge of the disk. With relxill, we see a very low corona height (the lowest in the model) and a larger value for the inner edge. With REFLUX, the case is the opposite, we obtain a higher corona height h > 65 M and inner edge closer to the central object R_in<14M. In the case of REFLUX, we see a small norm possibly because the whole disk may not be illuminated for many different h and R_in configurations making the flux from the outer part of the disk negligible. We caution the readers against interpreting the high value of h > 65 M too rigidly. As in our analysis, we assume a lamppost geometry, which may not be applicable to an Ultra-Luminous Pulsar (ULP) like Swift J0243.6+6124. The system might instead contain a more extended primary illuminating source, such as an accretion column (e.g., <cit.>). Nonetheless, we observed changes in inferred geometries when a slim disk model was employed. This underscores the critical importance of considering disk geometry when addressing systematic uncertainties. Such uncertainties significantly affect the inferred disk inner radius and, consequently, the magnetic field of ULPs (e.g., <cit.>). We will further investigate Swift J0243.6+6124 in this perspective in the future. A more detailed analysis of the 2017 NICER and NuSTAR data of Swift J0243.6+6124 is beyond the purpose of the present work, which is the presentation of REFLUX. In our fits (with both relxill and REFLUX), we have assumed that the spectrum illuminating the disk, and responsible for the reflection component can be described by a power law with a high-energy cutoff. However, this may not be the case for Swift J0243.6+6124. A power law with a high-energy cutoff is the spectrum expected for Comptonized photons, which is the case in which thermal photons from an accretion disk can inverse Compton scatter off free electrons in a hot corona. In the case of a neutron star, reflection features may be produced by illumination of the disk by thermal photons from the surface of the neutron star. Such a scenario could be implemented in REFLUX by using the xillverNS table, in which reflection spectra are produced by an incident blackbody spectrum. 
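Independently of the choice of incident spectrum, the disk geometry entering these fits is fixed entirely by the scale-height relation of Section 2. As a quick illustration, the following NumPy sketch (our own, not part of the REFLUX code) evaluates H(r̃) for the quoted parameter values B=1, N=3, ξ=1.5, Γ_Ω=1.5 and Ω_0=1; radii are in units of M (G=c=1), the accretion rates are in units of the critical rate, and the chosen values are arbitrary.

```python
# Minimal numerical sketch of the slim-disk scale height of Section 2,
# using the parameter choices quoted there (B=1, N=3, xi=1.5,
# Gamma_Omega=1.5, Omega_0=1).  Radii are in units of M, mdot in units
# of the critical (Eddington) accretion rate.
import numpy as np

B, N, XI, GAMMA_OMEGA, OMEGA_0 = 1.0, 3.0, 1.5, 1.5, 1.0


def scale_height(r, mdot):
    """Disk scale height H(r) for accretion rate mdot."""
    pref = np.sqrt((2 * N + 3) * B * GAMMA_OMEGA * OMEGA_0**2 / XI)
    D = np.sqrt(64 * OMEGA_0**2 / ((2 * N + 3) * XI * B * GAMMA_OMEGA))
    x = r / mdot                      # x = r / mdot
    f = 0.5 * (D**2 * x**2 + 2 - D * x * np.sqrt(D**2 * x**2 + 4))
    return pref * np.sqrt(f) * r


# Family of profiles: the disk thickens as mdot grows, with H proportional
# to r in the inner region.
r = np.linspace(6.0, 500.0, 200)      # from the inner edge out to 500 M
for mdot in (1, 5, 10, 50, 100):
    H = scale_height(r, mdot)
    print(f"mdot = {mdot:3d}:  H/r at r = 6 M is {H[0] / r[0]:.3f}")
```

The printed ratios behave as described above: the inner disk satisfies H ∝ r̃ and becomes thicker as the accretion rate increases.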
While for neutron star XRBs like Swift J0243.6+6124 the Schwarzschild metric should describe well the spacetime geometry around the source, in the case of other super-Eddington X-ray sources (black hole XRBs, AGNs, and TDEs) deviations from the Schwarzschild background can be important and it will be necessary to implement the Kerr metric in REFLUX. This is also left to future work. This work was supported by the National Natural Science Foundation of China (NSFC), Grant No. 12250610185 and 12261131497, and the Natural Science Foundation of Shanghai, Grant No. 22ZR1403400. J.J. acknowledges support from Leverhulme Trust, Isaac Newton Trust and St Edmund's College, University of Cambridge. Facilities: Swift, NICER, NuSTAR. Software: astropy <cit.>, XSPEC <cit.>, numpy <cit.>, scipy <cit.>, matplotlib <cit.>, raytransfer <cit.>, blackray <cit.>, blacklamp <cit.>.
http://arxiv.org/abs/2407.13196v1
20240718061722
Statistical thermodynamics of the human brain activity, the Hagedorn temperature and the Zipf law
[ "Dante R. Chialvo", "Romuald A. Janik" ]
q-bio.NC
[ "q-bio.NC", "cond-mat.stat-mech" ]
http://arxiv.org/abs/2407.12902v1
20240717180000
Exact projected entangled pair ground states with topological Euler invariant
[ "Thorsten B. Wahl", "Wojciech J. Jankowski", "Adrien Bouhon", "Gaurav Chaudhary", "Robert-Jan Slager" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall", "cond-mat.str-el" ]
tw344@cam.ac.uk TCM Group, Cavendish Laboratory, Department of Physics, J J Thomson Avenue, Cambridge CB3 0HE, United Kingdom TCM Group, Cavendish Laboratory, Department of Physics, J J Thomson Avenue, Cambridge CB3 0HE, United Kingdom TCM Group, Cavendish Laboratory, Department of Physics, J J Thomson Avenue, Cambridge CB3 0HE, United Kingdom Nordita, Stockholm University and KTH Royal Institute of Technology, Hannes Alfvéns väg 12, SE-106 91 Stockholm, Sweden TCM Group, Cavendish Laboratory, Department of Physics, J J Thomson Avenue, Cambridge CB3 0HE, United Kingdom rjs269@cam.ac.uk TCM Group, Cavendish Laboratory, Department of Physics, J J Thomson Avenue, Cambridge CB3 0HE, United Kingdom § ABSTRACT We report on a class of gapped projected entangled pair states (PEPS) with non-trivial Euler topology motivated by recent progress in band geometry. In the non-interacting limit, these systems have optimal conditions relating to saturation of quantum geometrical bounds, allowing for parent Hamiltonians whose lowest bands are completely flat and which have the PEPS as unique ground states. Protected by crystalline symmetries, these states evade restrictions on capturing tenfold-way topological features with gapped PEPS. These PEPS thus form the first tensor network representative of a non-interacting, gapped two-dimensional topological phase, similar to the Kitaev chain in one dimension. Using unitary circuits, we then formulate interacting variants of these PEPS and corresponding gapped parent Hamiltonians. We reveal characteristic entanglement features shared between the free-fermionc and interacting states with Euler topology. Our results hence provide a rich platform of PEPS models that have, unexpectedly, a finite topological invariant, providing a platform for new spin liquids, quantum Hall physics, and quantum information pursuits. Exact projected entangled pair ground states with topological Euler invariant Robert-Jan Slager July 22, 2024 ============================================================================= Introduction Tensor network states (TNS) form a generally applicable tool for the description of quantum matter. A numerically efficient representation of the ground states of local gapped Hamiltonians <cit.>, TNS play a pivotal role both in the simulation of correlated systems <cit.> and the analytical classification of topological phases <cit.>. Yet, due to the increased complexity in higher dimensions, TNS have not yet matched the success of non-interacting band theory <cit.>, both in simulations and the classification of topological phases <cit.>. Of particular interest are therefore systems which can be well captured with band theory but are marked by difficulties when it comes to TNS approaches. The most well-known such example is chiral topological systems: In topological band theory, they are characterized by occupied bands whose overall Chern number is non-vanishing, separated by a gap from the conduction bands. While TNS approaches are able to capture the chiral topological features of such systems, this comes at the cost of producing algebraically decaying correlations characteristic of critical systems <cit.>. Generally, it has been shown that TNS with exponentially decaying correlations cannot capture any higher-dimensional topological invariant <cit.> of the ten Altland-Zirnbauer (AZ) classes <cit.>. 
The severe restrictions of TNS to represent gapped non-interacting topological phases suggest that TNS might equally struggle to capture topological phases protected by crystalline symmetries. However, reinvigorated interests <cit.> in relation to quantum geometry <cit.> could provide a useful tool in that they outline flatband conditions. Under such conditions, it is possible to define topological flatband Hamiltonians that can be formulated as exemplary parent Hamiltonians. These are sums of projectors with local support that each annihilate the ground state(s). From these local projectors, a TNS ground state can in principle be constructed. However, whether the resulting state is non-vanishing and can be made the unique ground state of such a crystalline symmetry-protected topological Hamiltonian might be hampered by the previously mentioned hurdles. Intuitively, TNS with exponentially decaying correlations are incompatible with topological invariants, as they come with delocalized edge modes around a physical boundary (in more than one dimension); the local structure of tensor networks is incapable of separating such edge modes from the bulk modes, delocalizing them as well. Because of that, in the following, we consider crystalline symmetry-protected topological phases which do not have helical or chiral edge modes. We show that a family of topological projected entangled pair states (PEPS) <cit.> and generating Hamiltonians can be formulated in the context of the Euler class <cit.>. The Euler class is a multi-gap invariant <cit.>, pertaining to topological structures that emerge when groups of partitioned bands (band subspaces) carry non-trivial topological indices <cit.>. These topological charges of groups of bands can be altered by braiding nodes in momentum space, as band nodes residing between neighboring bands can carry non-Abelian charges <cit.>. The braiding of non-Abelian frame charges and multi-gap topologies have been increasingly related, both theoretically and experimentally, to physical systems that range from out-of-equilibrium settings <cit.> and phonon spectra <cit.>, to electronic systems (twisted, magnetic and conventional) <cit.> as well as metamaterials <cit.>. The here introduced family of PEPS has non-trivial Euler class, evading no-go conditions as for tenfold-way topologies <cit.>, constituting a 2D analogue of the Kitaev chain <cit.>. Within the non-interacting limit, from a band theory perspective, these PEPS enjoy ideal quantum geometrical properties. More importantly, by applying shallow quantum circuits of diagonal unitaries, we transform these PEPS and their gapped parent Hamiltonians to interacting variants. We can signify the Euler phase both in the non-interacting and the interacting limit upon appealing to the entanglement spectrum. As such, our results set a benchmark for an exact class of PEPS parent Hamiltonians with finite topological invariant. Euler class A pair of isolated (gapped from the rest of the spectrum) bands |u_n(k)⟩, |u_n+1(k)⟩ can acquire a non-trivial Euler class χ when it is part of at least a three-band system that enjoys a reality condition assured by the presence of C_2T [twofold rotations combined with time-reversal symmetry (TRS)], or PT symmetry, involving parity and TRS. The Euler class is then concretely obtained as <cit.> χ= 12π∫_BZEu  dk_1∧ dk_2 ∈ℤ, where one integrates the Euler curvature Eu = ⟨∂_k_1 u_n(k)|∂_k_2 u_n+1(k)⟩ - ⟨∂_k_2 u_n(k)|∂_k_1 u_n+1(k)⟩ over the Brillouin zone (BZ). 
The pair of bands can either be degenerate (and flat) or feature a number of 2χ nodal points that cannot be annihilated due to the topological nature <cit.>. Eq. (<ref>) shows that the Euler class is the real analogue of the Chern number. Similarly, the isolated two-band subspace does not admit exponentially-localized Wannier functions in a 𝒞_2 𝒯-symmetric gauge, but unlike the Chern case, the system does not feature protected chiral or helical edge states, allowing for a PEPS representation. The model To concretize the discussion, we consider spinless fermions hopping on the kagome lattice with nearest-neighbor hopping t=-1, next-nearest-neighbor hopping t'=-1 and third-nearest-neighbor hopping t”=-1 inside the hexagons. For chemical potential μ, the Hamiltonian thus reads H = ∑_⟨ i,j⟩ a_i^ a_j + ∑_⟨⟨ i,j ⟩⟩ a_i^ a_j + ∑_⟨⟨⟨ i,j ⟩⟩⟩_ a_i^ a_j - μ∑_i=1^N a_i^ a_i, where ⟨ i,j⟩, ⟨⟨ i,j ⟩⟩ correspond to nearest- and next-nearest neighbor pairs of sites i,j and ⟨⟨⟨ i,j ⟩⟩⟩_ to third-nearest neighbor pairs of the same hexagons. a_i^ (a_i) are the fermionic creation (annihilation) operators and N the number of sites. The Hamiltonian has two degenerate flat bands at E = -2-μ and a dispersive band on top, separated by an energy gap Δ = 3. The flat bands have Euler number χ = 1, protected by 𝒞_2𝒯 symmetry. At μ = -2, both flat bands are at E = 0, and the ground states |ψ_n⟩, n = 0,1, …, 2N/3 are macroscopically degenerate, characterized by fillings [0,1,2,…, 2N/3]. We now construct the ground state with the highest filling, which will become the unique ground state for -2 < μ < 1. To that end, we note that the Hamiltonian for μ = -2 can be rewritten as H_p = ∑_∑_i,j ∈ a_i^ a_j = 6 ∑_ a_^ a_ = 6 ∑_ h_, where denotes the hexagons of the kagome lattice, and we defined a_ = 1/√(6)∑_i ∈ a_i and h_ = a_^ a_. Hence, the ground states fulfill a_ |ψ_n⟩ = 0 for all hexagons . The ground state with the highest occupation number is |ψ_2N/3⟩ = ∏_ a_ | 1 … 1⟩, where |1 … 1 ⟩ is the fully occupied state. Due to {a_, a_'}=0, the ordering in Eq. (<ref>) is irrelevant. However, notably, for the other commutation relations, we have {a_, a^_'}=δ_,'+1/6δ_⟨,' ⟩, where ⟨,' ⟩ denotes corner-sharing, neighboring hexagonal plaquettes. This anticommutation algebra shows that, while the operators a^†_ effectively create fermions in a superposition of six atomic orbitals, the Hamiltonian Eq. (<ref>) is not adiabatically connected to a Hamiltonian of an atomic insulator, despite the functional similarity to such Hamiltonians: In particular, that is the case due to the corner-sharing obstruction, the presence of which is also crucial for the entanglement of the system, as we demonstrate below. |ψ_2N/3⟩ can be written as a projected entangled simplex state (PESS) <cit.> as follows: We start out with a virtual state of 2N spinless fermions – two assigned to each physical fermion. The virtual fermions are in the state |ω_v⟩ = 1/√(6)∏_∑_i = 1^6 c_,i |1_v⟩, where |1_v ⟩ denotes the fully occupied virtual state. c_,i corresponds to virtual fermion i = 1, …, 6 within hexagon (as opposed to the physical particles a_j, there are now unique assignments within hexagons), cf. Fig. <ref>. The next step is to map each pair of virtual fermions around a site to one physical fermion. To that end, we use the operator M̂_j = a_j^ c_j' c_j + c_j' - c_j. Here, c_j' corresponds to the virtual fermion located on the left of the site j and c_j to the virtual fermion located on its right. 
We finally project on the vacuum of virtual particles, obtaining the overall state |ψ_PEPS⟩ = ⟨ 0_v| ∏_j=1^N M̂_j ∏_1/√(6)∑_i=1^6 c_,i |1_v 0_p⟩, where |1_v 0_p⟩ corresponds to the vacuum of physical fermions and fully occupied virtual fermionic state. We already labeled the overall state as a “PEPS”, since it can also be written as the more familiar projected entangled pair state, as we show further below. In order to demonstrate that |ψ_2N/3⟩∝ |ψ_PEPS⟩, we first verify that a_|ψ_PEPS⟩ = 0 and later that the PEPS has filling 2N/3. For the first claim, we notice that ⟨ 0_v| a_j (a_j^ c_j' c_j + c_j' - c_j) […] |1_v 0_p ⟩ = ⟨ 0_v | (a_j^ c_j' c_j + c_j' - c_j) c_j' […] |1_v 0_p⟩ = ⟨ 0_v| (a_j^ c_j' c_j + c_j' - c_j) c_j […] |1_v 0_p⟩, where […] denotes a sum of products of operators that do not act on the physical fermion at site j. We thus have a_' |ψ_PEPS⟩ =±1/6⟨ 0_v| ∏_j (a_j^ c_j' c_j + c_j' - c_j) ∑_i=1^6 c_',i× ×∏_∑_k=1^6 c_,k |1_v 0_p⟩ = 0. Second, we see that the initial state |1_v 0_p⟩ contains 2N virtual and no physical fermions. The operator ∏_∑_k=1^6 c_,k reduces that to 2N (1-1/6) = 5N/3 fermions. Finally, each operator M̂_j creates one physical fermion less than it annihilates virtual ones, i.e., we are left with 2N/3 physical fermions in |ψ_PEPS⟩. Hence, |ψ_PEPS⟩∝ |ψ_2N/3⟩, as claimed. |ψ_PEPS⟩ is a (non-unique) frustration-free ground state of the parent Hamiltonian H_p and the unique ground state of H = H_p - (μ + 2) ∑_i=1^N a_i^ a_i for -2 < μ < 1. This example shows that PEPS can be the unique ground states of local gapped Hamiltonians with non-trivial two-dimensional crystalline topological features, even in the non-interacting limit. This contrasts with the inability of free-fermionic PEPS to capture any higher-dimensional topological labels of the ten-fold classification unless the PEPS have algebraically decaying correlations <cit.>. The PESS we have considered so far can be converted into a PEPS by realizing that the simplex states are of the form |011111 ⟩ + |101111⟩ + … + |111110⟩, also known as a W-state <cit.>, which can be written as a non-translationally invariant matrix product state of bond dimension 2 or a translationally invariant one of bond dimension 6. M̂ can be represented as a rank-3 tensor M^i_ab with M_11^1 = M_10^0 = -M_01^0 = 1 and all other elements equal zero. The resulting PEPS tensor has rank 5 and bond dimension 2 or 6, respectively, see Fig. <ref>. We note that a similar construction in terms of PESS defined on triangles can be used to describe the ground state of the Hamiltonian (<ref>) for nearest-neighbor hopping only, an Euler insulator with one flat bottom band touched by two dispersive bands from above <cit.>. Free-fermion generalizations A straightforward generalization is obtained by modifying the simplex states to a linear combination, |ω̃_v⟩ = ∏_∑_i=1^6 β_i c_,i|1_v⟩ with β_i ∈ℂ, and keeping M̂_j the same. The new PEPS is annihilated by ã_ = ∑_i ∈β_i a_i, where we set ∑_i=1^6 |β_i|^2 = 1. This corresponds to the new Hamiltonian H̃ = 6∑_∑_i,j ∈β_i^* β_j a_i^ a_j - (μ + 2) ∑_i=1^N a_i^ a_i. One can check that 𝒞_2𝒯 symmetry implies β_i+3^* = β_i (i = 1,2,3) up to an irrelevant overall phase. Whether this state has a non-zero Euler number depends on the specific choice of the β_i. Quantum geometry We now highlight the ideal quantum geometrical properties due to the flatness of the bottom bands. 
The flatness is crucial, as it allows for Hamiltonians which are sums of local projectors and therefore have (macroscopically degenerate) ground states at energy E = 0. The quantum metric <cit.>, g^χ_ij = Tr_occ [(∂_k_iP̂)(∂_k_jP̂)], is defined as a trace over momentum-space projectors P̂ = ∑_n=1,2|u_n(k)⟩⟨u_n(k)| of occupied Bloch states |u_n(k)⟩, with the momenta components k_i, k_j = k_1, k_2. The flat Euler bands saturate the quantum-geometric bounds due to the Euler invariant χ <cit.>, between the quantum volume elements (√(det g^χ)) and the Euler curvature (√(det g^χ) = |Eu|) across the entire momentum space. Upon integrating, we thus retrieve a quantum volume which is a multiple of 2π, Vol g^χ≡∮√(det g^χ)  k_1 ∧ k_2 = 2π |χ|, showcasing the ideal non-Abelian quantum geometry <cit.>. Upon introducing interactions as below, the many-body quantum metric in the space of twisted boundary conditions can reflect the topological nature of many-body Euler ground states as we also demonstrate in the non-interacting limit [see Supplemental Material (SM) <cit.>)]. Using central relations of quantum metrology <cit.>, the ideal condition physically manifests itself through the non-triviality of the quantum Fischer information (QFI). That is, we retrieve a metrological quantum Cramer-Rao (QCR) bound <cit.> on the realizable model measurements, see SM <cit.>, which could be directly executed in quantum simulators or synthetic three-level systems <cit.>. Interacting generalizations The modified simplex states give rise to non-interacting PEPS. We can further generalize the construction to interacting states by applying a shallow quantum circuit U of diagonal unitaries, which makes it easy to ensure that 𝒞_2𝒯 symmetry is preserved. Hence, by definition, we remain in the same topological phase. Furthermore, the new state will also be a PEPS of low bond dimension. We consider the simplest case of nearest-neighbor gates. We view these as being applied on all hexagons in a translationally invariant fashion. Within each hexagon, we label u_j,j+1 as the unitary acting on sites j and j+1 (j = 7 ≡ 1) inside a given hexagon, with sites enumerated as in Fig. <ref>. 𝒞_2𝒯 symmetry is achieved if u_j,j+1 = u_j+3,j+4^* for all j = 1,2,3. The simplest continuously tuneable case is u_j,j+1 = 1 - (1-e^± iα) n_j n_j+1 with particle number operators n_j, α∈ [0,2π), and positive (negative) sign for j = 1,2,3 (j = 4,5,6). The new PEPS is given by |ψ_PEPS'⟩ = U |ψ_PEPS⟩ and the Hamiltonian gets transformed as H' = U H U^ = ∑_∑_i,j ∈a_i'^ a_j' - (μ + 2) ∑_i=1^N a_i^ a_i, where we defined a_i' = U a_i U^ and used that U commutes with n_i = a_i^ a_i. One can easily verify a_i'^ = a_i^∏_j^⟨ i,j ⟩[1 - (1-e^i σ_⟨ i,j⟩α) n_j], where the product runs over all nearest neighbors of site i. σ_⟨ i,j⟩ = +1 if ⟨ i,j ⟩ corresponds to one of the first three bonds in the hexagon that it lies in and -1 if it corresponds to one of the last three bonds. This gives rise to the overall Hamiltonian H' = ∑_∑_i,j ∈ a_i^∏_k^⟨ k,i ⟩[1 - (1-e^i σ_⟨ k,i⟩α)n_k] × ×∏_l^⟨ j,l ⟩[1 - (1-e^-i σ_⟨ j,l⟩α) n_l] a_j - (μ + 2)∑_i=1^N n_i. H' has the same spectrum as H for fixed μ, and |ψ_PEPS'⟩ is therefore its unique ground state for -2 < μ < 1. The Hamiltonian is strictly local, acting on hexagons and adjacent triangles. This is the first example of an interacting Euler insulator with a local gapped Hamiltonian. 
|ψ_PEPS'⟩ can be constructed by writing the phase matrix of u_j,j+1 = 1 - (1 - e^± i α)n_j n_j+1 as ∑_q=1^2 R^ab_q R^cd_q with R^ab_1 = δ_ab and R^ab_2 = √(-1 + e^± i α)δ_1aδ_1b, a,b ∈{0,1}. (As the underlying operators are even, they can be decomposed into tensor products.) Four nearest-neighbor unitaries act on each site, such that the tensors of |ψ_PEPS'⟩ can be constructed by contracting the physical leg of T with four R-tensors, see Fig. <ref>. If the bond dimension of T was chosen to be 2, the interacting PEPS has bond dimension D = 4. Entanglement spectra We numerically calculated the entanglement spectra of |ψ_PEPS'⟩ for an infinitely long torus with circumference L_y. That is, the torus is bipartitioned with L_y unit cells located around the perimeter of the resulting cylinder. We obtained its entanglement spectrum by calculating the non-interacting |ψ_PEPS⟩ using TenPy <cit.> and applying the quantum circuit U on it to obtain the interacting |ψ_PEPS'⟩. The entanglement spectra for various values of α and L_y = 6 are shown in Fig. <ref>. We observe that the low-lying part of the entanglement spectrum possesses a cusp at momentum K = 0, which remains intact as α is increased. This suggests that characteristic features of Euler insulators in the entanglement spectrum are preserved as interactions are turned on. We note that in the more familiar case of Chern insulators, entanglement spectra are qualitatively preserved also only as long as interactions are weak. We further detail the entanglement features of the non-interacting case, including the stable cusp at K = 0 in <cit.>. Discussion and Conclusion We leverage quantum geometric conditions to define a class of exact PEPS with finite topological Euler invariant. The enigmatic nature of the Euler class allows to circumvent no-go conditions. Importantly, these models can be generalized to interacting variants and have definite entanglement signatures. As such, these PEPS set a benchmark for new pursuits. These potential pursuits involve studying exotic excitations and spin liquids realized from Euler many-body PEPS ground states. In particular, on introducing interactions, novel kinds of fractionalizations should emerge from the interplay of the many-body entanglement as well as emergent quantum anomalous Hall states <cit.>. In addition, as all our states can be created by shallow quantum circuits from product states and have topological features, they are also particularly interesting for implementations on noisy intermediate-scale quantum devices and the development of new quantum error correction protocols. We will report on this in the near future. § ACKNOWLEDGMENTS R.-J. S., G. C. and T. B. W. acknowledge funding from a New Investigator Award, EPSRC grant EP/W00187X/1, a EPSRC ERC underwrite grant EP/X025829/1, and a Royal Society exchange grant IES/R1/221060 as well as Trinity College, Cambridge. W. J. J. acknowledges funding from the Rod Smallwood Studentship at Trinity College, Cambridge. § MOMENTUM-SPACE CHARACTERIZATION OF THE MODEL IN THE NON-INTERACTING LIMIT We demonstrate how the model introduced in the main text, Eq. (2), can be decomposed in momentum space. We first Fourier transform the real-space creation and annihilation operators to the basis of Bloch orbitals: a^(†)_α,k = 1/√(N)∑_R_i e^(±) i k· (R_i + r_α) a^(†)_α,i. 
Here, the operator a^(†)_α,i creates/annihilates a single particle in an atomic orbital α = A,B,C situated at the position r_α with respect to the position vector of a unit cell center R_i, where i = 1, …, N. Under the translational symmetry, one obtains: H = ∑_k;α,β=A,B,CH_αβ(k)  a^†_α,k a_β,k. Here, the Bloch Hamiltonian for the considered system on the kagome lattice, manifestly expressed in a real gauge, reads <cit.>: H(k) = [ H_AA(k) H_AB(k) H_AC(k); H_AB(k) H_BB(k) H_BC(k); H_AC(k) H_BC(k) H_BC(k); ], with the corresponding (real) matrix elements, on setting t = t' = t” = -1, H_AA(k) = -μ + 2 cos(k_1), H_AB(k) = 2 cos(k_1/2+k_2/2) + 2 cos(k_1/2-k_2/2), H_AC(k) = 2 cos(k_2/2) + 2 cos(k_1+k_2/2), H_BB(k) = -μ + 2 cos(k_2), H_BC(k) = 2 cos(k_1/2) + 2 cos(k_1/2+k_2), H_CC(k) = -μ + 2 cos(k_1+k_2). We recognize that the Bloch Hamiltonian can be further rewritten as, H(k) = [ -μ - 2 + 4 cos^2 (k_1/2) 4 cos (k_1/2) cos (k_2/2) 4 cos (k_1/2) cos (k_1/2 + k_2/2); 4 cos (k_1/2) cos (k_2/2) -μ - 2 + 4 cos^2 (k_2/2) 4 cos (k_2/2) cos (k_1/2 + k_2/2); 4 cos (k_1/2) cos (k_1/2 + k_2/2) 4 cos (k_2/2) cos (k_1/2 + k_2/2) -μ - 2 + 4 cos^2 (k_1 /2 + k_2/2); ], or more compactly, H(k) = (-μ - 2) 1_3 + 4 n(k) ⊗ n(k)^T, with n(k) = ( cos (k_1/2), cos (k_2/2), cos (k_1/2 + k_2/2) )^T. Importantly, under such decomposition, the topology of the Euler bands in any three-band Hamiltonian satisfying a reality condition [H(k)=H^*(k)] can be captured by the normalized vector n̂(k) = n(k)/||n(k)||. In particular, in the considered model, the vector n̂(k) reads n̂(k) = 1/√(cos^2 (k_1/2) + cos^2 (k_2/2) + cos^2 (k_1/2 + k_2/2))[ cos (k_1/2); cos (k_2/2); cos (k_1/2 + k_2/2) ], and it fully determines the Euler curvature as Eu = n̂· (∂_k_2n̂×∂_k_1n̂). The Euler curvature can be viewed as a skyrmion density in the momentum-space texture, with the skyrmion being spanned by n̂ over the Brillouin zone (BZ) square/torus. In particular, the Euler invariant is given by <cit.> χ = 1/2π∫_BZ^2 k Eu = 1/2π∫_BZ^2 k n̂· (∂_k_2n̂×∂_k_1n̂) = 2 Q, and obtains χ = 1 in the case of interest, which corresponds to the momentum-space meron (half-skyrmion) with the half-skyrmion number Q = 1/2 <cit.>. Additionally, the vector n(k) fully captures the band dispersion present in the model, as Eq. (<ref>) can be written as, H(k) = (-μ - 2) 1_3 + 4 ||n(k)||^2 n̂(k) ⊗ n̂(k)^T, explicitly determining the band dispersion in the third band as E_3(k) = (-μ - 2) + 4 ||n(k)||^2, contrary to the flat-band dispersion in the bottom Euler bands E_1(k) = E_2(k) = (-μ - 2). The band energies given by such dispersions manifestly have a gap across the entire Brillouin zone, as the norm of the vector n(k) is non-vanishing ||n(k)|| > 0 at every k-point. This follows from the fact that the components of the vector n(k), cos (k_1/2), cos (k_2/2), cos (k_1/2 + k_2/2) are not independent, with at least one of those terms being necessarily non-vanishing at any k-point. § QUANTUM GEOMETRY IN THE FREE FERMION LIMIT Here, we elaborate on the quantum geometry of the model in non-interacting limit. Consistently with the Plücker formalism for multi-band quantum geometry introduced in Ref. <cit.>, we first define the Fubini-Study metric (s^2 = 1 - |⟨u_1(k) ∧…∧ u_n(k)|u_1(k+k) ∧…∧ u_n(k+k)||⟩^2) in the set of occupied Bloch bands {|u_n(k)⟩} <cit.>, s^2 = g^χ_ij(k) dk_i dk_j where g^χ_ij is the quantum metric in the Euler flat bands, and the Einstein summation convention was assumed. 
With both of the Euler bands n=1,2 occupied (`occ'), we can correspondingly write the metric as g^χ_ij(k) = ∑^occ_n1/2[ ⟨∂_k_i u_n(k)|Q̂|∂_k_j u_n(k)⟩ + c.c.] = ⟨∂_k_i u_1(k)|u_3(k)|⟨%s|%s⟩⟩u_3(k)|∂_k_j u_1(k) + ⟨∂_k_i u_2(k)|u_3(k)|⟨%s|%s⟩⟩u_3(k)|∂_k_j u_2(k), where Q̂ = ∑^unocc_m|u_m(k)⟩⟨u_m(k)| = |u_3(k)⟩⟨u_3(k)| is the projector onto unoccupied (`unocc') band(s); here, m=3. In the second equality, we used the fact that the eigenvectors representing the Euler bands are chosen real, as here, the Bloch Hamiltonian H(k) = ∑^3_i=1 E_i(k) |u_i(k)⟩⟨u_i(k)| is a real symmetric matrix. The quantum metric is manifestly real and symmetric, by definition Eq. (<ref>). Alternatively, we can rewrite the metric in terms of the projector onto the unoccupied band as, g^χ_ij = Tr_occ [(∂_k_iQ̂)(∂_k_jQ̂)] = Tr_occ (|∂_k_i u_3(k)⟩⟨∂_k_j u_3(k)| + |u_3(k)⟩⟨u_3(k)| ∂_k_i u_3(k)|⟨%s|⟩∂_k_j u_3(k) + |u_3(k)⟩⟨∂_k_i u_3(k)|u_3(k)|⟨%s|⟩∂_k_j u_3(k) + |u_3(k)⟩⟨∂_k_i u_3(k)|∂_k_j u_3(k)|⟨%s|⟩u_3(k)) = ⟨∂_k_i u_3(k) | ∂_k_j u_3(k)|-⟩⟨∂_k_i u_3(k)| u_3(k)|⟨%s|%s⟩⟩u_3,k|∂_k_j u_3,k = ⟨∂_k_i u_3(k) | ∂_k_j u_3(k)|⟩ where the last equality follows from the reality condition. We now us the fact that the third Bloch band defines a normalized vector field: n̂(k) =̂|u_3(k)⟩, as follows from the spectral decomposition of the Hamiltonian. In terms of the momentum-space vector n̂, the quantum metric in a three-band Euler Hamiltonian reads <cit.> g^χ_ij = (∂_k_in̂) · (∂_k_jn̂), which obtains an inequality <cit.>, √(det g^χ)≥ |Eu|. Additionally, from inequality between arithmetic and geometric means, we directly obtain, Tr g^χ≡ g^χ_11 + g^χ_22≥ 2 √(g^χ_11 g^χ_22)≥ 2 √(g^χ_11 g^χ_22 - (g^χ_12)^2)≡ 2 √(det g^χ)≥ 2|Eu|, where we used the symmetry of the (real) metric tensor g^χ_12 = g^χ_21. In the considered model, the metric elements read: g^χ_11 = 8 - 3 cos k_1 - 3cos (k_1 + k_2) - cos (k_1 - k_2) - cos (k_1 + 2 k_2)/8(3 + cos k_1 + cos k_2 + cos(k_1 + k_2))^2, g^χ_22 = 8 - 3 cos k_2 - 3cos (k_1 + k_2) - cos (k_1 - k_2) - cos (2 k_1 + k_2)/8(3 + cos k_1 + cos k_2 + cos(k_1 + k_2))^2, g^χ_12 = g^χ_21 = 2 - 2 cos k_1 cos k_2 + sin k_1 sin k_2 /4(3 + cos k_1 + cos k_2 + cos(k_1 + k_2))^2, which directly obtains the quantum volume <cit.>, √(det g^χ (k)) = -3 + cos k_1 + cos k_2 + cos (k_1 + k_2)/4√(2) (3 + cos k_1 + cos k_2 + cos(k_1 + k_2))^3/2, as well as Tr g^χ (k) = 16 - 3 cos k_1 - 3 cos k_2 - 6 cos (k_1 + k_2) - 2 cos (k_1 - k_2) - cos (2 k_1 + k_2) - cos (k_1 + 2 k_2)/8(3 + cos k_1 + cos k_2 + cos(k_1 + k_2))^2. On the contrary, the Euler curvature in the model is given by the following expression Eu(k) = -3 + cos k_1 + cos k_2 + cos (k_1 + k_2)/4√(2) (3 + cos k_1 + cos k_2 + cos(k_1 + k_2))^3/2. We note that, analytically, an inequality Tr g^χ (k) ≥ 2|Eu(k)| holds, on substituting the individual quantum metric matrix elements to the bound between the determinant and trace. The equality of the determinant (quantum volume) and the Euler curvature follows trivially by inspection, as the analytical expressions for both quantities are identical across the entire momentum space. 
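The closed-form equality just noted can also be verified by brute force. The short NumPy sketch below takes the unit vector n̂(k) given above, evaluates its derivatives by central finite differences on a grid over k_1, k_2 ∈ (−π, π] (grid size and step are arbitrary choices), and checks both that √(det g^χ) = |Eu| pointwise and that the Euler curvature integrates to |χ| = 1; the overall sign of the integral depends on the orientation convention chosen for (k_1, k_2).

```python
# Numerical check of the ideal Euler condition for the kagome model:
# sqrt(det g) = |Eu| pointwise, and (1/2pi) * integral of Eu over the BZ = +/- 1.
# Everything follows from the unit vector n_hat(k) given above; derivatives are
# taken by central finite differences (the step h is an arbitrary small number).
import numpy as np


def nhat(k1, k2):
    """Normalized three-component vector n_hat(k) of the kagome Euler model."""
    n = np.stack([np.cos(k1 / 2), np.cos(k2 / 2), np.cos((k1 + k2) / 2)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)


M, h = 400, 1e-5                                        # BZ grid and FD step
ks = -np.pi + 2 * np.pi * (np.arange(M) + 0.5) / M
K1, K2 = np.meshgrid(ks, ks, indexing="ij")

d1 = (nhat(K1 + h, K2) - nhat(K1 - h, K2)) / (2 * h)    # d n_hat / d k1
d2 = (nhat(K1, K2 + h) - nhat(K1, K2 - h)) / (2 * h)    # d n_hat / d k2

# Euler curvature Eu = n_hat . (d_{k2} n_hat x d_{k1} n_hat)
Eu = np.einsum("xyi,xyi->xy", nhat(K1, K2), np.cross(d2, d1))

# Quantum metric g_ij = d_i n_hat . d_j n_hat and its determinant
g11 = np.einsum("xyi,xyi->xy", d1, d1)
g22 = np.einsum("xyi,xyi->xy", d2, d2)
g12 = np.einsum("xyi,xyi->xy", d1, d2)
det_g = g11 * g22 - g12**2

dk = (2 * np.pi / M) ** 2
print("max |sqrt(det g) - |Eu||:", np.max(np.abs(np.sqrt(det_g) - np.abs(Eu))))
print("(1/2pi) int Eu d^2k     :", Eu.sum() * dk / (2 * np.pi))
```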
Moreover, beyond the single-particle context, we can consider many-body quantum metric g_ij(θ) defined in terms of the twist angles θ = (θ_1, θ_2) and the twisted boundary conditions <cit.>, ψ({x_i + L_1}, {y_i}) ≡⟨{x_i + L_1}, {y_i} | ψ|=⟩ e^i θ_1ψ({x_i}, {y_i}), ψ({x_i}, {y_i + L_2}) ≡⟨{x_i}, {y_i + L_2} | ψ|=⟩ e^i θ_2ψ({x_i},{ y_i}), where i = 1, 2, …, N_tot are particle labels, x_i and y_i the sets of coordinates of fermions i, and L_1, L_2 denote the cell lengthscales, on which the twisted periodic boundary conditions were imposed on the many-body state. In terms of the twist angles, the many-body quantum metric reads g_ij(θ) = ℜ𝔢 ⟨∂_θ_iψ(θ)| (1 - P̂_θ) |∂_θ_jψ(θ)⟩, with the projector onto the many-body ground state P̂_θ = |ψ(θ)⟩⟨ψ(θ)|. At the zero twist angle θ = (0, 0) ≡0, in the free-fermion limit, we retrieve a many-body bound, g_ij(0) = 1/L_1 L_2∑_kTr g^χ (k) ≥4π A/L_1 L_2 |χ|, where A is the area of the unit cell of the system. However, we note that unlike in the case of the determinant bound providing an ideal condition on Euler bands, this many-body bound does not saturate in the considered models, as it reduces to the trace bound in the free particle limit. In other words, here, the strong inequalities rather than equalities hold within the proposed models. § ENTANGLEMENT SPECTRA OF THE NON-INTERACTING PEPS Here we present the entanglement spectrum of the non-interacting kagome Euler model of the main text. For this purpose, we start with the momentum space Hamiltonian defined on a thin torus, i.e., L_x ≫ L_y. In the insulating state, the bottom two flat bands are occupied and the dispersive conduction band is empty. We first write down the projector on the occupied state P̂(k) = ∑_i∈occupied |ψ_i (k)⟩⟨ψ_i (k) |. The projector by definition has its eigenvalues restricted to 0 and 1. For the calculations performed on a lattice, we define the real space positions as r = n_1 a_1 + n_2 a_2, where n_1(2)∈ℤ and a_1(2) are the lattice vectors. The corresponding reciprocal space momenta take the values as k = k_1/2πb_1 + k_2/2πb_2, where k_1, k_2 ∈ (-π, π] and b_1(2) are the reciprocal lattice vectors. For the kagome model here, we have chosen, a_1 = (√(3)/2, 1/2) and a_2 = (0, 1) as the lattice vectors. The corresponding reciprocal lattice vectors are b_1 = (4π/√(3), 0) and b_2 = (-2π/√(3), 2π). From the projector, we obtain a one-body correlation operator G_nm(k_2) = 1/L_x∑_k_1e^i 2π k_1 (n - m)P̂(k_1, k_2). Since G is also a projector, its eigenvalues are also restricted to 0 and 1. We partition the system into subsystems A and B, such that the entanglement spectrum between the two subsystems is given by the eigenvalues of the reduced density matrix ρ_A. The spectrum of the reduced density matrix ρ_A can then be obtained from the spectrum of the reduced correlation matrix G^A defined as <cit.> G^A_nm (k_2) = G_nm(k_2); n, m ∈ [0, L_1/2). In Fig. <ref> (a) and (c), we show the spectrum of the reduced one-body correlation matrix G^A. The plots are obtained for system sizes L_x = 120 and L_y = 6 (12) for (a) and (c) respectively. The eigenvalues Λ_i(k_2) of G^A are bounded to lie in [0, 1], although, unlike the projector eigenvalues, they are not restricted to be 0 and 1. Indeed the in-gap eigenvalues are related to the topological Euler class of the model <cit.>. 
However, unlike the well-known case of Chern insulators, these in-gap modes in the one-body correlation spectrum of the Euler topology are not related to the physical edge states due to non-trivial topology, which typically has a spectral flow between the bulk conduction and valence bands. In Fig. <ref> (e) we explicitly show the absence of such topological edge states with a spectral flow between flat valence bands at -1 and dispersive conduction band. The physical energy spectrum is calculated for system size L_1 = 120 and L_2 = 12 with open boundary conditions along L_1 and periodic boundary conditions along L_2. From the spectrum of G^A, we obtain the entanglement spectrum of the non-interacting model using the relation ε (k_2) = - ∑_i ∈occupiedlog [ Λ_i (k_2) ] - ∑_j ∈unoccupiedlog [ 1 - Λ_j(k_2) ]. To calculate the full many-body entanglement spectrum as shown in the figure, we first obtain the ground state of subsection A by occupying 2/3 of the highest eigenvalues Λ_i, which is commensurate with the 2/3 filling of the whole system. Here, one should keep in mind that since the projector P eigenvalues of the occupied states lie at 1 and unoccupied states at 0, while constructing the ground state of G^A, one should start counting from Λ_i → 1 as occupied, and going down in the eigenvalues Λ_i corresponds to going up in the excitation spectrum. Once the ground state is identified, we obtain a many-body entanglement spectrum by creating excitations on this ground state. Since for the entanglement spectrum, we partition the system, while the filling fraction 2/3 is a constraint for the whole system, the subsystem A has eigenstates that have particle numbers different to 2/3 filling of the subsystem itself. Therefore to calculate the full many-body ground state, we first partition the subsystem A into different total particle number channels and from there create all possible particle-hole excitations. Then for each excited state configuration described by a fixed fermionic occupation number, we can obtain the entanglement spectrum using Eq. <ref>. The many-body entanglement spectrum of the non-interacting model is shown in Fig. <ref> (b) and (d) for the system size L_x = 120 and L_y = 6 (12) respectively. We have taken the ground state entanglement energy to zero as a reference, which is shown by the red marker at k = 0 in Fig.  <ref> (b) and (d). Notice the presence of the zero entanglement energy state at k=π. This is obtained by considering a channel with one less (or more) particle in subsystem A than the exact 2/3 filling. Comparing the many-body entanglement spectrum to the interacting case with α =0 in the main text (left panel in Fig. <ref>), we see a good agreement in their low energy features. In particular, in both cases, the ground state corresponds to k=0 with the lowest entanglement energy. As we move to a finite k, the entanglement energy increases and eventually comes back down to the ground state value at k = π creating a cusp-like feature in the low-energy entanglement spectrum. This low energy behavior can be traced back to the in-gap modes (around Λ = 0.5) in the one-body correlation spectrum shown in Fig. <ref> (a) and (c). In one-body correlation spectrum, for each mode near 0, there are two modes near 1, and therefore 2/3 filling corresponds to occupying all modes in the upper half of the correlation spectrum. The low energy excitations are then created near Λ_i = 0.5 with a very low energy cost, which leads to many low energy modes in the entanglement spectrum. 
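For concreteness, the bookkeeping that turns the correlation-matrix eigenvalues Λ_i into many-body entanglement energies can be summarized in a few lines. In the sketch below the Λ_i values are illustrative placeholders rather than eigenvalues computed for the kagome model; in the actual calculation they come from diagonalizing G^A at each transverse momentum k_2, and other particle-number channels are obtained simply by varying the number of occupied modes.

```python
# Minimal sketch of the relation quoted above: many-body entanglement energies
# assembled from the eigenvalues Lambda_i of the truncated one-body correlation
# matrix G^A.  The eigenvalues below are illustrative placeholders only.
import numpy as np
from itertools import combinations


def entanglement_energy(lam, occupied):
    """epsilon = - sum_occ log(Lambda) - sum_unocc log(1 - Lambda)."""
    occ = np.zeros(len(lam), dtype=bool)
    occ[list(occupied)] = True
    return -np.sum(np.log(lam[occ])) - np.sum(np.log(1.0 - lam[~occ]))


# Toy spectrum of G^A: modes pinned near 1 and 0, plus in-gap modes near 0.5.
lam = np.array([0.999, 0.995, 0.99, 0.55, 0.45, 0.01, 0.005])
n_filled = 4                                  # fixed particle-number channel

levels = sorted(
    entanglement_energy(lam, occ) for occ in combinations(range(len(lam)), n_filled)
)
levels = np.array(levels) - levels[0]         # measure from the ground state
print("lowest entanglement energies:", np.round(levels[:6], 3))
```

As in the discussion above, the cheapest excitations come from rearranging the in-gap modes near Λ = 0.5, which is what produces the dense set of low-lying levels in the entanglement spectrum.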
§ QUANTUM FISCHER INFORMATION AND QUANTUM CRAMER-RAO BOUND OF THE IDEAL EULER BANDS We further comment on the structures present in the models and their relation to quantum Fisher information (QFI) and the quantum Cramer-Rao (QCR) bound <cit.>. Namely, we derive a non-Abelian QCR bound, which is induced by Euler topology and the Euler bands satisfying an ideal condition. The QCR bounds are of central relevance for quantum metrology <cit.>. To define the QFI in the context of this work, we consider a two-parameter family of single-particle states |ψ_n(k)⟩, parametrized by k≡ (k_1, k_2) ∈ T^2, where T^2 denotes a two-torus. Consistently with the models introduced in the main text, at every point k of the parameter space, we consider a three-state system. We take a spectral decomposition of the density matrix of the single-particle states at given k-point, ρ≡∑_n λ_n |ψ_n(k)⟩⟨ψ_n(k)|. The QFI matrix for single-particle operators r̂_1, r̂_2 (conjugate to k_1, k_2), then reads <cit.>, F_ij[ρ] ≡∑_m,n: λ_m + λ_n ≥ 0 2(λ_m - λ_n)^2/λ_m + λ_n⟨ψ_m(k)|r̂_i |ψ_n(k)⟩⟨ψ_n(k)|r̂_j |ψ_m(k)⟩. In translationally symmetric contexts, if the parameters k were to be identified with momenta, then r̂_1, r̂_2 represent position operator components defined along the lattice vectors. We moreover recognize: r̂_i ∼ i∂_k_i, i.e. -i ∂_k_iρ = [ρ, r̂_i], and hence, -i ⟨ψ_m (k)|∂_k_iρ|ψ_n (k)⟩ = ⟨ψ_m (k)| [ρ, r̂_i] |ψ_n(k)⟩ = (λ_m - λ_n) ⟨ψ_m(k)|r̂_i |ψ_n (k)⟩. Therefore, for pure states, where in the context of this work we can consider single particle in the third band within the corresponding three-state problem, i.e. ρ = |ψ_3(k)⟩⟨ψ_3(k)| = |u_3(k)⟩⟨u_3(k)|; the QFI matrix reduces to the quantum metric (see the main text), F_ij[ρ] = 4 g^χ_ij(k). The QCR bound <cit.> for the two-parameter measurements can be captured by the covariance matrix Σ with <cit.>, Σ (k̂) ≥1/M F^-1 [ρ], where M is the number of the repetitions of measurements <cit.>. The covariance matrix for an unbiased estimator k̂ for the two-parameter family k = (k_1, k_2) under a set of positive operator-valued measurements (POVM), Π_p, such that ∑^N_p_p Π_p = 1, Π_p Π_p' = Π_p δ_p p', with N_p ≥ 3; is defined as <cit.>, Σ_ij (k̂) = ⟨δ k_i δ k_j ⟩≡∑_p k_i k_j Tr[ρΠ_p] - k_i k_j, where ⟨…⟩≡Tr[ ρ(…) ]. For ideal bands, as in the introduced model, the Euler curvature determines the quantum-metrological bound at every point of the parameter space as, √(det Σ(k̂))≥1/M √(det g^χ(k)) = 1/M | Eu(k)|, where the first inequality follows from the derivation of Ref. <cit.>, and the second equality is realized in the models introduced in our work. Beyond the demonstrated quantum-metrological manifestations, the realized ideal condition for the Euler bands opens avenues for exotic fractionalization of the excitations in the topological bands, and offers a platform for exploring further deeper connections to the many-body quantum metric under twisted boundary conditions.
http://arxiv.org/abs/2407.13469v1
20240718124245
Fixed and Adaptive Simultaneous Machine Translation Strategies Using Adapters
[ "Abderrahmane Issam", "Yusuf Can Semerci", "Jan Scholtes", "Gerasimos Spanakis" ]
cs.CL
[ "cs.CL" ]
Examining inverse generative social science to study targets of interest [ Received: date / Accepted: date ======================================================================== § ABSTRACT Simultaneous machine translation aims at solving the task of real-time translation by starting to translate before consuming the full input, which poses challenges in terms of balancing quality and latency of the translation. The wait-k policy offers a solution by starting to translate after consuming k words, where the choice of the number k directly affects the latency and quality. In applications where we seek to keep the choice over latency and quality at inference, the wait-k policy obliges us to train more than one model. In this paper, we address the challenge of building one model that can fulfil multiple latency levels and we achieve this by introducing lightweight adapter modules into the decoder. The adapters are trained to be specialized for different wait-k values and compared to other techniques they offer more flexibility to allow for reaping the benefits of parameter sharing and minimizing interference. Additionally, we show that by combining with an adaptive strategy, we can further improve the results. Experiments on two language directions show that our method outperforms or competes with other strong baselines on most latency values. [Code is available at: <https://github.com/issam9/Adapters-SiMT>] § INTRODUCTION Simultaneous machine translation (SiMT) aims at reducing the latency of translation systems. In scenarios with low latency demands, such as conferences or lectures, translating with minimum delay is crucial. In order to reduce the latency, SiMT models start translating before consuming the full input sentence, which improves the latency but affects the quality of the translation, because of limited access to enough source context to make a correct prediction. SiMT techniques design a strategy to decide when to make a READ (i.e. wait for more source tokens) or WRITE (i.e. output a new token) action. The strategy has to balance the trade-off between quality and latency by making more READ or WRITE actions. Making more READ actions will lead to improved quality but will hinder the latency, while the opposite is true for making more WRITE actions. Fixed policies design a strategy that is detached from whether there is sufficient context to make a WRITE action <cit.>. For instance, the wait-k policy <cit.> trains the model to make k number of READ actions before every WRITE action. The value of k has a direct impact on the quality and latency of the translation and since it is decided during training, wait-k models have to be trained with latency in mind, which means that in order to support multiple latency levels, we need to train multiple models. The multi-path training <cit.> was introduced to solve this issue by sampling the value of k randomly during training, which results in a model that supports multiple latency levels. This technique was shown to benefit the inference at lower wait-k values by improving the results, but it neglects that parameter sharing between all the wait-k values might introduce interference. <cit.> addressed the interference issue by using Mixture-of-Experts (MoE), where each head of the multi-head attention is treated as an expert and is trained on different wait-k values. 
This has proven to be a successful technique, but the number of wait-k experts we can introduce depends on the number of heads in the Transformer model, which limits the flexibility in terms of balancing parameter sharing and interference between the wait-k paths. Our method relies on inserting lightweight adapters <cit.> for this purpose. The number of the adapters and their capacity can be easily adjusted depending on the wait-k values we intend to support and the complexity of the language direction. Dynamic strategies have gained increased attention in recent years <cit.> due to their effectiveness. Dynamic strategies strive to strike a balance between latency and quality by making as much READ actions as necessary and as much WRITE actions as possible. The decision to read or write is made dynamically based on the context (which can be the received input and the previous target tokens) at each decoding step. Although dynamic strategies achieve state-of-the-art results, they often require specialized training techniques <cit.> that can balance between latency and quality when generating READ/WRITE actions, or even require the training of multiple models <cit.> to support multiple latency levels. In order to take advantage of the dynamic wait-k strategies, we adopt a strategy that composes multiple wait-k models during inference (we refer to this as Adaptive Wait-k <cit.>) to work with wait-k adapters instead. This brings efficiency and cost benefits as only one model is required to satisfy multiple latency levels and also improves performance compared to other strong baselines including Adaptive Wait-k. In summary, our main contributions are the following: * We introduce lightweight adapters as a flexible solution to balance parameter sharing and interference in multi-path training. * We show that by combining adapters with a simple adaptive strategy (i.e. Adaptive Wait-k) we can further improve the results. * We show that our technique outperforms or competes with other strong baselines on most latency levels. § RELATED WORKS §.§ Adapters for Machine Translation Adapters <cit.> are typically small modules that are used in order to efficiently adapt a pre-trained model to a downstream task, where the pre-trained model can be either frozen <cit.>, or trained jointly with the adapters <cit.>. Adapters have been used for efficient multi-task fine-tuning <cit.>, where each set of adapters is trained on a specific task. <cit.> added AdapterFusion on top of the adapters as a way to compose the representations of different tasks. <cit.> used adapters as language-specific parameters in order to address the curse of multilinguality in multilingual pre-training, where the adapter modules are introduced during pre-training instead of post-hoc. For Neural Machine Translation (NMT), <cit.> introduced a simple formulation of adapters to learn language-pair specific parameters, where they showed that it improves performance on high resource languages in Multilingual Translation. <cit.> trained language-family adapters to address negative interference while allowing for parameter sharing between similar languages, which improved performance on low resource languages. <cit.> fine-tuned adapters on multimodal noise, then added a fusion layer in order to improve generalization to other types of noise. Adapters were also explored for other motivations like Zero-shot NMT and unsupervised domain adaptation <cit.>. 
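The adapter variants surveyed above share the same residual bottleneck structure, which is also the form used later in this paper (inserted after the decoder feed-forward layer). A minimal PyTorch-style sketch follows; the hidden sizes, the pre-adapter layer normalization and the helper names are illustrative assumptions rather than the exact configuration used in the cited work or in our experiments.

```python
# Minimal PyTorch-style sketch of a residual bottleneck adapter.
# Dimensions and the pre-adapter LayerNorm are illustrative choices.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    def __init__(self, d_model: int, bottleneck: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck)  # project down to the bottleneck
        self.up = nn.Linear(bottleneck, d_model)    # project back up

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # the residual connection leaves the host layer's output intact
        return x + self.up(torch.relu(self.down(self.norm(x))))


# One adapter per task / latency bucket; only the relevant one is active per batch.
adapters = nn.ModuleList([Adapter(d_model=512, bottleneck=64) for _ in range(4)])
hidden = torch.randn(8, 20, 512)        # (batch, target length, d_model) after the FFN
output = adapters[2](hidden)            # activate adapter number 2
```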
§.§ Simultaneous Machine Translation SiMT systems can be divided into fixed and adaptive policies. Fixed policies rely on predefined rules for READ/WRITE decisions. <cit.> proposed the wait-k policy, where the model starts by reading k tokens then alternates between reading and writing one token. <cit.> introduced multi-path training, where one model is trained to support multiple wait-k values by sampling k randomly during training. <cit.> addressed interference in multi-path training by using Mixture-of-Experts. <cit.> used Knowledge Distillation from a Full-Sentence Transformer to embed future information into the SiMT model. For adaptive policies, <cit.> trained a Reinforcement Learning agent to decide READ/WRITE actions, where the reward function is designed to consider both quality and latency. <cit.> generated supervised READ/WRITE actions then trained a classification model to predict the action based on encoder and decoder representations. <cit.> introduced a heuristic strategy to compose wait-k models into an adaptive policy based on their uncertainty. <cit.> trained a sentence segmentation model to predict complete sentences and feed them through a full-sentence translation model. <cit.> introduced MILK, where they modified the attention mechanism to learn a Bernoulli variable to decide READ/WRITE actions. <cit.> adapted MILK to the transformer architecture. <cit.> proposed ITST, which quantifies the transported information from source to target then generates a token when the quantity is deemed sufficient. <cit.> trained a supervised policy network based on automatically generated divergence between the predicted distribution of partial and full sentence input. The majority of the techniques outlined require training multiple models to accommodate different latency levels. Our approach focuses on the efficient training of a single model that can support various latency levels at inference time. § BACKGROUND §.§ Adapters Adapters are lightweight modules that can be inserted into a model for the purpose of task or domain adaptation <cit.>. They offer an efficient solution for fine-tuning the model and limiting catastrophic forgetting <cit.>. Formally, for a set of N tasks and a model M, the adapter parameters A are introduced. We assume that for each task we have a dataset D_n. The model parameters can be frozen or jointly trained with the adapters. For a frozen model, the model M is pre-trained and the objective function for task n ∈{1,...,N} can be defined as: A_n ←A_nargmin L_n(D_n;M, A_n) The parameters A_n are randomly initialized for each task, then they are trained on the dataset D_n in order to minimize the loss function L_n. This results in N adapters that can specialize the model representations to each task n. In the case of jointly training the model and the adapters, the model parameters M can be randomly initialized or frozen. The objective function can be defined as: M' ←M, Aargmin ( ∑_n=1^N L_n(D_n;M, A_n) ) where M' is both the parameters of the model M and the adapters A_n for n ∈{1, ..., N}. The parameters A_n are activated during training depending on the task n. §.§ Wait-k Policy The wait-k policy <cit.> trains a model to start translating after receiving k source tokens. The model then alternates between writing and reading a new token. It is a fixed policy, where the k value has to be chosen during training and inference. 
The model reads g_k(t) number of source tokens from the source sentence x=(x_1, ..., x_m) when generating the target token y_t, where g_k(t) is defined as: g_k(t) = min{|x|, t+k-1} Instead of training the model for a specific wait-k value, <cit.> introduced the multi-path training, which samples k uniformly from [1,...,|x|] for each batch during training. This enables the model to support multiple wait-k values and allows for information sharing between different wait-k paths. While it was shown that the multi-path training improves the results over the wait-k policy, it does not offer a solution to balance between parameter sharing and interference that we aim at solving by introducing adapters. § METHOD Our method is composed of two steps: first we train a single model that can support multiple fixed wait-k values by using wait-k adapters, then we rely on the probability that the model assigns to the most likely token in order to build an adaptive strategy, where we decide a READ or WRITE action based on a predefined probability threshold. §.§ Multi-path Training with Adapters Multi-path training is highly advantageous as an efficient alternative to the wait-k policy, where we need to train multiple models to support more than one latency at inference, but might introduce interference between wait-k paths due to parameter sharing. In order to provide the ability to balance between parameter sharing and interference, we introduce adapters into each decoder layer and we activate adapters according to the wait-k paths they are meant to support. Figure <ref> shows an illustration of this. During training, the wait-k value for each batch is sampled uniformly from [1,...,|x|] following the multi-path training <cit.> and based on that, the model decides which adapter will be activated. We set the adapter lagging K_A as a list of equally spaced positive integers in increasing order, where each integer specifies the minimum wait-k value supported by each adapter. We insert one adapter for each value in K_A. Since the train wait-k is randomly sampled from [1, …, |x|], we train each adapter on values starting from its minimum wait-k up until the minimum wait-k of the next adapter. For example, we can set K_A = {1, 5, 9, 13} and this will indicate adding 4 adapters, where each adapter will handle 4 wait-k values (starting from each integer in K_A until the next), except the fourth adapter (k_A = 13), which will handle values starting from 13 up until the length of the input sequence |x|. We follow <cit.> implementation and insert the residual adapter modules after the feed-forward layer. Algorithm <ref> shows the pseudo-code for computing the decoder hidden states at decoding step t using Adapters Wait-k, where H^0 is considered to be the input embeddings of the decoder, and g_k(t) is computed based on equation <ref>. §.§ Adaptive Adapters We follow <cit.> to build an adaptive strategy by using adapters instead of different models for each wait-k value, which can be computationally expensive and less efficient. At each decoding step, we activate one adapter based on the lagging behind the current generation step, which is calculated as k = |x| - |y|, where |x| is the number of input tokens and |y| is the number of generated tokens. At the beginning of generation, |x|=1 and |y|=0, which means k starts from 1. Then, we rely on the probability of the most likely token to decide whether to write or read a new token. 
If the probability is less than a threshold ρ_k, we read a new token, otherwise, we write. The possible values of k are between k_min and k_max that we determine during inference. If k is lower than k_min, we force the model to read, if it is higher or equal to k_max, we force the model to write, which means that the choice of k_min and k_max also impacts the trade-off between latency and quality (as we analyze in Section <ref>). When the whole input sequence is consumed (i.e. x_|x|=</s>), we set k to k_max and generate the rest of the target sequence. Algorithm <ref> shows the pseudo-code of this method using adapters. § EXPERIMENTS In this section, we describe the datasets we used to evaluate the models and the baselines that we compare against along with the evaluation setup. We also provide the main results of our experiments. §.§ Datasets We evaluate our method on two public datasets: the En-Vi dataset for Transformer-Small and De-En for both Transformer-Base and Transformer-Big. IWSLT15[<nlp.stanford.edu/projects/nmt/>] English → Vietnamese (133K pairs) <cit.>. We follow the settings of <cit.> and <cit.>. We use TED tst2012 (1553 pairs) as the validation set and TED tst2013 (1268 pairs) as the test set. We replace tokens with frequency less than 5 with <unk>. The final vocabulary sizes are 17K and 7.7K for English and Vietnamese respectively. WMT15[<www.statmt.org/wmt15/>] German → English (4.5M pairs) We follow the settings of <cit.>. We use newstest2013 (3000 pairs) as the validation set and newstest2015 (2169 pairs) as the test set. We apply BPE <cit.> with 32K merge operations jointly on the source and target to construct a shared vocabulary. §.§ System Settings We conduct experiments on the following systems: Full Sentence: <cit.> Standard Transformer model that takes the full sentence as input before starting to translate. Wait-k: <cit.> A simple policy that waits for k source tokens before starting to alternate between writing a target token and reading a source token. Multi-path Wait-k: <cit.> Trains a model to support multiple wait-k policies by randomly sampling k during training, then the k value is fixed during inference. Adaptive Wait-k: <cit.> It is a method for composing multiple wait-k models during inference in order to build an adaptive strategy. The model is selected based on the lagging behind the generation step, and the decision to write or read is based on the output probabilities. MoE Wait-k: <cit.> Mixture-of-Experts Wait-k is similar to Multipath Wait-k but applies experts to learn different wait-k policies to avoid interference. MMA: <cit.> Monotonic multi-head attention (MMA) jointly learns a Bernoulli variable that is used to decide READ/WRITE action. Adapters Wait-k: Our method as described in Section <ref>. Adaptive Adapters: Our method as described in Section <ref>. All implementations are based on the original Transformer architecture <cit.> and are using the Fairseq library <cit.>. We apply Transformer-Small (4 heads) for En-Vi and both Transformer-Base (8 heads) and Transformer-Big (16 heads) for De-En. The encoder is made unidirectional to avoid encoding the source input each time a new token is added. The evaluation is performed using BLEU <cit.> for translation quality and Average Lagging (AL)[<github.com/SimulTrans-demo/STACL>] <cit.> for latency. AL measures by how many tokens the system is lagging behind an ideal policy (a wait-k policy with k=0). 
Given g(t), AL is computed as: AL_g(x, y) = 1/τ_g(|x|)∑_t=1^τ_g(|x|) g(t) - (t - 1)/|y|/|x| where x and y are the source and target sentences respectively, while τ_g(|x|) = min{ t | g(t) = |x| } is the decoding step where the source sentence finishes. We set the adapter lagging to K_A = {1, 3, 5, 7, 9, 11, 13, 15} for our experiments, which means that 8 adapters are inserted into the model and we specify the adapter bottleneck size as 64. In Table <ref>, we report the number of parameters of each method and the number of models required to achieve the latency levels reported in the results section. Adapters Wait-k policy introduces 79.94M parameters into Transformer-Big, but still has the advantage of using one model to support multiple latency levels. In Section <ref>, we experiment with other settings of K_A in order to shed light on how much sharing is best between wait-k values during the multi-path training. The adaptive strategy requires three parameters to be specified at inference, namely, k_min, k_max, and the probability threshold ρ_k. For En-Vi experiments, k_min and k_max are set to 1 and 9 respectively, while for De-En, we lower k_max to 5, which we have found to improve the results in low latency. We analyze this effect in Section <ref>. ρ_k decreases as a function of the lagging k, since we want the model to be more aggressive when k is low and more conservative when k is high. We set ρ_k_min and ρ_k_max and compute the threshold as: ρ_k = ρ_k_min - d.(k-1), where k_min≤ k ≤ k_max and d=(ρ_k_min - ρ_k_max)/(k_max-k_min). In order to vary the latency, we test the following values of ρ_k_min and ρ_k_max: ρ_k_min∈{0.2, 0.4, 0.6, 0.8, 1.}, ρ_k_max = 0., and ρ_k_min = 1., ρ_k_max∈{0.2, 0.4,0.6, 0.8}. §.§ Main Results In Figure <ref>, we compare our methods to previous adaptive and fixed strategies on two language directions. We find that our method improves or competes with other strategies while using a single model. MMA, Wait-k, and Adaptive Wait-k require the training of multiple models in order to support different latency levels (as seen in Table <ref>), while our method is more efficient in this regard. Adapters Wait-k is competitive with other strong fixed strategies like MoE Wait-k and Multi-path Wait-k and it brings further improvements to combine it with the adaptive strategy. Our method does not support higher latency on De-En because we are using a k_max value of 5 (as seen in Figures <ref> and <ref>), which we have found to improve results for low latency. However, we show the results for higher k_max and compare them with Adaptive Wait-k on De-En in Section <ref>. Using adapters alone is competitive with other methods, especially on En-Vi (as seen as in Figure <ref>). Compared to Multi-path Wait-k, our method achieves better results on most latency levels, which shows the importance of minimizing interference between different lagging values. Combining our method with an adaptive strategy further improves the results, especially in low latency. In comparison to Adaptive Wait-k, where wait-k policy models are trained and composed during inference, we find that our method is better in all latency levels while being more efficient. Compared to MoE Wait-k, which also aims at minimizing interference introduced by multi-path training <cit.>, we find that our method is better in all latency levels on En-Vi and De-En with Transformer-Big (as seen in Figures <ref> and <ref>), while achieving competitive results when using Transformer-Base (as seen in Figure <ref>). 
Our method is more flexible in terms of balancing the trade-off between parameter sharing and interference, as we can choose the number of wait-k values supported by each adapter and we can also manipulate the capacity of the adapters by adjusting the bottleneck size. This can bring further improvements but requires more experimentation to find the appropriate hyperparameters. § ANALYSIS In this section, we look into how the performance changes in response to varying the value of k_max, then we provide a wall-clock time comparison between Adapters Wait-k and Multi-path Wait-k. Moreover, we experiment with how balancing between parameter sharing and interference by adjusting the adapter lagging impacts the performance, and also experiment with varying the bottleneck size in order to discern the impact of the complexity of the adapters. At last, we analyze the L2-norm of the adapter representations to discover which adapter layers are involved in the prediction. §.§ Ablation We found that lowering the value of k_max for the adaptive strategy improves the results in low latency, which we believe is the priority in SiMT, but a lower k_max value also limits the ability of supporting high latency. In Figure <ref>, we show that by increasing the value of k_max we can support high latency and get better quality translations. We compare to Adaptive Wait-k and show that we still achieve better results for all the values of k_max. A lower k_max forces the model to be more aggressive, which in some cases can improve the results in lower latency. The fact that forcing the model to be more aggressive improves the performance signifies that the adaptive strategy decides to wait in cases where the model is able to make a correct prediction, which suggests that the adaptive strategy based on the probability threshold can still be improved by a better strategy. §.§ Inference Time Although our method has more parameters than the baseline Multi-path Wait-k due to the additional adapters, the effect on the inference time is not proportional to the number of adapters because only one adapter is activated at a time. To illustrate this, we compare the wall-clock inference time (averaged over 5 runs) of Adapters Wait-k and Multi-path Wait-k in Figure <ref>. It seems that adapters are faster in low k values which could be due to over generation by the Multi-path model (where the model generates longer sequences than it should), while starting from a k value of 7, Multi-path Wait-k is better and the difference fluctuates between 0.29s and 0.66s. §.§ Adapter Lagging The adapter lagging K_A specifies the number of wait-k values that one single adapter will support and also the number of adapters that we will use. We vary the adapter lagging window between 1 and 5, while maintaining the range between 1 and 16. The results are shown in Figure <ref>. The wait-k values supported by an adapter controls the amount of sharing and interference between the values. For example, for K_A={1,5,9,13}, adapter A_1 will be trained on k ∈{1,2,3,4}. We note that although it has more parameters, a window of 1 achieves the worst results, which signifies that parameter sharing between wait-k values is crucial. Adapter lagging with window 4 and 5 are competitive especially in low latency, which indicates that lower wait-k values benefit more from sharing. This is consistent with the fact that wait-k models achieve better results when tested on lower wait-k values <cit.>. 
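To make the adapter-lagging configurations compared in this analysis concrete, the sketch below (the helper names are illustrative rather than taken from our implementation) builds K_A for a given window size and maps a lagging value k to the adapter that should be activated, following the rule that each adapter covers the wait-k values from its own minimum up to the next adapter's minimum, with the last adapter covering everything above its minimum:

```python
from bisect import bisect_right
from typing import List

def build_adapter_lagging(k_start: int = 1, k_end: int = 16, window: int = 2) -> List[int]:
    """Equally spaced minimum wait-k values, e.g. window=2 gives [1, 3, ..., 15]."""
    return list(range(k_start, k_end, window))

def select_adapter(k: int, adapter_lagging: List[int]) -> int:
    """Index of the adapter whose wait-k range contains k; the last adapter
    also covers every k above its own minimum."""
    k = max(k, adapter_lagging[0])
    return bisect_right(adapter_lagging, k) - 1

K_A = build_adapter_lagging(window=2)  # [1, 3, 5, 7, 9, 11, 13, 15], i.e. 8 adapters
assert select_adapter(1, K_A) == 0     # k in {1, 2}  -> adapter 0
assert select_adapter(4, K_A) == 1     # k in {3, 4}  -> adapter 1
assert select_adapter(40, K_A) == 7    # any k >= 15  -> last adapter
```

The same mapping is used both when k is sampled during multi-path training and when k is the observed lagging at inference.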
§.§ Adapter Bottleneck The adapter's bottleneck size can be used to tune the representation capacity of the adapters and can be interesting to tune depending on the language pair and the adapter lagging. In Figure <ref>, we experiment with doubling the adapter's bottleneck size from 8 to 128, which can be regarded as increasing the representation capacity of the adapter network. We found that the bottleneck size impacts the performance but not in a consistent way - as in larger size results in better performance - but it seems to interact with other hyperparameters (e.g. adapter lagging) to improve or hinder the performance, especially in high latency, where the gap in performance is larger. §.§ Adapter Representation Norm We compute the L2-norm of the adapter representations in order to discover which adapter layers are involved in the representations <cit.>. We measure the L2-norm during inference for k_min=1 and k_max=9 while varying the value of ρ_k_min and ρ_k_max, as described in Section <ref>. As depicted in Figure <ref>, the norm for all layers except layer 6 decreases as we increase ρ_k_min or ρ_k_max, which correlates with making the adaptive strategy more conservative because the threshold for making a write action is higher. This shows that the adapters are more involved in the prediction when the model is forced to be more aggressive. Only layer 6 is stably invested in adapting the model representations at all the threshold values, which seems to indicate that only low threshold predictions are complex enough to recruit all the adapter layers. Based on this observation, we experiment with inserting adapters only in the last layer (i.e. layer 6). We show in Figure <ref> the results of comparing between inserting adapters in all layers and inserting the adapters only in the last layer, where we see a drop in performance only in lower latency levels. This shows that we can make the model more efficient by removing lower layer adapters with a small drop in performance. § CONCLUSION In this paper, we employ adapters to build a SiMT model that can support multiple latency levels at inference. We use the multi-path training and show that by adding wait-k adapters we can flexibly balance parameter sharing and interference between the wait-k paths. Furthermore, we adopt a simple adaptive strategy and show that it further improves the results. By comparing against strong adaptive and fixed strategies, we find that our method achieves better or competitive results on most latency levels. § LIMITATIONS The two datasets we used are common in SiMT research and were selected to compare against other baselines, but evaluating on only two language directions can be a limiting factor for the generalization of our results. Although Vietnamese is from a different language family, it deploys a similar word order (i.e. Subject-Verb-Object) to English and German and we believe that more challenges might emerge when dealing with language directions with a different word order. Additionally, we evaluate latency using common SiMT latency metrics such as AL, which are sentence-level and do not reflect the nature of a streaming scenario <cit.>. Furthermore, in this work, we only evaluated on offline data, while evaluating on real interpretation data might offer more realistic results <cit.>. § ACKNOWLEDGEMENTS The research presented in this paper was conducted as part of VOXReality project[https://voxreality.eu/], which was funded by the European Union Horizon Europe program under grant agreement No. 
101070521. acl_natbib § HYPERPARAMETERS We list the hyperparameters of our experiments in Table <ref>. § NUMERIC RESULTS In Tables <ref>, <ref> and <ref>, we report the numeric results of our methods. We report the BLEU score for quality, while for latency we used Average Lagging (AL), Consecutive Wait (CW) <cit.>, Average Proportion (AP) <cit.> and Differentiable Average Lagging (DAL) <cit.>. Below we provide the definition of CW, AP and DAL. g(i) constitutes the number of tokens read when predicting y_i, while |x| and |y| refer to the number of source and target tokens respectively. Consecutive Wait (CW) Computes the average number of consecutive tokens read between two predicted tokens. CW = ∑_i=1^|y| (g(i) - g(i - 1))/∑_i=1^|y|𝕀_g(i) - g(i - 1) > 0 Average Proportion (AP) Computes the proportion of tokens read to make every prediction. AP = 1/|x||y|∑_i=1^|y| g(i) Differentiable Average Lagging (DAL) Is a differentiable version of the Average Lagging metric. g'(i) = g(i) if i = 1 max(g(i), g'(i-1) + |x|/|y|) if i > 1 DAL = 1/|y|∑_i=1^|y| g'(i) - i-1/|x|/|y|
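For reference, all of the latency metrics above can be computed directly from a read schedule g(i), the number of source tokens read before emitting target token i. The sketch below is an illustrative re-implementation of the definitions above under the convention g(0)=0, not the evaluation code used to produce the reported numbers:

```python
from typing import List

def al(g: List[int], src_len: int, tgt_len: int) -> float:
    """Average Lagging up to the step tau at which the full source is read."""
    gamma = tgt_len / src_len
    tau = next(t for t, g_t in enumerate(g, start=1) if g_t == src_len)
    return sum(g[t - 1] - (t - 1) / gamma for t in range(1, tau + 1)) / tau

def ap(g: List[int], src_len: int, tgt_len: int) -> float:
    """Average Proportion of source tokens read per prediction."""
    return sum(g) / (src_len * tgt_len)

def cw(g: List[int]) -> float:
    """Consecutive Wait: average number of source tokens read between writes."""
    deltas = [g[0]] + [g[i] - g[i - 1] for i in range(1, len(g))]
    return sum(deltas) / sum(1 for d in deltas if d > 0)

def dal(g: List[int], src_len: int, tgt_len: int) -> float:
    """Differentiable Average Lagging."""
    gamma = tgt_len / src_len
    g_prime, prev = [], 0.0
    for i, g_t in enumerate(g, start=1):
        prev = g_t if i == 1 else max(g_t, prev + 1.0 / gamma)
        g_prime.append(prev)
    return sum(gp - (i - 1) / gamma for i, gp in enumerate(g_prime, start=1)) / tgt_len

# Example: a wait-3 schedule for a 6-token source and a 6-token target.
g = [3, 4, 5, 6, 6, 6]
print(al(g, 6, 6), ap(g, 6, 6), cw(g), dal(g, 6, 6))  # 3.0, 0.833..., 1.5, 3.0
```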
http://arxiv.org/abs/2407.12350v1
20240717065508
Index Modulation Embedded Mode Hopping for Anti-Jamming
[ "Liping Liang", "Wenchi Cheng", "Wei Zhang", "Hailin Zhang" ]
eess.SP
[ "eess.SP" ]
Index Modulation Embedded Mode Hopping for Anti-Jamming Liping Liang, Student Member, IEEE, Wenchi Cheng, Senior Member, IEEE, Wei Zhang, Fellow, IEEE, and Hailin Zhang, Member, IEEE Part of this work has been presented in the IEEE Global Communications Conference, 2019 <cit.>. This work was supported by the National Key R&D Program of China (2021YFC3002100), the National Natural Science Foundation of China (No. 61771368), Key Area Research and Development Program of Guangdong Province under grant No. 2020B0101110003, in part by the Australian Research Council's Project funding scheme under LP160101244, and in part by Shenzhen Science & Innovation Fund under Grant JCYJ20180507182451820. Liping Liang, Wenchi Cheng, and Hailin Zhang are with the State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, 710071, China (e-mails: liangliping@xidian.edu.cn; wccheng@xidian.edu.cn; hlzhang@xidian.edu.cn). W. Zhang is with the School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, Australia (e-mail: w.zhang@unsw.edu.au). July 22, 2024 § ABSTRACT Due to the crowded spectrum, it has become very difficult for frequency hopping (FH) techniques to achieve efficient anti-jamming and increase spectrum efficiency (SE) in wireless communications. The emerging orbital angular momentum (OAM), which is a property describing the helical phase fronts of electromagnetic waves, offers the potential to improve reliability and increase SE in wireless communications. To achieve efficient anti-jamming and increase the SE of wireless communications at only a slight computational complexity cost, in this paper we propose an index-modulation embedded mode-hopping (IM-MH) scheme, which simultaneously activates several OAM-modes for hopping along with additional index information and signal information transmission. We analyze the average bit error rates (ABERs) of our proposed IM-MH scheme under perfect channel state information (CSI) and imperfect CSI, respectively. We also propose the index-modulation embedded double-serial MH (IM-DSMH) scheme, which randomly activates one OAM-mode as the serial second hop to transmit the hopping signals of the IM-MH scheme, to further decrease the ABER of wireless communications. Extensive numerical results demonstrate that our proposed schemes can achieve low ABER and significantly increase the SE within a narrowband.
Also, the ABERs of our proposed IM-MH and IM-DSMH schemes are around 25% and 10%, respectively, compared with that of the mode hopping scheme. Orbital angular momentum (OAM), mode hopping (MH), index modulation, double-serial MH (DSMH), anti-jamming. § INTRODUCTION   Frequency hopping (FH) has been widely used for secure communications under hostile jamming. In conventional FH communications, the carrier frequency is quickly changed according to a preset hopping pattern, which facilitates the avoidance of jamming attacks. There are several typical FH schemes, such as uncoordinated FH (UFH), message-driven FH (MDFH), and adaptive FH (AFH) <cit.>. To overcome the dependence on preset hopping patterns and guarantee efficient anti-jamming in highly dynamic wireless networks, the authors proposed the UFH scheme, which randomly selects one channel for hopping at both the transmitter and receiver without any predefined hopping pattern <cit.>. Compared with conventional FH schemes, UFH has lower spectrum efficiency (SE) due to the lack of coordination between the transmitter and receiver. In MDFH, the transmitted information contains encryption information and signal information, thus increasing the SE of FH communications. AFH schemes attempt to adaptively select a channel with minimum interference <cit.>. However, as traffic data rates greatly increase and service diversity explodes, the limited bandwidth results in the spectrum becoming more and more crowded in wireless communications, thus increasing the difficulty of anti-jamming with FH. Also, the scarce spectrum limits the SE of FH schemes. Orbital angular momentum (OAM), which describes the helical phase fronts of electromagnetic waves, offers a new degree of freedom different from the conventional degrees of freedom such as time, frequency, and polarization <cit.>. By using the orthogonality and independence of integer OAM-modes, beams carrying different orders of OAM-modes can simultaneously propagate within a narrowband with little or no inter-mode interference, which shows the potential of OAM for efficient multiplexing <cit.> and anti-jamming <cit.>. In the past few decades, OAM of light has been studied for data encoding and channel hopping to increase the tolerance to hostile jamming attacks in optical and quantum communications. To rapidly switch the generated OAM-modes, a digital micro-mirror device <cit.> and a programmable spatial light modulator <cit.> were used to change the hologram in free space, generating only one OAM-mode at each time slot. Due to this inefficient OAM-mode utilization, these schemes have very low SE. In contrast to optical communications, there has been little research on the anti-jamming applications of OAM in wireless communications <cit.>. To avoid information interception by eavesdroppers, the authors of <cit.> proposed a secure range-dependent transmission scheme, where the OAM-modes were generated by frequency diverse array antennas. The mode-hopping (MH) scheme, which rapidly switches from one operating OAM-mode to another within a narrowband according to an MH pattern controlled by pseudo-noise generators, has been studied to achieve robust anti-jamming in wireless communications <cit.>. However, the main limitation of this technique is that it only transmits legitimate signal information over one OAM-mode, thus resulting in low SE.
Introducing the concept of index modulation <cit.>, the authors of <cit.> have proposed the OAM with index modulation (OAM-IM) scheme to decrease the bit error rate and increase the SE. In addition to transmit traditional signal constellations, the OAM-IM scheme also utilized the indices of OAM-modes for additional index information transmission. However, when multiple OAM-modes are activated, the probability that signals are jammed significantly increases under hostile jamming attack environments, thus resulting in inefficient anti-jamming results. How to use OAM within a limited spectrum band to increase SE and achieve efficient anti-jamming remains an open and interesting problem in wireless communications. To overcome the problem mentioned above, in this paper we propose the index modulation embedded MH (IM-MH) scheme, which simultaneously activates serval OAM-modes for hopping in wireless communications. Different from the conference version <cit.>, where we mainly focus on deriving the optimal number of activated OAM-modes to achieve the maximum capacity, we extend the IM-MH scheme from one hop to multiple hops within a symbol duration, thus further improving the reliability of our proposed IM-MH scheme. We derive the average bit error rates (ABERs) of our proposed IM-MH scheme under perfect CSI and imperfect CSI scenarios, respectively, to evaluate the anti-jamming performance. The jamming probability of IM-MH scheme increases as the number of activated OAM-modes increases at each time-slot, thus resulting in the relatively high ABER. To further decrease the ABER, we also propose the index modulation embedded double-serial MH (IM-DSMH) scheme, which randomly activates one OAM-mode as the serial second hop to transmit the hopping signals in the IM-MH scheme. We conduct extensive numerical results to evaluate our proposed schemes, showing that our proposed schemes within a narrowband can achieve efficient anti-jamming and high SEs in wireless communications. The remainder of this paper is organized as follows. Section <ref> gives the OAM-based index modulation wireless communications system model. To reduce the ABER and increase the SE, the IM-MH scheme is proposed in Section <ref>, where we derive the exact closed-form expressions of ABER under perfect and imperfect CSI scenarios, respectively. The IM-DSMH scheme is proposed to further decrease the ABER of wireless communications in Section <ref>. Numerical results of proposed schemes are discussed in Section <ref>. The paper concludes with Section <ref>. § SYSTEM MODEL   We build up a new OAM-based index modulation wireless communications system model in Fig. <ref>, which consists of a signal modulator, an index selector A, and an OAM-transmitter at the transmitter followed by an OAM-receiver, an index selector C, and a discrete Fourier transform (DFT) operator at the receiver. Index selectors A and C are synchronized. When index selector B at the transmitter as well as another DFT operator at the receiver are added into the system, the DSMH can work for the OAM-based index modulation wireless communications system. Index selector C is synchronized with index selectors A and B when the DSMH works. As shown in Fig. <ref>, the input signal information is separated to signal information and index information <cit.>. According to the index information determined by secure keys, index selector A activates I out of N OAM-modes so that OAM multiplexing can be achieved at each hop. The remaining (N-I) OAM-modes are inactivated. 
When the DSMH works, based on the index information one OAM-mode is activated by index selector B as the second serial hop. The OAM-modes are activated regularly for the transmitter and the receiver, but randomly for attackers. OAM-transmitter, which consists of N elements equidistantly distributed around the circle of uniform circular array (UCA) antenna, can generate N OAM-modes in wireless communications <cit.>. OAM-receiver is also comprised of N elements equidistantly distributed around the circle of UCA antenna. OAM-transmitter and OAM-receiver can be coaxial and non-coaxial in the OAM-based index modulation wireless communications system. To simplify analysis below, we assume the OAM-transmitter and OAM-receiver are coaxial and aligned. At the receiver, index selector C is used to select the same OAM-modes activated by the transmitter. When the IM-MH works, the DFT operator is used to decompose OAM signals. Since one OAM-mode is activated by index selector B as the second serial hop, the basedband signals are modulated twice in the angular domain when the DSMH works. Therefore, two DFT operators are used to successively decompose OAM signals. In this paper, we adopt the fast hopping method with U hops during a symbol transmission, which brings the diversity gain. § THE IM-MH SCHEME   In this section, we propose the IM-MH scheme to increase the SE while achieving the low ABER of wireless communications. Specifically, we first give the transmit signals and de-hop the corresponding received signals. Then, we derive the exact closed-form expression of ABER under the perfect CSI scenarios for arbitrary number of activated OAM-modes and hops. Finally, we calculate the ABER under the realistic imperfect CSI scenarios. §.§ Signal Transmission Since each OAM combination corresponds to the specific index information, the number of available OAM combinations is K_1=2^⌊log_2NI⌋ when I out of N OAM-modes are simultaneously activated at each hop, where ⌊·⌋ is the floor function. We select a unique combination out of K_1 at each hop. To illustrate the hopping clearly, an example of IM-MH pattern is shown in Fig. <ref>, where the axis of OAM-mode is ranked in the ascending order of OAM-modes and we set N=8 as well as I=2. OAM-mode and time slot are integrated into a two-dimension time-mode resource block. The time-mode resource blocks with specified color denote the activated OAM-modes at each hop in Fig. <ref>. For instance, the coordinates (1,0) and (1,+1) represent that OAM-modes 0 and +1 are activated for hopping at the first time slot. I OAM-modes are activated to convey the index information and each activated OAM-mode conveys M-ary constellation symbols. Thus, the total number of transmission bits, denoted by η_0, corresponding to U hops is obtained as follows: η_0=Ilog_2M_η_s + U log_2K_1_η_x, where η_s and η_x are the signal transmission bits and the index information bits, respectively. Without loss of generality, we consider the u-th (1 ≤ u ≤ U) hop. According to the index information, the set of activated OAM-modes for the u-th hop, which is ranked in the ascending order of OAM-modes, is given by L_u={l_1,u,⋯, l_i,u, ⋯, l_I,u}, where l_i,u (1 ≤ i ≤ I and |l_i,u|≤ N/2) is the i-th activated OAM-mode of all I OAM-modes. We denote by 𝒮 the whole symbol constellations. 
According to the signal information, the transmitted modulated symbols, denoted by s_u, at the output of M-ary constellation modulator is given by s_u=[s_1,u⋯ s_i,u⋯ s_I,u ]^T, where [·]^T represents the transpose of a vector and s_i,u∈𝒮. It is noticed that the modulated symbols are transmitted within U hops duration. The emitted modulated signal, denoted by x_n,u, for the n-th (0 ≤ n ≤ N-1) transmit element is expressed as follows: x_n,u=∑_i=1^Is_i,u e^j2π n/N l_i,u. Emitted signals can be propagated in a line-of-sight (LoS) path <cit.> or in sparse multipath environments containing an LoS path and several non-line-of-sight (NLoS) paths <cit.>. The NLoS paths consist of several primary reflection paths, secondary reflection paths, and triple reflection paths in sparse multipath environments <cit.>. Then, the received signal, denoted by y_m,u, for the m-th (0 ≤ m ≤ N-1) receive element is obtained as follows: y_m,u=∑_n=0^N-1h_mnx_n,u+ J_m + n_m, where h_mn represents the channel gain from the n-th legitimate transmit element to the m-th legitimate receive element, J_m denotes the received jamming signals for the m-th legitimate receive element, and n_m represents the received complex Gaussian noise with zero mean for the m-th legitimate receive element. The secure key is pre-shared between the transmitter and receiver. We assume that the transmitter and receiver are completely synchronized. Thus, index selector C can activate the OAM-modes same with those activated by index selector A. To obtain de-hopping signals, the DFT operation is performed with respect to OAM-mode l_i,u. Therefore, we have the de-hopping signal, denoted by y_i,u, corresponding to the OAM-mode l_i,u as follows: y_i,u=1/N∑_m=0^N-1 y_m,u e^-j2π m/N l_i,u. In sparse multipath environments, the received interference contains the inter-mode interference, the inter-symbol interference, and the jamming signals. The inter-mode interference and inter-symbol interference, which are caused by the phase difference between the complex channel gains of LoS path and all reflection paths, can be mitigated by using channel ray-tracing and phase difference compensation methods <cit.>. Note that for the case of non-coaxis between the OAM-transmitter and OAM-receiver, beam steering or phase compensation can mitigate the inter-mode interference caused by the misalignment of transceiver <cit.>. Since OAM is very sensitive to azimuthal angles, the small phase difference between the azimuthal angles of attackers and legitimate transceivers will lead to jamming failure. Therefore, any attacker attempting to interfere with legitimate transceiver requires the same OAM-modes and azimuthal angles. To simplify the analyses of ABER, we assume that attackers randomly activate I out of N OAM-modes at each hop without knowing the legitimate transceiver's IM-MH pattern. Also, attackers have the same azimuthal angles with legitimate transceiver. Thereby, the jamming signals can be mitigated with the help of DFT algorithm. The legitimate signals only can be jammed by the jamming signals with OAM-mode l_i,u and the cross-talk of jamming signals on OAM-mode l_i,u. It is noticed that index selector C is necessary. If index selector C is not used at the receiver, all OAM signals regardless of whether OAM-modes are activated or inactivated are decomposed simultaneously after DFT algorithm in IM-MH communications. Thus, the receiver will decompose unexpected OAM signals if attackers transmit signals with inactivated OAM-modes. 
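As a sanity check on this de-hopping step, the following sketch (an idealized illustration assuming unit channel gains and no noise, not the simulation setup used later in the paper) forms the UCA element signals from the activated OAM-modes, adds a jamming component on a mode the legitimate user did not activate, and verifies that the DFT-based projection recovers the symbols on the activated modes while the jamming mode contributes nothing to them:

```python
import numpy as np

N = 8                    # number of UCA elements / available OAM-modes
L_u = [-1, 0, 2]         # activated OAM-modes at this hop (index information)
s_u = np.array([1 + 0j, -1 + 0j, 1 + 0j])  # BPSK symbols on the activated modes

n = np.arange(N)
# Emitted element signals: x_n = sum_i s_i * exp(j * 2*pi*n*l_i / N)
x = sum(s * np.exp(1j * 2 * np.pi * n * l / N) for s, l in zip(s_u, L_u))

# Jamming carried on a mode the legitimate user did not activate.
jamming = 0.8 * np.exp(1j * 2 * np.pi * n * 3 / N)
y = x + jamming          # idealized channel: unit gain, no noise

# De-hopping: project the received element signals onto each activated mode.
for s, l in zip(s_u, L_u):
    y_l = np.mean(y * np.exp(-1j * 2 * np.pi * n * l / N))
    print(l, np.round(y_l, 6))  # recovers the transmitted symbol on mode l

# Projecting onto the jammed mode (+3) would instead return the jamming term,
# which is why index selector C only forms projections for activated modes.
```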
For example, we have L_u={-1, 0, +2} at the transmitter. However, the orders of OAM-modes of decomposed signals are corresponding to {-1,0,+1,+2} at the receiver, which implies that the decomposed signal corresponding to OAM-mode +1 is the jamming signal. Therefore, the de-hopping signal y_i,u in Eq. (<ref>) is obtained as follows: y_i,u=h_l_i,us_i,u+κ_i,u w_i,u+n_i,u, where h_l_i,u, w_i,u, and n_i,u represent the channel gain from the legitimate user's OAM-transmitter to the OAM-receiver, the interference from attackers, and the received noise with zero mean and variance σ^2, respectively, corresponding to the OAM-mode l_i,u. We assume that the interference follows Gaussian distribution with zero mean and variance σ_J^2. κ_i,u is used to identify the existence of interference. When κ_i,u=0, it means that there is no interference in OAM-mode l_i,u. On the other hand, when κ_i,u=1, the legitimate signals are jammed by jamming signals. In sparse multipath environments, the effect of a dominant signal arriving with several weaker NLoS paths gives rise to Rician distribution. For the given transceiver, the channel of the LoS path, denoted by h_l_i,u,LoS, corresponding to OAM-mode l_i,u can be derived as follows <cit.>: h_l_i,u,LoS=βλ N j^-l_i,ue^-j2π/λ√(d^2+r_1^2+r_2^2)/4π√(d^2+r_1^2+r_2^2) J_l_i,u(2π r_1 r_2/λ√(d^2+r_1^2+r_2^2)), where β represents attenuation, λ denotes the carrier wavelength, d is the distance from the legitimate user's OAM-transmitter center to the OAM-receiver center for the LoS path, r_1 denotes the radius of legitimate user's OAM-transmitter, r_2 represents the radius of legitimate user's OAM-receiver, and J_l_i,u(·) is the l_i,u-th order of first kind Bessel function. Also, we denote by h_l_i,u,NLoS the channel of NLoS paths, which follows Rayleigh distribution with zero mean and variance σ_i,u^2. Therefore, we have the expression of h_l_i,u as follows <cit.>: h_l_i,u=√(ξ/1+ξ)h_l_i,u,LoS+√(1/1+ξ)h_l_i,u,NLoS, where ξ called Ricain factor is the ratio between the power in the LoS path and the power in NLoS paths. §.§ ABER Analysis Under Perfect CSI   In this subsection, we analyze the ABER under perfect CSI at the receiver. We assume that I_u^' (0 ≤ I_u^'≤ I) out of I OAM-modes are jammed at the u-th hop. We denote by y_u=[y_1,u⋯ y_i,u⋯ y_I,u]^T the received signal vector and H_u the diagonal channel matrix with entries h_l_i,u, respectively, for the u-th hop. Based on Eq. (<ref>), the conditional probability density function (PDF), denoted by f(y_u|H_u, I_u^'), of y_u with I_u^' OAM-modes jammed under perfect CSI is given as follows: f(y_u|H_u, I_u^') =exp(-∑_i=1^I_u^'|y_i,u-h_i,us_i,u|^2/σ_J^2+σ^2 -∑_i=1+I_u^'^I|y_i,u-h_i,us_i,u|^2/σ^2)/π^I(σ_J^2+σ^2)^I_u^'σ^2(I-I_u^'). Assuming that U^' (0 ≤ U^'≤ U) out of U hops are jammed, we have the conditional PDF, denoted by f(y_1⋯y_U |H_1⋯H_U, I_1^'⋯ I_U^'), corresponding to U hops as follows: f(y_1⋯y_U |H_1⋯H_U, I_1^'⋯ I_U^')=∏_u=1^U f(y_u|H_u, I_u^'). The transmitter and the receiver per-share the MH pattern under the control of secret keys. Based on the pre-shared MH pattern and synchronization, the receiver knows which OAM-modes are activated by the transmitter. We use the maximum likelihood (ML) decoder at the receiver to estimate the bits related to transmit symbols. Searching all signal constellations, we have ŝ = max_s∈𝒮 ln f(y_1⋯y_U |H_1⋯H_U, I_1^'⋯ I_U^') = min_s∈𝒮{∑_u=1^U(∑_i=1^I_u^'|y_i,u-h_l_i,us_i,u|^2/σ_J^2+σ^2.. ..+∑_i=1+I_u^'^I|y_i,u-h_l_i,us_i,u|^2/σ^2)}, where s=[s_1⋯s_u⋯s_U] and ŝ is the estimation of s. 
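Because the metric in the ML rule above is a sum of independent per-entry terms, the decision decouples across activated modes and hops whenever the symbols s_i,u are chosen freely per entry; when the same symbol is repeated over U hops for diversity, the corresponding per-hop terms are simply summed before the argmin. A small illustrative sketch of the resulting per-entry detector, assuming the receiver knows the channel gain and the noise-plus-jamming variance of each de-hopped observation, could look as follows:

```python
import numpy as np

def ml_detect(y, h, noise_var, constellation):
    """Per-entry ML detection for de-hopped observations y = h * s + noise.

    y, h, noise_var: arrays of shape (I, U) for I activated modes and U hops,
                     where noise_var includes the jamming power on jammed modes.
    constellation:   candidate symbols, e.g. BPSK {+1, -1}.
    """
    y, h, noise_var = map(np.asarray, (y, h, noise_var))
    # Weighted metric |y - h*s|^2 / var for every candidate symbol s.
    metrics = np.stack([np.abs(y - h * s) ** 2 / noise_var for s in constellation])
    return np.asarray(constellation)[np.argmin(metrics, axis=0)]

# Toy example: 2 activated modes, 1 hop, BPSK, second mode jammed.
y = np.array([[0.9 + 0.1j], [-0.4 - 0.3j]])
h = np.array([[1.0 + 0j], [0.8 + 0j]])
noise_var = np.array([[0.1], [0.1 + 1.0]])  # sigma^2 and sigma^2 + sigma_J^2
print(ml_detect(y, h, noise_var, [1 + 0j, -1 + 0j]))  # [[ 1.+0.j], [-1.+0.j]]
```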
Based on Eq. (<ref>), we can derive the expression of conditional pairwise error probability (PEP), denoted by P_I,U(s→ŝ|H_u,I_u^'), of ŝ detected actually instead of s as shown in Eq. (<ref>), where D is the difference between the two sides of unequal sign and ŝ_i,u is the estimation of s_i,u. For simplicity, we denote by Δ_i,u=s_i,u-ŝ_i,u. Since the received interference and noise follow complex Gaussian distribution, the mean and variance of D, denoted by D_μ and D_var, are derived as D_μ=-∑_u=1^U(∑_i=1^I_u^' |h_l_i,uΔ_i,u|^2/σ^2+σ_J^2 +∑_i=1+I_u^'^I|h_l_i,uΔ_i,u|^2/σ^2) and D_var=2∑_u=1^U(∑_i=1^I_u^' |h_l_i,uΔ_i,u|^2/σ^2+σ_J^2 +∑_i=1+I_u^'^I|h_l_i,uΔ_i,u|^2/σ^2), respectively. Thus, Eq. (<ref>) can be re-expressed as Eq. (<ref>), where Q(x)≈1/12e^-x^2/2+1/4e^-2x^2/3 is the Gaussian Q-function <cit.>. Observing Eq. (<ref>), the conditional PEP depends on the Euclidean distance between ŝ_i,u and s_i,u for given H_u and I_u^'. Aiming at calculating the ABER to evaluate the anti-jamming performance of our proposed IM-MH scheme, we need to statistically average the conditional PEP over the number of OAM-modes jammed by interference, the number of hops jammed, and the channel gains. In the following, we obtain the exact closed-form expression of conditional PEP for arbitrary U^' and I_u^'. Proposition 1: The exact conditional PEP, denoted by P_I,U(s→ŝ|I_u^'), with I_u^' OAM-modes jammed by hostile jamming under perfect CSI is derived as follows: P_I,U(s→ŝ|I_u^') = exp(∑_i=1^I∑_u=1^Uξρ_i,uh^2_l_i,u,LoS/4(1+ξ)-ρ_i,uσ_i,u^2h^2_l_i,u,LoS)/12∏_i=1^I∏_u=1^U[1-ρ_i,uσ_i,u^2h^2_l_i,u,LoS/4(1+ξ)] + exp(∑_i=1^I∑_u=1^Uξρ_i,uh^2_l_i,u,LoS/3(1+ξ)-ρ_i,uσ_i,u^2h^2_l_i,u,LoS)/4∏_i=1^I∏_u=1^U[1-ρ_i,uσ_i,u^2h^2_l_i,u,LoS/3(1+ξ)], where ρ_i,u is given by ρ_i,u={ -Δ_i,u^2/σ^2+σ_J^2 if i∈ [1,I_u^']; -Δ_i,u^2/σ^2 if i∈ [1+I_u^',I]. . See Appendix <ref>. To average the conditional PEP over I_u^' and U^', the jamming probability is required. We assume each OAM-mode has an equal probability for being active by using the equiprobable activation mapping method <cit.>. Thus, each OAM-mode is jammed with equal probability. We denote by P(I_u^'|I) the jamming probability that I_u^' out of I OAM-modes are jammed at the u-th hop. Thus, we have P(I_u^'|I)=II_u^'N-II-I_u^'/NI. When N-I < I-I_u^', we have P(I_u^'|I)=0. Also, the probability, denoted by P_0, without any interference at the u-th hop is N-II/NI. Thus, we can calculate the jamming probability, denoted by P(U^'|U), with U^' out of U hops jammed as follows: P(U^'|U) = ∑_I_1^'=1^I∑_I_2^'=1^I⋯∑_I_U^'^'=1^I _U^'-foldUU^'P_0^U-U^' ×∏_I_i^'=1^III_i^'P(I_1^'|I)P(I_2^'|I)⋯ P(I_U^'^'|I)_U^'- fold. The upper bound on ABER can be calculated by the following steps. First, we weight conditional PEP P(U^'|U) by the number of transmit bit errors related to a given transmission sequence. Next, we sum the conditional PEP over all error events corresponding to the given transmit sequence. Then, we statistically average the sum over all possible transmit sequences. Finally, we average the value over the number of transmission bits per symbol <cit.>. Therefore, based on Eqs. (<ref>) and (<ref>), the ABER, denoted by P_I,U, is calculated using the union-bound method as follows: P_I,U≤∑_s^2^η_s∑_ŝ^2^η_s∑_U^'=0^UP(U^'|U)P_I,U(s→ŝ|I_u^')N_e(s, ŝ)/η_0 2^η_s, where N_e(s, ŝ) represents the number of transmission bit errors for the event (s→ŝ). 
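The jamming probability above is hypergeometric: an attacker that also picks I of the N modes uniformly at random overlaps the legitimate set in exactly I_u^' modes with probability C(I, I_u^') C(N-I, I-I_u^') / C(N, I). The short sketch below (illustrative) evaluates it for the setting N=8, I=2 and checks it against a Monte Carlo draw:

```python
import random
from math import comb

def p_jam(i_jammed: int, I: int, N: int) -> float:
    """P(I'_u | I): probability that exactly i_jammed of the I activated
    OAM-modes collide with an attacker that also picks I of N modes at random."""
    if i_jammed > I or I - i_jammed > N - I:
        return 0.0
    return comb(I, i_jammed) * comb(N - I, I - i_jammed) / comb(N, I)

N, I = 8, 2
probs = [p_jam(j, I, N) for j in range(I + 1)]
print(probs, sum(probs))  # hypergeometric pmf, sums to 1
print("P_0 =", comb(N - I, I) / comb(N, I), "=", p_jam(0, I, N))

# Monte Carlo check: the attacker picks I modes uniformly at random each hop.
modes = list(range(N))
legit = set(random.sample(modes, I))
hits = [len(legit & set(random.sample(modes, I))) for _ in range(200_000)]
print([hits.count(j) / len(hits) for j in range(I + 1)])  # close to probs
```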
§.§ ABER Analysis Under Imperfect CSI   In practical scenarios, it is difficult for channel estimators at the receiver to estimate channels perfectly, thus resulting in channel estimation errors. The estimated channel gain, denoted by h̃_l_i,u, corresponding to h_l_i,u is given as follows <cit.>: h̃_l_i,u=h_l_i,u-ϵ_i,u, where ϵ_i,u is the channel estimation error with zero mean and variance σ_ϵ^2. It is noticed that h_l_i,u and ϵ_i,u are independent with each other and the variance of ϵ_i,u refers to the accuracy of channel estimation. Replacing h_l_i,u in Eq. (<ref>) by h̃_l_i,u, we can rewrite the received signal as follows: y_i,u=h̃_l_i,us_i,u_useful signal+ϵ_i,us_i,u_estimation error+κ_i,u w_i,u+n_i,u^overall interference, where the variance of estimation error is equal to σ_ϵ^2|s_i,u|^2. Then, under imperfect CSI, the estimation of s by the ML decoder is expressed as follows: ŝ =min_s∈𝒮 {∑_u=1^U(∑_i=1^I_u^'|y_i,u-h̃_l_i,us_i,u|^2/σ_J^2+σ^2+σ_ϵ^2|s_i,u|^2.. ..+∑_i=1+I_u^'^I|y_i,u-h̃_l_i,us_i,u|^2/σ^2+σ_ϵ^2|s_i,u|^2)}. In the similar way as deriving Eq. (<ref>) and based on Eqs. (<ref>) and (<ref>), the conditional PEP, denoted by P_I(s→ŝ|H_u,I_u^'), under imperfect CSI is derived as shown in Eq. (<ref>), where H_u is the diagonal channel matrix with the entries h̃_l_i,u and D is the difference between the two sides of unequal sign. Thus, the decision variable D has the mean and variance, denoted by D_μ and D_var, as D_μ=-∑_u=1^U(∑_i=1^I_u^' |h̃_l_i,uΔ_i,u|^2/σ^2+σ_J^2+σ_ϵ^2|s_i,u|^2+∑_i=1+I_u^'^I|h̃_l_i,uΔ_i,u|^2/σ^2+σ_ϵ^2|s_i,u|^2) and D_var=2∑_u=1^U(∑_i=1^I_u^' |h̃_l_i,uΔ_i,u|^2/σ^2+σ_J^2+σ_ϵ^2|s_i,u|^2 +∑_i=1+I_u^'^I|h̃_l_i,uΔ_i,u|^2/σ^2+σ_ϵ^2|s_i,u|^2), respectively <cit.>. Then, we can obtain the expression of P_I(s→ŝ|H_u,I_u^') as shown in Eq. (<ref>). It is clear that Eq. (<ref>) gets back to Eq. (<ref>) if σ_ϵ^2=0, namely perfect CSI. Under imperfect CSI, the mean of h̃_l_i,u is √(ξ/1+ξ)h_l_i,u,LoS while the variance is (σ_i,u^2/ξ+1-σ_ϵ^2). Then, following the similar steps in Appendix <ref>, the conditional PEP under imperfect CSI for arbitrary U^' and I_u^' is obtained as shown in Proposition 2. Proposition 2: The exact PEP, denoted by P_I,U^im(s→ŝ|I_u^'), with I_u^' OAM-modes jammed at each hop under imperfect CSI is given by Eq. (<ref>), where ρ̃_i,u={ -Δ_i,u^2/σ^2+σ_J^2+σ_ϵ^2|s_i,u|^2 if i∈ [1,I_u^']; -Δ_i,u^2/σ^2+σ_ϵ^2|s_i,u|^2 if i∈ [1+I_u^',I]. . Replacing P_I,U(s→ŝ|I_u^') in Eq. (<ref>) by P_I,U^im(s→ŝ|I_u^'), we can obtain the ABER of our proposed IM-MH scheme under imperfect CSI. The expressions of conditional PEP and jamming probability are the same for our proposed IM-MH scheme and the previously proposed MH scheme when I=1. However, our proposed IM-MH scheme has lower ABER as compared with the MH scheme because N_e(s, ŝ)/η_0 is used in the IM-MH scheme instead of N_e(s, ŝ)/η_s in the MH scheme. Therefore, our proposed IM-MH scheme achieves more robust anti-jamming results than the MH scheme. To significantly increase the SE in both signal information and index information for our proposed IM-MH scheme, activating multiple OAM-modes is required. However, such a requirement leads to the high probability of signals being jammed at each hop when the jammer sends jamming signals with OAM beams, thus causing inefficient anti-jamming results. In order to deal with the above-mentioned problem, a novel IM-DSMH scheme is proposed to effectively protect transmit signals from hostile jamming. § THE IM-DSMH SCHEME As shown in Fig. 
<ref>, index selector B at the transmitter and additional DFT operator at the receiver are added into the OAM-based index modulation wireless communications system when DSMH works. Thereby, at the transmitter two index selectors work for the IM-DSMH scheme. The index information determines the activated OAM-modes in both index selectors A and B. Based on the index information bits, one OAM-mode is activated as the second hop to transmit the hopping signals in the proposed IM-MH scheme. To illustrate the hopping of the IM-DSMH scheme, an example of the IM-DSMH pattern is depicted in Fig. <ref>, where the time resource is divided into multiple time slots. The activated OAM-mode in selector B is denoted by l_s,u (|l_s,u| ≤ N/2). We integrate OAM-mode l_i,u, l_s,u and time slot into a three-dimension resource cube. Each hop corresponds to a cube and is identified by the specified color. Therefore, the output signal, denoted by s_l_s,u, after index selector B for the u-th hop is given as follows: s_l_s,u=∑_i=1^Is_i,ue^j2 π/N l_i,u l_s,u. Next, the emitted modulated signal, denoted by x_n,l_s,u, for the n-th transmit element is derived as follows: x_n,l_s,u = s_l_s,u e^j2 π/N n l_s,u = ∑_i=1^Is_i,u e^j2π n/Nl_s,u (l_i,u+n). It is clear that the IM-DSIM scheme still keeps the orthogonality of OAM beams. Then, the received signal, denoted by ỹ_m,u, for the m-th element at the u-th hop is given as follows: ỹ_m,u=∑_n=0^N-1h_mnx_n,l_s,u+ J_m + n_m. We multiply the received signals with exp{-j2π m l_i,u/N} for the first de-hopping and then average the sum of signals for all receive elements. Thus, we can obtain the de-hopping signal, denoted by y_l_i,u, corresponding to OAM-mode l_i,u as follows: y_l_i,u = 1/N∑_m=0^N-1ỹ_m,ue^-j2π m l_i,u/N = h_l_i,u s_l_i,ue^j2 π/N l_i,u l_s,u+1/N∑_m=0^N-1(J_m + n_m)e^-j2π m l_i,u/N. To further de-hop the obtained signal, the second DFT algorithm is performed. Thus, we have the de-hopping signal, denoted by y_s,u,i, for the activated OAM-modes l_i,u and l_s,u as follows: y_s,u,i=1/N∑_i=1^I y_l_i,ue^-j2π/N l_s,u l_i,u. Since attackers do not know the legitimate user's hopping pattern in IM-DSMH communications, jamming signals with different OAM-modes can be eliminated completely after multiplying the received signal with exp{-j2π m/N l_i,u}. The residual jamming signal with OAM-mode l_i,u has been decomposed from the vorticose beams to plane beams. Then, after multiplying the received signal with exp{-j2π/N l_s,u l_i,u} and performing the DFT algorithm, the jamming signals can be further eliminated. It is noticed that the legitimate user's signals cannot be jammed when the activated OAM-modes in index selector B do not contain OAM-mode zero. Only when jamming signals carry the same OAM-modes l_s,u and the user activates the OAM-mode zero in index selector B, the jamming signals can interfere with the desired signals. Therefore, we propose that all OAM-modes except OAM-mode zero can be activated by index selector B with equal probability to mitigate the interference in wireless communications, thus making it impossible for the legitimate user's signals to be jammed. Hence, we have the signal y_s,u,i, corresponding to the OAM-modes l_i,u in index selector A and l_s,u in index selector B as follows: y_s,u,i=h_l_i,us_i,u+n_l_s,u,i, where n_l_s,i is the corresponding received noise. Index selector A activates I out of N-1 OAM-modes as the first hop while index selector B activates one out of N OAM-modes as the second serial hop. 
The activated OAM-modes by index selectors A and B are combined into two-dimensional OAM-OAM blocks. Each OAM-OAM combination corresponds to the specific index information. Therefore, the number of available OAM-OAM combinations is K_2=2^⌊log_2[N-1IN]⌋. Thus, the corresponding transmission bits, denoted by η_1, of the IM-DSMH scheme is expressed as follows: η_1 = Ilog_2M+Ulog_2K_2. Comparing the transmission bits of our proposed IM-MH and IM-DSMH schemes, we can obtain the increase of transmission bits, denoted by Δ_η, for the proposed IM-DSMH scheme as follows: Δ_η = η_1-η_0 (a)≈ Ulog_2(N-I), where (a) ignores the floor operation. It is clear that Δ_η increases monotonically with the increase of N and U, respectively. The derivation of ABER for our proposed IM-DSMH scheme is similar to that for the IM-MH scheme. By considering all possible cases, the ABER, denoted by P̃_I,U, with I activated OAM-modes and U hops for our proposed IM-DSMH scheme is derived with the union-bound method <cit.> as follows: P̃_I,U≤∑_s^2^η_s∑_ŝ^2^η_sP̃_I,U(s→ŝ)N_e(s, ŝ)/η_12^η_s, where P̃_I,U(s→ŝ) is expressed as P̃_I,U (s→ŝ) = exp(-∑_u=1^U∑_i=1^IξΔ_i,u^2h^2_l_i,u,LoS/4(1+ξ)σ^2+σ_i,u^2Δ_i,u^2h^2_l_i,u,LoS)/12∏_u=1^U∏_i=1^I[1+σ_i,u^2Δ_i,u^2h^2_l_i,u,LoS/4(1+ξ)σ^2] + exp(-∑_u=1^U∑_i=1^IξΔ_i,u^2h^2_l_i,u,LoS/3(1+ξ)σ^2+σ_i,u^2Δ_i,u^2h^2_l_i,u,LoS)/4∏_u=1^U∑_i=1^I[1+σ_i,u^2Δ_i,u^2h^2_l_i,u,LoS/3(1+ξ)σ^2] under perfect CSI and P̃_I,U(s→ŝ) = exp[-∑_u=1^U∑_i=1^IξΔ_i,u^2h^2_l_i,u,LoS/4(1+ξ)(σ^2+σ_ϵ^2|s_i,u|^2)/1+Δ^2_i,uh^2_l_i,u,LoS/4(σ^2+σ_ϵ^2|s_i,u|^2)(σ_i,u^2/1+ξ-σ_ϵ^2)]/12∏_u=1^U∏_i=1^I[1+Δ^2_i,uh^2_l_i,u,LoS/4(σ^2+σ_ϵ^2|s_i,u|^2)(σ_i,u^2/1+ξ-σ_ϵ^2)] + exp[-∑_u=1^U∑_i=1^IIξΔ_i,u^2h^2_l_i,u,LoS/3(1+ξ)(σ^2+σ_ϵ^2|s_i,u|^2)/1+Δ^2_i,uh^2_l_i,u,LoS/3(σ^2+σ_ϵ^2|s_i,u|^2)(σ_i,u^2/1+ξ-σ_ϵ^2)]/4∏_u=1^U∏_i=1^I[1+Δ^2_i,uh^2_l_i,u,LoS/3(σ^2+σ_ϵ^2|s_i,u|^2)(σ_i,u^2/1+ξ-σ_ϵ^2)] under imperfect CSI, respectively. It is noticed that our proposed schemes can use various modulation schemes, such as phase-shift keying (PSK), quadrature amplitude modulation (QAM), and frequency-shift keying (FSK). The proposed IM-MH scheme not only simultaneously activates several OAM-modes for transmission at each hop according to the input index information, but also utilizes the diversity of fast hopping to decrease the probability of signals being jammed. The proposed IM-DSMH scheme with slight computational complexity cost uses the double-serial hop to increase the index information and effectively avoid legitimate signals being jammed. Thus, our proposed IM-MH and IM-DSMH schemes within a narrow frequency band can be used for achieving high SE and low ABER for various interfering waveforms such as single-tone interference, wideband interference, and partial-band interference. Therefore, our proposed IM-MH scheme can be applied into the scenarios, such as indoor wireless communication, radar, wireless local area networks. § PERFORMANCE EVALUATION   In this section, we show several numerical and simulation results to evaluate the performance of our proposed IM-MH and IM-DSMH schemes. The numerical results are organized as follows. Section <ref> shows the ABERs of our proposed IM-MH scheme. Section <ref> depicts the ABERs of our proposed IM-DSMH scheme and compares the ABERs among IM-MH, IM-DSMH, FH, and MH schemes. Section <ref> compares the SEs among IM-MH, IM-DSMH, FH, and MH schemes. Throughout our evaluations, the jamming-to-noise ratio is set to 2 dB and binary PSK (BPSK) modulation is employed. 
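To make the transmission-bit bookkeeping of the two schemes concrete for such a BPSK setting, the following sketch (illustrative) evaluates K_1, K_2, η_0, η_1, and the resulting gain Δ_η, which can be compared against the approximation Δ_η ≈ U log_2(N-I) given above:

```python
from math import comb, floor, log2

def bits_im_mh(N: int, I: int, M: int, U: int) -> float:
    """eta_0 = I*log2(M) + U*log2(K_1) with K_1 = 2^floor(log2 C(N, I))."""
    K_1 = 2 ** floor(log2(comb(N, I)))
    return I * log2(M) + U * log2(K_1)

def bits_im_dsmh(N: int, I: int, M: int, U: int) -> float:
    """eta_1 = I*log2(M) + U*log2(K_2) with K_2 = 2^floor(log2(C(N-1, I) * N))."""
    K_2 = 2 ** floor(log2(comb(N - 1, I) * N))
    return I * log2(M) + U * log2(K_2)

N, I, M, U = 8, 2, 2, 1                 # 8 OAM-modes, 2 activated, BPSK, one hop
eta_0 = bits_im_mh(N, I, M, U)          # 2 + log2(16)  = 6 bits
eta_1 = bits_im_dsmh(N, I, M, U)        # 2 + log2(128) = 9 bits
print(eta_0, eta_1, eta_1 - eta_0)      # exact gain 3, vs. U*log2(N-I) ~ 2.58
```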
§.§ ABER of IM-MH Scheme To evaluate the impact of N and I on the ABERs of the IM-MH and FH schemes under perfect CSI, we present the ABER versus signal-to-noise ratio (SNR) in Fig. <ref>, where we set ξ=10 and U=1. The ABER corresponding to N=8 and I=2 is the highest among the four cases because the jamming probability is the highest among the four cases as proved in Eq. (<ref>). The other reason is that the case for I=2 has a denser joint constellation size than that for I=1, which makes it more possible for the ML decoder to make error decisions. Hence, the ABER of our proposed IM-MH scheme increases as the number of available OAM-modes decreases and the number of activated OAM-modes increases. Also, we can find that for a fixed I, the difference between the cases N=16 and N=8 is larger in a relatively high SNR region than that in the low SNR region. This is because interference plays a dominant role in the low SNR region while legitimate signals play a dominant role in the high SNR region. In addition, comparing the ABERs of our proposed IM-MH and the conventional FH schemes, we can find that our proposed IM-MH scheme achieves lower ABER than the FH scheme. This is because our proposed IM-MH scheme can simultaneously transmit index bits and signal bits. The index bits related to activated OAM-modes can be easily obtained at the receiver due to the pre-shared secret keys between the transmitter and receiver. Therefore, when given the transmission bits, the IM-MH scheme has a lower ABER than the FH scheme. Figure <ref> compares the ABERs of our proposed IM-MFH and the conventional FH schemes versus different values of U, where we set N=8, I=2, and ξ=10, respectively. As expected, the ABER monotonically decreases as the number of hops increases, thus resulting in decreasing the jamming probability as verified in Eq. (<ref>). This implies that it is difficult for attackers to track trajectories of activated OAM-modes. Also, modulated signals are transmitted within U hops duration, thus bringing a diversity gain of wireless communications to improve the anti-jamming performance. This result verifies that by increasing the number of hops, our proposed IM-MH scheme can achieve robust anti-jamming in wireless communications. Observing Fig. <ref>, we can find that the ABER of our proposed IM-MH scheme is lower than that of the conventional FH scheme because our proposed IM-MH scheme increases the index bits for the fixed U and I. Figure <ref> shows the ABERs of our proposed IM-MH scheme versus channel SNR for different values of ξ under perfect CSI, where we set N=4, I=4, and U=1, respectively. Observing Fig. <ref>, we can obtain that the ABER decreases as the value of ξ increases. This is because the LoS component plays a dominant role in the high ξ region whereas NLoS components dominate in the low ξ region. When ξ is high, the impact of multipath fading on ABER is mitigated. Figure <ref> depicts the impact of channel estimation error on the ABERs of our proposed IM-MH scheme versus the received SNR, where we set N=8, I=2, and ξ=10, respectively. As shown in Eq. (<ref>), σ_ϵ^2 refers to the accuracy of channel estimation. σ_ϵ^2=0 means that the CSI is estimated perfectly. As σ_ϵ^2 increases, the received SINR decreases, thus resulting in high ABER. Therefore, as shown in Fig. <ref>, the ABER curve for σ_ϵ^2=0 can be considered as the ABER threshold for imperfect CSI scenarios. As the channel SNR increases, the ABER first slowly decreases and then gets close to a fixed value as shown in Fig. 
<ref>. However, there is no fixed value of ABER under perfect CSI in the whole SNR region. §.§ ABER of IM-DSMH Scheme and ABER Comparisons Figures <ref> and <ref> show the ABERs of our proposed IM-DSMH scheme versus the received SNR for different values of U and ξ under both perfect and imperfect CSI scenarios, where we set I=2 and N=8, respectively. Observing the ABER curves, we can find that the ABERs under perfect CSI are lower than those under imperfect CSI scenarios. The ABER difference between perfect CSI and imperfect CSI is larger in a relatively high SNR region as compared with that in a low SNR region. Fig. <ref>, where we set U=1, shows that the ABERs of our proposed IM-DSMH scheme decreases as ξ increases. Also, observing Fig. <ref>, we can see that the ABERs of our proposed IM-DSMH scheme decrease as U increases. This verifies that our proposed IM-DSMH scheme with multiple hops can achieve robust anti-jamming of wireless communications. Figure <ref> compares the ABERs of our proposed IM-MH, IM-DSMH, MH, and the conventional FH schemes versus the received SNR for different values of U and I under perfect CSI, where we set N=8, I=3, and ξ=10, respectively. Observing the curves, we can see that our proposed IM-MH and IM-DSMH schemes have lower ABERs than the MH scheme for given U and I. Although our proposed IM-MH scheme and the MH scheme have the same jamming probabilities for given U and I, our proposed IM-MH scheme has a lower ratio between the number of bit errors and all transmission bits due to the increase of η_x in comparison with the MH scheme. The MH and FH schemes have the same ABER due to the same jamming probabilities and transmission bits. Also, the IM-DSMH scheme not only has the lowest ratio between the number of bit errors and all transmit bits, but also successfully avoids jamming attacks. Therefore, our proposed IM-DSMH scheme achieves the lowest ABER among the four schemes as shown in Fig. <ref>. In addition, the ABERs of our proposed IM-MH and IM-DSMH schemes are around 25% and 10%, respectively, compared with that of the MH scheme. §.§ Spectrum Efficiency Comparison Similar to the derivation analysis about SE in <cit.>, the upper bound of SE, denoted by C_1, for our proposed IM-MH scheme can be obtained as follows: C_1=∑_U^'=0^UP(U^'|U)∑_i=1^Ilog_2(1+∑_u=1^Uγ_i,u)+log_2K_1, where γ_i,u is the signal-to-interference-plus-noise ratio (SINR) for the i-th activated OAM-mode at the u-th hop in the IM-MH communication and a maximal-ratio combiner is used at the receiver. The first term on the right hand of Eq. (<ref>) is the signal information and the second term corresponds to the index information. The upper bound of SE, denoted by C_2, for our proposed IM-DSMH scheme is derived as follows: C_2=∑_i=1^Ilog_2(1+∑_u=1^Uχ_i,u)+log_2K_2, where χ_i,u is the SNR for the i-th activated OAM-mode at the u-th hop in the IM-DSMH communication. Figure <ref> compares the SEs of our proposed IM-MH, IM-DSMH, FH and MH schemes with different values of I, where N=12, U=1, and ξ=10. To analyze the impact of SNR on SE, we set the SNR as 10 dB and 25 dB, respectively. The SE of our proposed IM-MH and IM-DSMH scheme is upper bounded as the sum of index information and signal information, whereas the SE of MH scheme is obtained through signal information. As shown in Fig. <ref>, the SEs of our proposed IM-MH and IM-DSMH schemes first increase and then decrease as I increases when SINR is low. Thus, in this case there exist optimal solutions I to achieve the highest SE. 
On the one hand, the index information is a concave function with respect to I for fixed N. On the other hand, the signal information increases as I increases. Thereby, the SE first increase. When the signal information increases slower than the index information decreases, the SEs begin to decrease. Hence, when SNR is low, the SE curves of our proposed IM-MH and IM-DSMH schemes are concave with respect to I. When SNR is relatively high, the SEs of our proposed IM-MH and IM-DSMH schemes monotonically increase as I increases. The reason is that the increase of signal information is always larger than the decrease of index information. Observing Fig. <ref>, we also can find that when I=N-1, the SEs of our proposed IM-MH and IM-DSMH scheme are the same, which is due to the same index information. The SE of our proposed IM-DSMH scheme is larger than that of the IM-MH scheme because the IM-DSMH scheme provides additional index information in comparison with the IM-MH scheme. This result can be proved by Eq. (<ref>). In addition, we can see that our proposed IM-MH and IM-DSMH schemes achieve higher SE than the MH scheme in wireless communications. It is clear that the FH scheme has the lowest SE among these schemes. This is because our proposed IM-MH, IM-DSMH, and MH schemes can be used for anti-jamming within the narrow frequency band while the FH scheme requires N narrow frequency bands for hopping. Figure <ref> compares the SEs of our proposed IM-MH, IM-DSMH, MH, and the conventional FH schemes versus I, where N is set as 12 and 16, respectively. As N increases, the SEs of our proposed schemes increase due to the increase of index information as shown in Eqs. (<ref>) and (<ref>). It is clear that the SE curves of the MH scheme are overlapped because the SEs are obtained through signal information without considering the jamming attack. Since the index information is always equal to zero for conventional FH and MH schemes, the SEs of our proposed IM-MH and IM-DSMH schemes are larger than those of conventional FH and MH schemes for a fixed I. § CONCLUSIONS In this paper, we proposed the IM-MH scheme to increase the SE and decrease the ABER for anti-jamming in wireless communications within a narrowband. The exact closed-form expressions of ABER were theoretically analyzed under perfect CSI scenarios and imperfect CSI scenarios, respectively. We also proposed the IM-DSMH scheme, which randomly activated an OAM-mode as the second hop to transmit the hopping signals in the IM-MH scheme, to further decrease the ABER. Numerical results have shown that the ABER of our proposed IM-MH scheme within a narrowband decreases as the number of available OAM-modes increases and the number of activated OAM-modes decreases. Our proposed IM-DSMH scheme can achieve robust anti-jamming by performing the second serial hop. In addition, our proposed schemes outperform the MH scheme in terms of SE and ABER within the narrow frequency band. § PROPOSITION 1 We first analyze the conditional PEP for U=1 and then for an arbitrary U. Thus, Eq. (<ref>) for U=1 can be re-expressed by P_I,U(s→ŝ|H_u,I_u^') =Q(√(∑_i=1^I_u^' |h_l_i,uΔ_i,u|^2/2(σ^2+σ_J^2) +∑_i=1+I_u^'^I|h_l_i,uΔ_i,u|^2/2σ^2)). Based on Eqs. (<ref>) and (<ref>), we can derive P_I,U(s→ŝ|H_u,I_u^') as shown in Eq. (<ref>). 
Since the channels follow the independent Rician fading, the conditional PEP P_I,U(s→ŝ|I_u^') can be derived as follows: P_I,U(s→ŝ|I_u^') = ∫_0^∞∫_0^∞⋯∫_0^∞_I-fold P_I(s→ŝ|H_u,I_u^') × f(h__l_1,u)⋯ f(h_l_I,u)d h_l_1,u⋯ d h_l_I,u, where f(h_l_i,u) is the PDF of h_l_i,u. Then, P_I,U(s→ŝ|I_u^') with I-fold integral can be calculated using the moment generating function (MGF) <cit.>. Thus, Eq. (<ref>) is calculated as follows: P_I,U(s→ŝ|I_u^')=1/12∏_i=1^Iℳ_h_l_i,u^2(ρ_i,u/4)+1/4∏_i=1^Iℳ_h_l_i,u^2(ρ_i,u/3), where ℳ_h_l_i,u^2(ρ_i,u/4) = ∫_0^∞exp(-ρ_i,u h_l_i,u^2/4) f(h_l_i,u) d h_l_i,u is the MGF of random variable h_l_i,u^2 at ρ_i,u/4. The mean and variance of OAM-based wireless channel h_l_i,u are √(ξ/1+ξ)h_l_i,u,LoS and σ_i,u^2/1+ξ, respectively. Thus, the MGF associated with Rician fading channel model at ρ_i,u/4 is further derived as follows <cit.>: ℳ_h_l_i,u^2(ρ_i,u/4)=(1+ξ)exp(ξh_l_i,u,LoS^2ρ_i,u/4/1+ξ-h_l_i,u,LoS^2ρ_i,uσ_i,u^2/4)/1+ξ-ρ_i,uσ_i,u^2/4h_l_i,u,LoS^2 . Based on Eqs. (<ref>) and (<ref>), the conditional PEP P_I,U(s→ŝ|I_u^') for U=1 can be obtained. In the similar way as deriving P_I,U(s→ŝ|I_u^') for U=1, we can obtain the exact closed-form expression of P_I,U(s→ŝ|I_u^') for arbitrary U as shown in Eq. (<ref>). IEEEbib [ < g r a p h i c s > ]Liping Liang (SM'18) received the B.S. degree in electronic and information engineering from Jilin University, Changchun, China, in 2015. She received the Ph.D. degree in telecommunication engineering from Xidian University, Xian, China, in 2021. She is currently a lecturer with Xidian University. Her research interests focus on B5G/6G wireless communications with an emphasis on radio vortex wireless communications, integrated sensing and communication, and anti-jamming communications. [ < g r a p h i c s > ]Wenchi Cheng (M'14-SM'18) received the B.S. and Ph.D. degrees in telecommunication engineering from Xidian University, Xian, China, in 2008 and 2013, respectively, where he is a Full Professor. He was a Visiting Scholar with Networking and Information Systems Laboratory, Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA, from 2010 to 2011. His current research interests include B5G/6G wireless networks, emergency wireless communications, and orbital-angular-momentum based wireless communications. He has published more than 100 international journal and conference papers in IEEE Journal on Selected Areas in Communications, IEEE Magazines, IEEE Transactions, IEEE INFOCOM, GLOBECOM, and ICC, etc. He received URSI Young Scientist Award (2019), the Young Elite Scientist Award of CAST, the Best Dissertation of China Institute of Communications, the Best Paper Award for IEEE ICCC 2018, the Best Paper Award for IEEE WCSP 2019, and the Best Paper Nomination for IEEE GLOBECOM 2014. He has served or serving as the Associate Editor for IEEE Systems Journal, IEEE Communications Letters, IEEE Wireless Communications Letter, the IoT Session Chair for IEEE 5G Roadmap, the Wireless Communications Symposium Co-Chair for IEEE ICC 2022 and IEEE GLOBECOM 2020, the Publicity Chair for IEEE ICC 2019, the Next Generation Networks Symposium Chair for IEEE ICCC 2019, the Workshop Chair for IEEE ICC 2019/IEEE GLOBECOM 2019/INFOCOM 2020 Workshop on Intelligent Wireless Emergency Communications Networks. [ < g r a p h i c s > ]Wei Zhang (S'01-M'06-SM'11-F'15) received the Ph.D. degree from The Chinese University of Hong Kong in 2005. 
Currently, he is a Professor at the School of Electrical Engineering and Telecommunications, the University of New South Wales, Sydney, Australia. His current research interests include UAV communications, 5G and beyond. He received 6 best paper awards from IEEE conferences and ComSoc technical committees. He was elevated to Fellow of the IEEE in 2015 and was an IEEE ComSoc Distinguished Lecturer in 2016-2017. Within the IEEE ComSoc, he has taken many leadership positions including Member-at-Large on the Board of Governors (2018-2020), Chair of Wireless Communications Technical Committee (2019-2020), Vice Director of Asia Pacific Board (2016-2021), Editor-in-Chief of IEEE Wireless Communications Letters (2016-2019), Technical Program Committee Chair of APCC 2017 and ICCC 2019, Award Committee Chair of Asia Pacific Board and Award Committee Chair of Technical Committee on Cognitive Networks. He was recently elected as Vice President of IEEE Communications Society (2022-2023). In addition, he has served as a member in various ComSoc boards/standing committees, including Journals Board, Technical Committee Recertification Committee, Finance Standing Committee, Information Technology Committee, Steering Committee of IEEE Transactions on Green Communications and Networking and Steering Committee of IEEE Networking Letters. Currently, he serves as an Area Editor of the IEEE Transactions on Wireless Communications and the Editor-in-Chief of Journal of Communications and Information Networks. Previously, he served as Editor of IEEE Transactions on Communications, IEEE Transactions on Wireless Communications, IEEE Transactions on Cognitive Communications and Networking, and IEEE Journal on Selected Areas in Communications C Cognitive Radio Series. [ < g r a p h i c s > ]Hailin Zhang (M'97) received B.S. and M.S. degrees from Northwestern Polytechnic University, Xi'an, China, in 1985 and 1988 respectively, and the Ph.D. from Xidian University, Xi'an, China, in 1991. In 1991, he joined School of Telecommunications Engineering, Xidian University, where he is a senior Professor and the Dean of this school. He is also currently the Director of Key Laboratory in Wireless Communications Sponsored by China Ministry of Information Technology, a key member of State Key Laboratory of Integrated Services Networks, one of the state government specially compensated scientists and engineers, a field leader in Telecommunications and Information Systems in Xidian University, an Associate Director of National 111 Project. Dr. Zhang's current research interests include key transmission technologies and standards on broadband wireless communications for B5G/6G wireless access systems. He has published more than 200 papers in journals and conferences.
http://arxiv.org/abs/2407.13746v1
20240718175102
Multi-Label Learning with Stronger Consistency Guarantees
[ "Anqi Mao", "Mehryar Mohri", "Yutao Zhong" ]
cs.LG
[ "cs.LG", "stat.ML" ]
§ ABSTRACT We present a detailed study of surrogate losses and algorithms for multi-label learning, supported by H-consistency bounds. We first show that, for the simplest form of multi-label loss (the popular Hamming loss), the well-known consistent binary relevance surrogate suffers from a sub-optimal dependency on the number of labels in terms of H-consistency bounds, when using smooth losses such as logistic losses. Furthermore, this loss function fails to account for label correlations. To address these drawbacks, we introduce a novel surrogate loss, multi-label logistic loss, that accounts for label correlations and benefits from label-independent H-consistency bounds. We then broaden our analysis to cover a more extensive family of multi-label losses, including all common ones and a new extension defined based on linear-fractional functions with respect to the confusion matrix. We also extend our multi-label logistic losses to more comprehensive multi-label comp-sum losses, adapting comp-sum losses from standard classification to multi-label learning. We prove that this family of surrogate losses benefits from H-consistency bounds, and thus Bayes-consistency, across any general multi-label loss. Our work thus proposes a unified surrogate loss framework benefiting from strong consistency guarantees for any multi-label loss, significantly expanding upon previous work, which only established Bayes-consistency for specific loss functions. Additionally, we adapt constrained losses from standard classification to multi-label constrained losses in a similar way, which also benefit from H-consistency bounds and thus Bayes-consistency for any multi-label loss. We further describe efficient gradient computation algorithms for minimizing the multi-label logistic loss. § INTRODUCTION Supervised learning methods often assign a single label to each instance. However, real-world data exhibits a more complex structure, with objects belonging to multiple categories simultaneously. Consider a video about sports training, which could be categorized as both `health' and `athletics', or a culinary blog post tagged with `cooking' and `nutrition'. As a result, multi-label learning <cit.> has become increasingly important, leading to the development of various interesting and effective approaches, predominantly experimental in nature, in recent years <cit.>. Although there is a rich literature on multi-label learning (see <cit.> and <cit.> for detailed surveys), only a few studies focus on the theoretical analysis of multi-label learning, particularly the study of the Bayes-consistency of surrogate losses <cit.>. <cit.> initiated the study of Bayes-consistency in multi-label learning with respect to Hamming loss and (partial) ranking loss. They provided negative results for ranking loss, demonstrating that no convex and differentiable pairwise surrogate loss is Bayes-consistent for that multi-label loss.
They also showed that the binary relevance method, which learns an independent binary classifier for each of the labels, is Bayes-consistent with respect to the Hamming loss. <cit.> further demonstrated that under the assumption of conditionally independent labels, the binary relevance method is also Bayes-consistent with respect to the F_β measure loss. However, they noted that it can perform arbitrarily poorly when this assumption does not hold. <cit.> provided a positive result for the (partial) ranking loss by showing that the simpler univariate variants of smooth surrogate losses are Bayes-consistent with respect to it. Additionally, <cit.> proposed a family of Bayes-consistent surrogate losses for the F_β measure by reducing the F_β learning problem to a set of binary class probability estimation problems. This approach was motivated by the consistent output coding scheme in <cit.> for general multiclass problems. Other works have studied generalization bounds in multi-label learning <cit.>. Another related topic is the characterization of the Bayes classifier and corresponding Bayes-consistent plug-in algorithm in multi-label learning. This includes the characterization of the Bayes classifier for subset 0/1 loss and Hamming loss in <cit.> and the characterization of the Bayes classifier for F_1 measure in <cit.>. <cit.> further extended the results in <cit.> by designing a Bayes-consistent plug-in algorithm for the F_β measure. <cit.> characterized the Bayes classifier for general linear fractional losses with respect to the confusion matrix and designed the corresponding plug-in algorithms in the empirical utility maximization (EUM) framework. In this framework, the measures are directly defined as functions of the population, in contrast to a loss function that is defined as a function over a single instance in the decision theoretic analysis (DTA) framework <cit.>. <cit.> studied the Bayes-consistency of various reduction methods with respect to Precision@κ and Recall@κ in multi-label learning. However, all these publications only established Bayes-consistency for specific loss functions. Can we derive a unified surrogate loss framework that is Bayes-consistent for any multi-label loss? Furthermore, as <cit.> pointed out, Bayes-consistency is an asymptotic guarantee and does not provide convergence guarantees. It also applies only to the family of all measurable functions unlike the restricted hypothesis sets typically used in practice. Instead, they proposed a stronger guarantee known as -consistency bounds, which are both non-asymptotic and account for the hypothesis set while implying Bayes-consistency. These guarantees provide upper bounds on the target estimation error in terms of the surrogate estimation error. Can we leverage this state-of-the-art consistency guarantee when designing surrogate loss functions for multi-label learning? Moreover, one of the main concerns in multi-label learning is label correlations (see <cit.>). For the simplest form of multi-label loss, the popular Hamming loss, the existing Bayes-consistent binary relevance surrogate fails to account for label correlations. Can we design consistent loss functions that effectively account for label correlations as well? Our Contributions. This paper directly addresses these key questions in multi-label learning. We present a detailed study of surrogate losses and algorithms for multi-label learning, supported by -consistency bounds. 
In Section <ref>, we first show that for the simplest form of multi-label loss, the popular Hamming loss, the well-known consistent binary relevance surrogate, when using smooth losses such as logistic losses, suffers from a sub-optimal dependency on the number of labels in terms of -consistency bounds. Furthermore, this loss function fails to account for label correlations. To address these drawbacks, we introduce a novel surrogate loss, multi-label logistic loss, that accounts for label correlations and benefits from label-independent -consistency bounds (Section <ref>). We then broaden our analysis to cover a more extensive family of multi-label losses, including all common ones and a new extension defined based on linear-fractional functions with respect to the confusion matrix (Section <ref>). In Section <ref>, we also extend our multi-label logistic losses to more comprehensive multi-label comp-sum losses, adapting comp-sum losses from standard classification to the multi-label learning. We prove that this family of surrogate losses benefits from -consistency bounds, and thus Bayes-consistency, across any general multi-label loss. Our work thus proposes a unified surrogate loss framework that is Bayes-consistent for any multi-label loss, significantly expanding upon previous work which only established consistency for specific loss functions. Additionally, we adapt constrained losses from standard classification to multi-label constrained losses in a similar way, which also benefit from -consistency bounds and thus Bayes-consistency for any multi-label loss (Appendix <ref>). We further describe efficient gradient computation algorithms for minimizing the multi-label logistic loss (Section <ref>). § PRELIMINARIES Multi-label learning. We consider the standard multi-label learning setting. Let be the input space and = *+1, -1^ the set of all possible labels or tags, where is a finite number. For example, can be a set of images, and can be a set of pre-given tags (such as 'flowers', 'shoes', or 'books') that can be associated with each image in the image tagging problem. Let n = *. For any instance x ∈ and its associated label y = *y_1, …, y_∈, if y_i = +1, we say that label i is relevant to x. Otherwise, it is not relevant. Let [] = *1, …,. Given a sample S drawn i.i.d. according to some distribution over ×, the goal of multi-label learning is to learn a hypothesis h × [] → to minimize the generalization error defined by a multi-label loss function _all××→, _(h) = _(x, y) ∼*(h, x, y) , where _all is the family of all measurable hypotheses. For convenience, we abusively denote the scoring vector by h(x) = *h(x, 1), …, h(x, ). Given a hypothesis set ⊂_all, we denote by ^*_() = inf_h ∈(h) the best-in-class error. We refer to the difference _(h) - ^*_() as the estimation error, which is termed the excess error when = _all. Let t ↦ 1_t ≥ 0 - 1_t < 0 be the sign function, and let t →_+ be a threshold function. The target loss function can be typically given by a function mapping from × to real numbers: (h, x, y) = *(x), y, where (x) *_1(x), …, _(x)∈ is the prediction for the input x ∈ and _i(x) = (h(x, i) - t(x)) for any i ∈ []. As with many multi-label learning algorithms, such as binary relevance, we set the threshold function t(x) = 0. There are many multi-label loss functions, such as Hamming loss, (partial) ranking loss, F_1 and the more general F_β measure loss, subset 0/1 loss, precision@κ, recall@κ, etc. <cit.>. 
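As a minimal illustration of this prediction rule, the sketch below thresholds a score vector h(x) with the zero threshold t(x) = 0 and evaluates the Hamming distance between the resulting prediction and the true label vector; the numbers are arbitrary and only serve to fix the notation.

```python
import numpy as np

def predict(scores, threshold=0.0):
    # b_i(x) = sign(h(x, i) - t(x)) with sign(0) = +1 and the zero threshold t(x) = 0.
    return np.where(scores - threshold >= 0, 1, -1)

def hamming_loss(y_pred, y_true):
    # L_ham(y', y) = sum_i 1[y_i != y'_i]
    return int(np.sum(y_pred != y_true))

scores = np.array([1.3, -0.2, 0.0, -2.1])     # h(x, 1), ..., h(x, l)
y_true = np.array([+1, +1, -1, -1])
y_pred = predict(scores)
print(y_pred, hamming_loss(y_pred, y_true))   # [ 1 -1  1 -1] 2
```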
Among these, several loss functions are defined based on the prediction of the hypothesis (x), while others are based on the scoring vector h(x). We will specifically consider the first type of multi-label loss in the form given in (<ref>), which is based on some `distance' between the prediction and the true label. This includes all the loss functions previously mentioned (see Section <ref> for a list of several common multi-label losses in this family) but excludes the (partial) ranking loss, which is defined based on pairwise scores. For convenience, we may alternatively refer to or its induced as the multi-label loss. Without loss of generality, we assume that ∈ [0, 1], which can be achieved through normalization. We also denote by _max = max_y', y(y', y). Our analysis is general and adapts to any multi-label loss . Surrogate risk minimization and consistency. Minimizing the multi-label loss directly is computationally hard for most hypothesis sets because it is discrete and non-convex. A common method involves minimizing a smooth surrogate loss function _all××→, which is the main focus of this paper. Minimizing a surrogate loss directly leads to an algorithm for multi-label learning. A desirable guarantee for the surrogate loss in multi-label learning is Bayes-consistency <cit.>. That is, minimizing the surrogate loss over the family of all measurable functions leads to the minimization of the multi-label loss over the same family: A surrogate loss is said to be Bayes-consistent with respect to a multi-label loss if the following holds for any distribution and all given sequences of hypotheses *h_n_n ∈⊂_all: *lim_n →∞_(h_n) - ^*_(_all) = 0 *lim_n →∞_(h_n) - ^*_(_all) = 0. As pointed out by <cit.> (see also <cit.>), Bayes-consistency is an asymptotic guarantee that cannot provide any guarantee for approximate minimizers; it also applies only to the family of all measurable functions and does not consider the hypothesis sets typically used in practice. Instead, they propose a stronger guarantee known as -consistency bounds, which are both non-asymptotic and dependent on the hypothesis set, and imply Bayes-consistency when = _all. These guarantees provide upper bounds on the target estimation error in terms of the surrogate estimation error. In the multi-label learning scenario, they can be formulated as follows: A surrogate loss is said to admit an -consistency bound with respect to a multi-label loss if the following condition holds for any distribution and for all hypotheses h ∈, given a concave function Γ_+→_+ with Γ(0) = 0: _(h) - ^*_() + _() ≤Γ*_(h) - ^*_() + _() . The quantities _() appearing in the bounds are called minimizability gaps, which measure the difference between the best-in-class error and the expected best pointwise error for a loss function and a hypothesis set : _() = ^*_() - _x*inf_h ∈*_y | x*(h, x, y)≥ 0. These are inherent quantities depending on the distribution and hypothesis set, which we cannot hope to minimize. Since Γ is concave and Γ(0) = 0, Γ is sub-additive and an -consistency bound (<ref>) implies that: _(h) - ^*_() + _() ≤Γ*_(h) - ^*_() + Γ*_(). Therefore, when the surrogate estimation error *_(h) - ^*_() is minimized to ϵ, the target estimation error *_(h) - ^*_() is upper bounded by Γ() + Γ*_(). The minimizability gaps vanish when = _all or in more general realizable cases, such as when ^*_() = ^*_(_all) <cit.>. 
In these cases, -consistency bounds imply the -consistency of a surrogate loss with respect to a multi-label loss : _(h) - ^*_() ≤_(h) - ^*_() ≤Γ(), for any ≥ 0. The minimizability gap _() is upper bounded by the approximate error _() = ^*_() - _x*inf_h ∈_all*_y | x*(h, x, ,y) and is generally a finer quantity <cit.>. Thus, -consistency bounds are more informative, more favorable, and stronger than excess error bounds, and they imply these bounds when = _all. Next, we will study surrogate loss functions and algorithms for multi-label learning, supported by -consistency bounds, the state-of-the-art consistency guarantee for surrogate risk minimization. § EXISTING CONSISTENT SURROGATES FOR THE HAMMING LOSS In the section, we consider the simplest form of multi-label loss, the Hamming loss, defined as: ∀ (h, x, y) ∈××, _ham(h, x, y) = _ham((x), y), where _ham(y', y) = ∑_i = 1^ 1_y_i ≠ y'_i. The existing Bayes-consistent surrogate loss function is to transform the multi-label learning into independent binary classification tasks <cit.>, defined as for all (h, x, y) ∈××, _br(h, x, y) = ∑_i = 1^Φ(y_i h(x, i)), where Φ→_+ is a binary margin-based loss function, such as the logistic loss u ↦log*1 + e^-u. The algorithm that minimizes this surrogate loss is known as binary relevance <cit.>, which learns an independent binary classifier for each of the labels. <cit.> shows that _br is Bayes-consistent with respect to _ham if Φ is Bayes-consistent with respect to ℓ_0-1 (f, x, y) ↦ 1_y ≠(f(x)), the binary zero-one loss. Here, we prove a stronger result that _br admits an -consistency bound with respect to _ham with a functional form Γ*·/ if Φ admits an -consistency bounds with respect to ℓ_0-1 with a functional form Γ(·). Let be a hypothesis set consist of functions mapping from to . theoremBoundBi Let = ^. Assume that the following -consistency bound holds in the binary classification, for some concave function Γ→_+: ∀ f ∈, _ℓ_0-1(f) - ^*_ℓ_0-1() + _ℓ_0-1() ≤Γ(_Φ(f) - ^*_Φ() + _Φ() ). Then, the following -consistency bound holds in the multi-label learning: for all h ∈, __ham(h) - ^*__ham() + __ham() ≤Γ*__br(h) - ^*__br() + __br()/. The proof is included in Appendix <ref>. We say that a hypothesis set is complete if *f(x) f ∈ =, ∀ x ∈. This notion of completeness is broadly applicable and holds for commonly used hypothesis sets in practice, including linear hypotheses, multi-layer feed-forward neural networks, and all measurable functions. For such complete hypothesis sets and with smooth functions Φ like the logistic loss function, Γ admits a square root dependency in the binary classification <cit.>. Thus, by Theorem <ref>, we obtain the following result. corollaryBoundBiLog Let = ^. Assume that is complete and Φ(u) = log(1 + e^-u). Then, the following -consistency bound holds in the multi-label learning: for all h ∈, __ham(h) - ^*__ham() + __ham() ≤^1/2*__br(h) - ^*__br() + __br() ^1/2. Since t ↦ t^1/2 is sub-additive, the right-hand side of the -consistency bound in Corollary <ref> can be further upper bounded by ^1/2*__br(h) - ^*__br()^1/2 + ^1/2*__br() ^1/2. This implies that when the estimation error of the surrogate loss _br is reduced to , the corresponding estimation error of the Hamming loss is upper bounded by ^1/2^1/2 + ^1/2*__br() ^1/2 - __ham(). In the nearly realizable cases where minimizability gaps are negligible, this upper bound approximates to __ham(h) - ^*__ham() ≤^1/2^1/2. Therefore, as the number of labels increases, the bound becomes less favorable. 
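For concreteness, a minimal Python sketch of _br with the logistic choice of Φ is given below; the logaddexp call is only a numerically stable way of computing log(1 + e^{-u}) and is not part of the definition. Note that each label contributes a separate term Φ(y_i h(x, i)), so no interaction between labels enters the loss.

```python
import numpy as np

def br_logistic_surrogate(scores, y):
    # L_br(h, x, y) = sum_i Phi(y_i * h(x, i)) with Phi(u) = log(1 + exp(-u)).
    margins = y * scores
    return float(np.sum(np.logaddexp(0.0, -margins)))  # stable log(1 + e^{-u})

scores = np.array([1.3, -0.2, 0.0, -2.1])  # h(x, 1), ..., h(x, l)
y = np.array([+1, +1, -1, -1])
print(br_logistic_surrogate(scores, y))
```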
Furthermore, the loss function _br clearly fails to account for the inherent correlations among labels. For instance, `coffee' and 'mug' are more likely to co-occur than `coffee' and `umbrella'. Additionally, _br is only Bayes-consistent with respect to the Hamming loss and cannot yield risk-minimizing predictions for other multi-label losses such as subset 0/1 loss or F_β-measure loss <cit.>. To address these drawbacks, we will introduce a new surrogate loss in the next section. § MULTI-LABEL LOGISTIC LOSS In this section, we define a new surrogate loss for Hamming loss in multi-label learning that accounts for label correlations and benefits from label-independent -consistency bounds. This loss function can be viewed as a generalization of the (multinomial) logistic loss <cit.>, used in standard classification, to multi-label learning. Thus, we will refer to it as multi-label logistic loss. It is defined as follows: for all (h, x, y) ∈××, _log(h, x, y) = ∑_y' ∈* 1- _ham(y', y) log*∑_y”∈ e^∑_i = 1^* y”_i - y'_i h(x, i). This formulation can be interpreted as a weighted logistic loss, where * 1- _ham(·, y) serves as a weight vector. Additionally, this formulation accounts for label correlations among the y_is within the logarithmic function. The next result shows that the multi-label logistic loss benefits from a favorable -consistency bound with respect to _ham, without dependency on the number of labels . We assume that = ^ and is complete, conditions that typically hold in practice. theoremBoundLog Let = ^. Assume that is complete. Then, the following -consistency bound holds in the multi-label learning: for all h ∈, __ham(h) - ^*__ham() + __ham() ≤ 2 *__log(h) - ^*__log() + __log() ^1/2. Since t ↦ t^1/2 is sub-additive, the right-hand side of the -consistency bound in Theorem <ref> can be further upper bounded by 2 *__log(h) - ^*__log()^1/2 + 2 *__log() ^1/2. This implies that when the estimation error of the surrogate loss _log is reduced up to , the corresponding estimation error of the Hamming loss is upper bounded by 2 ^1/2 + 2 *__log() ^1/2 - __ham(). In the nearly realizable cases where minimizability gaps are negligible, this upper bound approximates to __ham(h) - ^*__ham() ≤ 2 ^1/2. Therefore, the bound is independent of the number of labels . This contrasts with the bound for _br shown in (<ref>), where a label-dependent factor ℓ^1/2 replaces the constant factor 2, making it significantly less favorable. The proof of Theorem <ref> is included in Appendix <ref>. We first present a general tool (Theorem <ref>) in Appendix <ref>, which shows that to derive -consistency bounds in multi-label learning with a concave function Γ, it is only necessary to upper bound the conditional regret of the target multi-label loss by that of the surrogate loss with the same Γ. This generalizes <cit.> in standard multi-class classification to multi-label learning. Next, we characterize the conditional regret of the target multi-label loss, such as Hamming loss, in Lemma <ref> found in Appendix <ref>, under the given assumption. By using Lemma <ref>, we upper bound the conditional regret of _ham by that of the surrogate loss _log with a concave function Γ(t) = 2 √(t). When = _all, minimizability gaps __log() and __ham() vanish, Theorem <ref> implies excess error bound and Bayes-consistency of multi-label logistic loss with respect to the Hamming loss. The following excess error bound holds in the multi-label learning: for all h ∈_all, __ham(h) - ^*__ham(_all) ≤ 2 *__log(h) - ^*__log(_all)^1/2. 
Moreover, _log is Bayes-consistent with respect to _ham. It is known that _br is only Bayes-consistent with respect to the Hamming loss and can be arbitrarily bad for other multi-label losses such as F_β-measure loss <cit.>. Instead, we will show in the following section that our surrogate loss _log adapts to and is Bayes-consistent with respect to an extensive family of multi-label losses, including the F_β measure loss. § EXTENSION: GENERAL MULTI-LABEL LOSSES In this section, we broaden our analysis to cover a more extensive family of multi-label losses, including all common ones and a new extension defined based on linear-fractional functions with respect to the confusion matrix. Note that several loss functions are defined over the space *0, 1^, rather than *+1, -1^. To accommodate this difference, any pair y, y' ∈ = *+1, -1^ can be projected onto *0, 1^ by letting y = y + 1/2 and y' = y' + 1/2, where 1∈^ is the vector with all elements equal to 1. Several common multi-label losses are defined as follows. Hamming loss: (y', y) = ∑_i = 1^ 1_y_i ≠ y'_i. F_β-measure loss: (y', y) = 1 - (1 + β^2) y' · y/β^2 * y_1 + * y'_1. Subset 0/1 loss: (y', y) = max_i ∈ [] 1_y'_i ≠ y_i. Jaccard distance: (y', y) = 1 - y' · y/* y_1 + * y'_1 - y' · y Precision@κ: (y', y) = 1 - 1/κ∑_i ∈( y') 1_ y_i = 1 subject to y' ∈_κ, where _κ = *y ∈* y_1 = κ and ( y') = *i ∈ [] y'_i = 1. Recall@κ: (y', y) = 1 - 1/* y_1∑_i ∈( y') 1_ y_i = 1 subject to y' ∈_κ, where _κ = *y ∈* y_1 = κ and ( y') = *i ∈ [] y'_i = 1. More generally, we can define a multi-label loss based on true positives (), true negatives (), false positives () and false negatives () , which can be written explicitly as follows: = y' · y = * y_1 - y'· y = * y'_1 - y' · y, = + y' · y - * y_1 - * y'_1 Similar to <cit.>, we now define a general family of multi-label losses as linear-fractional functions in terms of these four quantities: (y', y) = a_0 + a_11 + a_10 + a_01 + a_00/b_0 + b_11 + b_10 + b_01 + b_00. It can be shown that the aforementioned Hamming loss, F_β-measure loss, Jaccard distance, precision and recall all belong to this family. Note that the previous definitions in <cit.> are within the empirical utility maximization (EUM) framework <cit.>, where the measures are directly defined as functions of the population. We generalize their definition to the decision theoretic analysis (DTA) framework, in terms of loss functions defined over y and y'. Moreover, we can consider extending multi-label losses (<ref>) to non-linear fractional functions of these four quantities, or more generally, to any other forms, as long as they are defined over the space ×. Another important family of multi-label losses is the tree distance loss, used in cases of hierarchical classes. In many practical applications, the class labels exist within a predefined hierarchy. For example, in the image tagging problem, class labels might include broad categories such as `animals' or `vehicles', which further subdivide into more specific classes like `mammals' and `birds' for animals, or `cars' and `trucks' for vehicles. Each of these subcategories can be divided even further, showcasing a clear hierarchical structure. Tree distance: Let T = *, E, W be a tree over the label space , with edge set E and positive, finite edge lengths specified by W. Suppose r ∈ is designated as the root node. Then, _T(y', y) = the shortest path length in T between y and y'. 
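The set-based losses listed above translate directly into code. The following sketch computes the F_β-measure loss, the subset 0/1 loss, and the Jaccard distance for label vectors in {+1, −1}^ℓ after the (y + 1)/2 projection; it follows the definitions above verbatim and, for simplicity, does not guard against the degenerate case where both vectors contain no relevant label.

```python
import numpy as np

def to01(y):
    # Project {+1, -1}^l onto {0, 1}^l via (y + 1) / 2, as in the text.
    return (y + 1) // 2

def f_beta_loss(y_pred, y_true, beta=1.0):
    yp, yt = to01(y_pred), to01(y_true)
    return 1.0 - (1 + beta**2) * yp.dot(yt) / (beta**2 * yt.sum() + yp.sum())

def subset_01_loss(y_pred, y_true):
    return int(np.any(y_pred != y_true))

def jaccard_distance(y_pred, y_true):
    yp, yt = to01(y_pred), to01(y_true)
    inter = yp.dot(yt)
    return 1.0 - inter / (yt.sum() + yp.sum() - inter)

y_true = np.array([+1, +1, -1, -1])
y_pred = np.array([+1, -1, +1, -1])
print(f_beta_loss(y_pred, y_true),       # 0.5
      subset_01_loss(y_pred, y_true),    # 1
      jaccard_distance(y_pred, y_true))  # 0.666...
```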
Despite the widespread use of hierarchical classes in practice, to our knowledge, no Bayes-consistent surrogate has been proposed for the tree distance loss in multi-label learning. Next, we will show that our multi-label logistic loss can accommodate all these different loss functions, including the tree distance loss. For any general multi-label loss , we define the multi-label logistic loss as follows: ∀ (h, x, y) ∈××, _log(h, x, y) = ∑_y' ∈* 1- (y', y) log*∑_y”∈ e^∑_i = 1^* y”_i - y'_i h(x, i). Here, can be chosen as all the multi-label losses mentioned above. Next, we will show that _log benefits from -consistency bounds and Bayes consistency with respect to any of these loss functions. theoremBoundLogGeneral Let = ^. Assume that is complete. Then, the following -consistency bound holds in the multi-label learning: ∀ h ∈, _(h) - ^*_() + _() ≤ 2 *__log(h) - ^*__log() + __log() ^1/2. -0.05in The proof of Theorem <ref> is basically the same as that of Theorem <ref>, modulo replacing the Hamming loss _ham with a general multi-label loss . We include it in Appendix <ref> for completeness. When = _all, minimizability gaps __log() and _() vanish, Theorem <ref> implies excess error bound and Bayes-consistency of multi-label logistic loss with respect to any multi-label loss. The following excess error bound holds in the multi-label learning: for all h ∈_all, _(h) - ^*_(_all) ≤ 2 *__log(h) - ^*__log(_all)^1/2. Moreover, _log is Bayes-consistent with respect to . -0.05in Corollary <ref> is remarkable, as it demonstrates that a unified surrogate loss, _log, is Bayes-consistent for any multi-label loss, significantly expanding upon previous work which only established consistency for specific loss functions. Furthermore, Theorem <ref> provides a stronger guarantee than Bayes-consistency, which is both non-asymptotic and specific to the hypothesis set used. Minimizing the multi-label logistic loss directly leads to the effective algorithm in multi-label learning. We further discuss the efficiency and practicality of this algorithm in Section <ref>, where we describe efficient gradient computation. § EXTENSION: MULTI-LABEL COMP-SUM LOSSES -0.01in In this section, we further extend our multi-label logistic losses to more comprehensive multi-label comp-sum losses, adapting comp-sum losses <cit.> from standard classification to the multi-label learning. As shown by <cit.>, comp-sum loss is defined via a composition of the function Ψ and a sum, and includes the logistic loss (Ψ(u) = log(u)) <cit.>, the sum-exponential loss (Ψ(u) = u - 1) <cit.>, the generalized cross-entropy loss (Ψ(u) = 1/*1 - 1/u^, ∈ (0,1)) <cit.>, and the mean absolute error loss (Ψ(u) = 1 - 1/u) <cit.> as special cases. Given any multi-label loss , we will define our novel multi-label comp-sum losses as follows: ∀ (h, x, y) ∈××, _comp(h, x, y) = ∑_y' ∈* 1- (y', y) Ψ*∑_y”∈ e^∑_i = 1^* y”_i - y'_i h(x, i). This formulation can be interpreted as a weighted comp-sum loss, where * 1- _ham(·, y) serves as a weight vector. Additionally, this formulation accounts for label correlations among the y_is within the function Ψ. Next, we prove that this family of surrogate losses benefits from -consistency bounds, and thus Bayes-consistency, across any general multi-label loss. -0.02in theoremBoundComp Let = ^. Assume that is complete. 
Then, the following -consistency bound holds in the multi-label learning: ∀ h ∈, _(h) - ^*_() + _() ≤Γ*__comp(h) - ^*__comp() + __comp() , where Γ(t) = 2√(t) when Ψ(u) = log(u) or u - 1; Γ(t) = 2√(n^t) when Ψ(u) = 1/*1 - 1/u^, ∈ (0,1); and Γ(t) = n t when Ψ(u) = 1 - 1/u. -0.02in corollaryBoundCompAll The following excess error bound holds in the multi-label learning: ∀ h ∈_all, _(h) - ^*_(_all) ≤Γ*__comp(h) - ^*__comp(_all), where Γ(t) = 2√(t) when Ψ(u) = log(u) or u - 1; Γ(t) = 2√(n^t) when Ψ(u) = 1/*1 - 1/u^, ∈ (0,1); and Γ(t) = n t when Ψ(u) = 1 - 1/u. Moreover, _comp with these choices of Ψ are Bayes-consistent with respect to . -0.02in The proof of Theorem <ref> is included in Appendix <ref>. Similar to the proof of Theorem <ref>, we make use of Theorem <ref> and Lemma <ref> in Appendix <ref>. However, upper bounding the conditional regret of by that of the surrogate loss _comp for different choices of Ψ requires a distinct analysis depending on the specific form of the function Ψ, leading to various concave functions Γ. Our proof is inspired by the proof of -consistency bounds for comp-sum losses in <cit.> through the introduction of a parameter μ and optimization. However, the novelty lies in the adaptation of μ with a quantity tailored to multi-label loss functions instead of the score vector h itself. Note that, as with Ψ(u) = log(u) shown in Section <ref>, for Ψ(u) = u - 1, the bounds are also independent of the number of labels and are favorable. However, for other choices of Ψ, the bounds exhibit a worse dependency on n, which can be exponential with respect to . In Appendix <ref>, we introduce another novel family of surrogate losses, adapting constrained losses <cit.> from standard classification to multi-label constrained losses in a similar way. § EFFICIENT GRADIENT COMPUTATION -0.01in In this section, we demonstrate the efficient computation of the gradient for the multi-label logistic loss _log at any point (x^j, y^j). This loss function is therefore both theoretically grounded in -consistency bounds and computationally efficient. Consider the labeled pair (x^j, y^j) and a hypothesis h in . The expression for _log(h, x^j, y^j) can be reformulated as follows: _log(h, x^j, y^j) = ∑_' ∈* 1- (', y^j) log*∑_y”∈ e^∑_i = 1^* y”_i - y'_i h(x^j, i) = - ∑_y' ∈* 1- (y', y^j) ∑_i = 1^ y'_i h(x^j, i) + ∑_y' ∈* 1- (y', y^j) log*∑_y ∈ e^∑_i = 1^ y_i h(x^j, i). Let _1(j) = ∑_y ∈* 1- (y, y^j), which is independent of h and can be pre-computed. It can also be invariant with respect to j and is a fixed constant for many loss functions such as the Hamming loss. Next, we will consider the hypothesis set of linear functions = *x ↦·Ψ(x, i) ∈^d, where Ψ is a feature mapping from × [] to ^d. Using the shorthand for h, we can rewrite _log at (x^j, y^j) as follows: _log(, x^j, y^j) = - ·*∑_y' ∈* 1- (y', y^j) *∑_i = 1^ y'_i Ψ(x^j, i) + _1(j) log*Z_, j, where Z_, j = ∑_y ∈ e^·*∑_i = 1^ y_i Ψ(x^j, i). Then, we can compute the gradient of _log at any ∈^d: ∇_log() = -∑_y' ∈* 1- (y', y^j) *∑_i = 1^ y'_i Ψ(x^j, i) + _1(j) ∑_y∈e^·*∑_i = 1^ y_i Ψ(x^j, i)/ Z_, j*∑_i = 1^ y_i Ψ(x^j, i) = -∑_y' ∈* 1- (y', y^j) *∑_i = 1^ y'_i Ψ(x^j, i) + _1(j) _y ∼_**∑_i = 1^ y_i Ψ(x^j, i), where _ is a distribution over with probability mass function _(y) = e^·*∑_i = 1^ y_i Ψ(x^j, i)/ Z_, j. By rearranging the terms in (<ref>), we obtain the following result. 
The gradient of _log at any ∈^d can be expressed as follows: ∇_log() = ∑_i = 1^Ψ(x^j, i) _2(i, j) + _1(j) ∑_i = 1^Ψ(x^j, i) Q_(i) where _2(i, j) = ∑_y ∈* 1- (y, y^j) y_i, _1(j) = ∑_y ∈* 1- (y, y^j), Q_(i) = ∑_y ∈_(y) y_i, _(y) = e^·*∑_i = 1^ y_i Ψ(x^j, i)/ Z_, j, and Z_, j = ∑_y ∈ e^·*∑_i = 1^ y_i Ψ(x^j, i). The overall time complexity for gradient computation is O(). Here, the evaluation of _2(i, j), i ∈ [] and _1(j) can be computed once and for all, before any gradient computation. For evaluation of Q_(i), note that it can be equivalently written as follows: Q_(i) = ∑_y ∈e^·Ψ(x^j, y)/∑_y ∈ e^·Ψ(x^j, y) y_i, with Ψ(x^j, y) = ∑_i = 1^ y_i Ψ(x^j, i), where Ψ(x^j, y) admits a Markovian property of order 1 <cit.>. Thus, as shown by <cit.>, Q_(i) can be evaluated efficiently by running two single-source shortest-distance algorithms over the (+, ×) semiring on an appropriate weighted finite automaton (WFA). More specifically, in our case, the WFA can be described as follows: there are ( + 1) vertices labeled 0, …, l. There are two transitions from k to (k + 1) labeled with +1 and -1. The weight of the transition with label +1 is exp(+·Ψ(x^j, k)), and exp(-·Ψ(x^j, k)) for the other. 0 is the initial state, and the final state. The overall time complexity of computing all quantities Q_(i), i ∈ [], is O(). § CONCLUSION We presented a comprehensive analysis of surrogate losses for multi-label learning, establishing strong consistency guarantees. We introduced a novel multi-label logistic loss that addresses the shortcomings of existing methods and enjoys label-independent consistency bounds. Our proposed family of multi-label comp-sum losses offers a unified framework with strong consistency guarantees for any general multi-label loss, significantly expanding upon previous work. Additionally, we presented efficient algorithms for their gradient computation. This unified framework holds promise for broader applications and opens new avenues for future research in multi-label learning and related areas. toc § EXTENSION: MULTI-LABEL CONSTRAINED LOSSES In this section, we introduce another novel family of surrogate losses, adapting constrained losses <cit.> from standard classification to multi-label constrained losses in a similar way. Given any general multi-label loss , we define multi-label constrained losses as: ∀ (h, x, y) ∈××, _cstnd(h, x, y) = ∑_y' ∈(y', y) Φ*- ∑_i = 1^ y'_i h(x, i). where ∑_y ∈∑_i = 1^ y_i h(x, i) = 0. Next, we show that _cstnd also benefit from -consistency bounds and thus Bayes-consistency for any multi-label loss. theoremBoundCstnd Let = ^. Assume that is complete Then, the following -consistency bound holds in the multi-label learning: ∀ h ∈, _(h) - ^*_() + _() ≤Γ*__cstnd(h) - ^*__cstnd() + __cstnd() , where Γ(t) = 2√(_maxt) when Φ(u) = e^-u; Γ(t) = 2√(t) when Φ(u) = max*0, 1 - u^2; and Γ(t) = t when Φ(u) = max*0,1 - u or Φ(u) = min*max*0, 1 - u/ρ, 1, ρ > 0. corollaryBoundCstndAll The following excess error bound holds in the multi-label learning: ∀ h ∈_all, _(h) - ^*_( _all)≤Γ*__cstnd(h) - ^*__cstnd( _all), where Γ(t) = 2√(_maxt) when Φ(u) = e^-u; Γ(t) = 2√(t) when Φ(u) = max*0, 1 - u^2; and Γ(t) = t when Φ(u) = max*0,1 - u or Φ(u) = min*max*0, 1 - u/ρ, 1, ρ > 0. Moreover, _cstnd with these choices of Φ are Bayes-consistent with respect to . The proof of Theorem <ref> is included in Appendix <ref>. 
As with the proof of Theorem <ref>, we use Theorem <ref> and Lemma <ref> from Appendix <ref>, and aim to upper bound the conditional regret of by that of the surrogate losses _comp using various concave functions Γ. However, the difference lies in our introduction and optimization of a parameter μ tailored to a quantity that is specific to the form of the multi-label constrained loss. These results show that in cases where minimizability gaps vanish, reducing the estimation error of _cstnd to ϵ results in the estimation error of target multi-label loss being upper bounded by either √(ϵ) or ϵ, modulo a constant that is independent of the number of labels. § PROOF OF H-CONSISTENCY BOUNDS FOR EXISTING SURROGATE LOSSES (THEOREM <REF>) * Let p(y | x) = ℙ(Y = y | X = x) be the conditional probability of Y = y given X = x. Given a multi-label surrogate loss and a hypothesis set , we denote the conditional error by _(h, x) = _y | x*(h, x, y), the best-in-class conditional error by ^*_*, x = inf_ h ∈_(h, x), and the conditional regret by Δ_, *h, x = _(h, x) - ^*_*, x. We can express the conditional error of the hamming loss and the surrogate loss _br as follows: __ham(h, x) = ∑_y ∈ p(y | x) ∑_i = 1^ 1_y_i ≠ h(x, i) = ∑_i = 1^*∑_y y_i = +1 p(y | x) 1_1 ≠(h(x, i)) + ∑_y y_i = -1 p(y | x) 1_-1 ≠(h(x, i)) __br(h, x) = ∑_y ∈ p(y | x) ∑_i = 1^Φ(y_i h(x, i)) = ∑_i = 1^*∑_y y_i = +1 p(y | x) Φ(h(x, i)) + ∑_y y_i = -1 p(y | x) Φ(-h(x, i)) Let q(+1 | x) = ∑_y y_i = +1 p(y | x) and q(-1 | x) = ∑_y y_i = +=-1 p(y | x). Let f_i = h(·, i) ∈, for all i ∈ []. Then, it is clear that the conditional regrets of _ham and _br can be expressed as the corresponding conditional regrets of ℓ_0-1 and Φ under this new introduced new distribution: Δ__ham, (h, x) = ∑_i = 1^Δ_ℓ_0-1, (f_i, x), Δ__br, (h, x) = ∑_i = 1^Δ_Φ, (f_i, x). Since we have Δ_ℓ_0-1, (f_i, x) ≤Γ*Δ_Φ, (f_i, x) under the assumption, we obtain Δ__ham, (h, x) = ∑_i = 1^Δ_ℓ_0-1, (f_i, x) ≤∑_i = 1^Γ*Δ_Φ, (f_i, x) ≤Γ*1/∑_i = 1^Δ_Φ, (f_i, x)concavity of Γ = Γ*1/Δ__br, (h, x). By taking the expectation on both sides and using the Jensen's inequality, we complete the proof. § PROOFS OF H-CONSISTENCY BOUNDS FOR NEW SURROGATE LOSSES §.§ Auxiliary definitions and results (Theorem <ref> and Lemma <ref>) Before proceeding with the proof, we first introduce some notation and definitions. Given a multi-label surrogate loss and a hypothesis set , we denote the conditional error by _(h, x) = _y | x*(h, x, y), the best-in-class conditional error by ^*_*, x = inf_ h ∈_(h, x), and the conditional regret by Δ_, *h, x = _(h, x) - ^*_*, x. We then present a general theorem, which shows that to derive -consistency bounds in multi-label learning with a concave function Γ, it is only necessary to upper bound the conditional regret of the target multi-label loss by that of the surrogate loss with the same Γ. theoremTool-Gamma Let be a multi-label loss and be a surrogate loss. Given a concave function Γ_+→_+. If the following condition holds for all h ∈ and x ∈: Δ_, (h, x) ≤Γ*Δ_, (h, x), then, for any distribution and for all hypotheses h ∈, _(h)- ^*_() + _() ≤Γ*_(h) - ^*_() + _(). By the definitions, the expectation of the conditional regrets for and can be expressed as: _x*Δ_, (h, x) = _(h) - ^*_() + _() _x*Δ_, (h, x) = _(h) - ^*_() + _(). Thus, by taking the expectation on both sides of (<ref>) and using Jensen's inequality, we have _(h) - ^*_() + _() = _x*Δ_, (h, x) ≤_x*Γ*Δ_, (h, x)Eq. (<ref>) ≤Γ*_x*Δ_, (h, x)concavity of Γ = Γ*_(h) - ^*_() + _(). This completes the proof. 
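The proofs in the next subsections rely on the scalar inequality a log(2a/(a+b)) + b log(2b/(a+b)) ≥ (a−b)²/(2(a+b)) for a, b ∈ [0, 1]; a quick numerical sanity check of it (natural logarithm assumed, endpoints excluded to avoid log 0) is sketched below.

```python
import math, random

def gap(a, b):
    # a*log(2a/(a+b)) + b*log(2b/(a+b)) - (a-b)^2 / (2*(a+b)); should be >= 0.
    s = a + b
    lhs = a * math.log(2 * a / s) + b * math.log(2 * b / s)
    return lhs - (a - b) ** 2 / (2 * s)

random.seed(0)
worst = min(gap(random.uniform(1e-6, 1.0), random.uniform(1e-6, 1.0))
            for _ in range(100000))
print(worst)  # remains non-negative, up to floating-point error
```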
To derive -consistency bounds using Theorem <ref>, we will characterize the conditional regret of a multi-label loss . For simplicity, we first introduce some notation. For any x ∈, let (x) = _y' ∈_y | x*(y', y)∈. To simplify the notation further, we will drop the dependency on x. Specifically, we use to denote (x) and to denote (x). Additionally, we define = _y | x*(, y), = _y | x*(, y) and = _y | x*(y', y), ∀ y' ∈. Let = ^. Assume that is complete. Then, the conditional regret of a multi-label loss can be expressed as follows: Δ_, (h, x) = - . By definition, the conditional error of can be expressed as follows: _(h, x) = _y | x*(h, x, y) = _y | x*((x), y) = . Since = ^ and is complete, for any x ∈, *(x) h ∈ =. Then, the best-in-class conditional error of can be expressed as follows: ^*_*, x = inf_ h ∈_(h, x) = inf_ h ∈_y | x*((x), y) = _y | x*((x), y) = . Therefore, Δ_, (h, x) = _(h, x) - ^*_*, x = - . Next, by using Lemma <ref>, we will upper bound the conditional regret of the target multi-label loss by that of the surrogate loss with a concave function Γ. §.§ Proof of Theorem <ref> * We will use the following notation adapted to the Hamming loss: = _y | x*_ham(, y), = _y | x*(, y) and = _y | x*_ham(y', y), ∀ y' ∈. We will denote by (h, x, y') = e^∑_i = 1^ y'_i h(x, i)/∑_y”∈ e^∑_i = 1^ y”_i h(x, i) and simplify notation by using , thereby dropping the dependency on h and x. It is clear that ∈ [0, 1]. Then, the conditional error of _log can be expressed as follows: __log(h, x) = _y | x*∑_y' ∈* 1- (y', y) log*∑_y”∈ e^∑_i = 1^* y”_i - y'_i h(x, i) = -∑_y' ∈ (1 - ) log() For any ≠, we define ^μ as follows: set ^μ_y' = for all y' ≠ and y' ≠; define ^μ_ = - μ; and let ^μ_ = + μ. Note that ^μ can be realized by some h' ∈ under the assumption. Then, we have Δ__log, (h, x) ≥*-∑_y' ∈ (1 - ) log() - inf_μ∈*-∑_y' ∈ (1 - ) log*^μ_y' = sup_μ∈* (1 - ) *log( - μ) - log() + (1 - ) *log( + μ) - log() = (1 - ) log* + (1 - )/*2 - - + (1 - ) log* + (1 - )/*2 - - supremum is attained when μ^* = -(1 - ) + (1 - ) /2 - - ≥ (1 - ) log2(1 - )/*2 - - + (1 - ) log2(1 - )/*2 - - minimum is attained when = ≥( - )^2/2 *2 - - alog2a/a + b + blog2b/a + b≥(a - b)^2/2(a + b), ∀ a, b ∈[0, 1] ≥( - )^2/4. Therefore, by Lemma <ref>, Δ__ham, (h, x) ≤ 2 *Δ__log, (h, x)^1/2. By Theorem <ref>, we complete the proof. §.§ Proof of Theorem <ref> * The proof is basically the same as that of Theorem <ref>, modulo replacing the Hamming loss _ham with a general multi-label loss . We adopt the following notation: = _y | x*(, y), = _y | x*(, y) and = _y | x*(y', y), ∀ y' ∈. We will denote by (h, x, y') = e^∑_i = 1^ y'_i h(x, i)/∑_y”∈ e^∑_i = 1^ y”_i h(x, i) and simplify notation by using , thereby dropping the dependency on h and x. It is clear that ∈ [0, 1]. Then, the conditional error of _log can be expressed as follows: __log(h, x) = _y | x*∑_y' ∈* 1- (y', y) log*∑_y”∈ e^∑_i = 1^* y”_i - y'_i h(x, i) = -∑_y' ∈ (1 - ) log() For any ≠, we define ^μ as follows: set ^μ_y' = for all y' ≠ and y' ≠; define ^μ_ = - μ; and let ^μ_ = + μ. Note that ^μ can be realized by some h' ∈ under the assumption. Then, we have Δ__log, (h, x) ≥*-∑_y' ∈ (1 - ) log() - inf_μ∈*-∑_y' ∈ (1 - ) log*^μ_y' = sup_μ∈* (1 - ) *log( - μ) - log() + (1 - ) *log( + μ) - log() = (1 - ) log* + (1 - )/*2 - - + (1 - ) log* + (1 - )/*2 - - supremum is attained when μ^* = -(1 - ) + (1 - ) /2 - - ≥ (1 - ) log2(1 - )/*2 - - + (1 - ) log2(1 - )/*2 - - minimum is attained when = ≥( - )^2/2 *2 - - alog2a/a + b + blog2b/a + b≥(a - b)^2/2(a + b), ∀ a, b ∈[0, 1] ≥( - )^2/4. 
Therefore, by Lemma <ref>, Δ_, (h, x) ≤ 2 *Δ__log, (h, x)^1/2. By Theorem <ref>, we complete the proof. §.§ Proof of Theorem <ref> * Recall that we adopt the following notation: = _y | x*(, y), = _y | x*(, y) and = _y | x*(y', y), ∀ y' ∈. We will denote by (h, x, y') = e^∑_i = 1^ y'_i h(x, i)/∑_y”∈ e^∑_i = 1^ y”_i h(x, i) and simplify notation by using , thereby dropping the dependency on h and x. It is clear that ∈ [0, 1]. Next, we will analyze case by case. The case where Φ(u) = log(u): See the proof of Theorem <ref>. The case where Φ(u) = u - 1: The conditional error of _comp can be expressed as follows: __comp(h, x) = _y | x*∑_y' ∈* 1- (y', y) *∑_y”∈ e^∑_i = 1^* y”_i - y'_i h(x, i) - 1 = ∑_y' ∈ (1 - ) *1/ - 1. For any ≠, we define ^μ as follows: set ^μ_y' = for all y' ≠ and y' ≠; define ^μ_ = - μ; and let ^μ_ = + μ. Note that ^μ can be realized by some h' ∈ under the assumption. Then, we have Δ__comp, (h, x) ≥∑_y' ∈ (1 - ) *1/ - 1 - inf_μ∈*∑_y' ∈ (1 - ) *1/^μ_y' - 1 = sup_μ∈* (1 - ) *1/ - 1/ - μ + (1 - ) *1/ - 1/ + μ = 1 - / + 1 - / - 2 - - + 2(1 - )^1/2(1 - )^1/2/ + supremum is attained when μ^* = -√(1- ) + √(1 - )/√(1 - ) + √(1 - ) ≥*(1 - )^1/2 - (1 - )^1/2^2 minimum is attained when = = 1/2 = ( - )^2/*(1 - )^1/2 + (1 - )^1/2^2 ≥( - )^2/4. Therefore, by Lemma <ref>, Δ_, (h, x) ≤ 2 *Δ__comp, (h, x)^1/2. By Theorem <ref>, we complete the proof. The case where Φ(u) = 1/q*1 - 1/u^q, q ∈ (0, 1): The conditional error of _comp can be expressed as: __comp(h, x) = 1/q∑_y' ∈ (1 - ) *1 - ()^q. For any ≠, we define ^μ as follows: set ^μ_y' = for all y' ≠ and y' ≠; define ^μ_ = - μ; and let ^μ_ = + μ. Note that ^μ can be realized by some h' ∈ under the assumption. Then, we have Δ__comp, (h, x) ≥1/q∑_y' ∈ (1 - ) *1 - - inf_μ∈*1/q∑_y' ∈ (1 - ) *1 - (^μ_y')^q = 1/qsup_μ∈* (1 - ) *- + ( - μ)^q + (1 - ) * -()^q + ( + μ)^q = 1/q*+ ^q*(1 - )^1/1 - q + (1 - )^1/1 - q^1 - q - 1/q(1 - )^q - 1/q(1 - )^qsupremum is attained when μ^* = -(1 - )^1/1 - q + (1 - )^1/1 - q/(1 - )^1/1 - q+(1 - )^1/1 - q ≥1/q n^q*2^q*(1 - )^1/1 - q + (1 - )^1/1 - q^1 - q - (1 - )-(1 - )minimum is attained when = = 1/n ≥( - )^2/4n^q*a^1/1 - q + b^1/1 - q/2^1 - q - a + b/2≥q/4(a - b)^2, ∀ a, b∈[0, 1], 0 ≤ a + b ≤ 1. Therefore, by Lemma <ref>, Δ_, (h, x) ≤ 2 n^q/2*Δ__comp, (h, x)^1/2. By Theorem <ref>, we complete the proof. The case where Φ(u) = *1 - 1/u: The conditional error of _comp can be expressed as: __comp(h, x) = ∑_y' ∈ (1 - ) *1 - ()^q. For any ≠, we define ^μ as follows: set ^μ_y' = for all y' ≠ and y' ≠; define ^μ_ = - μ; and let ^μ_ = + μ. Note that ^μ can be realized by some h' ∈ under the assumption. Then, we have Δ__comp, (h, x) ≥∑_y' ∈ (1 - ) *1 - - inf_μ∈*∑_y' ∈ (1 - ) *1 - ^μ_y' = sup_μ∈* (1 - ) *- + - μ + (1 - ) * - + + μ = ( - ) supremum is attained when μ^* = ≥1/n ( - ) minimum is attained when = 1/n. Therefore, by Lemma <ref>, Δ_, (h, x) ≤ n Δ__comp, (h, x). By Theorem <ref>, we complete the proof. §.§ Proof of Theorem <ref> * Recall that we adopt the following notation: = _y | x*(, y), = _y | x*(, y) and = _y | x*(y', y), ∀ y' ∈. We will also denote by (h, x, y') = ∑_i = 1^ y'_i h(x, i) and simplify notation by using , thereby dropping the dependency on h and x. It is clear that the constraint can be expressed as ∑_y' ∈ = 0. Next, we will analyze case by case. The case where Φ(u) = e^-u: The conditional error of _cstnd can be expressed as follows: __cstnd(h, x) = _y | x*∑_y' ∈(y', y) e^∑_i = 1^ y'_i h(x, i) = ∑_y' ∈ e^. 
For any ≠, we define ^μ as follows: set ^μ_y' = for all y' ≠ and y' ≠; define ^μ_ = - μ; and let ^μ_ = + μ. Note that ^μ can be realized by some h' ∈ under the assumption. Then, we have Δ__comp, (h, x) ≥∑_y' ∈ e^ - inf_μ∈*∑_y' ∈ e^^μ_y' = sup_μ∈**e^ - e^ + μ + *e^ - e^ - μ = *√( e^) - √( e^)^2 supremum is attained when μ^* = 1/2log e^/ e^ = * - /√() + √()^2 minimum is attained when = = 0 ≥1/4 _max*-^2. Therefore, by Lemma <ref>, Δ_, (h, x) ≤ 2 *_max^1/2*Δ__cstnd, (h, x)^1/2. By Theorem <ref>, we complete the proof. The case where Φ(u) = max*0, 1 - u^2: The conditional error of _cstnd can be expressed as follows: __cstnd(h, x) = ∑_y' ∈max*0, 1 + ^2. For any ≠, we define ^μ as follows: set ^μ_y' = for all y' ≠ and y' ≠; define ^μ_ = - μ; and let ^μ_ = + μ. Note that ^μ can be realized by some h' ∈ under the assumption. Then, we have Δ__cstnd, (h, x) ≥∑_y' ∈max*0, 1 + ^2 - inf_μ∈*∑_y' ∈max*0, 1 + ^μ_y'^2 = sup_μ∈{*max*0, 1 + ^2 - max*0, 1 + + μ^2 + *max*0, 1 + ^2 - max*0, 1 + - μ^2} ≥*1+ ^2 * - ^2 differentiating with respect to μ to optimize ≥*-^2 minimum is attained when = 0. Therefore, by Lemma <ref>, Δ_, (h, x) ≤*Δ__cstnd, (h, x)^1/2. By Theorem <ref>, we complete the proof. The case where Φ(u) = max*0, 1 - u: The conditional error of _cstnd can be expressed as: __cstnd(h, x) = ∑_y' ∈max*0, 1 + . For any ≠, we define ^μ as follows: set ^μ_y' = for all y' ≠ and y' ≠; define ^μ_ = - μ; and let ^μ_ = + μ. Note that ^μ can be realized by some h' ∈ under the assumption. Then, we have Δ__cstnd, (h, x) ≥∑_y' ∈max*0, 1 + - inf_μ∈*∑_y' ∈max*0, 1 + ^μ_y' = sup_μ∈{*max*0, 1 + - max*0, 1 + + μ + *max*0, 1 + ^2 - max*0, 1 + - μ^2} ≥*1+ * - differentiating with respect to μ to optimize ≥*-minimum is attained when = 0. Therefore, by Lemma <ref>, Δ_, (h, x) ≤Δ__cstnd, (h, x). By Theorem <ref>, we complete the proof. The case where Φ(u) = min*max*0, 1 - u/ρ, 1, ρ > 0: The conditional error of _cstnd can be expressed as: __cstnd(h, x) = ∑_y' ∈min*max*0, 1 + / ρ, 1. For any ≠, we define ^μ as follows: set ^μ_y' = for all y' ≠ and y' ≠; define ^μ_ = - μ; and let ^μ_ = + μ. Note that ^μ can be realized by some h' ∈ under the assumption. Then, we have Δ__cstnd, (h, x) ≥∑_y' ∈min*max*0, 1 + / ρ, 1 - inf_μ∈*∑_y' ∈min*max*0, 1 + _y'^μ / ρ, 1 = sup_μ∈{*min*max*0, 1 + / ρ, 1 - min*max*0, 1 + ( + μ) / ρ, 1 + *min *max*0, 1 + / ρ, 1 - min*max*0, 1 + ( - μ) / ρ, 1} ≥* - differentiating with respect to μ to optimize . Therefore, by Lemma <ref>, Δ_, (h, x) ≤Δ__cstnd, (h, x). By Theorem <ref>, we complete the proof. § FUTURE WORK While our work proposed a unified surrogate loss framework that is Bayes-consistent for any multi-label loss, significantly expanding upon previous work which only established consistency for specific loss functions, empirical comparison with surrogate losses for specific loss functions could be an interesting direction, which we have left for future work and have already initiated. Moreover, the potential to theoretically improve surrogate losses for specific target losses is another promising direction.
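The derivations above repeatedly invoke two elementary scalar inequalities that are stated inline in the displays (a Pinsker-type bound for the binary relative entropy to Bernoulli(1/2), and a power-mean-type bound). The following short numerical sketch is ours and not part of the paper; it only spot-checks both inequalities on a grid, transcribing them as we read them from the garbled displays, with illustrative variable names and tolerances.

```python
import numpy as np

# Spot-check of two scalar inequalities used in the proofs above:
# (1) a*log(2a/(a+b)) + b*log(2b/(a+b)) >= (a-b)^2 / (2(a+b)),  a, b in [0, 1]
# (2) ((a^(1/(1-q)) + b^(1/(1-q)))/2)^(1-q) - (a+b)/2 >= (q/4)*(a-b)^2,
#     for a, b in [0, 1] with a + b <= 1 and q in (0, 1)

eps = 1e-12
grid = np.linspace(eps, 1.0, 400)
A, B = np.meshgrid(grid, grid)

# Inequality (1): equivalent to Pinsker's inequality for KL(Bern(a/(a+b)) || Bern(1/2)).
lhs1 = A * np.log(2 * A / (A + B)) + B * np.log(2 * B / (A + B))
rhs1 = (A - B) ** 2 / (2 * (A + B))
print("inequality (1) holds on grid:", np.all(lhs1 - rhs1 >= -1e-9))

# Inequality (2): restricted to the region a + b <= 1, for a few values of q.
mask = (A + B) <= 1.0
for q in (0.1, 0.5, 0.9):
    p = 1.0 / (1.0 - q)
    lhs2 = ((A ** p + B ** p) / 2) ** (1 - q) - (A + B) / 2
    rhs2 = (q / 4) * (A - B) ** 2
    print(f"inequality (2) holds on grid for q={q}:", np.all((lhs2 - rhs2)[mask] >= -1e-9))
```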
http://arxiv.org/abs/2407.12927v1
20240717180125
Text- and Feature-based Models for Compound Multimodal Emotion Recognition in the Wild
[ "Nicolas Richet", "Soufiane Belharbi", "Haseeb Aslam", "Meike Emilie Schadt", "Manuela González-González", "Gustave Cortal", "Alessandro Lameiras Koerich", "Marco Pedersoli", "Alain Finkel", "Simon Bacon", "Eric Granger" ]
cs.CV
[ "cs.CV" ]
Text- and Feature-based Models for Compound Multimodal Emotion Recognition in the Wild Nicolas Richet, Soufiane Belharbi, Haseeb Aslam, Meike Emilie Schadt, Manuela González-González, Gustave Cortal, Alessandro Lameiras Koerich, Marco Pedersoli, Alain Finkel, Simon Bacon, Eric Granger July 17, 2024 =========================================== § ABSTRACT Systems for multimodal Emotion Recognition (ER) commonly rely on features extracted from different modalities (e.g., visual, audio, and textual) to predict the seven basic emotions. However, compound emotions often occur in real-world scenarios and are more difficult to predict. Compound multimodal ER becomes more challenging in videos due to the added uncertainty of diverse modalities. In addition, standard feature-based models may not fully capture the complex and subtle cues needed to understand compound emotions. Since relevant cues can be extracted in the form of text, we advocate for textualizing all modalities, such as visual and audio, to harness the capacity of large language models (LLMs). These models may understand the complex interaction between modalities and the subtleties of complex emotions. Although training an LLM requires large-scale datasets, a recent surge of pre-trained LLMs, such as BERT and LLaMA, can be easily fine-tuned for downstream tasks like compound ER. This paper compares two multimodal modeling approaches for compound ER in videos – standard feature-based vs. text-based. Experiments were conducted on the challenging dataset for compound ER, and contrasted with results on the dataset for basic ER. Our code is available at: https://github.com/nicolas-richet/feature-vs-text-compound-emotion. Keywords: Emotion Recognition, Multimodal Learning, Multimodal Textualization, Large Language Models § INTRODUCTION Emotion Recognition (ER) plays a critical role in human behavior analysis, human-computer interaction, and affective computing. Research in ER mainly focuses on recognizing the seven basic emotions – anger, surprise, disgust, enjoyment, fear, sadness, and contempt <cit.>. However, there has recently been a parallel interest in recognizing complex emotions that commonly occur in real-world scenarios, such as compound emotions, where a mixture of emotions is exhibited <cit.>. Recognizing such compound emotions is more challenging since they are often ambiguous and subtle and can be easily confused with basic emotions. Compound ER becomes more difficult when dealing with videos. Despite the availability of multiple modalities that can potentially help recognize these complex emotions <cit.>, they can introduce additional uncertainty and conflict. Multimodal information, e.g., faces, voice, and text extracted from videos, has been used to develop robust ER models <cit.>. Convolutional and transformer-based backbones are commonly used to extract discriminant features from each modality, and the features are combined to perform the desired task. This includes vision backbones such as ResNet <cit.> for the visual modality and VGGish <cit.> for audio. Multimodal learning allows building complex joint feature representations to better recognize emotions. Typically, a fusion model is required to combine features from verbal (spoken text) and nonverbal cues (visual and audio), and thereby produce contextualized features for accurate predictions. However, such features may not fully capture the complex and subtle cues manifested with compound emotions.
Textual description cues can be obtained to provide insights on a specific modality, such as describing the pitch of a vocal signal or the activation of action units in a facial image <cit.>. Understanding such textual descriptions requires a language model. However, training these models requires large amounts of text data, which is not always available. Recently, there has been a growing interest in leveraging LLMs such as BERT <cit.> and LLaMA <cit.>, and in their adaptation to downstream tasks <cit.>. LLMs are powerful for understanding relations between words. Recently, they have been used in multimodal learning for sentiment analysis <cit.>, where non-verbal cues such as audio and visual signals have been converted into textual descriptions. A pre-trained LLM such as BERT may jointly learn these modalities in addition to textual transcripts, leading to a better interpretation of compound emotions. This paper focuses on the following question: how does the textualized modeling approach perform compared to feature-based modeling for compound ER in videos? (Fig. <ref>). Feature-based methods are the most commonly used approach. They offer different ways to extract features from a single modality. In addition, feature fusion allows us to automatically combine and build new, more adequate features for the task. However, fusing multiple diverse modalities remains a challenge. On the other hand, the textualized approach simplifies fusion since all non-verbal modalities (visual and audio) are converted into a single textual modality <cit.>. However, textualizing nonverbal modalities can be challenging for the user, in particular regarding the choice of textual descriptions and how they should be designed. To answer our question, experiments were conducted for compound ER using the challenging video dataset in the context of the 7th Workshop and Competition on Affective Behavior Analysis in-the-Wild (ABAW). To further assess the benefits of using textualized approaches for complex emotions, additional experiments were conducted on the video dataset with basic emotions. § FEATURE-BASED MODELING A common approach to leveraging multi-modality in videos for the ER task is to use features extracted from each modality <cit.>. Such an approach is the de facto strategy in many applications <cit.>. In the context of ER, these modalities typically include vision and audio; the textual modality is also included when available. The general motivation behind combining these modalities is to leverage their complementary information over a video sequence. A common strategy for the feature-based approach is presented in Fig. <ref>. Typically, each modality employs a dedicated pre-trained feature extractor. This is one of the benefits of this approach, as there are rich families of publicly available feature extractors. One can go further and use multiple extractors per modality. For the visual modality, a ResNet <cit.> backbone pre-trained on large face datasets, such as MS-CELEB1M <cit.>, can be used. 3D models such as R3D-CNN <cit.> can better leverage the temporal dependency between frames. For the audio modality, a variety of public feature extractors is available, such as VGGish <cit.>, Wav2Vec 2.0 <cit.>, and HuBERT <cit.>. In addition, traditional audio features, such as spectrograms and MFCCs <cit.>, can be easily computed. With the recent progress in LLMs, multiple text feature extractors are available, such as BERT <cit.> and RoBERTa <cit.>. A common challenge in this approach is how to fuse all these modalities efficiently for the task at hand.
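To make the pipeline above concrete, the following is a minimal sketch of a feature-based multimodal classifier operating on pre-extracted (frozen) per-modality features; the module names, feature dimensions, and the simple attention-based fusion head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureFusionClassifier(nn.Module):
    """Minimal sketch of a feature-based multimodal classifier.

    Inputs are pre-extracted per-modality embeddings (e.g., ResNet face features,
    VGGish audio features, BERT text features). Each modality is projected to a
    common dimension, fused with multi-head self-attention over the three
    modality tokens, and classified with a small head.
    """

    def __init__(self, dims=(2048, 128, 768), d_model=256, num_classes=7):
        super().__init__()
        self.projections = nn.ModuleList(nn.Linear(d, d_model) for d in dims)
        self.attention = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.classifier = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, num_classes))

    def forward(self, visual, audio, text):
        # Stack the projected modalities as a length-3 token sequence: (B, 3, d_model).
        tokens = torch.stack(
            [proj(x) for proj, x in zip(self.projections, (visual, audio, text))], dim=1
        )
        fused, _ = self.attention(tokens, tokens, tokens)  # self-attention fusion
        pooled = fused.mean(dim=1)                         # average over modalities
        return self.classifier(pooled)                     # (B, num_classes) logits


if __name__ == "__main__":
    model = FeatureFusionClassifier()
    v, a, t = torch.randn(4, 2048), torch.randn(4, 128), torch.randn(4, 768)
    print(model(v, a, t).shape)  # torch.Size([4, 7])
```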
Self-attention and cross attention strategies are commonly used <cit.>. The fusion module allows to build better features based on each modality features. The final embedding is then used to perform the task. To avoid expensive computational cost, feature extractors are kept frozen while the subsequent modules are finetuned. In this work, we consider ResNet50 <cit.>, pre-trained on MS-CELEB1M <cit.> and FER^+ <cit.> datasets for visual modality. For audio modality, we employ VGGish <cit.>, while BERT <cit.> is used over text modality. Temporal information is further exploited by using temporal convolutional network (TCN) <cit.> after each feature extractor. Fusion based on self- and cross-attention are considered <cit.>. § TEXT-BASED MODELING In this section, we describe how to produce textual descriptions from audio and visual modalities for the task of compound emotion recognition. Fig. <ref> shows the entire process. §.§ Visual Text Description Face cropping and alignment are first performed at each frame using RetinaFace <cit.>. Py-feat library <cit.> is then used to extract action units intensity <cit.> along with basic emotion probabilities. Action units codebook <cit.> maps each facial expression to a set of action units. For instance, the expression "Happy" is associated with "AU6" (Cheek raise), "AU12" (Lip corner puller), and "AU25" (lips part). Typically, a set of these facial units activate at once. Over a sequence of frames, we select the maximum intensity of each action unit over time. Then, we use a threshold to determine which action units are activated. In this case, the text description will be the concatenation of the name of each selected action unit as depicted in Fig. <ref>. We do the same procedure with basic emotions, where we pick top-3 emotions and use their textual name as descriptions. §.§ Audio Text Description Over a sliding window, we run the API of the website https://www.hume.ai/https://www.hume.ai to analyze the tone of the audio. Their model is trained on millions of human interactions. It is based on an empathic large language model (eLLM), combining language modeling and text-to-speech. The API scores each tone characteristic, allowing sorting and picking the top 10. Examples of tone characteristics include confusion, anxiety, disappointment, distress, and even basic emotions. The name of each tone characteristic and a "Low" or "High" prefix, determined using a threshold on the score, are used to describe the tone textually. We also use Wav2Vec 2.0 model <cit.>[https://huggingface.co/audeering/wav2vec2-large-robust-12-ft-emotion-msp-dimhuggingface.co/audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim] to predict scores for arousal, valence and dominance for each audio clip. Those scores are then categorized as "Low" or "High" using a threshold. The audio textual description is the concatenation of all tone and Arousal-Valence-Dominance textual description. §.§ Combination of transcripts, visual text, and audio text Some datasets provide video transcripts. When they are unavailable, we use the whisper-large-v2 <cit.> to generate transcripts. Once all textual descriptions are acquired, we combine them into a single prompt and feed it to an LLM such as LLaMA-3 <cit.>, in our case. Prompts use the following template: . A fully connected layer (FC) follows LLaMA-3 to perform classification. Both the LLM and the FC layer are finetuned using the class label. § RESULTS AND DISCUSSION Our goal is to compare feature-based vs. 
textualized approaches for compound ER in videos. For a fair comparison, our experimental setup for the two approaches is constrained to be as similar as possible. This section provides the experimental methodology, results, and discussion. §.§ Experimental Methodology (1) Datasets. Two video-based datasets for emotion recognition: (compound emotion), and (basic emotion). a): It is composed of 56 videos taken from 7th ABAW Challenge which is the test set. The full  <cit.> contains 400 videos with 200,000 frames in total. Each frame has an annotation with twelve compound emotions. For the 7th ABAW Challenge, only seven compound emotions are considered from the original twelve, which are Fearfully Surprised, Happily Surprised, Sadly Surprised, Disgustedly Surprised, Angrily Surprised, Sadly Fearful, and Sadly Angry. In this work, these 56 videos used for the test are referred to as . The challenge organizers provide it without annotation. To perform experiments earlier than the challenge deadline, we annotated these 56 videos by our internal expert team. Each video may have different parts where there are compound emotions. The annotation is done at frame level with the same seven compound emotions of the challenge in addition to the class 'Other' that represents any other emotion, compound or basic, that is different from the considered seven ones. Once annotated, we cut each original video into segments (clips) using the annotation timeline. Each segment contains only one compound emotion. We obtained 125 clips. We refer to this dataset as . Its class distribution is presented in Table <ref>. We split into 5-cross validation. Performance is reported on the validation set of each fold. b): The  <cit.> dataset is a multi-party dataset. It is created from video clipping of utterances from the TV show "Friends". The train, validation, and test sets consist of 9988, 1108, and 2610 utterances, respectively. Each utterance has one single global label from seven basic emotions: angry, sad, joy, neutral, fear, surprise, or disgust. In addition, a transcript of each utterance is provided. This dataset is unbalanced, where neutral is the most dominant label with 4710 utterances, while disgust is the least frequent label with 271 utterances. (2) Baseline Models. For a fair comparison, recent models are considered. For the feature-based approach, we used ResNet50 <cit.> for feature extraction over visual modality. It is pretrained on MS-CELEB1M <cit.> dataset as a facial recognition task. Then, it is fine-tuned on the FER^+ <cit.> dataset. VGGish <cit.> is used for audio. Over text modality, BERT model is used <cit.>. To leverage temporal dependency from videos, we used temporal convolutional network (TCN) <cit.>, which has shown to yield good results in previous ABAW challenges <cit.>. For the fusion module, we tested two approaches presented in <cit.>, which subsequently perform a classification task. All three feature extractors are frozen, and every subsequent model is finetuned. For the textualized approach, once the text of all modalities is acquired, it is fed to LLaMA-3 <cit.> language model, which is followed by a dense layer to perform classification. Both modules are finetuned using emotion labels. (3) Learning Strategies. Over and , we perform supervised learning where we train on the training set and evaluate on the validation set and or test set when available. However, for the 7th ABAW Challenge for compound emotion estimation, we train our model on over the seven basic emotions. 
We then evaluate the final model over the 56 unlabeled videos of the test set, following the challenge protocol by submitting the predictions to the organizers. Since the model is trained over basic emotions, for each pair of the seven compound emotions, we sum their corresponding probabilities and pick the pair with the highest score as the prediction for the compound emotion. (4) Performance Measures. To assess model performance over and , we use the average F1 score required in the 7th ABAW Challenge. It is defined as follows: F1 = 2 × Precision × Recall / (Precision + Recall), where Precision = TP/(TP + FP) and Recall = TP/(TP + FN), and the reported score is the average over classes, P = ∑_c=1^7 F1_c/7, where c represents the class ID, TP represents True Positives, FP represents False Positives, and FN represents False Negatives. Following the literature, on the dataset, we use the weighted F1 score, which accounts for unbalanced classes. Over dataset, only global video-level labels are available, where a class is assigned to the entire video. Evaluation is therefore also required to be done at the video level and not at the frame level. To get a video-level prediction, we post-process the frame-level predictions using three different strategies: * Majority voting: We perform majority voting over the predicted classes of all the frames. The winning class becomes the prediction for the video. * Average logits: In this case, for each class, we average its logits across frames. This yields an average logits vector. The video class is the class with the maximum logit. * Average probabilities: This is similar to average logits, but it is done over probabilities instead. §.§ Results We briefly report the initial results obtained over dataset. Due to the limited time, we only train on 1% of the original trainset of . This amounts to 93 videos. Both approaches, text- and feature-based, use identical videos for training. The training is done at the frame level. To get the video-level prediction, we follow the three prediction post-processing techniques presented earlier. Table <ref> shows the obtained results of both methods.
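For concreteness, the three video-level post-processing strategies and the compound-pair scoring described above can be sketched as follows; the function names, the class-index pairs, and the random inputs are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def video_level_prediction(frame_logits, strategy="avg_probs"):
    """Aggregate per-frame logits (T, C) into a single video-level class,
    following the three strategies described above."""
    frame_logits = np.asarray(frame_logits, dtype=float)
    if strategy == "majority":
        frame_classes = frame_logits.argmax(axis=1)
        return np.bincount(frame_classes, minlength=frame_logits.shape[1]).argmax()
    if strategy == "avg_logits":
        return frame_logits.mean(axis=0).argmax()
    if strategy == "avg_probs":
        exp = np.exp(frame_logits - frame_logits.max(axis=1, keepdims=True))
        probs = exp / exp.sum(axis=1, keepdims=True)   # softmax per frame
        return probs.mean(axis=0).argmax()
    raise ValueError(f"unknown strategy: {strategy}")

def compound_from_basic(basic_probs, pairs):
    """Score each compound pair by summing its two basic-emotion probabilities
    and return the highest-scoring pair (cf. the challenge protocol above)."""
    scores = {pair: basic_probs[pair[0]] + basic_probs[pair[1]] for pair in pairs}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(120, 7))        # 120 frames, 7 basic emotions
    print(video_level_prediction(logits, "majority"))
    pairs = [(0, 1), (2, 1), (3, 1), (4, 1), (5, 1), (3, 0), (3, 5)]  # hypothetical index pairs
    probs = rng.dirichlet(np.ones(7))
    print(compound_from_basic(probs, pairs))
```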
http://arxiv.org/abs/2407.13730v1
20240718173112
Power-law hypothesis for PageRank on undirected graphs
[ "Florian Henning", "Remco van der Hofstad", "Nelly Litvak" ]
math.PR
[ "math.PR", "05C80 (Primary) 60J80, 60B20 (Secondary)" ]
Power-law hypothesis for PageRank on undirected graphs Florian HenningTUeEindhoven University of Technology, Department of Mathematics & Computer Science; f.b.henning@tue.nl, r.w.v.d.hofstad@tue.nl, n.v.litvak@tue.nl Remco van der Hofstad TUe Nelly LitvakTUe July 22, 2024 ========================================================================================================================================================================================================================= § ABSTRACT Based on observations in the web-graph, the power-law hypothesis states that PageRank has a power-law distribution with the same exponent as the in-degree. While this hypothesis has been analytically verified for many random graph models, such as directed configuration models and generalized random graphs, surprisingly it has recently been disproven for the directed preferential attachment model. In this paper, we prove that in undirected networks, the graph-normalized PageRank is always upper bounded by the degree. Furthermore, we prove that the corresponding (asymptotic) lower bound holds true under reasonable assumptions on the local weak limit, but not in general, and we provide a counterexample. Our result shows that PageRank always has a lighter tail than the degree, which contrasts the case of the directed preferential attachment model, where PageRank has a heavier tail instead. We end the paper with a discussion, where we extend our results to directed networks with a bounded ratio of in- and out-degrees, and reflect on our methods by contrasting the undirected and directed preferential attachment model. Mathematics Subject Classifications (2020). 05C80 (primary); 60J80, 60B20 (secondary) Key words. PageRank, Scale-free network, heavy tails, maximally assortative, local weak convergence. § INTRODUCTION AND RESULTS §.§ Introduction The PageRank algorithm was originally introduced by Page and Brin <cit.> to rank web pages by their position in the web-graph, where directed edges are hyperlinks from one web page to another. An important characteristic of the topology of the web-graph is the scale-free property, in that the empirical distribution of both in- and out- degrees is approximately a power law <cit.>. In the web-graph, it has also been observed that PageRank has a power-law distribution with approximately the same exponent as that of the in-degree <cit.>; these observations gave rise to the so-called power-law hypothesis that on every directed graph a power-law distribution of the in-degrees implies a power-law distribution of PageRank with the same exponent. This hypothesis has been recently proven to fail in the preferential attachment model <cit.>, one of the models for the web-graph <cit.>; specifically, the tail of the limiting PageRank distribution in this model is heavier than that of the in-degree <cit.>. For undirected graphs, PageRank is known to correlate highly with the degree. Analytically, the authors of <cit.> show that in undirected semi-sparse expanding graphs, the PageRank-distribution can be asymptotically approximated (w.r.t. the total variation distance) by a mixture of the restart distribution and the degree distribution. Main innovation in this paper. In this paper we analyze the power-law hypothesis in undirected graphs. Importantly, in the remainder of Section <ref>, we prove that for every undirected graph, the graph-normalized PageRank is componentwise bounded from above by the degree vector (Theorem <ref>). 
This in particular excludes the asymptotic PageRank distribution from having heavier tails than the degree distribution, in contrast to the directed setting <cit.>. Additionally, we provide a sufficient condition (Proposition <ref>) for an asymptotic lower bound on the PageRank distribution. Both bounds together imply that if the graph normalized PageRank has an asymptotic power law, then its exponent is the same as that of the degree distribution, provided that the condition for the lower bound is satisfied. The latter condition, in essence, prevents high-degree vertices from having too many high-degree neighbors, and we expect it to hold quite generally in a large class of random graph models and real-world networks. In Section <ref>, we explicitly check the assumptions for the lower bound for unimodular branching process trees and the Pólya point tree. Together with the results <cit.> on local weak convergence of the graph-normalized PageRank distribution, this implies that the power-law hypothesis holds true for, respectively, the undirected configuration model (Theorem <ref>) and the undirected preferential model (Theorem <ref>). To show that, in contrast to the upper bound, the lower bound is not generally true, in Section <ref> we provide an explicit construction (Theorem <ref>) of an evolving sequence of graphs exhibiting a power-law distribution with any prescribed exponent, whose associated PageRank distribution does not exhibit a power-law distribution. Finally, the Discussion Section <ref> deals with extensions (Theorem <ref> and Proposition <ref>) of the results from Section <ref> to directed networks assuming that the ratios of in- and out-degrees in the graph are uniformly bounded. The discussion closes with a heuristics on the different asymptotic behaviour of PageRank for the undirected- and the directed version of the preferential attachment model, thus explaining how the change in the PageRank power-law arises for the directed setting, while it fails to hold for the undirected setting. §.§ Basic notions First we start with providing the basic notions which will be employed in the reminder of this paper. For further explanations and introduction to mathematical models for complex networks, we refer the reader to <cit.> and <cit.>. Definitions. We consider a sequence (G_n)_n ∈ℕ of finite undirected graphs G_n=(V_n,E_n) without isolated vertices with respective vertex sets V_n=[n]:={1,2,…,n}, and edge sets E_n. We write i ∼ j if two vertices i,j ∈ V_n are connected by an edge. Further, A^( G_n)=(a_ij^( G_n))_(i,j) ∈ [n]∈{0,1}^n × n denotes the adjacency matrix of G_n, which in our undirected case is symmetric. Finally, let d^(G_n)=(d_i^(G_n))_i ∈ [n] denote the degree vector corresponding to G_n. Without loss of generality, we assume that d_i^(G_n)≥ 1 for all i∈ [n], i.e., there are no isolated vertices. From now on, we will omit the superscript (G_n) in, e.g., d_i^(G_n) when this notation is clear from the context. Throughout the paper, we print vectors and matrices in boldface in contrast to scalars, and define the partial order "≤" on ℝ^n as a≤b if and only if a_i ≤ b_i for all i ∈ [n]. As usual, ‖a‖_p:=(∑_i=1^n | a_i |^p )^1/p denotes the p-norm, p ∈ (0,∞), on ^n, where we will also abbreviate |v| :=‖v‖_1. Furthermore, by a⊙b we denote the elementwise (Hadamard) product of two vectors, i.e., (a⊙b)_i:=a_ib_i, i ∈[n]. Finally, 1_n ∈^n denotes the vector of ones and I_n ∈^n × n the n-dimensional identity matrix. 
For notational distinction from the above two symbols, we denote the indicator function on any set B by 1_B(·). We say that a real-valued random variable X is stochastically bounded from above by a real-valued random variable Y, denoted by X ≼ Y, iff ℙ(X ≥ x) ≤ℙ(Y ≥ y) for every x ∈. Moreover, we say that (cf. the relation ≤_K in <cit.>) a random vector X∈^n is stochastically bounded from above by a random vector Y∈^n, and write X≼Y, iff ℙ(X≥x) ≤ℙ(Y≥x) for every x∈^n. Note that if X=(X_i)_i ∈ [n] and Y=(Y_i)_i ∈ [n] have independent components, then X≼Y is equivalent to X_i ≼ Y_i for all i ∈ [n]. Finally, for functions f,g→, we write, for a ∈∪{-∞,+∞}, that f = o(g) as x tends to a iff lim sup_x → a|f(x)/g(x)|=0 and f = O(g) as x tends to a iff lim sup_x → a|f(x)/g(x)| <∞. The PageRank equation. Define the stochastic matrix P=P^(G_n)=(p_ij^(G_n))_i,j ∈ [n]∈ [0,1]^n × n by setting p_ij:=p_ij^(G_n)=a_ij^(G_n)/d_i^(G_n), where (a_ij^(G_n))_i,j∈ [n] is the adjacency matrix of G_n. Then the graph-normalized PageRank equation with damping factor c ∈ (0,1) reads R=c RP+(1-c)1_n, R∈ [0,n]^n. Note that (<ref>) can be written equivalently in terms of a fixed-point equation with an irreducible matrix acting on vectors of length n, so that the solution R appears as the Perron-Frobenius eigenvector and is hence unique. In particular, (<ref>) is uniquely solved for R by R=(1-c)1_n(I_n-cP)^-1. Componentwise, the above equation equals the Neumann series R_k=(1-c)∑_s=0^∞ c^s‖ (P^s)_·,k‖_1 =(1-c)∑_s=0^∞ c^s ∑_j=1^n (P^s)_jk , k∈[n]. §.§ Results §.§.§ Upper bound on PageRank Because of symmetry of the adjacency matrix A of an undirected graph, the next general upper bound holds true: Given the degree vector d of any undirected graph G_n=([n],E_n), the PageRank vector R^(G_n) in (<ref>) is componentwise upper bounded by the degree, i.e., R_i^(G_n)≤ d_i^(G_n) for all i ∈ [n]. Note that Theorem <ref> does not require any further assumptions on the graph or the degree sequence. Since the proof of Theorem <ref> is quite simple, we state it now. It is based on two lemmas. The first, Lemma <ref>, states that employing symmetry of the adjacency matrix A=A^(G_n) we can go over from Equation (<ref>) above to an equivalent expression, where the operator (I_n-cP)^-1 functions from the left instead of from the right. Afterwards, Lemma <ref> provides an upper bound for that new expression. Let G_n=(V_n,E_n) be a finite graph without isolated vertices with # V_n=n. Then the graph-normalized PageRank equation (<ref>) is solved by the undirected PageRank equation given by R^(G_n)=(1-c)d^(G_n)⊙ (I_n-cP^(G_n))^-1Q^(G_n)1_n, where Q=Q^(G_n)∈ℝ^n × n is the diagonal degree matrix defined by Q_i,i=1/d_i. To ease readability, we will omit the superscripts (G_n) in the proof. The ith component, i ∈ [n], of the PageRank-Equation (<ref>) reads R_i=c∑_j ∈ [n]p_jiR_j+(1-c). Divide both sides of the equation by the respective degree d_i, and insert the definition of p_ji which implies that d_i p_ij=a_ij=a_ji=p_jid_j, to obtain R_i/d_i =c∑_j ∈ [n]a_ji/d_id_jR_j+1-c/d_i=c∑_j ∈ [n]p_ijR_j/d_j+1-c/d_i, where the second equation follows from symmetry of the adjacency matrix A. Next define the vector v by v_i:=R_i/d_i, for i∈[n]. In matrix-notation, this change of variables transforms equation (<ref>) to v=c Pv+(1-c)Q1_n. 
As we have ‖ cP‖=c ‖P‖≤ c <1, where ‖P‖:=sup_‖w‖_2=1‖Pw‖_2 denotes the operator-norm, we can solve (<ref>) for v (see, e.g., <cit.>) to arrive at v=(1-c)(I_n-cP)^-1Q1_n, which, by definition of v, is in turn equivalent to the undirected PageRank equation R=(1-c)d⊙ (I_n-cP)^-1Q1_n. The second factor in (<ref>) satisfies (I_n-cP)^-1Q1_n ≤1_n/(1-c). From the assumption that d_i ≥ 1 for all i ∈ [n], we directly have Q1_n ≤1_n. Then express (I_n-cP)^-1 in terms of the Neumann series (I_n-cP)^-1=∑_s=0^∞ (cP)^s, and employ the fact that P is a (row-)stochastic matrix that functions from the left to arrive at (I_n-cP)^-1Q1_n ≤∑_s=0^∞ c^s 1_n, which finishes the proof. [Relation to <cit.>] The transformation v_i:=R_i/d_i that we use is similar to the transformation X_i:=𝒞ℛ_i employed in <cit.> to describe the limiting PageRank as a solution to a stochastic fixed-point recursion equation on a suitable weighted branching process tree. §.§.§ Lower bound on PageRank For the upper bound in Theorem <ref>, we have used that the degree vector is component-wise lower bounded by 1 (where we recall that the graph is assumed to have no isolated vertices). In the context of directed graphs where the degrees satisfy a power-law distribution, however, the family (d_k^(G_n))_n ∈ℕ, k ∈ [n] is clearly not bounded from above and a reasoning similar to the one in the proof of (<ref>) will not work to obtain a lower bound on PageRank. Nonetheless, in Proposition <ref>, below we state a sufficient condition on rooted random graphs for a lower bound to hold for the respective PageRank at the root, which we define in Definition <ref> below. The result of Proposition <ref> gets its full strength in combination with <cit.>, which shows that the limiting distribution of the graph-normalized PageRank is determined by the local limit of the graph sequence. We postpone a brief introduction on local weak convergence and its implications to PageRank to Section <ref>, and define rooted graphs and PageRank on infinite locally-finite graphs: By a rooted graph G_⋆, we mean a pair (G,ϕ) such that G=(V,E) is a graph and ϕ∈ V is a distinguished vertex called the root. We say that (G,ϕ) is locally finite when every vertex in V has a finite degree. Let 𝒢_⋆ denote the quotient space of the set of all locally-finite connected rooted graphs with respect to the equivalence relation ≅ given by root preserving graph-isomorphisms. We endow 𝒢_⋆ with the Borel-σ algebra generated by the local metric d_loc on 𝒢_⋆, which we will define later on in Definition <ref> when it is actually needed in the framework of local convergence. Let G_⋆=(G,ϕ) be any representative of an element in 𝒢_⋆ with degree vector d and adjacency matrix A. Consider the (possibly infinite-dimensional) matrix P=(p_ij)_i,j ∈ which is analogously to (<ref>) defined by p_ij=a_ij/d_i. The PageRank at the root or Root-PageRank is the random variable defined analogously to (<ref>) by R_ϕ=R_ϕ(G)=(1-c)∑_s=0^∞ c^s ∑_j ∈ V (P^s)_jϕ. [Uniqueness Root-PageRank] By linearity, the function R_ϕ is indeed a function on the quotient space 𝒢_⋆ and does not depend on the choice of the representative element chosen as G. For the root-PageRank, we have the following asymptotic lower bound. In its statement, we write =d_ϕ^(G_n,≥α)=#{j ∈V j ∼ϕ and d_j ≥α} for the number of neighbors of d_ϕ having degree at least α: [Root-PageRank lower bound] In the set-up of Definition <ref> assume that there exist α>0 and ε∈(0,1) such that, as k→∞, ℙ(d_ϕ >k, ≥ (1-ε)d_ϕ)=o(ℙ(d_ϕ >k)). 
Then, the solution R_ϕ to the root-PageRank-equation (<ref>) satisfies ℙ(R_ϕ>k) ≥ (1+o(1))ℙ(d_ϕ>α k/ε c(1-c)) as k →∞. Equation (<ref>) implies that the power-law exponent τ_R of R_ϕ, when it exists, is at most as large as the power-law exponent τ_d of d_ϕ. Together with the upper bound in Theorem <ref>, this lower bound implies that if d_ϕ follows a power-law with exponent τ_d, then so does R_ϕ. Thus, the power-law hypothesis holds. In words, (<ref>) means that the probability that the degree of the root is large and a positive proportion of neighbors of the root ϕ has degree at least α for some fixed, and possibly large, α, vanishes compared to the probability that the degree of the root is large. For (<ref>) to be false, most of the neighbors of high-degree vertices should have a high degree themselves as well. This means that the graph is highly assortative in a very strong sense. For most random graph models, such a property is not true, and neighbors of high-degree vertices have degrees that are bounded by α with high probability for large α. Thus, condition (<ref>) is quite reasonable. By restricting the outer sum in (<ref>) to the first two terms, we obtain ℙ(R_ϕ>k) ≥ℙ(1-c+c(1-c)∑_j ∼ϕa_jϕ/d_j>k) ≥ℙ(c(1-c)∑_j ∼ϕ1/d_j>k). Now, a case distinction whether the number of neighbors of the root ϕ whose degree is bounded from above by α, is bounded from below by ε d_ϕ or not gives ℙ(c(1-c)∑_j ∼ϕ1/d_j>k) ≥ℙ(c(1-c)∑_j ∼ϕ1/d_j>k,#{j j ∼ϕ and d_j < α}≥ε d_ϕ) ≥ℙ(d_ϕ>α/ε c(1-c)k,#{j j ∼ϕ and d_j < α}≥ε d_ϕ) ≥ℙ(d_ϕ>α/ε c(1-c)k)-ℙ(d_ϕ>α/ε c(1-c)k, ≥ (1-ε) d_ϕ). By Condition (<ref>), ℙ(d_ϕ>l, ≥ (1-ε) d_ϕ)/ℙ(d_ϕ>l)l →∞⟶ 0, which in combination with (<ref>) gives (<ref>). We expect Condition (<ref>) to hold very broadly. For its practical application to branching process trees in Section <ref>, we provide the following Corollary <ref>, that features two different conditions which imply (<ref>) and are easy to verify in specific (tree) settings: Conditionally on d_ϕ=k, let N_ϕ,k⊂{j ∈ V j ∼ϕ} be a subset of the neighbors of ϕ such that the sequence (N_ϕ,k)_k ∈ is almost surely non-decreasing with k and, almost surely as k →∞, n̅_ϕ,k :=#{j ∈ V j ∼ϕ}∖N_ϕ,k=o(k) * Fix k,l such that ℙ(d_ϕ=k, n̅_ϕ,k=l)>0. Assume that, conditionally on d_ϕ=k and n̅_ϕ,k=l, the family (d_j)_j ∈N_ϕ,k is stochastically bounded from above by an independent, but not necessarily identically distributed, family (d̃_̃j̃)_j ∈N_ϕ,k such that there is a uniform upper bound C < ∞ for (𝔼[d̃_j])_j ∈N_ϕ,k which depends neither on k nor on l. Then Condition (<ref>) holds true for α=2C and any ε<12. * Fix k,l such that ℙ(d_ϕ=k, n̅_ϕ,k=l)>0. Assume that, conditionally on d_ϕ=k and n̅_ϕ,k=l, the family (d_j)_j ∈N_ϕ,k=l is stochastically bounded from above by an i.i.d. family (d̃_̃j̃)_j ∈N_ϕ,k of -valued random variables, whose distribution depends neither on k nor on l. Then Condition (<ref>) holds true for α=F̅_d̃_1^←(1/2):=min{t ∈_0 |ℙ(d̃_1 ≥ t) ≤1/2} and any ε<12. In both cases a) and b), for any β>2α/[c(1-c)], ℙ(R_ϕ>k) ≥ (1+o(1))ℙ(d_ϕ>β k) as k →∞. Let Assumption a) or b) be fulfilled and choose α accordingly. For any k ∈, consider the family (Y_j)_j ∈N_ϕ,k of independent random variables defined by Y_j:= 1_{d̃_j ≥α}. Then 𝔼[Y_j]=ℙ(Y_j =1) ≤1/2, which in Case a) follows from the Markov inequality and in Case b) from the definition of the function F̅_d̃_1^←. In particular, (Y_j)_j ∈N_ϕ,k is stochastically bounded from above by an i.i.d. family (Z_j)_j ≥ 1 of Bernoulli(12)-random variables. 
The remainder of the proof is the same for both cases. For all k ∈ with ℙ(d_ϕ=k)>0, the assumption on (d̃_j)_j ∈N_ϕ,k guarantees that ℙ(≥(1-ε) d_ϕ|d_ϕ=k) ≤ℙ(#{j ∈∩N_ϕ,k} ≥(1-ε) d_ϕ-n̅_ϕ,k |d_ϕ=k) ≤ℙ(#{j ∈N_ϕ,k and d̃_j ≥α} ≥(1-ε) d_ϕ-n̅_ϕ,k |d_ϕ=k) ≤ℙ(1/k∑_j ∈N_ϕ,kY_j ≥1-ε-n̅_ϕ,k/k|d_ϕ=k) ≤ℙ(1/k∑_j ∈N_ϕ,kZ_j ≥1-ε-n̅_ϕ,k/k |d_ϕ=k). Here, the second inequality follows from total probability, writing ℙ(#{j ∈N_ϕ,kd_j≥α} ≥(1-ε) d_ϕ-n̅_ϕ,k |d_ϕ=k) =∑_l ∈ℙ(#{j ∈N_ϕ,k and d_j ≥α} ≥(1-ε) d_ϕ-n̅_ϕ,k |d_ϕ=k, n̅_ϕ,k=l)ℙ(n̅_ϕ,k=l |d_ϕ=k) ≤∑_l ∈ℙ(#{j ∈N_ϕ,k and d̃_j ≥α} ≥(1-ε) d_ϕ-n̅_ϕ,k |d_ϕ=k, n̅_ϕ,k=l)ℙ(n̅_ϕ,k=l |d_ϕ=k), as, conditionally on d_ϕ=k and n̅_ϕ,k=l, the ^l-valued random vector (d_j)_j ∈N_ϕ,k is stochastically bounded from above by (d̃_j)_j ∈N_ϕ,k by assumption. Now choose 0<ε<ε^*<12, and let K̂_1 be large enough such that 1-ε-n̅_ϕ,k/k≥ 1-ε^* > 1/2=𝔼[Z_j] almost surely, for all k ≥K̂_1. For k ≥K̂_1, a corollary of Cramér's Theorem (see <cit.>) gives ℙ(1/k∑_j ∈N_ϕ,kZ_j ≥ 1-ε-n̅_ϕ,k/k| d_ϕ=k) ≤ℙ(1/#N_ϕ,k∑_j ∈N_ϕ,kZ_j ≥ 1-ε^* | d_ϕ=k) ≤ 2exp(- #N_ϕ,kΛ^*(1-ε^*)) where Λ^*(1-ε^*)=(1-ε^*)log(2(1-ε^*))+ε^* log(2ε^*)>0 is the minimum of the Legendre transform of the moment generating function of Z_1 on the interval [1-ε^*,1]. As the r.h.s. in (<ref>) decreases with increasing k, we conclude that, for any l ∈ℕ_0, ℙ(1/k∑_j ∈N_ϕ,k+lZ_j ≥ 1-ε^* | d_ϕ=k+l) ≤ 2exp(- #N_ϕ,k+lΛ^*(1-ε^*)), which, in combination with the law of total probability, gives ℙ(1/k∑_j ∈N_ϕ,kZ_j ≥ 1-ε| d_ϕ≥ k) k →∞→ 0 exponentially fast. In combination with (<ref>), this shows that Condition (<ref>) holds for α and ε>12, which in view of Proposition <ref> concludes the proof. § APPLICATIONS In this section we briefly explain the notion of local weak convergence of random graphs and its implications for the asymptotic of the associated root-PageRank. This part is mostly based on <cit.>. Additionally, we define the notion of power-law decay. Afterwards we prove that the assumptions of Corollary <ref> hold true for two particular instances of undirected random graphs/ branching processes. These are unimodular branching process trees and the Pólya point tree. This then implies correctness of the power-law hypothesis for every evolving sequence of undirected graphs which converge locally weakly to one these two random graphs, in particular the undirected configuration model and the undirected preferential attachment model. §.§ Local weak convergence and the limiting PageRank distribution Local weak convergence was introduced in <cit.> to make the notion that finite graphs can look like infinite graphs from the perspective of a random vertex precise. Local weak limits of random graphs are random elements in the set 𝒢_⋆ of locally-finite connected rooted graphs up to equivalence by isomorphism (recall Definition <ref> above). For the concept of local weak convergence, the following two-step procedure is of relevance: First we draw a sequence of finite undirected deterministic or random graphs G_n=(V_n,E_n) with # V_n=n. Second, for each n ∈ we choose a vertex ϕ_n ∈ V_n uniformly at random to obtain a rooted graph (G_n,ϕ_n). Local weak convergence of the sequence (G_n)_n ∈ against a (possibly random) G_⋆∈𝒢_⋆ then means that the double-expectation (w.r.t. both stages of randomness) of every bounded continuous (w.r.t. the local metric of Definition <ref> below) function f𝒢_⋆→ converges to the expectation of f w.r.t. to the prescribed limiting distribution on 𝒢_⋆. 
To make this precise, for any finite graph G=(V,E), define the (random) probability measure (cf. the measure μ_H in <cit.>) 𝒫(G) on 𝒢_⋆ by setting 𝒫(G):= 1/# V∑_i ∈ Vδ_(G,i), i.e., 𝒫(G) describes the law of the graph G observed from a uniformly chosen root among its vertices. Now we describe the relevant topology on 𝒢_⋆. For a rooted connected graph (G,ϕ) let B^( G)_s(ϕ) denote the subgraph of (G,ϕ) of vertices at distance at most s from the root ϕ. The function d_loc on 𝒢_⋆×𝒢_⋆, defined by d_loc((G,ϕ),(G^',ϕ^') ):=1/1+inf_s ≥ 1{B^( G)_s(ϕ) B^( G^')_s(ϕ^') } is called the local metric on 𝒢_⋆. Note that we can adapt the metric d_loc to the set-up of marked and, in particular, directed graphs (see <cit.>). With these notions in mind, the formal definition of local weak convergence reads as follows: A deterministic or random sequence (G_n)_n ∈ℕ of random graphs is said to converge locally weakly to a (possibly random) rooted graph (G,ϕ) ∈𝒢_⋆ with law μ∈ℳ_1(𝒢_⋆,ℬ(𝒢_⋆)) iff for any bounded function f𝒢_⋆→ which is continuous with respect to the topology induced by d_loc, 𝔼[𝔼_𝒫(G_n)[f(G_n,ϕ_n)]] n →∞→𝔼_μ[f(G,ϕ)]. Here, the outer expectation is taken with respect to the distribution of G_n and the argument (G_n,ϕ_n) of f is restricted to the connected component of ϕ_n in G_n, which is an element in 𝒢_⋆. [Uniqueness of local weak limit] The space (𝒢_⋆,d_loc) is Polish (e.g., <cit.>), hence the (local) weak limit is unique (e.g., <cit.>). [Convergence of typical neighborhoods] To prove local weak convergence in the sense of Definition <ref> above it suffices to verify (<ref>) for all test functions f^H_s of the form f^H_s(G,ϕ):=1_{B_s^( G)(ϕ) ≅ H_⋆}, where s ≥ 1 and H_⋆∈𝒢_⋆. (e.g., <cit.>) Next we relate the above definition to the root-PageRank R_ϕ. By truncating the outer sum in (<ref>) at some level N (for reasons of continuity) and carefully controlling errors while letting N and n tend to infinity <cit.> proves that (<ref>) actually holds true for test functions of the desired form f_r(G,ϕ):=1_{R_ϕ(G)>r}. Here r>0 is any continuity point of the cumulative distribution function of the root-PageRank of the limiting graph: Let (G_n)_n ∈ be a sequence of deterministic or random graphs and let ϕ_n ∈ V_n be chosen uniformly at random. If (G_n)_n ∈ converges locally weakly to (G,ϕ) ∈𝒢_⋆, then 𝔼[R_ϕ(G)] ≤ 1 and R_ϕ_n^ (G_n) n →∞→ R_ϕ(G) in distribution, i.e.,ℙ(R_ϕ_n^ (G_n) >r) n →∞→ℙ(R_ϕ(G)>r) at any continuity point r>0 of the right-hand side. §.§ Power-law decay of degree distribution Let us recall the following analytic definitions: * A function ℒ: (0,∞ )→ (0,∞) is called slowly varying at infinity iff for any a>0, ℒ(ax)/ℒ(x)x →∞→ 1. * A function f (0,∞) → (0,∞) is called regularly varying at infinity with tail index α>0 and tail function ℒ iff ℒ is slowly varying at infinity and f(x)=ℒ(x) x^-α for all x ∈ (0,∞). * A random variable X is said to have a power-law distribution with exponent τ>1 and tail function ℒ iff the complementary distribution function F̅_X (0,∞)→ [0,1] given by F̅_X(x)=ℙ(X>x) is regularly varying at infinity with tail index τ-1 and tail function ℒ. * A random variable X satisfies power-law bounds with exponent τ>1 and tail function ℒ iff ℒ is slowly varying at infinity and there exist constants 0<a≤a̅ such that aℒ(k)k^-(τ-1)≤F̅_X(k) ≤a̅ℒ(k)k^-(τ-1). 
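Before turning to these applications, the following small numerical sketch (ours, not part of the proofs) illustrates the undirected PageRank equation of Lemma <ref> and the componentwise upper bound of Theorem <ref> on a randomly generated undirected graph without isolated vertices; the graph model and all parameters are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 30, 0.85

# Small Erdos-Renyi-type undirected graph, resampled until no vertex is isolated.
while True:
    A = np.triu(rng.random((n, n)) < 0.15, k=1).astype(float)
    A = A + A.T
    d = A.sum(axis=1)
    if d.min() >= 1:
        break

P = A / d[:, None]                                   # row-stochastic matrix p_ij = a_ij / d_i
I = np.eye(n)
R = (1 - c) * np.ones(n) @ np.linalg.inv(I - c * P)  # R = (1-c) 1 (I - cP)^{-1}
R_alt = (1 - c) * d * (np.linalg.inv(I - c * P) @ (1.0 / d))  # (1-c) d ⊙ (I - cP)^{-1} Q 1

print("undirected PageRank identity holds:", np.allclose(R, R_alt))
print("graph-normalized: sum R equals n:", np.isclose(R.sum(), n))
print("upper bound R_i <= d_i:", np.all(R <= d + 1e-9))
```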
§.§ Application to branching-process trees and the configuration model The unimodular branching-process tree with integrable root-degree distribution (p_k)_k ∈ is characterized by the property that every vertex apart from the root has an offspring distribution described by the respective size-biased distribution p_k^*=(k+1)p_k+1/𝔼[d_ϕ] (see <cit.> or <cit.> for a relation of that notion to unimodular groups). As the family of degrees of each of the children of the root is in particular i.i.d., the unimodular branching process tree clearly meets the assumptions of Corollary <ref>. Consequently, if the root-degree distribution d_ϕ follows a power law, then the limiting root-PageRank associated with any sequence of undirected random or deterministic graphs which converges locally weakly to that unimodular branching process tree satisfies power-law bounds with the same exponent and tail-function (see Theorem <ref> below). As a particular application, we consider the undirected configuration model. The undirected configuration model CM_n(d) is a model for a random graph of size n which has a prescribed degree vector d∈_0^n as parameter. Each vertex i∈[n] is assigned d_i half-edges. Now, a random graph is sampled by choosing any permutation φ{1,2,…, |d|}→{1,2,…, |d|} uniformly at random and afterwards connecting the jth half-edge to the φ(j)th half -edge, where j runs from 1 to |d|. [Degree regularity condition] Let D_n=d_ϕ_n denote the degree of a uniform vertex in [n]. We impose the following two assumptions on the degree sequence: * D_n n →∞⟶ D in distribution; and * 𝔼[D_n] n →∞⟶𝔼[D]. For the directed configuration model, the power-law hypothesis is proven to hold true <cit.>, and the following theorem states that this is also the case for the undirected configuration model CM_n(d) associated with the sequence (d_n)_n ∈: Consider the undirected configuration model G_n=CM_n(d^(G_n)) where the degree distribution satisfies Condition <ref>. Let c ∈ [0,1] be a constant. Then the PageRank vector R^(G_n) satisfies that, for all n,k, ℙ(R_ϕ_n^(G_n)>k) ≤ℙ(D_n> k), while further, for any β>4𝔼[D]/c(1-c), lim inf_k→∞lim inf_n →∞ℙ(R_ϕ_n^(G_n)>k)/ℙ(D_n>β k)≥ 1. In particular, if the limiting degree distribution at the root D has a power-law distribution with any exponent τ>1 and tail function ℒ, then there is some 0< a≤ 1 such that the limiting root-PageRank R_ϕ satisfies the power-law bounds aℒ(k)k^-(τ-1)≤ℙ(R_ϕ>k) ≤ℒ(k)k^-(τ-1) for every k ∈ (0,∞). The upper bound (<ref>) follows directly from Theorem <ref>. By <cit.>, the undirected configuration model CM_n(d_n) converges locally in probability, and hence locally weakly, to the unimodular branching process tree. By Corollary <ref> a), the limiting root-PageRank satisfies ℙ(R_ϕ>k) ≥ (1+o(1)) ℙ(D>β k) as k →∞. On the other hand, by Theorem <ref> and the Portmanteau lemma (e.g. <cit.>) applied to the respective open interval (x,∞), for every x ∈ (0,∞), lim inf_n →∞ℙ(R_ϕ_n^(G_n)>x)/ℙ(R_ϕ>x)≥ 1. Hence, combining (<ref>) and (<ref>) gives lim inf_k →∞lim inf_n →∞ℙ(R_ϕ_n^(G_n)>k)/ℙ(D_n>β k) =lim inf_k →∞(ℙ(R_ϕ>k)/ℙ(D>β k)lim inf_n →∞ℙ(R_ϕ_n^(G_n)>k)/ℙ(R_ϕ>k)ℙ(D>β k)/ℙ(D_n>β k))≥ 1. Here, convergence in distribution in combination with the fact that both D_n and D are integer-valued guarantees that at any fixed k, the fractionℙ(D>β k)/ℙ(D_n>β k) converges to 1 as n tends to infinity. This proves (<ref>). 
To prove the power-law bounds, first write ℙ(D>k)/ℙ(R_ϕ>k) =lim inf_n →∞ℙ(D> k)/ℙ(D_n>k)ℙ(D_n> k)/ℙ(R_ϕ_n^(G_n)>k)ℙ(R_ϕ_n^(G_n)>k)/ℙ(R_ϕ>k), and employ Theorem <ref> and (<ref>) to conclude the asymptotic upper bound ℙ(R_ϕ>k) ≤ℙ(D>k) for any k>0. Now assume that D has a power-law distribution for some τ>1 and some function ℒ that is slowly varying at infinity. Then by (<ref>) and (<ref>) for every s ∈ (0,1) there is some K_s such that s β^-(τ-1)ℒ(β k)k^-(τ-1)≤ℙ(R_ϕ>k) ≤ℒ(k)k^-(τ-1) for all k ≥ K_s. As ℒ is slowly varying at infinity, for any ε∈ (0,1) we find a K̂_ε∈ such that ℒ(β k) ≥ (1-ε) ℒ(k) for all k ≥K̂_ε. Hence, with a(ε,s):=s β^-(τ-1)min{ min{ℒ(β k) k ∈{0,…, max{K̂_ε,K_s }}, 1-ε }∈ (0,1) we obtain a(s,ε)ℒ(k)k^-(τ-1)≤ℙ(R_ϕ>k) ≤ℒ(k)k^-(τ-1), and any choice of s, ε∈ (0,1) gives the desired power-law bounds for R_ϕ. §.§ Application to the Pólya point tree and the undirected preferential attachment model Let us denote by PA^m,δ_n the version of the preferential attachment model with parameters m ∈ and δ>-1 without self-loops at time n, in which we start at time n=2 with two vertices with labels 1 and 2 and m edges between them and at any time step n → n+1 we subsequently add m edges without creating loops. This is done in m sub-steps where at any sub-step the attachment rule is updated according to the new degree distribution. The update rule reads (see <cit.>): for the (j+1)th edge of vertex v_n+1, attached at time-step n+1, ℙ(v_j+1,n+1^(m) v_i^(m)|PA^m,δ_n)=D_i(n,j)+δ/2m(n-1)+j+δ n, where D_i(n,j) is the degree of vertex n after the jth edge of vertex v_n+1 has been added. Note that in <cit.>, this model is denoted by PA^m,δ(d) where the d abbreviates version d within the class of considered preferential attachment models; also see the erratum to <cit.>. The preferential attachment model converges locally weakly to the Pólya point tree (<cit.>), a continuous-time branching process tree to which our machinery applies as well. We stick to the notions employed in <cit.>. The Pólya point tree is a multi-type branching process, where labels come from the type space [0,1] ×{y,o}. Here, the first component refers to the age of a vertex (or date of its birth), where smaller values mean that the vertex is older. The second component describes whether the vertex is younger (y) or older (o) with respect to its parent. This distinction is specific to the undirected model and corresponds to the fact that in the preferential attachment model, a random vertex has edges to m older neighbors and then starts gaining connections from younger vertices. As a consequence, the offspring distribution of any vertex has a deterministic component of a fixed number of older vertices and a random component describing the number of younger vertices. In the directed version of the Pólya point tree in <cit.>, the directness of edges makes labelling of vertices superfluous. Formally, the Pólya point tree is defined recursively as follows: * The root ϕ has age U_ϕ∼Unif([0,1]) and no label; * For the recursion step consider a vertex w (in the Ulam-Harris representation) with age A_w. Then the number m_- of older vertices (i.e., having label o) depends on the label of w in that m_-(w)= m-1 if w has label y, while m_-(w)= m if w is the root or has label o. The ages of the m_-(w) older vertices w1,…,wm_-(w) are distributed according to A_wj=U_wj^χ A_w<A_w-a.s., where (U_wj)_j=1^m_-(w) are i.i.d. Unif([0,1]) random variables independently of everything else and χ=m+δ/2m+δ<1. 
The younger children of ω are described by their ages (A_w(m_-(w)+j))_j ∈ℕ, which are the ordered points of a Cox process on [A_w,1] with intensity measure ρ described by the Lebesgue-density ρ(dx)=Γ_w/τ-1x^1/τ-1-1/A_w^1/τ-1dx. Here, τ=3+δ/m and Γ_w ∼Γ(m+δ+1,1) if w has label o; Γ(m+δ,1) else, where Γ(r,λ) denotes a Gamma-distributed random variable having density f(t)=1_[0,∞)(t)λ^rt^r-1e^-λ t/Γ(r). The Pólya point tree has degree with a power-law distribution (see <cit.>), in that there exists a constant c_m,δ such that ℙ(d_ϕ=k)=c_m,δk^-τ(1+O(k^-1)). In particular, with ã_m,δ:=c_m,δ/τ-1, this translates into ã_m,δk^-(τ-1)(1+O(k^-1)) ≤ℙ(d_ϕ>k) ≤ã_m,δk^-(τ-1)(1+O(k^-1)). Consider the undirected preferential attachment model G_n=PA^m,δ_n, with m ≥ 1 and δ>-m. Let further c ∈ (0,1). Then the PageRank vector R^(G_n) with damping factor c satisfies ℙ(R_ϕ_n^(G_n)>k) ≤ℙ(d_ϕ_n^(G_n)> k) for all n ∈ and all k ∈ (0,∞), while further, for any β > 2⌈ 2m+δ⌉/[c(1-c)], lim inf_k →∞lim inf_n →∞ℙ(R_ϕ_n^(G_n)>k)/ℙ(d_ϕ_n^(G_n)>β k)≥ 1. In particular, the limiting root-PageRank R_ϕ has power-law tails with the same exponent τ=3+δ/m as the degree of the root of the Pólya point tree. The general structure of the proof is the same as the one of the proof of Theorem <ref>, with the main difference being that verifying the assumptions of Corollary <ref> for the asymptotic lower bound requires a more extensive reasoning. So let us focus on this aspect at first. The preferential attachment model with parameters m,δ converges locally weakly to the Pólya point tree (see <cit.> and <cit.>) with the respective parameters. We want to verify the Assumption b) of Corollary <ref> with N_ϕ,k being the set of younger children of the root vertex, ignoring the deterministic number m of older children. It suffices to stochastically dominate the offspring distribution of the younger children by an i.i.d. family of -valued random variables. Let us first gather some important properties of the underlying Cox-process: Let k>m be an integer and t ∈ (0,1). Further let π{m+1,m+2,…,k}→{m+1,m+2,…,k} be a permutation chosen uniformly at random and independent of everything else. Consider the Cox-process η describing the birth-times of the younger children of the root with intensity given by (<ref>). * Conditionally on U_ϕ=t and d_ϕ=k, the family (A_π_j)_j=m+1^k of birth-times of the k-m younger children of the root is i.i.d. ∼ A^(t)_π_m+1 described by the Lebesgue-density f_t,τ(x)=1_[t,1](x)x^1/τ-1-1/(τ-1)(1-t^1/τ-1). * For every t ∈ (0,1) the random variable A^(t)_π_m+1 is stochastically bounded from below by the random variable A^(0)_π_m+1 described by the Lebesgue-density f_0,τ(x)=1_[0,1](x)x^1/τ-1-1/τ-1. Statement a) is a folklore property of Poisson- and Cox-processes, noting that the random scalar factor Γ_ϕ in the intensity measure ρ in (<ref>) does not exhibit any temporal dependence. In more detail, let ρ_y,t denote the intensity measure ρ of the process η in (<ref>) conditioned on Γ_ϕ=y and U_ϕ=t, which is an inhomogeneous Poisson process. By standard arguments (e.g. <cit.>), this Poisson process is distributed as a mixed binomial process on [t,1] with sampling distribution described by the density f_t,τ(x)=ρ_y,t(dx)/ρ_y,t([t,1]) and mixing distribution Poi(ρ_y,t([t,1])). Hence, we have that conditional on U_ϕ=t, d_ϕ=k and Γ_ϕ=y the family (A_π_j)_j=m+1^k is i.i.d. ∼ A^(t)_π_m+1 with density f_t,τ, which does not depend on y. 
In particular, for any measurable g^k-m→, 𝔼[g((A_π_j)_j=m+1^k ) | d_ϕ=k, U_ϕ, Γ_ϕ]= 𝔼[g((A_π_j)_j=m+1^k ) | d_ϕ=k, U_ϕ] a.s., because by the factorization Lemma (e.g. <cit.>) and the fact that f_t,τ does not depend on y the l.h.s. is already measurable w.r.t. σ({d_ϕ=k},U_ϕ) ⊆σ({d_ϕ=k},U_ϕ,Γ_ϕ). This implies that, for all Borel-measurable B_m+1, …, B_k, ℙ(⋂_j=m+1^k {A_π_j∈ B_j }| d_ϕ=k, U_ϕ) =ℙ(⋂_j=m+1^k {A_π_j∈ B_j }| d_ϕ=k, U_ϕ, Γ_ϕ) =∏_j=m+1^kℙ(A_π_j∈ B_j| d_ϕ=k, U_ϕ, Γ_ϕ) =∏_j=m+1^kℙ(A_π_j∈ B_j| d_ϕ=k, U_ϕ) a.s. In particular, the family (A_π_j)_j ∈{m+1,m+2,…,k} is independent conditional on d_ϕ=k and U_ϕ. To prove Statement b), we note that, for every t ∈ (0,1) and all x ∈, ℙ(A^(t)_π_m+1≥ x) =1_(-∞, t)(x)+1_[t,1](x)1-x^1/τ-1/1-t^1/τ-1 ≥1_(-∞, 0)(x)+1_[0,1](x)(1-x^1/τ-1)=ℙ(A^(0)_π_m+1≥ x). Hence, A^(0)_π_m+1≼ A^(t)_π_m+1. Conditional on d_ϕ=k and U_ϕ=t, the degrees (d_π_j)_j ∈{m+1,m+2,…,k} of the younger children of the root are i.i.d., where we have for every n ≥ m ℙ(d_π_m+1=n | d_ϕ=k, U_ϕ=t)= ∫_t^1ℙ(d_π_m+1=n | A_π_m+1=x)ℙ^A_π_m+1^(t)(dx), independently of k. Here, ℙ^A_π_m+1^(t) denotes the distribution of A_π_m+1 conditional on U_ϕ=t and d_ϕ=k. Moreover, conditional on d_ϕ=k and U_ϕ=t, the degrees (d_π_j)_j ∈{m+1,m+2,…,k} are stochastically bounded from above by the i.i.d.-family (d̃_j)_j ∈{m+1,m+2,…,k} with, for every n ≥ m, ℙ(d̃_m+1=n) = ∫_0^1ℙ(d_π_m+1=n | A_π_m+1=x)ℙ^A_π_m+1^(0)(dx) independently of k and t. In particular, for every n ≥ m, ℙ(d̃_m+1=n)=m+δ/(n+δ)(n+δ+1). We start by recalling two properties that directly follow from the definition of the Pólya point tree: Conditionally on d_ϕ=k and (A_π_j)_j=m+1^k, the random variables d_π_m+1,d_π_m+2, …, d_π_k describe the total number of points of independent Cox-processes and are thus independent random variables. Further conditioning on U_ϕ does not add any information. More precisely, for any measurable g^k-m→ the conditional expectation of g((d_π_j)_j=m+1^k) given d_ϕ=k, (A_π_j)_j=m+1^k and additionally U_ϕ is already measurable w.r.t. d_ϕ=k and (A_π_j)_j=m+1^k. By these two properties, we conclude that, for any integers n_m+1, …,n_k, ℙ(d_π_m+1=n_m+1, …, d_π_k=n_k| d_ϕ=k, U_ϕ=t) =∫_(0,1)^k-mℙ(d_π_m+1=n_m+1, …, d_π_k=n_k| (A_π_j)_j=m+1^k=(x_j)_j=m+1^k,d_ϕ=k, U_ϕ=t) ×ℙ(A_π_m+1∈ dx_m+1, …, A_π_k∈ dx_k| d_ϕ=k, U_ϕ=t) =∫_(0,1)^k-mℙ(d_π_m+1=n_m+1, …, d_π_k=n_k| (A_π_j)_j=m+1^k=(x_j)_j=m+1^k,d_ϕ=k) ×ℙ(A_π_m+1∈ dx_m+1, …, A_π_k∈ dx_k| d_ϕ=k, U_ϕ=t) =∫_(0,1)^k-m∏_j=m+1^kℙ(d_π_j=n_j| (A_π_j)_j=m+1^k=(x_j)_j=m+1^k,d_ϕ=k) ×ℙ(A_π_m+1∈ dx_m+1, …, A_π_k∈ dx_k| d_ϕ=k, U_ϕ=t). By construction of the Pólya point tree, the conditional distribution of each random variable d_π_j given (A_π_l)_l ∈{m+1,…,k} and d_ϕ=k is already measurable w.r.t. A_π_j. Hence, the last expression of (<ref>) equals ∫_(0,1)^k-m∏_j=m+1^kℙ(d_π_j=n_j| A_π_j=x_j)ℙ(A_π_m+1∈ dx_m+1, …, A_π_k∈ dx_k| d_ϕ=k, U_ϕ=t). Now employ Lemma <ref> to conclude that A_π_m+1, …, A_π_k are i.i.d. given d_ϕ=k, U_ϕ=t. By Fubini's Theorem, (<ref>) thus equals ∫_(0,1)ℙ(A_π_m+1∈ dx_m+1| d_ϕ=k, U_ϕ=t) …∫_(0,1)ℙ(A_π_k∈ dx_k| d_ϕ=k, U_ϕ=t) ×∏_j=m+1^kℙ(d_π_j=n_j| A_π_j=x_j) = ∏_j=m+1^k ∫_(0,1)ℙ(d_π_j=n_j| A_π_j=x_j)ℙ(A_π_j∈ dx_j| d_ϕ=k, U_ϕ=t). This proves the conditional independence with, for every n ≥ m, ℙ(d_π_m+1=n | d_ϕ=k, U_ϕ=t)=∫_t^1 ℙ(d_π_m+1=n | A_π_m+1=x)ℙ^A_π_m+1^(t)(dx). 
Next, for the stochastic bound (<ref>), we first note that by (<ref>), for any 0<x≤x̅<1, the intensity ρ([A_π_m+1,1]) conditionally on A_π_m+1=x̅ is stochastically bounded from above by the intensity ρ([A_π_m+1,1]) conditionally on A_π_m+1=x. Hence, also d_π_m+1 conditionally on A_π_m+1=x̅ is stochastically bounded from above by d_π_m+1 conditionally on A_π_m+1=x. This means that, for every fixed n ≥ m, the function g_n (0,1) → [0,1]; x ↦ℙ(d_π_m+1 < n | A_π_m+1=x) is non-decreasing. Hence, from A_π_m+1^(0)≼ A_π_m+1^(t) it follows 𝔼[g_n(A_π_m+1^(0))] ≤𝔼[g_n(A_π_m+1^(t))] for every t ∈ (0,1) (e.g., <cit.> or <cit.>). This implies that ℙ(d̃_m+1<n) ≤ℙ(d_π_m+1 < n | d_ϕ=k, U_ϕ=t) for all possible choices of k,n and t and thus proves the proposed stochastic bound. To calculate the probability mass function of d̃_m+1, we insert the expression ℙ(d_π_m+1=n | A_π_m+1=x)= Γ(n+δ)/(n-m)!Γ(m+δ)(1-x^1/τ-1)^n-m x^m+δ/τ-1 as given in <cit.> and the density f_0,τ(x)=x^2-τ/τ-1 into (<ref>), which gives ℙ(d̃_m+1=n)=1/τ-1Γ(n+δ)/(n-m)!Γ(m+δ)∫_0^1 (1-x^1/τ-1)^n-m x^m+δ+2-τ/τ-1dx. Substituting u=x^1/τ-1, i.e., dx=(τ-1)u^τ-2 in the integral in (<ref>) we arrive at ∫_0^1 (1-x^1/τ-1)^n-m x^(m+δ+2-τ)/(τ-1)dx =(τ-1)∫_0^1u^m+δ(1-u)^n-mdu =(τ-1)Γ(m+δ+1)Γ(n-m+1)/Γ(n+δ+2), which leads to ℙ(d̃_m+1=n) =Γ(n+δ)/(n-m)!Γ(m+δ)Γ(m+δ+1)Γ(n-m+1)/Γ(n+δ+2) =Γ(n+δ)Γ(m+δ+1)/Γ(m+δ)Γ(n+δ+2)=m+δ/(n+δ)(n+δ+1). Continuation of the proof of Theorem <ref>. As its distribution does not explicitly depend on the “internal" random variable U_ϕ, the family (d̃_j)_j ∈{m+1,m+2, …,k} is a suitable stochastic upper bound in the sense of Corollary <ref> b). In particular, we can condition on U_ϕ in the second inequality of (<ref>) and similarly to (<ref>) employ total expectation. For the scaling factor β we need to calculate min{t ∈|ℙ(d̃_1 ≥ t) ≤ 1/2}. By using a telescoping sum identity, we obtain ℙ(d̃_m+1≥ t) =(m+δ)∑_n=t^∞1/(n+δ)(n+δ+1)=(m+δ)∑_n=t^∞(1/n+δ-1/(n+δ)+1) =m+δ/t+δ, hence ℙ(d̃_m+1≥ t) ≤ 1/2 is equivalent to t ≥⌈ 2m+δ⌉. From Corollary <ref> b) we thus obtain that for every β>2⌈ 2m+δ⌉ ℙ(R_ϕ>k) ≥ (1+o(1)) ℙ(d_ϕ>β k) as k →∞. The remainder of the proof of Theorem <ref> is a minor modification of that of Theorem <ref>. § A COUNTEREXAMPLE ON THE LOWER BOUND Condition (<ref>) in Proposition <ref> can be violated once the dependencies between the degrees of the neighbors of a vertex become too large, as we show in this section. We show that, for any prescribed power-law distribution p=(p_k)_k ∈, there exists a sequence (G̃_n^(p))_n ≥ n_p, n_p ∈, of connected graphs whose degree distribution converges to p, whereas the corresponding PageRank at a uniformly chosen root converges to one in probability. This implies that there is no general non-trivial asymptotic lower bound on the PageRank vector. Our construction follows a two-step procedure: First we construct disconnected graphs G_n described as the disjoint union of o(n) many degree-regular graphs of appropriate size depending on (p_k)_k ∈. The corresponding PageRank-matrix P^( G_n) is doubly-stochastic, and therefore the PageRank vector of such graph has all elements equal to 1. This step is explained in Section <ref>. Afterwards we connect the formerly disjoint components by adding o(n) edges. After showing that our specific choice of G_n converges locally weakly, we then employ Theorem <ref> and uniqueness of the local weak limit to conclude that the PageRank associated with the so-obtained connected graphs converges in probability to 1. This step is explained in Section <ref>. 
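The first step of this construction rests on the elementary fact that a doubly-stochastic PageRank matrix forces every entry of the PageRank vector to equal 1. A minimal numerical sketch of this fact is given below; it builds the circulant regular graph G_{k,n} defined in the next subsection, solves the graph-normalized equation R = cRP + (1-c)1_n, and checks that the solution is the all-ones vector. The parameters k, n and the damping factor c are arbitrary illustrative choices, and the code is only a sanity check, not part of the construction itself.

```python
import numpy as np

def circulant_regular_graph(k, n):
    """Adjacency matrix of the circulant k-regular graph on Z_n:
    vertex i is joined to i +/- 1, ..., i +/- k/2 (k even, k < n)."""
    assert k % 2 == 0 and 2 <= k < n
    A = np.zeros((n, n), dtype=float)
    for i in range(n):
        for s in range(1, k // 2 + 1):
            A[i, (i + s) % n] = 1.0
            A[i, (i - s) % n] = 1.0
    return A

def pagerank(A, c=0.85):
    """Graph-normalized PageRank: solve R (I - c P) = (1 - c) 1_n
    with P_ij = a_ij / d_i, written as a linear solve on the transpose."""
    d = A.sum(axis=1)
    P = A / d[:, None]
    n = A.shape[0]
    return np.linalg.solve((np.eye(n) - c * P).T, (1 - c) * np.ones(n))

if __name__ == "__main__":
    A = circulant_regular_graph(k=4, n=10)   # the graph G_{4,10}
    R = pagerank(A, c=0.85)
    # For a regular graph P is doubly stochastic, so R should be all ones.
    print(np.allclose(R, np.ones_like(R)))   # expected: True
```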
§.§ Disjoint unions of regular graphs Let us start by formalizing the above ideas: Let 2 ≤ k < n be integers such that k is even. Then let G_k,n denote the k-regular graph with n vertices which is constructed as follows (see the graphs G_2,10, G_4,8 and G_6,7 in Figure <ref> for an illustration). The set of vertices is identified with the one-dimensional torus _n =/ n. Every vertex i ∈_n is connected by an edge to the vertices i ± 1, i ±2, …, i ± k/2. The graph is circulant in that cyclic permutations of its vertices are graph-automorphisms. Let n ≥ 2. Let p=(p_k)_k ∈ be any probability mass function such that p_k=0 for all odd k, and set N_k,n:=N^(p)_k,n:=⌊ np_k⌋. Let (M_n)_n ∈ be sequence in that diverges to infinity slowly enough, so that it in particular satisfies that N_k,n≥ k+1 for all k ∈ [M_n]. Let n_p :=min{ n ∈ M_n ≥ 2}. For every n ≥ n_p let G_n^(p):=∐_k ∈ [M_n] G_k,N_k,n denote the graph constructed as the non-empty disjoint union of the circulant graphs G_2,N_2,n, …, G_M_n,N_M_n,n, where we note that # G_n^(p)=n(1+o(1)) as n →∞. Further, we obtain G̃^(p)_n by connecting two formerly disjoint G_k,N_k,n whose degrees differ minimally, to turn G_n^(p) into a connected graph G̃^(p)_n. As each component of G_n^(p) is invariant under all cyclic permutations, the concrete choice of these M_n-1 edges is irrelevant up to isomorphisms. The sequence of disconnected graphs constructed in Definition <ref> above has the desired properties: [Degree and PageRank structure union of disconnected graphs] Let p=(p_k)_k ∈ be any probability mass function such that p_k=0 for all odd k and X^(p) be distributed according to p. Then for the graphs (G_n^(p))_n ≥ n_p described in Definition <ref>, * lim_n →∞ℙ(d_ϕ_n^ (G_n^(p))=k)=p_k. * R_v^(G_n^(p))=1 for every n ≥ n_p and v ∈ V(G_n^(p)). Statement (a) follows directly from the construction of the graphs G^(p)_n. Statement (b) is a consequence of the fact that the corresponding PageRank-matrix P^( G_n^(p)) is doubly-stochastic. We may replace the circulant regular graphs G_k,n in Definition <ref> by any degree-regular graphs of the prescribed degree and size provided that the sequence (G_n^(p))_n ∈ converges in the local weak sense. The concrete choice from Definition <ref> above provides a specific illustrative example for which the local weak convergence is easy to prove (see Proposition <ref> below), at the price that only regular graphs of even degree occur. §.§ Connecting the disjoint components We next study the local weak limits of G_n^(p) and G̃^(p)_n: [Local limit of disjoint unions of circulant graphs, and their connected versions] Let p=(p_k)_k ∈ be any degree distribution such that p_k=0 for all odd k. Then, (G_n^(p))_n ∈ described in Definition <ref> converges locally weakly to a rooted graph (G_∞^(p),ϕ). In particular, also the sequence (G̃^(p)_n)_n ∈ of connected graphs converges locally weakly to the same limit (G_∞^(p),∞). By Remark <ref> above, for fixed k ∈ 2, the sequence (G_k,n)_n ≥ n_p, M_n ≥ k converges locally weakly to the graph (G_k,∞,ϕ) on with ϕ≡ 0, where two vertices in G_k,∞ are connected by an edge iff their distance is at most k/2 (see Figure <ref> for the specific graph G_4,∞). Hence, the sequence G_n^(p) converges locally weakly to the random rooted graph (G_∞^(p),0) which equals (G_k,∞,0) with probability lim_n →∞N_k,n/n=p_k. As each component of G_n^(p) is invariant under all cyclic permutations, the concrete choice of these M_n-1 edges is irrelevant up to isomorphisms. 
The operation of adding edges affects the degree of o(n) vertices and thus, by Remark <ref>, does not change the local weak limit. From Propositions <ref> and <ref> we obtain the following theorem that produces a counter example to the power-law hypothesis: Let p=(p_k)_k ∈ be any probability mass function such that p_k=0 for odd k. Then for the graphs (G̃_n^(p))_n ≥ n_p, * lim_n →∞ℙ(d_ϕ_n=k)=p_k. In particular, if p is a power-law, then d_ϕ_n converges in distribution to a power-law with the same tail-index. * R^(G̃_n^(p))_ϕ_nn →∞⟶ 1 in probability. As M_n=o(n), Statement (a) follows readily from Statement a) of Proposition <ref>. By Statement (b) of Proposition <ref>, we know that R^(G_n^(p)))_ϕ_nn →∞⟶ 1 in probability and in distribution. By Proposition <ref>, the sequence (G̃_n^(p)))_n ∈ of connected graphs converges locally weakly to the same random graph (G_∞,ϕ) as the sequence (G_n^(p)))_n ∈. Hence, employing Theorem <ref>, we conclude that the sequences (R^(G̃_n^(p)))_ϕ_n) and (R^(G_n^(p)))_ϕ_n) have the same limit R^ (G_∞)_ϕ in distribution. As the distributional limit of real-valued random variables is unique (e.g., <cit.>) we conclude that R^(G̃_n^(p)))_ϕ_nn →∞→ 1 in distribution. In particular, convergence in distribution to a constant limit implies convergence in probability (e.g., <cit.>), which finishes the proof. § DISCUSSION AND OPEN PROBLEMS In this section, we provide a discussion of our results and state some open problems. We start in Section <ref> by discussing extensions of our work to directed graphs. In Section <ref>, we provide an intuitive explanation why the power-law hypothesis holds true for undirected preferential attachment models, while it fails for their directed versions. We close in Section <ref> by giving conclusions and open problems. §.§ Extensions to directed setting In this section, we investigate to what extent our results extend to the directed setting. We discuss both a setting in which the PageRank can be upper bounded by the in-degree, as well as an extension of our counter example. We start by recalling how the PageRank vector is defined in the directed setting. For a directed graph G_n=(V_n, E⃗_⃗n⃗) with #V_n=n, in-degree vector d^-=(d_i^-)_i ∈ V_n, and out-degree vector d^+=(d_i^+)_i ∈ V_n, the directed analogue of (<ref>) is defined as p^(G_n)_ij:=a_ij^(G_n)/d_i^+, and the graph-normalized PageRank equation reads R^(G_n)=c R^(G_n)P^(G_n)+(1-c)1_n, R^(G_n)∈ [0,n]^n. §.§.§ Upper bound in directed setting with bounded ratio of in- and out-degrees The simple proof of Theorem <ref> is based on employing the symmetry of the adjacency matrix, which is a stronger assumption than requiring the in- and out-degree vector to coincide. However, to extend the scope of Theorem <ref> similar to (<ref>) and omitting the G_n-dependency in notation, we can write R_i/d_i^- =c∑_j ∈ [n]a_ji/d_i^-d_j^+R_j+1-c/d_i^-=c∑_j ∈ [n]d_j^-/d_j^+a_ji/d_i^-R_j/d_j^-+1-c/d_i^- =c∑_j ∈ [n]d_j^-/d_j^+p^rev_jiR_j^-/d_j+1-c/d_i^-, where p^rev_ij:=a_ji/d_i^- defines a stochastic matrix P^rev. Now assume that there is exists a K < ∞ such that max_i∈ [n]d_i^-/d_i^+≤ K. Then defining v^dir by v_i^dir:=R_i/d_i^- for i∈[n], we obtain from (<ref>) v≤ c KP^revv+(1-c)Q1_n. In case that ‖ cKP^rev‖ <1, which is certainly satisfied when c<1/K, we can invert the operator I_n-cKP, to obtain (1-c)(I_n-cP^rev)^-1Q1_n ≤v≤ (1-c)(I_n-cKP^rev)^-1Q1_n, which is a generalization of (<ref>). 
Now, for the upper bound in (<ref>), we proceed as in the remainder of the proof of Theorem <ref> to obtain the following result: Let G_n=(V_n,E⃗_n) be any any directed graph with # V_n=n such that m_n:=min_i ∈ V_nd_i^-≥ 1 and K_n:=max_i ∈ V_nd_i^-/d_i^+ < m_n/c. Then R_i^(G_n)≤K_n/m_nd_i^- for all i ∈ V_n. A special case of Theorem <ref> arises for directed Eulerian graphs, in which d_i^+=d_i^-≥ 1 for all i∈ V_n, and for which K_n=m_n=1. On the other hand, the directed version of the preferential attachment model clearly fails the assumption of a bounded ratio of out- and in-degrees, which aligns with the known result that the limiting root-PageRank for that model has heavier tails than the in-degree of the root (<cit.>). [Interpretation of P^rev] In view of the random surfer interpretation (see <cit.>) of the PageRank vector as the stationary distribution of a random walker who at each step with probability c chooses one of the outgoing edges uniformly at random and with probability 1-c jumps to a vertex chosen uniformly at random from the entire vertex set V_n, the matrix P^rev describes the reversed random walk, in which the random walker follows edges in the opposite direction. Such random walk, too, was employed for characterizing other vertex centrality measures, e.g. CheiRank centrality <cit.>. §.§.§ Lower bound in directed setting <cit.> relates local weak convergence of graph sequences to convergence in distribution of the associated root-PageRank. This theorem covers the case of directed graphs as well, with a specific form of local convergence (where we consider the in-components and keep the out-degrees of vertices as vertex marks). Taking that into account, an inspection of the proof of Proposition <ref> immediately leads to the following generalization, where we now introduce =#{j | j →ϕ and d^+_j ≥α}. [Lower bound on PageRank for directed graphs] Let G=(V,E⃗,ϕ) be an infinite rooted directed random graph with in-degree vector d^- and out-degree vector d^+ which arises as the local weak limit of a sequence (G_n)_n ∈ of directed graphs. Let further R_ϕ denote the limiting (in distribution) root-PageRank. Assume that there are α>0 and ε>0 such that, as k→∞, ℙ(d^-_ϕ >k, ≥ (1-ε)d^-_ϕ)=o(ℙ(d^-_ϕ >k)). Then, ℙ(R_ϕ>k) ≥ (1+o(1)) ℙ(d^-_ϕ>α/ε c(1-c)k) as k →∞. Also Corollary <ref> can be obviously generalized to the directed setting, but we refrain from doing so. As a particular application of Proposition <ref>, we directly obtain the bound ℙ(R_ϕ>k) ≥ (1+o(1))ℙ(d_ϕ^->β k) as k →∞ for the limiting root-PageRank of the directed version of the preferential attachment model, for any β>1/[c(1-c)]. To see this, note that in the directed version of the Pólya point tree (<cit.>), each vertex has out-degree 1, so (<ref>) is fulfilled for any choices of α>1 and 0<ε<1. However, this lower bound is not sharp, as the tails of the limiting root-PageRank are actually heavier than those of the in-degree (<cit.>) §.§ Intuition for the difference in directed and undirected preferential attachment models It is surprising that the tail of the PageRank is drastically different in the directed and undirected preferential attachment models. Indeed, in the directed model, PageRank has a heavier tail than the degrees <cit.>. Furthermore, the authors in <cit.> prove that this result persists in multi-type preferential attachment networks and even in multi-type uniform attachment networks. 
In the latter case, surprisingly, the degree-distribution of the vertices of each type is geometric, while PageRank, for all types, has a power law distribution with the same exponent for each type of vertices <cit.>. At the same time, our Theorem <ref> states that in any undirected graph, PageRank cannot have a heavier tail than the degree. An intuitive explanation of this drastic difference between directed and undirected graphs stems from the fact that PageRank of a vertex is proportional to the sum of the probabilities of all paths leading to this vertex <cit.>; consequently, the limiting distribution of PageRank is defined by the sum of the probabilities of all paths leading to the root vertex in the local weak limit. Such path-counting arguments have been successfully applied also, e.g., to the personalized PageRank with restarts on undirected graphs <cit.>. In the directed preferential attachment model, the local weak limit is a directed tree, and the sum of the probabilities of all (directed) paths to the root equals the discounted tree size. Specifically, the size of the generation at distance l from the root, is discounted by the factor (c/m)^l with m being the constant and deterministic out-degree. Furthermore, this tree is a realization of a continuous-time branching process, stopped at a random time (more precisely, this random time has an exponential distribution with parameter equal to the growth rate of the process, i.e., the so-called Malthusian parameter). Up to that random time, the degree of the root grows exponentially in time, but the discounted tree size grows exponentially as well, at a faster rate. These growth rates enter the power-law exponent making the tail of PageRank heavier than that of the degree. In contrast, in the undirected preferential attachment model, a step on the path through vertex v is discounted by the factor c/d_v. Our analysis shows that this difference is crucial: it makes PageRank of the root grow at most as fast as its degree. Note also that in the undirected model, the sum of the probabilities of all paths to the root is not the same as the discounted tree size because edges can be traversed both ways. This makes the path counting in the undirected case more difficult, but this is not the feature that affects the power law exponent of PageRank because each extra step is penalized by the factor c. Altogether, we believe that the differences between the directed and undirected models are educational, and their better understanding will lead to new results in the future. §.§ Conclusion and outlook The upper bound for PageRank stated in Theorem <ref> holds true for every finite undirected graph. It also extends to the case of directed graphs with a bounded ratio of degrees. On the contrary, the counter-example presented in Theorem <ref> shows that a similar lower bound fails when the degrees of vertices are equal to those of their neighbors (i.e., maximal degree-degree correlations). Proposition <ref> provides a sufficient assumption for an asymptotic lower bound, which is particularly easy to check for sequences of graphs which converge to (branching-process) trees in the local weak sense. Combining Theorem <ref> and Proposition <ref>, we obtain a statement on the asymptotic behavior of PageRank which holds under assumptions that are reasonably simple to check in applications, which is one of the strengths of the result. 
Compared to the related work in e.g., <cit.>, the proofs in the present paper are not based on studying a stochastic fixed-point equation on a suitable (marked) branching-process tree, but instead rely on applying the recent result of <cit.> on convergence in distribution for the root-PageRank associated with locally weakly converging sequences of graphs. This simplification, however, comes at the expense of a somewhat weaker result, in that we cannot identify the asymptotic constant a such that ℙ(R_ϕ>k)=aℙ(d_ϕ>k)(1+o(1)) as k→∞. It would be of interest to identify such a constant in general like it is the case, e.g., for the directed configuration model (<cit.>). It remains an open question whether the sufficient Condition (<ref>) in Proposition <ref> for the respective asymptotic lower bound actually is an if and only if condition. Moreover, we believe Condition (<ref>) to hold much more generally than for the tree-scenarios considered in the applications, with the counterexample presented in Theorem <ref> being a rather artificial exception. It would hence be of interest to further examine the scope of our main result. Acknowledgements. The work of RvdH and NL is supported in part by the Netherlands Organisation for Scientific Research (NWO) through Gravitation-grant NETWORKS-024.002.003.
http://arxiv.org/abs/2407.12937v1
20240717180853
Multi-Band Wi-Fi Neural Dynamic Fusion
[ "Sorachi Kato", "Pu Perry Wang", "Toshiaki Koike-Akino", "Takuya Fujihashi", "Hassan Mansour", "Petros Boufounos" ]
eess.SP
[ "eess.SP" ]
Multi-Band Wi-Fi Neural Dynamic Fusion Sorachi Kato, Pu (Perry) Wang, Toshiaki Koike-Akino, Takuya Fujihashi, Hassan Mansour, Petros Boufounos Part of this paper was presented in ICASSP 2024 <cit.>. The work of S. Kato was done during his visit and internship at MERL. He was also supported by Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant 23KJ1499. PW, TK, HM, and PB are with Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA 02139, USA. SK and TF are with the Graduate School of Information Science and Technology, Osaka University, Suita, Osaka, Japan. *Corresponding author: pwang@merl.com July 22, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Wi-Fi channel measurements across different bands, e.g., sub-7-GHz and 60-GHz bands, are asynchronous due to the uncoordinated nature of distinct standards protocols, e.g., 802.11ac/ax/be and 802.11ad/ay. Multi-band Wi-Fi fusion has been considered before on a frame-to-frame basis for simple classification tasks, which does not require fine-time-scale alignment. In contrast, this paper considers asynchronous sequence-to-sequence fusion between sub-7-GHz channel state information (CSI) and 60-GHz beam signal-to-noise-ratio (SNR)s for more challenging tasks such as continuous coordinate estimation. To handle the timing disparity between asynchronous multi-band Wi-Fi channel measurements, this paper proposes a multi-band neural dynamic fusion (NDF) framework. This framework uses separate encoders to embed the multi-band Wi-Fi measurement sequences to separate initial latent conditions. Using a continuous-time ordinary differential equation (ODE) modeling, these initial latent conditions are propagated to respective latent states of the multi-band channel measurements at the same time instances for a latent alignment and a post-ODE fusion, and at their original time instances for measurement reconstruction. We derive a customized loss function based on the variational evidence lower bound (ELBO) that balances between the multi-band measurement reconstruction and continuous coordinate estimation. We evaluate the NDF framework using an in-house multi-band Wi-Fi testbed and demonstrate substantial performance improvements over a comprehensive list of single-band and multi-band baseline methods. WLAN sensing, 802.11bf, Wi-Fi sensing, ISAC, localization, multi-band fusion, and dynamic learning. § INTRODUCTION Wi-Fi sensing, e.g., device localization and device-free human sensing, has received a great deal of attention in the past decade from both academia and industry. This trend has been manifested by the establishment of 802.11bf WLAN Sensing task group in 2020 to go beyond data transmission and meet industry demands for wireless sensing <cit.>. Existing Wi-Fi sensing mainly relies on coarse-grained received signal strength indicator (RSSI) and fine-grained channel state information (CSI) at sub-7-GHz bands <cit.>. 
At a high frame rate, CSI reflects intrinsic channel statistics in the form of channel frequency responses (CFR) over subcarrier frequencies (delay) and multiple transmitter-receiver antenna pairs (angle). At the same time, it may experience channel instability due to even small-scale environment changes. On the other hand, mid-grained mmWave beam training measurements at 60 GHz, e.g., beam SNR, have shown better channel stability over time <cit.>. These beam SNR measurements originate from sector-level directional beam training, a mandatory step for mmWave Wi-Fi to compensate for large path loss and establish the link between access points (APs) and users. However, they suffer from low frame rates and irregular sample intervals due to the beam training overhead and follow-up association steps. Fusion-based approaches have been considered in the literature for robustness and higher accuracy. Heterogeneous sensor fusion was studied between Wi-Fi and other modalities, e.g., Wi-Fi and vision <cit.>, Wi-Fi and ultra-wideband (UWB) <cit.>. Within Wi-Fi channel measurements, CSI and RSSI can be simply concatenated for joint feature extraction <cit.>. It is also possible to fuse the phase and amplitude of the fine-grained CSI for localization <cit.>. For multi-band Wi-Fi fusion, our previous work in <cit.> appears to be the only effort considering the fusion between CSI at sub-7-GHz and beam SNR at 60 GHz. However, <cit.> only considered simple classification tasks, e.g. pose classification (over 8 stationary poses), seat occupancy detection (8 stationary patterns), and fixed-grid localization. Despite being sampled at different time instances, both channel measurements can be simply combined on a frame-to-frame basis as these asynchronous samples correspond to the same stationary label (e.g., pose, occupancy, location) and their respective sampling time becomes irrelevant for the fusion task; see the top plot of Fig. <ref> and notice the asynchronous time instances t^c_n and t^b_n, n=1, 2, …. For more challenging tasks of continuous-time object trajectory estimation using asynchronous multi-band Wi-Fi measurements, several challenges need to be addressed. First, asynchronous channel measurements at different bands need to be aligned to estimate object trajectory. As illustrated in Fig. <ref> (the bottom plot), there exists time disparity between the CSI measurements at t^c_n and beam SNR measurements at t^b_n. In addition, there may exist time disparity between the input measurements at either t^c_n or t^b_n, and the desired trajectory estimates at t^p_n. Second, mmWave beam SNRs are sampled at a much lower frame rate than the CSI measurements. In an ideal scenario, beam SNRs can be obtained at a frame rate of 10 Hz for a typical beacon interval of 100 ms. However, multiple users need to contend the channel time for (uplink) beam training during each beacon interval, resulting in a lower frame rate. Third, the contention-based channel access further results in irregularly sampled beam SNR measurements at AP for a given user. To address the aforementioned challenges, we propose a multi-band Wi-Fi neural dynamic fusion (NDF) framework. This framework evolves from the stationary frame-to-frame basis in <cit.> (illustrated in the upper plot of Fig. <ref>) to a dynamic asynchronous sequence-to-sequence basis (the bottom plot of Fig. <ref>), thus supporting more challenging downstream tasks, e.g., regression in a continuous space and continuous-time object trajectory estimation. 
The proposed multi-band NDF substantially extends our previous work on a beam SNR-only framework of <cit.> and <cit.> to a neural network architecture comprising multiple encoders, latent dynamic learning modules, a post-ODE fusion module, and multiple decoders, as depicted in Fig. <ref>. Our main contributions are summarized below: * To the best of our knowledge, this is the first effort to address multi-band asynchronous fusion between sub-7-GHz CSI and 60-GHz beam SNR for trajectory estimation of moving objects. * We present a multi-encoder, multi-decoder NDF network in Fig. <ref>. It utilizes the two encoders acting like an initial latent condition estimator for the two distinct input sequences, employs an ordinary differential equation (ODE) modeling <cit.> for latent dynamic learning and latent state alignment, and fuses these aligned latent states via the post-ODE fusion module. * We consider multiple fusion schemes such as a multilayer perceptron (MLP) fusion, a pairwise interaction fusion, and a weighted importance fusion for the post-ODE fusion module. * We derive a loss function building upon the variational evidence lower bound (ELBO) between prior and approximate posterior distributions of the initial latent conditions as well as the likelihood of multiple decoder outputs. This ELBO-based loss function incorporates both unsupervised multi-band reconstruction loss and supervised coordinate estimation loss. * We build an automated data collection platform utilizing commercial-of-the-shelf 802.11ac/ad-compliant Wi-Fi routers and a TurtleBot as a mobile user. This platform continuously gathers CSI at 5 GHz and beam SNR at 60 GHz from the TurtleBot, while simultaneously recording its ground truth positions. * We conduct a comprehensive ablation study on trajectory estimation performance, generalization capability, and interpretation using real-world experimental data. The remainder of this paper is organized as follows. Section <ref> introduces the problem formulation, followed by a brief review of existing multi-band Wi-Fi fusion solutions. Section <ref> details the proposed multi-band NDF framework, with subsections dedicated to each module and the derivation of the loss function. Section <ref> describes our in-house multi-band Wi-Fi data collection testbed and performance evaluation, followed by the conclusion in Section <ref>. § PROBLEM FORMULATION AND EXISTING SOLUTIONS §.§ Problem Formulation We formulate the trajectory estimation as a continuous regression problem with asynchronous CSI and beam SNR sequences. As illustrated in Fig. <ref>, at each time instance t_n^b, we collect a set of M_b beam SNR values _n = [b_n,1, b_n,2, ⋯, b_n,M_b]^⊤∈ℝ^M_b × 1, each corresponding to one beam training pattern. At time instance t_n^c, we collect a CSI measurement _n ∈ℂ^N_ Tx× N_ Rx× N_ s with the (i, j, k)-th element C_n(i,j,k) given by the CFR from transmitting antenna i, receiving antenna j and subcarrier k. For a time window size or sequence length Δ T_w, we group N_b beam SNRs and N_c CSI measurements as two input sequences. The problem of interest is to estimate the object trajectory _n at N_p desired time instances t^p_n within the time window Δ T_w, {_n, t^b_n}^N_b_n=0, {_n, t^c_n}^N_c_n=0→{_n, t^p_n}^N_p_n=0, where _n = [x_n, y_n]^⊤ consists of two-dimensional coordinates at t^p_n. We follow standard practices to calibrate the raw CSI measurements _n due to the lack of synchronization between the Wi-Fi transmitter and receiver <cit.>. 
Specifically, we use SpotFi <cit.> to remove linear phase offsets caused by sampling time offset (STO) and apply an antenna-wise conjugate multiplication <cit.> to minimize packet-to-packet phase fluctuation. Once the CSI measurements are calibrated, we employ a pretrained convolutional autoencoder (CAE) to compress each calibrated CSI measurement into a CSI embedding vector _n ∈ℝ^M_c × 1, where M_c is the dimension of the CSI embedding. More details about the CSI calibration and embedding can be found in Appendix <ref>. As illustrated in Fig. <ref>, the equivalent input sequences become _n and _n, and the problem of interest reduces to {_n, t^b_n}^N_b_n=0, {_n, t^c_n}^N_c_n=0→{_n, t^p_n}^N_p_n=0. §.§ Existing Solutions §.§.§ Frame-to-Frame Fusion Given the time disparity between t^c_n and t^b_n, a simple way to combine the two input sequences is to align the CSI and beam SNR sequences at the input level. This can be accomplished using either linear or nearest-neighbor interpolation. For the linear interpolation (LinearInt) scheme, for a given output time instance t^p_n, we first identify the intervals i_c and i_b in the CSI and beam SNR sequences, respectively, such that t^c_i_c≤ t^p_n ≤ t^c_i_c+1 and t^b_i_b≤ t^p_n ≤ t^b_i_b+1 and then interpolate the input sequence at t^p_n using the two input measurements at both ends of the identified interval ^Lin_n = _i_b + t^p_n - t^b_i_b/t^b_i_b+1-t^b_i_b (_i_b+1 - _i_b), ^Lin_n = _i_c + t^p_n - t^c_i_c/t^c_i_c+1-t^c_i_c (_i_c+1 - _i_c), where n=0, ⋯, N_p. On the other hand, the nearest-neighbor interpolation (NearestInt) finds the input element from the two input sequences at the time instance that is closest to the desired output time instance t^p_n ^Nea_n = _i_b, if t^p_n - t^b_i_b≤ t^b_i_b+1-t^p_n _i_b+1, otherwise, ^Nea_n = _i_c, if t^p_n - t^c_i_c≤ t^c_i_c+1-t^p_n _i_c+1, otherwise. Given the aligned input sequences {^Lin/Nea_n, t^p_n}^N_p_n=0, {^Lin/Nea_n, t^p_n}^N_p_n=0, we can fuse them in a frame-to-frame fashion and regress the fused sequence to the trajectory coordinate _n = ℳ(ℱ(^Lin/Nea_n, ^Lin/Nea_n)), where ℱ represents a fusion scheme, e.g., concatenation or other considered options in Section <ref>, and ℳ denotes a multi-layer perceptron (MLP) network. One may also try other interpolation schemes such as the spline and piecewise polynomial interpolation. §.§.§ Sequence-to-Sequence Fusion As opposed to the frame-to-frame fusion, one can use a recurrent neural network (RNN) to capture recurrently updated hidden features from the entire input sequence <cit.>. By fusing the hidden states corresponding to the CSI and beam SNR sequences, one can achieve what we refer to as the sequence-to-sequence fusion. For an input sequence {_n }_n=0^N (_n can be either the CSI _n or beam SNR _n sequence), a standard RNN unit updates its hidden state _n-1 at time t_n-1 to _n at time t_n with the input measurement _n at time instance t_n as _n = ℛ(_n, _n; ), _n = _n-1, where ℛ is an Long Short-Term Memory (LSTM) <cit.> or Gated Recurrent Unit (GRU) <cit.> unit, and _n is an auxiliary vector. In the standard RNN, it assumes that the sampling intervals Δ t_n = t_n - t_n-1 are uniform, i.e., Δ t_1 = ⋯ = Δ t_N. Consequently, the auxiliary vector is simply given by the previous hidden state _n = _n-1. Refer to Appendix <ref> for details on the standard LSTM unit update. To address irregularly sampled sequences where Δ t_n ≠Δ t_n+1, we consider the following sequence-to-sequence baseline methods. 
One such method is RNN-Decay <cit.>, which decays the previous hidden state exponentially with respect to the time interval before being fed into the RNN unit, _n = ℛ(_n, _n; ), _n= _n-1e^-Δ t_n, where the auxiliary vector _n accounts for the irregular sampling intervals. Another method is RNN- <cit.>, which accounts for the irregular sampling interval by augmenting the input _n = ℛ(_n, _n; ), _n = _n-1, _n = [_n^⊤, Δ t_n]^⊤, while keeping the auxiliary vector as the previous hidden state. For a desired output time instance t^p_n, we first identify its immediate preceding time instances i_c and i_b in the CSI and beam SNR sequences as t^c_i_c≤ t^p_n ≤ t^c_i_c+1 and t^b_i_b≤ t^p_n ≤ t^b_i_b+1 with ^c_i_c and ^b_i_b previously updated using either (<ref>) or (<ref>). Then we propoagate ^c_i_c and ^b_i_b to the output time instance t^p_n as ^p_c_n ^c_t^p_n = ℛ(^c_i_ce^-(t^p_n-t^c_i_c), 0; ), ^p_b_n ^b_t^p_n = ℛ(^b_i_be^-(t^p_n-t^b_i_b), 0; ), for the RNN-Decay update and ^p_c_n ^c_t^p_n = ℛ(^c_i_c, [0^⊤, (t^p_n-t^c_i_c)]^⊤ ; ), ^p_b_n ^b_t^p_n = ℛ(^b_i_b, [0^⊤, (t^p_n-t^b_i_b)]^⊤ ; ), for the RNN-Δ update, where n=1, ⋯, N_p, by setting the current input _n=0 at t^p_n. We can then fuse the aligned hidden states ^p_c_n and ^p_b_n at t^p_n and regress the trajectory coordinate _n = ℳ(ℱ(^p_c_n, ^p_b_n)), where ℱ represents a fusion scheme and ℳ is an MLP. § MULTI-BAND NEURAL DYNAMIC FUSION In the following, we provide a detailed module-by-module explanation of the multi-band NDF framework illustrated in Fig. <ref>. This framework utilizes separate encoders to sequentially map the CSI embedding _n and beam SNR _n sequences (along with their respective time stamps t^c_n and t^b_n) into the latent space and estimate initial latent conditions, i.e., ^c_0 and ^b_0. In the latent dynamic learning module, both initial latent conditions are propagated using a learnable ODE model <cit.>, generating virtual latent states (i.e., ^p_c_n and ^p_b_n) at the same time instances t^p_n for alignment. These aligned latent states are then fused in a post-ODE fashion before being fed into a coordinate decoder for trajectory estimation. Meanwhile, one can also utilize the same learnable ODE model to regress latent states (i.e., ^c_n and ^b_n) at the input time instances t^c_n and t^b_n for the CSI and beam SNR. These regressed latent states can be fed into either the CSI or beam SNR decoder for waveform reconstruction. §.§ Encoders: An Estimator for Initital Latent Conditions The purpose of the encoder is to obtain the posterior distribution of an initial latent condition corresponding to an input sequence. We will first consider the CSI encoder. We start by reversing the input sequence _N_c, ⋯, _0 from the last time instance t^c_N_c towards the initial time instance t_0. Then, we map _n into a hidden vector _n^c ∈ℝ^H_c × 1 with the help of an auxiliary vector ^c_n ^c_n = ℛ(^c_n, _n; _g^c), where ℛ can be either GRU or LSTM unit with learnable parameters _g^c. To handle the temporal irregularity Δ t^c_n = t^c_n - t^c_n-1≠Δ t^c_n+1 of the input sequence, one can utilize a numerical ODE solver, e.g., the Euler or Runge-Kutta solvers, to propagate the hidden vector ^c_n+1 at time t^c_n+1 to the auxiliary vector ^c_n at time t^c_n in Fig. <ref> <cit.>: ^c_n = 𝒮(𝒪_e,^c_n+1, (t_n+1^c, t_n^c); _e^c) =^c_n+1 + ∫_τ = t_n+1^c ^ t_n^c𝒪_e((τ), τ; _e^c) dτ, where 𝒪_e is a learnable ODE function represented by a neural network with parameters _e^c. 
By iterating between (<ref>) and (<ref>), we can propagate the hidden vector from t^c_N_c to t_0 and output _0^c, which is used to estimate the initial condition _0^c in the latent space and approximate its distribution by a Gaussian distribution with mean ^c and variance (σ^c)^2 q_θ_c(^c_0 | _N_c,⋯, _0)=q_θ_c(^c_0 | ^c_0) = 𝒩( ^c, (σ^c)^2), Following the variational autoencoder framework <cit.>, we infer ^c and σ^c from _0 as ^c, σ^c = ℳ(^c_0; _z^c), with ℳ denoting an MLP network with parameters _z^c. Since the initial latent condition ^c_0 is stochastic, we sample it as ^c_0 = ^c + σ^c ⊙^c, ^c ∼𝒩(0, _L_c), where ^c is a standard Gaussian sample of dimension L_c and ⊙ represents the Hadamard product. Similarly, we can repeat the process from Eq.(<ref>) to (<ref>) to approximate the posterior distribution and obtain initial latent condition corresponding to the beam SNR sequence q_θ_b (^b_0 | _N_b,⋯, _0) = 𝒩( ^b, (σ^b)^2), ^b_0 = μ^b + σ^b ⊙^b, ^b ∼𝒩(0, _L_b), where L_b is the dimension of ^b_0. §.§ Latent Dynamic Learning Modules: Alignment in Latent Space Now, given the initial latent condition _0 for the input sequence, we employ a unified continuous-time ODE function 𝒪_d, modeled by a neural network with parameters θ_d, to unroll the latent dynamics at any query time instance t^q_n. Depending on the query time instance, we have the following three cases. §.§.§ for latent state alignment We first look into the case that the query time t^q_n is the same as the time instance for coordinate estimation t^p_n. This is also the case that we can use the same time instance to align the latent states between the CSI and beam SNR measurements. Specifically, we directly populate the initial latent condition to the latent state at the same query time instance for both CSI and beam SNR <cit.> ^p_c_n_t_n^p^c = _0^c + ∫_t_0^t_n^p𝒪_d (_t^c, t; _d^c) dt, ^p_b_n_t_n^p^b = _0^b + ∫_t_0^t_n^p𝒪_d (_t^b, t; _d^b) dt, where _0^c and _0^b are, respectively, the initial latent conditions for the CSI and beam SNR. In practice, we incrementally align the latent states at one query time instance at a time and then use the aligned latent states to calculate the next latent states at the next query time t^q_n+1. This is illustrated in Fig. <ref> (a), where we resort to align the latent states at the first time instance t^p_1 for the respective initial conditions ^c_0 and ^b_0, ^p_c_1 = _0^c + ∫_t_0^t_1^p𝒪_d (_t^c, t; _d^c) dt, ^p_b_1 = _0^b + ∫_t_0^t_1^p𝒪_d (_t^b, t; _d^b) dt, and then ^p_c_2 = _1^p_c + ∫_t_1^p^t_2^p𝒪_d (_t^c, t; _d^c) dt, ^p_b_2 = _1^p_b + ∫_t^p_1^t_2^p𝒪_d (_t^b, t; _d^b) dt, where the neural networks to represent latent ODE functions for ^c_t and ^b_t are parameterized by ^c_d and ^b_d, respectively. §.§.§ for latent state recovery of CSI Similarly to the above case, we can recover the latent states of the CSI measurements at their original time instances t^c_n ^c_n_t_n^c^c = _0^c + ∫_t_0^t_n^c𝒪_d (_t^c, t; _d^c) dt. §.§.§ for latent state recovery of beam SNR The last case is to recover the latent states of the beam SNR measurements at their original time instances t^b_n ^b_n_t_n^b^b = _0^b + ∫_t_0^t_n^b𝒪_d (_t^b, t; _d^b) dt. Note that we set that t_0 ≤min(t_0^c, t_0^b, t_0^p). In other words, the time instance for the initial latent conditions are prior to the time instance of the first measurement, either CSI or beam SNR, even before the start of the time window Δ T. More details of setting t_0 can be found in Sec. <ref>. 
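To make the alignment step concrete, the following sketch unrolls the two initial latent conditions to a common set of query times with a plain fixed-step Euler integrator. In practice an adaptive solver (e.g., Dopri5, as used in the experiments) would replace the inner loop; the module names (`LatentODEFunc`, `unroll`), the latent dimension, and the step count are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class LatentODEFunc(nn.Module):
    """A small MLP representing the latent dynamics dz/dt = O_d(z, t; theta_d)."""
    def __init__(self, latent_dim=20, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, t, z):
        return self.net(z)

def unroll(ode_func, z0, t0, query_times, n_steps=20):
    """Propagate the initial latent condition z0 at time t0 to each query time
    with a fixed-step Euler scheme, reusing the previous state incrementally."""
    states, z, t_prev = [], z0, t0
    for t_q in query_times:
        dt = (t_q - t_prev) / n_steps
        t = t_prev
        for _ in range(n_steps):
            z = z + dt * ode_func(t, z)
            t = t + dt
        states.append(z)
        t_prev = t_q
    return torch.stack(states)                        # (N_p, latent_dim)

# Align CSI and beam-SNR latent states on the shared output grid t^p_n.
ode_c, ode_b = LatentODEFunc(), LatentODEFunc()
z0_c, z0_b = torch.randn(20), torch.randn(20)         # from the two encoders
t_p = torch.linspace(0.1, 1.0, 10)                    # normalized query times
z_p_c = unroll(ode_c, z0_c, t0=0.0, query_times=t_p)  # CSI latent states at t^p_n
z_p_b = unroll(ode_b, z0_b, t0=0.0, query_times=t_p)  # beam-SNR latent states at t^p_n
```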
§.§ Post-ODE Latent Fusion Once the latent states between CSI and beam SNR measurements are aligned at {t^p_n}_n=1^N_p in (<ref>) and (<ref>), one can fuse them together as _n^p ∈ℝ^L_p of dimension L_p to support the subsequent coordinate estimate. In the following, we consider three multi-band fusion schemes. §.§.§ MLP Fusion As shown in Fig. <ref> (a), the MLP fusion scheme starts by lifting the aligned latent states _n^p_c and _n^p_b to a higher dimension of L_f via separate MLP networks and then projecting the concatenated latent state to a fused latet state ^p_n as ^p_n = ℳ( ℳ(_n^p_b; _f^b) ⊕ℳ(_n^p_c; _f^c); _f^p ), where _f^b, _f^c, and _f^p are learnable parameters for the three MLP networks, and ⊕ denotes the vector concatenation, and ^p_n is of dimension L_p. §.§.§ Pairwise Interaction Fusion We combine the two aligned latent states, i.e., ^p_c_n and ^p_b_n, along with their pairwise interaction ^p_b_n ⊗^p_c_n ∈^L_b L_c × 1, and feed the expanded multi-band latent states to an MLP for fusion, ^p_n = ℳ(_n^p_b⊕_n^p_c⊕ (_n^p_b⊗_n^p_c); _f^p ), where ⊗ denotes the Kronecker product, and _f^p represent the MLP parameters. The Kronecker term ^p_b_n ⊗^p_c_n accounts for cross-modal nonlinearity by expanding the dimension from L_b or L_c to L_b L_c and including all possible element-wise multiplications between the two latent states. This is illustrated in Fig. <ref> (b). §.§.§ Weighted Importance Fusion We also consider a weighted fusion between the two aligned latent states with their respective importance estimated directly from their initial latent conditions. This is illustrated in Fig. <ref> (c). Specifically, we first convert the initial latent states _0^b, _0^c to importance weight vectors of the same dimension L_f: _b = ℳ(_0^b; _w^b)∈ℝ^L_f, _c = ℳ(_0^c; _w^c)∈ℝ^L_f, where _w^b and _w^b are learnable parameters. Then, we apply the softmax on [_b, _c] ∈^L_f × 2 over each row such that [_b, _c] σ ([_b, _c]) ⟶_b+ _c =1_L_f where σ(·) denotes the softmax for importance weight normalization and 1_L_f is the all-one vector of dimension L_f. Meanwhile, we lift the aligned latent states _n^p_c and _n^p_c to a space of dimension L_f and fuse them by weighting corresponding normalized importance weights as _n^p = ℳ( [ ℳ(_n^p_b; _f^b) ⊙_b ] + [ ℳ(_n^p_c; _f^c) ⊙_c]; _f^p ), where _f^b, _f^c, and _f^p are MLP parameters. §.§ Decoders for Trajectory Estimation and Input Sequence Reconstruction In Fig. <ref>, the NDF consists of three decoders: one for estimating trajectory coordinates and the other two for reconstructing the CSI embedding and beam SNR sequences. §.§.§ Trajectory Decoding Given the fused latent state ^p_n at desired time instances t^p_n, we simply employ an MLP network ℳ parameterized by _p as a coordinate estimation decoder for estimating trajectory coordinates at t^p_n _n = ℳ (_n^p; _p), n=0, ⋯, N_p, where _n = [x̂_n, ŷ_n]^T is the coordinate estimate at t^p_n and _p is shared over all time instances of t^p_n. By combining the latent dynamic learning modules of (<ref>) and (<ref>), the post-ODE fusion module (either (<ref>), (<ref>) or (<ref>)), and the above trajectory decoder of (<ref>), we establish the Integrated Trajectory Decoder 𝒫(·). This integrated decoder can be considered to directly take the two initial latent conditions _0^c and _0^b and output the coordinate estimate _n _n = 𝒫(_0^c, _0^b, t_0, t^p_n; _dp), where _dp = {_d^c, _d^b, _f, _p} with _f encompassing all learnable parameters in the post-ODE fusion model. 
For instance, _f ={^c_f, ^b_f, ^p_f } for the MLP fusion, while _f ={^c_w, ^b_w,^c_f, ^b_f, ^p_f } for the weighted importance fusion. As illustrated in Fig. <ref>, this integrated decoder structure directly links the initial latent conditions to the coordinate output and simplifies the derivation of the ELBO-based loss function in the next section. We hereafter group the estimated trajectory coordinates as = {_n}_n=0^N_p. §.§.§ CSI Decoding Given the CSI latent states ^c_n of (<ref>) at their original time instances t^c_n, we employ another MLP decoder with parameters _c to project ^c_n back to the CSI embedding sequence as _n = ℳ (^c_n; _c), n=0, ⋯, N_c, where _c is shared over all time instances of t^c_n. Similar to the integrated trajectory decoder 𝒫(·), we combine the latent dynamic learning (<ref>) and the above CSI decoder (<ref>) and establish the Integrated CSI Decoder 𝒞(·) _n = 𝒞(_0^c, t_0, t^c_n; _dc), where _dc = {_d^c, _c}. Equivalently, the integrated CSI decoder takes the initial latent condition _0^c corresponding to the CSI input sequence and reconstructes the CSI embedding sequence as _n at t^c_n. We also group the estimated CSI embedding sequence as = {_n}_n=0^N_c. §.§.§ Beam SNR Decoding The last decoder is to project the latent states ^b_n of (<ref>) at t^b_n back to the beam SNR sequence _n = ℳ (^b_n; _b), n=0, ⋯, N_b, where _b is shared over all time instances of t^b_n. Combining the latent dynamic learning (<ref>) and the above beam SNR decoder (<ref>), we establish the Integrated Beam SNR Decoder ℬ(·) as _n = ℬ(_0^b, t_0, t^b_n; _db), where _db = {_d^b, _b}. We also group the estimated beam SNR as = {_n}_n=0^N_b. § ELBO-BASED LOSS FUNCTION In the following, we derive an ELBO-based loss function that accounts for the multi-encoder, multi-decoder NDF architecture. By grouping ={_n}_n=0^N_b, ={_n}_n=0^N_c, and ={_n}_n=0^N_p as illustrated in Fig. <ref>, The modified ELBO can be expressed as <cit.> ELBO = 𝔼_q(_0^b, _0^c|, )[log p(|_0^b, _0^c)] + λ_1 𝔼_q(_0^b|)[log p(|_0^b)] + λ_2 𝔼_q(_0^c|)[log p(|_0^c)] - λ_3 D_ KL[q(_0^b, _0^c|, )||p(_0^b, _0^c)] (a)≈ 1V∑_v=1^V log p(|_0^b(v)_0^c(v)) + λ_1V∑_v=1^V log p(|_0^b(v)) + λ_2V∑_v=1^V log p(|_0^c(v)) - λ_3 D_ KL[q(_0^b, _0^c|, )||p(_0^b, _0^c)], where q(_0^c|) and q(_0^b|) are the approximate posterior distributions defined in (<ref>) and (<ref>), respectively, the joint posterior distribution of _0^c and _0^b can be factorized as q(_0^b, _0^c|, )=q(_0^c|)q(_0^b|), due to the independence assumption between the two input sequences and the use of separate encoders, {λ_i}_i=1^3 are regularization weights, p(_0^b, _0^c) are the joint prior of _0^c and _0^b that can be also factorized as p(_0^b, _0^c) = p(_0^b) p(_0^c), with p(_0^b)∼N(0, _L_b) and p(_0^c)∼N(0, _L_c), and p(|_0^b, _0^c), p(|_0^c) and p(|_0^b) denote the output likelihood functions of the three integrated (trajectory/CSI/beam SNR) decoders in Fig. <ref>. In the above equation, (a) holds as we replace the posterior mean by its sample mean over V samples of the two initial latent conditions _0^c and _0^b according to (<ref>) and (<ref>), respectively, with V independent realizations of ^c and ^b. In practice, the number of initial latent conditions is set to V=1 as one can average over the independent realizations within the minibatch samples. 
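As a preview of how this objective can be assembled once the KL terms and output log-likelihoods are reduced to closed form below (diagonal-Gaussian KL terms and L1 reconstruction errors), a minimal sketch is given next. The default weights correspond to the values selected by the hyperparameter search reported later in the evaluation; the argument names, the log-variance parameterization, and the tensor shapes are placeholder assumptions.

```python
import torch

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar)

def ndf_loss(p_hat, p, b_hat, b, e_hat, e,
             mu_b, logvar_b, mu_c, logvar_c,
             lam1=0.7, lam2=1.0, lam3=1e-3, lam4=0.25):
    """Negative (approximate) ELBO: L1 reconstruction terms for the trajectory,
    beam-SNR, and CSI-embedding decoders plus KL regularizers on both initial
    latent conditions."""
    loss = torch.sum(torch.abs(p_hat - p))                  # trajectory term
    loss = loss + lam1 * torch.sum(torch.abs(b_hat - b))    # beam SNR reconstruction
    loss = loss + lam2 * torch.sum(torch.abs(e_hat - e))    # CSI embedding reconstruction
    loss = loss + lam3 * kl_diag_gaussian(mu_b, logvar_b)   # KL for z_0^b
    loss = loss + lam4 * kl_diag_gaussian(mu_c, logvar_c)   # KL for z_0^c
    return loss
```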
For the KL divergence term D_ KL[q(_0^b, _0^c|, )||p(_0^b, _0^c)], we invoke the independent condition between the posterior distributions of ^b_0 and ^c_0 given the input sequences and and between the prior distributions of ^b_0 and ^c_0 D_ KL[q(_0^b, _0^c|, )||p(_0^b,_0^c)] (a)= D_ KL[q(_0^b|)q(_0^c|)||p(_0^b)p(_0^c)] (b)= D_ KL[q(_0^b|)||p(_0^b)] + D_ KL[q(_0^c|)||p(_0^c)], where (a) holds due to the factorization in (<ref>) and (<ref>), and (b) can be derived using (<ref>) in Appendix <ref>. Then it is straightforward to show that D_ KL [q(_0^c| )||p(_0^c)] = D_ KL[𝒩(^c, σ^c)||𝒩(0, _L_c)] = 12∑^L_c_l=1((μ_l^c)^2 + (σ_l^c)^2 - 1 - log (σ_l^c)^2), D_ KL [q(_0^b| )||p(_0^b)] = D_ KL[𝒩(^b, σ^b)||𝒩(0, _L_b)] = 12∑^L_b_l=1((μ_l^b)^2 + (σ_l^b)^2 - 1 - log (σ_l^b)^2), where μ_l^b/c and σ_l^b/c are the l-th element of μ^b/c and σ^b/c, respectively. For output log-likelihood functions, we start with the integrated trajectory decoder P(·) that takes the two initial latent conditions and estimates the trajectory coordinates at t^p_n, log p(|_0^b, _0^c) = log p(_0, _1, ⋯, _N_p|_0^b, _0^c) (a)≈∑_n=1^N_plog p(_n|_0^b, _0^c), where the approximation (a) holds as we invoke an independent assumption over the sequential coordinate outputs over the time instance n. We assume that each element in _n=[x_n, y_n]^⊤ follows a Laplace distribution: p(x_n|_0^b, _0^c) = 12b_pexp(-|x_n -x̂_n|b_p), p(y_n|_0^b, _0^c) = 12b_pexp(-|y_n -ŷ_n|b_p), where b_p ∈ℝ is a scaling parameter and _n = [x̂_n, ŷ_n]^⊤ = 𝒫(_0^c, _0^b, t_0, t^p_n; _dp) is the estimated trajectory coordinate at t_n^p. As a result, we can show that log p(_n|_0^b, _0^c) ∝ - _n - _n_1b_p, where ·_1 denotes the ℓ_1 norm. Assuming b_p=1 and plugging the above equation back to (<ref>), the output log-likelihood function of the integrated trajectory decoder is given as log p(|_0^b, _0^c) ∝ -∑^N_p_n=1_n - _n_1. It is seen that maximizing this log-likelihood is equivalent to minimizing mean absolute error (MAE) between ground truth and estimated trajectory coordinates. We can follow Eq. (<ref>) to (<ref>) for the output log-likelihood functions of the integrated beam SNR and CSI decoders, log p(|_0^b) ∝ -∑^N_b_n=1_n - _n_1, log p(|_0^c) ∝ -∑^N_c_n=1_n - _n_1. Combining the KL divergence term and the output log-likelihood functions of the integrated decoders, the modified ELBO (<ref>) reduces to the following loss function ℒ = ∑^N_p_n=1_n - _n_1 + λ_1 ∑^N_b_n=1_n - _n_1 + λ_2 ∑^N_c_n=1_n - _n_1 + λ_3 ∑^L_b_l=1((μ_l^b)^2 + (σ_l^b)^2 - 1 - log (σ_l^b)^2) + λ_4 ∑^L_c_l=1((μ_l^c)^2 + (σ_l^c)^2 - 1 - log (σ_l^c)^2) where we relax the regularization weight λ_3 for the joint KL term to different regularization weights λ_3 and λ_4 for individual KL terms of beam SNR and CSI, respectively. § PERFORMANCE EVALUATION §.§ In-House Testbed and Data Collection We upgrade our previous in-house testbed in <cit.> and <cit.> from collecting single-band beam SNRs to simultaneously gathering both 5-GHz CSI and 60-GHz beam SNRs. As shown in Fig. <ref>, we mount two routers on a TurtleBot as a mobile user: one router is the 802.11ac-compliant ASUS RT-AC86U device for collecting 80-MHz CSI data at 5 GHz and the other 802.11ad-compliant TP-Link Talon AD7200 router for 60-GHz beam SNR. The mobile user TurtleBot moves along predefined rectangular trajectories (denoted by red dot lines in the right plot of Fig. <ref>) in a large conference room. Positioned at the lower left corner of the rectangular trajectory, another pair of identical routers act as a multi-band AP. 
To enable data collection on these commercial-off-the-shelf routers, we replace the original firmware with open-source ones <cit.> and follow the methods of <cit.> and <cit.> to extract the beam SNR and CSI from the commercial routers. From the four antennas (three external and one internal) of the ASUS router, we are able to extract N_ Tx× N_ Rx = 4 × 2 spatial streams of CSI over N_ s=234 subcarriers, excluding null subcarriers. Each raw CSI frame _n ∈^4 × 2 × 234 is calibrated and compressed into the CSI embedding input _n ∈ℝ^36× 1 with M_c=36, as described in Appendix <ref>. On the other hand, the TP-Link router employs an analog phased array of 32 antenna elements and sequentially scans over M_b = 36 predefined directional beampatterns, leading to _n ∈^36 × 1. Our testbed is also equipped with a LiDAR and a wheel encoder to self-localize over a predefined map. The self-localized coordinates, recorded at a frame rate of 10 frames per second (fps), are then used as ground-truth labels _n for trajectory estimation. The system clocks of all networked devices, including the routers and the TurtleBot, are precisely synchronized using the Network Time Protocol (NTP) with a central desktop acting as the NTP server. The desktop controls and aligns the clocks of all other devices over the network connection, ensuring that the timestamps across the network refer to the same clock. Consequently, we obtained 43,277 frames for CSI and coordinate labels, and 9,590 frames for beam SNRs. §.§ Implementation We set Δ T_w = 5 seconds to group all collected CSI frames, beam SNRs, and coordinate labels into sequences. Fig. <ref> shows the histograms of the number of CSI (top) and beam SNR (bottom) frames over non-overlapping sequences of 5 seconds. It reveals that most CSI sequences have 20-30 frames over a period of 5 seconds, yielding an average frame rate of 4-6 fps. In comparison, the beam SNR sequences contain far fewer frames over 5 seconds, yielding an average frame rate of 1 fps. For each sequence, all timestamps {t_n^b}_n=0^N_b, {t_n^c}_n=0^N_c, and {t_n^p}_n=0^N_p are normalized into [0, 1] by dividing the relative timestamps by 5 seconds. We consider three data splits for performance evaluation: * Random Split: We group the frames into 1,778 non-overlapping sequences of 5 seconds (with a stepsize of 5 seconds) and randomly divide these non-overlapping sequences into train, validation, and test sets with a ratio of 80:10:10. The results in the following subsections are based on this data split. * Temporal Split: We divide all frames into training and test sets strictly according to their chronological order. Specifically, we group the first collected s% of frames into the training set, and the remaining frames into the test set. In other words, all test frames represent future data that was not seen during the training phase. This sequential split is used to evaluate the temporal generalization performance in Section <ref> with different values of s. With a sequence length of 5 seconds, we group the training and test frames into sequences with stepsizes of 1 second and 5 seconds, respectively. * Coordinate Split: We also divide all frames into training and test sets according to their ground truth coordinates. Specifically, we keep frames from a particular area (e.g., a corner) in the test set, completely unseen from the training set. This split is used to evaluate the generalization performance at unseen coordinates in Section <ref>, which is referred to as the spatial generalization.
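For concreteness, a small sketch of the sequence preparation just described — grouping asynchronous frames into non-overlapping 5-second windows, normalizing the relative timestamps into [0, 1], and drawing the random 80:10:10 split — is given below. The array names, the synthetic timestamps, and the random seed are illustrative assumptions, not the testbed's actual data pipeline.

```python
import numpy as np

def group_into_windows(timestamps, frames, window=5.0):
    """Group (timestamp, frame) pairs into non-overlapping `window`-second
    sequences and normalize the relative timestamps into [0, 1]."""
    t0, t_end = timestamps[0], timestamps[-1]
    sequences, start = [], t0
    while start < t_end:
        mask = (timestamps >= start) & (timestamps < start + window)
        if mask.any():
            t_rel = (timestamps[mask] - start) / window   # normalized to [0, 1]
            sequences.append((t_rel, frames[mask]))
        start += window
    return sequences

def random_split(n_seq, ratios=(0.8, 0.1, 0.1), seed=0):
    """Random 80:10:10 split of sequence indices into train/val/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_seq)
    n_tr, n_va = int(ratios[0] * n_seq), int(ratios[1] * n_seq)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

# Example with synthetic timestamps: CSI at roughly 5 fps (irregular intervals).
t_csi = np.cumsum(np.random.uniform(0.15, 0.25, size=1000))
csi_frames = np.random.randn(1000, 36)                 # placeholder CSI embeddings
csi_seqs = group_into_windows(t_csi, csi_frames, window=5.0)
train_idx, val_idx, test_idx = random_split(len(csi_seqs))
```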
We use an autoencoder with 3 1D convolutional layers and 3 MLP layers for the pretraining discussed in Appendix <ref> to obtain the CSI embedding vector _n. Both beam SNR and CSI embedding sequences are normalized to [0, 1]. For the encoder, we use the GRU unit with a hidden dimension of H_b=H_c=20 for the beam SNR and CSI input sequences. We set L_b = L_c = 20 for the dimensions of the initial latent condition and unrolled latent states ^b_n and ^c_n, n=0, 1, ⋯, N_b/N_c. We lift the aligned latent state to a space of dimension L_f=128 before projecting it back to the fused latent space of L_p = 20. We employ the Euler and Dopri5 ODE solvers for encoding and latent dynamic ODE, respectively. The decoders for trajectory (<ref>), CSI (<ref>) and beam SNR (<ref>) share the same MLP architecture of three MLP layers. The set of regularization parameters is chosen by performing a hyperparameter search in the interval of [0, 1] using Optuna <cit.>. It is based on the validation loss transition of coordinate estimation within 125 epochs and 100 trials are executed. Fig. <ref> illustrates the loss function as a function of regularization parameters (λ_1, λ_3) for beam SNR and (λ_2, λ_4) for CSI, where red dots denote the values of hyperparameter pairs achieving the smallest validation loss over 125 epochs or the smallest intermediate loss if terminated in an earlier epoch. As a result, we set the regularization parameters as λ_1 = 0.7, λ_2 = 1.0, λ_3 = 0.0010, λ_4 = 0.25 in the ELBO-based loss function of (<ref>). To train the NDF network, we set the minbatch size to 32 and the maximum number of epochs is 250, and we save the model achieving the best validation loss while training. We used the Adamax optimizer with the maximum learning rate of 4e-3 with the OneCycle learning rate scheduling for fast convergence <cit.>. §.§ Comparison to Baseline Methods For performance comparison, we consider a comprehensive list of baseline methods * Single-band methods (either CSI or beam SNR): 1) Linear interpolation (LinearInt) of (<ref>); 2) Nearest interpolation (NearestInt) of (<ref>); 3) RNN-Decay of (<ref>); 4) RNN-Δ of (<ref>); 5) DDND of <cit.>. * Frame-to-Frame Fusion methods: 1) LinearInt fusion of (<ref>) and (<ref>); 2) NearestInt fusion of (<ref>) and (<ref>). * Sequence-to-Sequence Fusion methods: 1) RNN-Decay fusion of (<ref>), (<ref>) and (<ref>); 2) RNN-Δ fusion of (<ref>), (<ref>) and (<ref>). Table <ref> summarizes the trajectory estimation performance of all baseline methods and the proposed NDF method under the random sequence split. By comparing the mean, median, and the 90th percentile of the localization error in the unit of meters, it is seen that, for a given method, e.g., the linear interpolation or the RNN-Decay, the multi-band fusion improves the localization performance from either the CSI-only or the beam SNR-only methods. Comparison between the interpolation (i.e., linear and nearest) and RNN methods (i.e., RNN-Decay and RNN-Δ) shows that the RNN-based methods can significantly improve the CSI-only performance and contribute to the overall improvement using both CSI and beam SNR. If we narrow down to the last column of Table <ref>, it is clear that, by properly aligning the latent states using the latent dynamic ODE, the NDF can further reduce the location error from the best multi-band baseline (i.e., RNN-Decay) to a mean localization error of 14.8 cm. Fig. 
§.§ Comparison to Baseline Methods

For performance comparison, we consider a comprehensive list of baseline methods:

* Single-band methods (either CSI or beam SNR): 1) Linear interpolation (LinearInt) of (<ref>); 2) Nearest interpolation (NearestInt) of (<ref>); 3) RNN-Decay of (<ref>); 4) RNN-Δ of (<ref>); 5) DDND of <cit.>.
* Frame-to-Frame Fusion methods: 1) LinearInt fusion of (<ref>) and (<ref>); 2) NearestInt fusion of (<ref>) and (<ref>).
* Sequence-to-Sequence Fusion methods: 1) RNN-Decay fusion of (<ref>), (<ref>) and (<ref>); 2) RNN-Δ fusion of (<ref>), (<ref>) and (<ref>).

Table <ref> summarizes the trajectory estimation performance of all baseline methods and the proposed NDF method under the random sequence split. By comparing the mean, median, and 90th percentile of the localization error in meters, it is seen that, for a given method, e.g., linear interpolation or RNN-Decay, the multi-band fusion improves the localization performance over either the CSI-only or the beam SNR-only method. Comparison between the interpolation methods (i.e., linear and nearest) and the RNN methods (i.e., RNN-Decay and RNN-Δ) shows that the RNN-based methods can significantly improve the CSI-only performance and contribute to the overall improvement using both CSI and beam SNR. If we narrow down to the last column of Table <ref>, it is clear that, by properly aligning the latent states using the latent dynamic ODE, the NDF can further reduce the localization error from the best multi-band baseline (i.e., RNN-Decay) to a mean localization error of 14.8 cm. Fig. <ref> highlights the cumulative distribution functions (CDFs) of the localization error from the multi-band methods and the NDF. To qualitatively compare the baseline and proposed methods, we overlap the estimated trajectories (in red dots) with the ground-truth coordinates (in dim blue dots) in Fig. <ref> for selected multi-band fusion baseline methods (nearest interpolation, RNN-Decay, RNN-Δ) and the proposed NDF method. The improvement from the frame-to-frame fusion (nearest interpolation) to the sequence-to-sequence fusion (RNN-Decay, RNN-Δ) is noticeable, as there are fewer localization errors at the center of the rectangular area. The NDF shows the best results by significantly reducing the outliers and forcing the trajectory estimates along the rectangular track.

§.§ Impact of Sequence Length

In the following, we investigate the impact of the sequence length Δ T_w on the trajectory estimation performance. Given the frame rate of about 5 Hz for CSI and about 1 Hz for beam SNR, the number of effective samples is proportional to the sequence length Δ T_w. For a given sequence length Δ T_w, we follow the random split protocol to segment the raw data into non-overlapping Δ T_w-sec sequences with Δ T_w ∈ {2, 8} seconds, in addition to the default Δ T_w = 5 seconds. Table <ref> lists the trajectory estimation errors in terms of mean, median, and the 90th percentile of the CDF for the three choices of Δ T_w. Overall, it confirms that the longer the sequence, the better the trajectory estimation performance. In the case of Δ T_w = 2 seconds, there might not be sufficient beam SNR samples for latent dynamic learning, as the frame rate is limited to about 1 Hz. The choice of Δ T_w = 8 seconds appears to give lower median and CDF@0.9 localization errors while keeping the mean error close to that of Δ T_w = 5 seconds.

§.§ Impact of Fusion Scheme

In the following, we examine the impact of the three fusion schemes in Sec. <ref> on trajectory estimation accuracy, using the random split protocol of non-overlapping 5-sec input sequences. As shown in Table <ref>, the MLP fusion scheme delivers the best results in terms of mean, median, and 90th-percentile CDF, with the weighted importance fusion scheme exhibiting nearly identical performance. Moreover, the three fusion schemes outperform all multi-band baseline methods listed in Table <ref>. This seems to imply that, once the latent states between the beam SNR and CSI are aligned, the choice of fusion scheme has only a marginal impact on the final localization performance.

§.§ Generalization under Temporal Split

Under the temporal split, sequences in the training and test sets are distinctly separated in the temporal domain such that they do not intertwine. This separation allows us to effectively assess the generalization capability on future Wi-Fi measurements. We consider training data ratios of s% = 20%, 40%, 60%, 80%, corresponding to, respectively, (20:80, 40:60, 60:40, 80:20) training-test data split ratios. For better use of training data when the training data size is small, we segment the training data into 5-sec overlapping sequences using a step size of 1 second. Table <ref> shows the localization errors for various choices of s. It is seen that a temporal training-test split ratio of 60:40 provides the best localization performance across all evaluated metrics. The NDF with a temporal training-test split ratio of 80:20 results in less accurate performance compared with the random training-test split ratio of 80:10:10 in Table <ref>.
For example, the mean localization error increases from 26.3 cm under the random split to 50.1 cm under the temporal split. Such degradation is anticipated due to the temporal fluctuation in Wi-Fi measurements, which may stem from channel instability over time.

§.§ Generalization under Coordinate Split

Under the coordinate split, we can test the generalization capability on test data collected from unseen positions. Fig. <ref> illustrates the estimated trajectories (in red dots) alongside the ground truth trajectories in the upper right corner, with training trajectories plotted in blue. As shown in Fig. <ref> (a) and (b), the frame-to-frame fusion baseline methods (LinearInt and NearestInt) fail to leverage latent dynamics, resulting in scattered estimated trajectories. In contrast, the sequence-to-sequence baseline methods, specifically RNN-Decay fusion and RNN-Δ fusion in Fig. <ref> (c) and (d), demonstrate improved alignment between the estimated red trajectories and the training blue trajectories. Fig. <ref> (e) presents the NDF results, clearly showing that the estimated trajectories effectively complement the training trajectories, closely mirroring the ground truth rectangular trajectories.

§.§ Latent Space Visualization

As shown in Fig. <ref> (a), we identify 8 local regions along the trajectory. We gather all data frames with their corresponding ground truth locations from the same region and map them into latent states using selected baseline methods and the proposed NDF method. These high-dimensional latent states are then visualized by projecting them onto a 2D plane using t-distributed Stochastic Neighbor Embedding (t-SNE). Fig. <ref> (b) presents the t-SNE visualization results for the single-band baseline (beam SNR-based DDND), the RNN-Decay fusion baseline, and the proposed NDF method. The sequence-to-sequence RNN-Decay fusion baseline exhibits much clearer separation compared to the single-band beam SNR-based DDND, highlighting the advantages of utilizing multi-band Wi-Fi channel measurements in our experiment. Nonetheless, it is seen that the latent states from regions 2 and 6 overlap within the upper right cluster. In contrast, Fig. <ref> demonstrates that our NDF learns a compact and well-separated representation in the latent space, with denser latent distributions for each region. These results further suggest that the low-dimensional latent space effectively preserves the trajectory geometry within the NDF framework. Notably, each latent cluster is spatially connected to the edge of its adjacent cluster, creating a continuous latent space that seamlessly transitions from one region to another.

§ CONCLUSION

In this paper, we introduce the NDF framework, which utilizes asynchronous multi-band Wi-Fi channel measurements to estimate trajectories in a continuous-time manner. This is achieved through a multiple-encoder, multiple-decoder architecture that aligns latent states across different input sequences and fuses them for trajectory estimation. Latent state alignment is facilitated by a learnable ODE model and the initial latent conditions from the encoders. Evaluated with real-world multi-band Wi-Fi data, the NDF framework demonstrates significant performance enhancements compared with a comprehensive set of single-band and multi-band baseline methods.

§ CSI CALIBRATION AND EMBEDDING

The extracted CSI suffers from both magnitude and phase offsets, including the carrier frequency offset (CFO) and sample time offset (STO) <cit.>.
We choose the SpotFi <cit.> calibration method to eliminate the linear phase offset caused by the STO, and CSI conjugate multiplication to cancel out the packet-wise random phase offset and improve the stability of the waveform. The calibration is performed packet-wise and antenna-wise at the receiving side. For each transmitting antenna, we first unwrap the CSI phase at every receiving antenna and then obtain the best linear fit of the unwrapped phase as

[τ̂_i,n, β̂_i,n] = argmin_τ, β ∑_j,k=1^N_Rx, N_s (ψ_n(i, j, k) - 2π f_δ (k-1) τ + β)^2,

where ψ_n(i, j, k) = ∠C_n(i, j, k) is the unwrapped phase of the n-th packet from transmitting antenna i and receiving antenna j at subcarrier k, and f_δ is the frequency spacing between two adjacent subcarriers. We obtain the calibrated phase by subtracting the phase offset as ψ̂_n(i, j, k) = ψ_n(i, j, k) - 2π f_δ (k-1) τ̂_i,n. We further perform CSI conjugate multiplication across the receiving antennas to remove the random phase fluctuation over packets <cit.>. This leads to the calibrated CSI element C̃_n(i, j, k) = C_n(i, j, k) C^*_n(i, j+1, k), where j = 1, ⋯, N_Rx-1, and * represents the complex conjugate. Grouping all calibrated CSI elements C̃_n(i, j, k) over transmitting antenna i, receiving antenna j, and subcarrier k, the calibrated CSI tensor is given by C̃_n ∈ ℂ^N_Tx× (N_Rx-1) × N_s. To balance between the two input (CSI and beam SNR) sequences, we employ a pretrained convolutional autoencoder (CAE) to compress the calibrated CSI tensor C̃_n into an embedding vector of dimension M_c with the following steps:

* Complex-to-Real Conversion: we convert the complex-valued C̃_n into a real-valued matrix C̃^f_n ∈ ℝ^N_Tx(N_Rx-1)N_s × 4. This is achieved by splitting each element into four parts: real, imaginary, phase, and magnitude. Each of these parts is vectorized into a 1D vector, and stacking these four vectors together yields C̃^f_n.
* Embedding from Autoencoder: the real-valued matrix C̃^f_n is fed to the CAE, whose encoder outputs the CSI embedding vector and whose decoder outputs the reconstructed real-valued CSI matrix. The CAE is pretrained by minimizing the reconstruction error between the input C̃^f_n and its reconstruction at the decoder output.
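A compact sketch of this calibration step, assuming the raw CSI of one packet is stored as a complex array of shape (N_Tx, N_Rx, N_s), is given below. It illustrates the linear phase fit and the conjugate multiplication described above rather than reproducing the paper's code; the function names and the 312.5 kHz subcarrier spacing are assumptions.

import numpy as np

def spotfi_phase_calibration(csi, f_delta):
    """Remove the STO-induced linear phase slope per transmit antenna.

    csi: complex array of shape (N_tx, N_rx, N_s) for one packet.
    f_delta: subcarrier spacing in Hz (312.5e3 assumed here for Wi-Fi OFDM).
    """
    n_tx, n_rx, n_s = csi.shape
    k = np.arange(n_s)
    calibrated = np.empty_like(csi)
    for i in range(n_tx):
        # Unwrap the phase over subcarriers for every receive antenna.
        phase = np.unwrap(np.angle(csi[i]), axis=-1)          # (N_rx, N_s)
        # Least-squares linear fit of phase vs. 2*pi*f_delta*k over all rx antennas.
        x = 2 * np.pi * f_delta * np.tile(k, n_rx)
        y = phase.reshape(-1)
        tau, beta = np.polyfit(x, y, deg=1)
        # Subtract the common linear phase slope, keeping the magnitude untouched.
        calibrated[i] = np.abs(csi[i]) * np.exp(1j * (phase - 2 * np.pi * f_delta * k * tau))
    return calibrated

def conjugate_multiply(csi):
    """Cancel the packet-wise random phase by multiplying adjacent rx antennas."""
    return csi[:, :-1, :] * np.conj(csi[:, 1:, :])             # (N_tx, N_rx-1, N_s)

# Toy example with the dimensions reported for the ASUS router.
rng = np.random.default_rng(0)
raw = rng.standard_normal((4, 2, 234)) + 1j * rng.standard_normal((4, 2, 234))
cal = conjugate_multiply(spotfi_phase_calibration(raw, f_delta=312.5e3))
print(cal.shape)   # (4, 1, 234)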
§ LSTM UPDATE STEP

Given the measurement r_n at time step n and the auxiliary variable h̃_n, one can use a standard LSTM unit to update the latent variable h_n = ℛ(h̃_n, r_n; θ), n = 0, 1, ⋯, N, where ℛ(·, ·; θ) is implemented with the following process (with abuse of notation)

c̃_n = tanh(W_rc r_n + W_hc h̃_n + b_c),
f_n = σ(W_rf r_n + W_hf h̃_n + b_f),
i_n = σ(W_ri r_n + W_hi h̃_n + b_i).

The above process consists of three gates:
* a memory gate of (<ref>) uses the tanh function to combine the auxiliary hidden state h̃_n and the current input r_n into a value range of (-1, 1).
* a forget gate of (<ref>) also acts on (h̃_n, r_n) but compresses the value into (0, 1) with the sigmoid function σ(·) to determine how much of the old memory should be retained.
* an input gate of (<ref>) compresses (h̃_n, r_n) into another value between 0 and 1 and decides how much information we should take from the new input r_n, along with weight matrices W_rc/rf/ri/hc/hf/hi and bias terms b_c/f/i.

Then the new hidden state h_n is updated as h_n = tanh(c_n) ⊙ o_n, where the new memory variable c_n updates its "old" memory c_{n-1} by passing it through the "current" forget gate output f_n and adds the new memory cell c̃_n weighted by the "current" input gate output i_n: c_n = f_n ⊙ c_{n-1} + i_n ⊙ c̃_n, and the output gate o_n is computed as o_n = σ(W_ro r_n + W_ho h̃_n + W_co ⊙ c_n + b_o).

It is seen that the parameters of the LSTM update step are given as θ = {W_rc/rf/ri/hc/hf/hi/ro/ho/co, b_c/f/i/o}.

§ GENERAL DERIVATION OF ELBO

The evidence lower bound, or ELBO, is a lower bound on the log-likelihood of the observed data. We first express the log-likelihood of the input data x as

log p(x) = log p(x) ∫ q(z|x) dz = ∫ q(z|x) ( log [q(z|x)/p(z|x)] + log [p(x, z)/q(z|x)] ) dz = D_KL[q(z|x) || p(z|x)] + ∫ q(z|x) log [p(x, z)/q(z|x)] dz,

where D_KL[· || ·] is the Kullback–Leibler (KL) divergence between two given distributions. Given that D_KL[q(z|x) || p(z|x)] ≥ 0, (<ref>) follows as <cit.>

log p(x) ≥ ∫ q(z|x) log p(x|z) dz + ∫ q(z|x) log [p(z)/q(z|x)] dz = 𝔼_q(z|x)[log p(x|z)] − D_KL[q(z|x) || p(z)].

We extend the above lower bound to the case of two inputs x and y with corresponding latent variables z_x and z_y as

𝔼_q(z_x, z_y|x, y)[log p(x, y|z_x, z_y)] − D_KL[q(z_x, z_y|x, y) || p(z_x, z_y)].

The KL divergence term can be decomposed into the sum of two KL divergence terms by using the independence assumptions between x and y and between z_x and z_y,

D_KL[q(z_x, z_y|x, y) || p(z_x, z_y)] = ∫∫ q(z_x|x) q(z_y|y) ( log [q(z_x|x)/p(z_x)] + log [q(z_y|y)/p(z_y)] ) dz_x dz_y = ∫ q(z_x|x) log [q(z_x|x)/p(z_x)] dz_x + ∫ q(z_y|y) log [q(z_y|y)/p(z_y)] dz_y = D_KL[q(z_x|x) || p(z_x)] + D_KL[q(z_y|y) || p(z_y)],

where we used ∫ q(z_x|x) dz_x = ∫ q(z_y|y) dz_y = 1.
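The decomposition above leads to a loss with one reconstruction term per decoder and one KL term per input branch. The following PyTorch sketch shows such a two-branch ELBO-style loss under diagonal Gaussian posteriors and standard normal priors; the Gaussian reconstruction assumption, the tensor shapes, and the per-branch weights are illustrative and not taken from the paper.

import torch
from torch.distributions import Normal, kl_divergence

def two_branch_elbo_loss(recon_x, x, recon_y, y,
                         mu_zx, logvar_zx, mu_zy, logvar_zy,
                         lam_x=1.0, lam_y=1.0):
    """Negative ELBO for two independent input branches.

    recon_* : decoder outputs (treated as Gaussian means with unit variance)
    mu_z*, logvar_z* : parameters of the approximate posteriors q(z_x|x), q(z_y|y)
    lam_* : per-branch KL weights (regularization parameters)
    """
    # Reconstruction terms approximate E_q[log p(x, y | z_x, z_y)] up to constants.
    rec = torch.sum((recon_x - x) ** 2) + torch.sum((recon_y - y) ** 2)

    # KL(q(z_x|x) || p(z_x)) + KL(q(z_y|y) || p(z_y)) with standard normal priors,
    # mirroring the factorized KL term derived above.
    q_zx = Normal(mu_zx, torch.exp(0.5 * logvar_zx))
    q_zy = Normal(mu_zy, torch.exp(0.5 * logvar_zy))
    prior_x = Normal(torch.zeros_like(mu_zx), torch.ones_like(mu_zx))
    prior_y = Normal(torch.zeros_like(mu_zy), torch.ones_like(mu_zy))
    kl = lam_x * kl_divergence(q_zx, prior_x).sum() + lam_y * kl_divergence(q_zy, prior_y).sum()

    return rec + kl   # minimizing this maximizes the (weighted) ELBO

# Toy usage with batch size 8 and latent dimension 20.
x, y = torch.randn(8, 36), torch.randn(8, 36)
loss = two_branch_elbo_loss(torch.randn(8, 36), x, torch.randn(8, 36), y,
                            torch.zeros(8, 20), torch.zeros(8, 20),
                            torch.zeros(8, 20), torch.zeros(8, 20))
print(loss.item())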
http://arxiv.org/abs/2407.12980v1
20240717195253
A Framework for testing Federated Learning algorithms using an edge-like environment
[ "Felipe Machado Schwanck", "Marcos Tomazzoli Leipnitz", "Joel Luís Carbonera", "Juliano Araujo Wickboldt" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.DC", "cs.NI", "C.2.4; I.2.11" ]
Felipe Machado Schwanck, Marcos Tomazzoli Leipnitz, Joel Luís Carbonera, and Juliano Araujo Wickboldt
Federal University of Rio Grande do Sul (UFRGS), Institute of Informatics, Av. Bento Gonçalves, 9500, Porto Alegre, 91.509-900, RS, Brazil

§ ABSTRACT

Federated Learning (FL) is a machine learning paradigm in which many clients cooperatively train a single centralized model while keeping their data private and decentralized. FL is commonly used in edge computing, which involves placing computer workloads (both hardware and software) as close as possible to the edge, where the data is being created and where actions are occurring, enabling faster response times, greater data privacy, and reduced data transfer costs. However, due to the heterogeneous data distributions/contents of clients, it is non-trivial to accurately evaluate the contributions of local models in global centralized model aggregation. This is an example of a major challenge in FL, commonly known as data imbalance or class imbalance. In general, testing and assessing FL algorithms can be a very difficult and complex task due to the distributed nature of the systems. In this work, a framework is proposed and implemented to assess FL algorithms in an easier and more scalable way. This framework is evaluated over a distributed edge-like environment managed by a container orchestration platform (i.e. Kubernetes).

Keywords: Federated Learning, Edge Computing, Kubernetes, Microservices, Development Framework

§ INTRODUCTION

Federated Learning (FL) is a machine learning solution designed to train models while keeping the data private and decentralized <cit.>. The main idea of FL is for each client to train its local model using its data and, afterward, upload the generated local model to a single centralized server, where the local models of the participating clients will be aggregated and weighted to create a global model. FL is commonly used in edge computing, which involves placing computer workloads (both hardware and software) as close as possible to the edge, where the data is being created and where actions are occurring, enabling faster response times, greater data privacy, and reduced data transfer costs <cit.>. Thus, FL can be used with a variety of clients, such as smartphones, sensors, IoT devices, and data silos (distributed databases that need to keep their data private from the outside world). However, as seen in <cit.>, in general, the data distribution of mobile systems and other similar settings is imbalanced, which can increase the bias of the model and negatively impact its performance. Different approaches have been proposed, such as Deep Reinforcement Learning (DRL) <cit.>, as a solution for this problem. Although regular centralized machine learning may outperform FL in prediction performance <cit.>, the entire dataset must be shared. Its first application was in Google's Gboard <cit.>, which learns from every smartphone using Gboard without sharing user data. Since then, FL applicability has advanced to various fields such as autonomous vehicles, traffic prediction and monitoring, healthcare, telecom, IoT, pharmaceutics, industrial management, industrial IoT, and healthcare and medical AI <cit.>. Moreover, the number of academic publications with FL as the main subject has increased significantly since its conception <cit.>. The increase in IoT devices has greatly enabled FL.
The total number of IoT connections will reach 83 billion by 2024, rising from 35 billion connections in 2020, a growth of 130% over four years. The industrial sector has been identified as a critical driver of this growth. Expansion will be driven by the increasing use of private networks that leverage cellular network standards <cit.>. The evolution of IoT devices' computational power has also enabled FL; since an edge computer can process data locally, its sensors (e.g., cameras) can collect samples (e.g., images or frames) at a higher resolution and a higher frequency (such as frame rate) than would be possible if the data had to be sent to the cloud for processing <cit.>. One of the many challenges that have come up with the advance of FL is dealing with data imbalance and heterogeneity, as seen in <cit.>. In FL, each edge node trains a shared model using its local data. As a result, the data distribution of those edge devices is based on their many uses. Imagine, for example, the distribution of cameras in a surveillance system. Compared to cameras located in the wild, cameras in a park capture more photographs of humans. Also, the size of the dataset each camera has available to train its local model might differ by a large margin, since a camera in a park might produce much more data than a camera in the wild. Furthermore, DRL has been proposed in many studies as an approach to dynamically assign weights to the local models of clients participating in the FL global model and, therefore, deal with heterogeneous data <cit.>. DRL has gathered much attention recently. Recent studies have shown impressive results in activities as diverse as autonomous driving <cit.>, game playing <cit.>, molecular recombination <cit.>, and robotics <cit.>. In all those applications, it has been used to teach computer programs how to solve complex problems, for example, how to fly model helicopters and perform aerobatic maneuvers. In some applications, it has already outsmarted some of the most skilled humans, such as in Atari games, Go, poker, and StarCraft. Testing and evaluating FL algorithms can be a difficult task. FL intrinsically creates complex distributed systems with non-trivial interactions among its participants. Therefore, this work proposes and implements a framework for testing FL algorithms that enables users to easily create different training scenarios by simply changing configuration parameters. These include computing and data distributions, datasets, global model aggregations, local client models, and server/client training parameters. The framework is also capable of collecting and visualizing training results and resource usage metrics (e.g., CPU and memory). Experiments have been conducted with varied parameters on top of the proposed framework, over a realistic edge-like environment managed by a Kubernetes container orchestration platform, to demonstrate its capabilities. The remainder of this work is organized as follows. Section <ref> presents a literature review on edge computing and FL. Sections <ref> and <ref> present the proposed conceptual framework architecture and the tools and frameworks used to implement the PoC solution. Section <ref> details the PoC solution developed. Sections <ref>, <ref>, and <ref> present the experimental setup, the results obtained, and how resource usage and network traffic can be monitored, respectively.
Finally, Section <ref> concludes the work with final remarks and a perspective on future work.

§ LITERATURE REVIEW

This section presents the main topics of this work and provides an overview of the current state of the art of edge computing, federated learning, and deep reinforcement learning.

§.§ Edge Computing

Data is increasingly produced at the edge of the network; therefore, processing the data at the network's edge would be more efficient. The advancement of telecommunication services and the increasing need for low-latency computing have motivated the edge computing paradigm. In edge computing, instead of having computer workloads (both hardware and software) centralized in a data center (cloud), we have them as close as possible to the edge, where the data is being created and where actions are occurring, thus benefiting from lower latency, greater data privacy, and reduced data transfer costs. For <cit.>, edge computing refers to the enabling technologies allowing computation to be performed at the edge of the network, on downstream data on behalf of cloud services, and upstream data on behalf of IoT services. Therefore, edge devices can be any device with Internet access, such as smartphones, smart cars, or other IoT devices. Multi-access Edge Computing (MEC) <cit.> is proposed as a critical solution that enables operators to open their networks to new services and IT ecosystems and leverage edge-cloud benefits in their networks and systems, since it places storage and computation at the network edge. The proximity to end users and connected devices provides low latency and high bandwidth while minimizing centralized cloud limitations such as delay, access bottlenecks, and single points of failure. MEC use cases can be seen in real-time traffic monitoring <cit.> and autonomous vehicles <cit.>. In traffic monitoring, real-time and accurate video analysis is critical and challenging work, especially in situations with complex street scenes; therefore, edge computing-based video pre-processing is proposed to eliminate the redundant frames edge devices need to process, since a considerable amount of vehicle video data is generated. Also, the decentralized and highly available nature of multi-access edge computing is taken advantage of to collect, store, and analyze city traffic data from multiple sensors. For autonomous vehicles, a large amount of data from different sensors must be processed in real time and at high speed to guarantee driver safety.

§.§ Federated Learning

Federated learning (FL) is a distributed form of machine learning proposed by Google <cit.> to train models at scale while keeping user data private. In Federated Averaging (FedAvg), the server aggregates the model updates using simple averaging. It returns the new model parameters to the client devices, which continue training using the updated model parameters. Google's proposal provided the first definition of federated learning, as well as the Federated Optimization <cit.> approach to improve these federated algorithms further. Adaptive Federated Optimization (FedOpt) <cit.> is a variant of the Stochastic Gradient Descent (SGD) algorithm commonly used in centralized training. In FedOpt, each local node applies SGD to its local data to compute the gradients and then sends them to a central server. The server then aggregates the gradients from all the nodes to update the global model.
Adaptive Federated Optimization with Yogi (FedYogi) incorporates a momentum-based optimizer called Yogi after the central server aggregates the model updates using the FedAvg algorithm, improving the convergence rate. Federated Averaging with Momentum <cit.> incorporates a momentum-based optimizer, similar to the Yogi optimizer in FedYogi. The critical difference is that it uses a combination of the gradients from the current iteration and the gradients from the previous iteration to update the model parameters. This allows the optimizer to maintain a direction of movement even when the gradient changes direction, which helps to smooth out noisy gradients and accelerate convergence. In the "Federated Learning: Collaborative Machine Learning without Centralized Training Data" blog post <cit.>, Google explains how FL is enabling mobile phones to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud. The post also describes the current use of FL to predict keyboard words in Google's Gboard <cit.> and how FL can be used for photo ranking and for further improving language models. FL usually deals with data distributed across multiple devices. In such settings, data is usually non-independently and identically distributed (i.e., non-IID). One of the main challenges in FL is dealing with the heterogeneity of the data distribution among the parties, since the data distribution of those edge devices is based on their many uses <cit.>. Furthermore, many use cases in FL have data samples distributed among multiple devices, which are not always synchronized and may have limited connectivity. Thus, the server cannot simply train on these devices in parallel and directly aggregate their updates, as device availability and data homogeneity cannot be guaranteed. Understanding how to properly select clients and weigh each client's contributions in the global model remains an open problem in FL. An example of a practical scenario of data imbalance and heterogeneity in which DRL was used in FL as a solution can be seen in recent work proposed to deal with blade icing detection in distributed wind turbines <cit.>. Wind turbines closer to the sea experience windy and snowy weather, while those closer to the continent deal with windy and rainy conditions. This heterogeneity introduces a bias in the local models, since one turbine might be more susceptible to icing than another. Therefore, since the objective is to identify icing, clients that experience more icing should have a different weight assigned to their contribution to the global model than those that do not.

§ CONCEPTUAL FRAMEWORK

The main objective of the proposed framework is to enable FL testing in a platform where users can easily change parameters to create different scenarios in a distributed computing environment. These include different computing and data distributions, datasets, global model aggregations, client models, and server and client training parameters. The framework should also be capable of collecting training results and resource usage metrics. Figure <ref> presents an overview of the proposed framework's main components in a conceptual architecture. The framework is organized into layers designed to be as independent of one another as possible. This enables easier development and segregation of functions, so improvements made in one layer do not affect the others. The following subsections detail each one of the components of the layers.
§.§ Distributed Infrastructure Layer

This bottom layer contains all the infrastructure needed to create a distributed computing environment. It is responsible for scaling computing, storage, and networking from a pool of resources to meet the user input parameters. The user parameters will reflect how the resource layer will be laid out to create the scenario desired for testing. For example, suppose a user wants to configure an FL scenario with ten clients, where each client requires four CPU cores and 4 GB of RAM. The distributed infrastructure manager will then configure the resource layer from its pool of resources (computing, storage, and networking) with ten clients, each with the specified resources, to meet the user requirements. Keeping the infrastructure independent from the resource and application layers enables the use of different computational devices, since the application only sees the resource layer. This is fundamental to allow scenarios where different edge computing devices can be used as clients.

§.§ Resource Layer

The resource layer can be found on top of the distributed infrastructure layer. As mentioned before, it will match the configuration the user sets for the FL training scenario. The resources can be divided into four major categories: server resources, client resources, monitoring resources, and experiment results resources. The server resource will contain all the necessary resources to run the FL server matching the user configuration, and the same can be said for the client resources. The monitoring resource will include all the resources needed to run the monitoring server, which will monitor the distributed infrastructure layer and record the usage of the resources. The experiment results resource will have all the resources necessary to run an application to display the results of the experiments run in the framework.

§.§ Application Layer

The application layer will run on top of the resource layer in each corresponding resource. It contains all the applications necessary to run an FL algorithm in the framework, such as the FL server application, FL client application, monitor agent application, monitor server application, and the experiment results visualization application. Subsections <ref> to <ref> detail each of the mentioned applications.

§.§.§ FL Server Application

The FL Server application can be divided into five major components that enable FL testing in a distributed environment.

Server is responsible for communicating with the clients and controlling the entire FL process, which includes selecting available clients to start training, controlling the number of FL server rounds and the round timeout, aggregating the global model parameters, and distributing and retrieving model parameters.

Model is responsible for the server-side initialization of the global model parameters, since some global model aggregation strategies need it to start FL training, and enables the use of different models for learning the weight balancing of client models when distributing parameters.

Strategies selects and configures the supported global model aggregation strategies from the user-specified configuration.

Dataset is a collection of the datasets supported by the framework, used by the application to handle the dataset configuration in memory and to properly obtain the raw data to create data distributions via the Storage Manager.
Storage Manager is responsible for distributing the raw data of the configured dataset throughout the distributed storage infrastructure. For example, suppose a user wants to test an FL algorithm in a dataset using an unbalanced non-iid data distribution with ten clients. The storage manager will separate the data into ten unbalanced and biased non-iid data parts. The storage manager of the server application is also responsible for managing where each experiment will write its data and where the client will output their results data in the system. §.§.§ FL Client Application The FL client application can be divided into three major components. Client: is responsible for the training and testing algorithms run on the client side and the connection with the server. Model: is responsible for selecting the desired model configured by the user to be used by the client to train it. Storage Manager: is responsible for loading the distributed data for training in the client and handling the experiments and test results of the client in the storage. §.§.§ Monitoring Application The Monitoring Server Application is responsible for gathering resource usage data of the distributed infrastructure and storing it for further visualization and analysis by the Monitoring Visualization Application. Each Monitoring Agent is responsible for gathering local resource usage data from the distributed resources and sending the metrics to the Monitoring Server Application. §.§.§ Experiment Visualization Application This application is responsible for querying data from already run experiments and enabling the user to visualize the data for each step of the FL training experiment in customizable dashboards. § TOOLS AND FRAMEWORKS This section provides an overview of the tools and frameworks considered to implement the conceptual framework described in Section <ref>. §.§ Orchestration, Deployment, and Building This section presents tools related to orchestration, deployment, and building of containers. §.§.§ Docker Docker[https://docker.com/ (accessed April 24th, 2024)] provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow many containers to run simultaneously on a host. Containers provide a robust solution for bundling software and its dependencies into a transportable unit that can run seamlessly across diverse computing environments. A containerized application encapsulates all its libraries, settings, and tools within an isolated environment. Developers can avoid the arduous task of addressing compatibility issues with varied hardware and software and concentrate solely on the application's functionality. This inherent portability of containers eradicates the need to reconfigure applications for distinct environments, ensuring uniformity and dependability. §.§.§ Kubernetes Kubernetes[https://kubernetes.io/ (accessed April 24th, 2024)] is a portable, extensible, open-source platform for managing containerized workloads and services, which facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. A Kubernetes cluster consists of a set of worker machines, called nodes, which run containerized applications. Every cluster has at least one worker node. The worker node(s) host the Pods, which are the components of the application workload. 
Heterogeneous edge computing devices can be easily integrated and managed by Kubernetes as worker nodes. This also facilitates the deployment of different FL algorithms and the creation of varying testing scenarios in a distributed infrastructure environment. §.§ Libraries and Frameworks This section approaches the utilized libraries and frameworks to run federated learning (FL). §.§.§ PyTorch PyTorch[https://pytorch.org/ (accessed April 24th, 2024)] is a popular open-source machine learning library that was created by Meta's AI research team. It is used to develop and train deep learning models and is written in Python, which makes it easy to use and integrate with other Python libraries. PyTorch was the main library for training and testing the FL models used in this work. §.§.§ Poetry Poetry[https://python-poetry.org/ (accessed April 24th, 2024)] is a tool for dependency management and packaging in Python. It allows you to declare the libraries your project depends on, and it will manage (install/update) them for you. Poetry offers a lock file to ensure repeatable installs and can build your project for distribution. It was used in this work to create the packages of the FL client and server applications. §.§.§ Flower As discussed in this work, the concept of federated learning emerged in response to the need to leverage data from multiple devices while ensuring its privacy. However, federated learning introduces two additional challenges not present in traditional machine learning: scaling to various clients and dealing with data heterogeneity. To address these challenges, as proposed in <cit.>, Flower (flwr) has been developed as an open-source framework for building federated learning systems. Flower provides two primary interfaces: the client and the server. These interfaces enable the decentralization of standard centralized machine learning solutions by implementing the necessary methods, making building and deploying federated learning systems easier. In Flower's architecture[https://flower.dev/docs/architecture.html (accessed April 4th, 2024)] each edge device in the training process runs a Flower client containing a local machine-learning model. Flower provides a transparent connection via the Edge Client Proxy using an RPC protocol such as gRPC to ensure connectivity between the clients and the server. §.§ Monitoring and Visualization Finally, monitoring, storage, and data visualization tools are presented in this section. §.§.§ Prometheus Prometheus[https://prometheus.io/ (accessed April 24th, 2024)] is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted this tool, and the project has a very active developer and user community. It is now a standalone open-source project maintained independently of any company. Prometheus was used as the primary application for monitoring the usage of resources in the PoC solution of this work. §.§.§ Rook Ceph Rook Ceph[https://rook.io/ (accessed April 24th, 2024)] is a storage solution that combines the capabilities of the Rook storage orchestrator with the Ceph distributed storage system. Rook is an open-source tool for managing storage systems on Kubernetes, while Ceph is a distributed object and file storage system that provides scalability, reliability, and performance. Rook Ceph was the persistent storage for the client and server application. 
§.§.§ Grafana

Grafana[https://grafana.com/ (accessed April 24th, 2024)] is a data visualization tool commonly used with Prometheus that allows you to query, visualize, alert on, and understand your metrics. It was used as the primary tool to visualize the resource usage data collected by Prometheus.

§ PROOF-OF-CONCEPT IMPLEMENTATION

Previous sections detailed the conceptual framework proposed and the tools considered for its implementation. This section details the current PoC implemented to run fully functional FL scenarios in a distributed infrastructure with multiple clients and different data distributions. The source code of the software components described in this section, as well as the configuration files used to deploy and execute the experiments discussed in the following sections, is available at our GitHub repository[https://github.com/Open-Digital-Twin/framework-fl-testing]. The PoC consists of an end-to-end edge-like environment solution using edge computing devices orchestrated by Kubernetes to run the FL applications, i.e., the server and the clients, and the applications for monitoring resource usage. Figure <ref> shows the complete diagram of the PoC solution implemented. The distributed infrastructure layer comprises the Kubernetes cluster deployed at the Institute of Informatics, which serves as the foundation for each resource. The application layer can be visualized on top of the resources as containerized Docker images of each application running in the cluster pods. Further sections will go through more details about the layers, including how they were assembled with the tools and frameworks presented in Section <ref>, the underlying infrastructure of the Kubernetes cluster, and how the components of the PoC work.

§.§ Distributed Infrastructure Layer

The Kubernetes cluster at the Institute of Informatics includes a variety of computing resources, with 352 CPUs, 544 GiB of RAM, and 6.3 TiB of disk space spread across 43 distinct nodes, as indicated in Table <ref>. These nodes are conveniently categorized into three classifications: "computer", "edge", and "server", enabling experimentation with pods that are constrained to specific machines with hardware that closely matches the real-life devices being simulated. For instance, the "edge" label encompasses a total of twenty-three Raspberry Pi devices (three Raspberry Pi 4s and twenty Raspberry Pi 3s) with less powerful hardware specifications relative to the "computer" and "server" machines. A network switch interconnects the cluster nodes, and distributed storage is configured using the Rook Ceph Filesystem (subsection <ref>) to build a layer on top of the storage resources. This enables the mounting of Persistent Volumes (PVs) from the storage pool, via Persistent Volume Claims (PVCs), into the containers, which can be shared between applications to enable data distribution by the server and the saving of experiment results from each client. When a pod requests storage resources, it creates a PVC that specifies the amount of storage required and any other requirements, such as access mode and storage class. The Kubernetes scheduler then looks for an available PV that matches the requirements specified in the PVC. If a matching PV is found, it is bound to the PVC, and the pod can use the storage resource provided by the PV. The advantage of using PVs and PVCs is that they provide a level of abstraction between the pod and the underlying storage infrastructure.
This allows pods to request storage resources without having to know the details of the underlying storage infrastructure. Drawing a parallel with the conceptual framework, in the PoC, the infrastructure of our Kubernetes cluster serves as the distributed infrastructure, and Kubernetes plays the role of the infrastructure manager.

§.§ Resource Layer

Kubernetes utilizes labels to organize objects. They are key-value pairs that can be attached to Kubernetes objects such as pods, services, nodes, and deployments and can be used to identify and group related objects. This is a powerful mechanism for selecting and manipulating subsets of objects based on specific criteria and can also be used to manage nodes in a Kubernetes cluster. Nodes are the worker machines that run containerized applications and services in a Kubernetes cluster. By attaching labels to nodes, we can assign specific roles or attributes to them and use those labels to manage and schedule workloads on those nodes. For example, one can label nodes based on their hardware characteristics, such as CPU or memory capacity, and then use those labels to schedule workloads that have specific hardware requirements. The Kubernetes cluster of the Institute of Informatics uses the "node-type" label to identify whether a node is a "computer", "server", or "edge" type of computational device. Finally, in the PoC solution, the resource layer used is a direct reflection of the configuration set in the Kubernetes deployment file. Labels were used to assign where each containerized application would run and how many node resources would be available for it to use.

§.§ Application Layer

The application layer of the PoC solution is composed of all the Docker images built by the author, such as the FL server, client, and experiment results images, which contain all the implemented logic for FL algorithm testing. For monitoring purposes, the Prometheus monitoring stack installed in the Kubernetes cluster was used to capture resource usage by the applications, and Grafana was used to visualize the data in charts. Figure <ref> gives a high-level overview of how an experiment proceeds from beginning to end in the PoC solution. The blue boxes represent the actions of the Kubernetes cluster, the yellow ones those of the FL server application, and the orange ones those of the FL clients. From a high-level perspective, the user configures a scenario in Lens[Lens is a GUI to simplify the interaction with kubectl: https://k8slens.dev/ (accessed April 24th, 2024)] for deployment, and Kubernetes scales the necessary resources and deploys the applications. Afterward, the FL server handles the data distribution of the dataset and starts the FL training algorithm. The client waits for the server to connect with it, runs local training rounds, and returns the model parameters to the server. The server receives those parameters, aggregates them using the aggregation method configured by the user in the experiment layout, and distributes them back to the clients, which will start local training rounds again with the new parameters. After all server rounds are done, the FL server will save the results in the correct output folder of the experiment, and the experiment will be over. The GitHub repository used in the development of the solution was organized into isolated packages containing all necessary code, data, or declarations of each component to maintain independence and enable better reusability.
The FL server and client applications were also developed to run in bare-metal environments. In addition, to allow easy application testing during development, you can run a docker-compose[https://docs.docker.com/compose/ (accessed April 24th, 2024)] environment to deploy the application locally using the built Docker images. Further subsections will detail each one of the applications and their role in the PoC solution.

§.§.§ Server Application

The server is a containerized Python application with the Flower server framework as a dependency. The server storage manager is responsible for initializing the experiment root path in the distributed storage, where the results and logs of the currently deployed run of each client will be saved. The server will also be responsible for receiving the models trained by the clients, averaging the received parameters using the selected strategy, and then updating the clients' models with the averaged parameters. The connection to the clients is made through gRPC with SSL encryption. The algorithm is outlined in Algorithm 1. The server address, number of server rounds, dataset, data distribution, global aggregation strategy, client local rounds, and minimum number of connected clients are parameterized for the server application and can be changed in each deployment. From the folder structure and source code files available at our GitHub repository, it is easy to correlate each part of the application with the conceptual framework, since each source code file encapsulates its corresponding component, as we explain in the following.

dataset.py contains all the implemented classes of the datasets supported by the framework, used by the application to handle the dataset configuration in memory and to properly obtain the raw data of the dataset. At the time of this work, the CIFAR-10, CIFAR-100, and FMNIST datasets were implemented. Further development of this class can enable any dataset to be compatible with the framework.

main.py contains the main program loop and the server component of the application, which is responsible for communicating with the clients and controlling the entire FL process, including selecting available clients to start training, controlling the number of FL server rounds and the round timeout, aggregating the global model parameters, and distributing and retrieving model parameters. All the implemented packages are included here, and the central server program runs as demonstrated in Algorithm 1.

model.py contains the supported models and is responsible for the server-side initialization of global model parameters, since some global model aggregation strategies need it. At the time of this work, the models used for testing are a simple CNN, GoogLeNet <cit.>, and ResNet <cit.>. These models were chosen due to their simplicity and popularity. Further development of this class can enable other models to be compatible with the framework.

storage.py contains the storage manager class, which is responsible for distributing the raw data of the configured dataset throughout the distributed infrastructure storage to meet the user requirements for testing.
At the time of this work, the storage manager can create data distributions by changing the following parameters:

Balance: a boolean parameter that specifies whether the data should be balanced between clients. If true, the dataset is balanced, meaning the data batches have the same number of samples. If false, the number of data samples differs between clients.

Non-IID: a boolean parameter that specifies whether the data should be non-IID or IID between clients. If true, the dataset is non-IID, meaning there is a class imbalance between the clients. If false, the dataset has a balanced class distribution.

Distribution: specifies how the data should be distributed with respect to the dataset classes when Non-IID = true. If set to pat, it generates a pathological scenario of class imbalance, where each client has only a subset of the classes. If set to dir:, it uses a heterogeneous unbalanced Dirichlet distribution of the classes, similar to <cit.>; an α parameter can modify the shape of the distribution.

Further development of the storage manager class can enable more types of data distributions to be compatible with the framework.

strategies.py selects and configures the supported global model aggregation strategies from the user-specified configuration. At the time of this work, the supported strategies implemented are FedAvg, FedAvgM, FedYogi, and FedOpt. Further development of this class can enable more strategies to be compatible with the framework.

§.§.§ Client Application

The client waits for the server storage manager to initialize the experiment path in the distributed storage, connects to the server through a gRPC connection encrypted with SSL, and performs the pre-defined number of epochs received from the server, which is the number of times that the dataset passes through the neural network. When multiple clients are running, an individual client is oblivious to the existence of the other clients – it can only communicate with the server. The algorithm for the client application can be seen in Algorithm 2. The client will only upload the model when the server requests the models, and the loop will end when the server finishes its rounds. From our folder structure at GitHub, it is easy to correlate each part of the application with the conceptual framework, since each source code file encapsulates its corresponding component. The client's model and the address it will connect to are parameterized for the client application and can be changed in each deployment.

client.py contains the client classes with the training and testing algorithms used. At the time of this work, it supports only one PyTorch training algorithm.

main.py contains the client application main loop described in Algorithm 2. It is responsible for initializing the environment variables used to set parameters, such as the local model used for training, and common variables, such as the experiment path used to save results.

model.py contains all the supported models described in the server application section.

storage.py contains the storage manager class of the client, which is responsible for loading the distributed training and test data batches from the distributed storage and for saving the client's experiment test results in the storage.
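To illustrate how such a client and server pair fits together in Flower, a minimal sketch is shown below. It is not the framework's actual source code: the placeholder model, the toy data partition, the helper functions (load_partition, get_weights, set_weights), and the server address are assumptions, and the exact Flower API signatures may vary slightly between versions.

import flwr as fl
import torch
import torch.nn as nn

# Placeholder model and data partition loader (the real framework loads the
# client's batch from the distributed storage via its storage manager).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def load_partition():
    x = torch.randn(64, 1, 28, 28)
    y = torch.randint(0, 10, (64,))
    return (x, y), (x, y)          # (train, test) toy partition

def get_weights(net):
    return [p.detach().cpu().numpy() for p in net.state_dict().values()]

def set_weights(net, weights):
    state = dict(zip(net.state_dict().keys(), [torch.tensor(w) for w in weights]))
    net.load_state_dict(state, strict=True)

class TorchClient(fl.client.NumPyClient):
    def __init__(self):
        self.train_data, self.test_data = load_partition()

    def get_parameters(self, config):
        return get_weights(model)

    def fit(self, parameters, config):
        set_weights(model, parameters)
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        x, y = self.train_data
        for _ in range(int(config.get("local_epochs", 1))):
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
        return get_weights(model), len(y), {}

    def evaluate(self, parameters, config):
        set_weights(model, parameters)
        x, y = self.test_data
        loss = nn.functional.cross_entropy(model(x), y).item()
        acc = (model(x).argmax(1) == y).float().mean().item()
        return loss, len(y), {"accuracy": acc}

# Client side (one process per client):
#   fl.client.start_numpy_client(server_address="fl-server:8080", client=TorchClient())
# Server side (one process), selecting a strategy such as FedAvg:
#   strategy = fl.server.strategy.FedAvg(min_available_clients=10, fraction_fit=1.0)
#   fl.server.start_server(server_address="0.0.0.0:8080",
#                          config=fl.server.ServerConfig(num_rounds=10),
#                          strategy=strategy)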
§.§.§ Monitoring Application Prometheus uses the Kubernetes API to discover the various resources it needs to monitor in the cluster, such as pods, services, deployments, nodes, and more. It does this by querying the Kubernetes API server for information about the desired resources and then collecting metrics data from these resources. For example, to monitor a Kubernetes pod, Prometheus will query the Kubernetes API server to get information about the pod's name, namespace, labels, and other metadata. Prometheus will then use this information to collect metrics data from the pod, such as CPU and memory usage, network traffic, and other metrics. Grafana queries the data from the Prometheus database to enable real-time visualization of resource usage in dashboards. Some sample charts of these dashboards are presented in Section <ref>. §.§.§ Experiments Results Application To retrieve the experiment results from the Kubernetes cluster, a container running an Ubuntu base image from Docker was used to mount the used PVs and access the results. Since the results are stored in files, we can copy them to the local machine to read and interpret them. Figure <ref> illustrates how the experiments that run in the PoC are saved in the distributed storage. .temp contains the partial results while running the application. This directory is deleted after the server application storage manager has moved the finalized run into its correct directory. data contains each client's training and testing data batches used in the experiment. runs contains the results for each experiment run. The logs subfolder of a run holds all the logs from the server and client applications, and the results subfolder contains the evaluation matrix for each local epoch run in the clients. A copy of the configuration used in the experiment run is saved to enable visualization of how the data was distributed in each client, how the classes were distributed inside each data batch, and which model and global aggregation strategy was used. config.json contains the last configuration used in the experiment. This enables testing of the same data distribution using several parameters, strategies, and models. § EXPERIMENTAL SETUP This section presents the experimental setup used to evaluate the solution proposed in Section <ref>. Three experiments were conducted to demonstrate the framework's capabilities. The adjusted parameters are the dataset, global model aggregation, local client model, client epochs, and server rounds. Each experiment was run with ten clients — named client-0 through client-9 and taken from the set of computational devices labeled computer in the cluster — running with different data distributions. Also, in the Flower framework, the fraction fit was set to 1, and the minimum available clients was set to 10 to ensure running every server round with all the clients participating. The specific clients involved in a given experiment may vary from one run to another, showing how the framework can help reproduce real-life scenarios effectively. §.§ Datasets The datasets considered for the experiments were the following: CIFAR-10 Consists of 60,000 32x32 color images divided into ten classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images <cit.>. CIFAR-100 Similar to CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. 
Each image comes with a fine label (the class to which it belongs) and a coarse label (the superclass to which it belongs) <cit.>.

Fashion-MNIST A dataset of Zalando's article images consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Fashion-MNIST is intended to be a direct drop-in replacement for the original MNIST dataset <cit.> for benchmarking machine learning algorithms, as it shares the same image size and the same structure of training and testing splits.

§.§ Performance Metrics

The results from each experiment are saved in performance score matrices and written to log files, as described in Section <ref>. The F1 score is a standard metric for evaluating classification models and is particularly useful in cases with imbalanced data. It is the harmonic mean of two competing scores: precision and recall. Precision is the proportion of true positives out of all predicted positives (true positives + false positives), measuring how often the model correctly predicts a positive class, and recall is the proportion of true positives out of all actual positives (true positives + false negatives), measuring how well the model can detect positive classes. Thus, for binary classification problems, the F1 score is calculated by

F1 score = 2 × (precision × recall) / (precision + recall)

Given the F1 score of each class, as defined by (<ref>), we can aggregate them into different metrics typically used for evaluating the performance of multi-class classification models: the micro, macro, and weighted average F1 scores. These metrics are computed for each client and saved to their log files. In short, the performance metrics analyzed were the following:

Accuracy Measures the model's accuracy as the ratio between the number of correctly predicted samples and the total number of samples in the test dataset. For binary class datasets, the micro average F1 score equals accuracy.

Macro Averaged F1 Score Useful when dealing with balanced datasets or evaluating the model's overall performance across all classes equally. It is measured by computing the average F1 score of all classes in the dataset.

Weighted Averaged F1 Score Useful for imbalanced datasets, as it assigns more importance to classes with more samples by considering their distribution. It is measured by computing the F1 score of each class and weighting the average F1 score by the number of instances in each class.

Losses Distributed Represents the average loss across all clients. The server's log file contains the average loss of each client.

Accuracy Distributed Useful to assess the overall FL accuracy. It is measured by computing the weighted average accuracy of all clients, where the accuracy of each client is weighted by the number of samples in its dataset. The server's log file contains the accuracy of each client.

It should be noted that while prediction performance is a critical metric for any machine learning algorithm, it is not the focus of this work. The primary purpose of the experiments is to demonstrate the framework's capabilities and how it can provide an easy way to completely change an FL testing scenario by only changing input parameters in a deployment file; higher prediction performances would require much more training time, making it impracticable to run numerous experiments in a reasonable time. It would also require further tweaks on each model run, which is out of the scope of this work.
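The micro, macro, and weighted F1 aggregations described above can be computed directly from a client's predictions, for instance with scikit-learn, as in the short sketch below. The framework's own implementation may differ; the toy labels and predictions are only meant to illustrate the three averaging modes.

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Toy ground-truth labels and predictions for a 10-class problem,
# standing in for one client's test results in a given server round.
rng = np.random.default_rng(42)
y_true = rng.integers(0, 10, size=1000)
y_pred = np.where(rng.random(1000) < 0.7, y_true, rng.integers(0, 10, size=1000))

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    # Micro average aggregates all true/false positives over the classes.
    "f1_micro": f1_score(y_true, y_pred, average="micro"),
    # Macro average gives every class the same importance.
    "f1_macro": f1_score(y_true, y_pred, average="macro"),
    # Weighted average weights each class by its number of instances.
    "f1_weighted": f1_score(y_true, y_pred, average="weighted"),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")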
§ RESULTS The results were collected directly from the output folder and aggregated into visualization charts to ease analysis. In total, 143 containers (server and clients) were run in the cluster nodes, resulting in 143 log files containing performance measures through each server round. Each experiment run generates 11 log files, one from the server's container and 10 from the clients' containers. As the number of output files can grow exponentially, we considered only the last run of each experiment, i.e., 33 files were analyzed to generate the results. We used external commercial tools to manipulate the data collected from all log files and show the results through meaningful charts. As a future work, the framework can include automatic chart generation from the performance measures generated from the experiments. §.§ Experiment 1 Table <ref> details the parameters used for the first experiment and the data distribution across clients. The FMNIST dataset was used with a non-IID and unbalanced data distribution. Also, the classes were distributed using a Dirichlet distribution with α = 0.1. The global model aggregation used was FedOpt, and the experiment ran with ten local epochs and ten server rounds. Figure <ref> shows each client's accuracy through the server rounds. The x-axis indicates the server round number, and the y-axis indicates the computed accuracy. It is important to notice that accuracy measures the percentage of correctly classified cases regardless of their classes, assigning the same importance to all instances. The figures indicate that clients with large amounts of data had a relatively high initial accuracy in the first rounds. In contrast, the accuracy of clients with little data was initially low. Over the server rounds, the chart reveals that the aggregate model had a subtly negative impact on the accuracy of clients with more data available while benefiting the accuracy of clients with less data. Figure <ref> shows the macro F1 score of each client through the server rounds. Notice that macro F1 assigns the same importance to each class of the dataset, which means this metric tends to exhibit better results as the performance of all classes in the dataset improves evenly. The results suggest that the performance may seem unacceptable when focusing on averaging the performance of different classes with the same weight, despite some clients having a reasonable accuracy, as shown in Figure <ref>. Even so, as a general trend, the chart reveals performance increases over the server rounds. Another way to evaluate the model's performance is to measure the clients' weighted F1 score, i.e., the average F1 score weighted by the number of instances in each class, as exemplified in Figure <ref>. The chart shows that some clients (5, 6, and 8) had a relatively high initial performance with a slight decrease over time. Contrasting with the results in Figure <ref>, we can note that the performance of these clients in the majority classes (with more instances) is considerably higher in the initial rounds. In these majority classes, however, the aggregated model worsened the performance over the server runs, even though the overall performance considering all classes improved. When observing client-6 in Figure <ref>, for example, there is a performance leap between rounds 4 and 5, suggesting that the aggregated model improved the model's overall performance despite reducing performance for the majority class. 
Figure <ref> shows the distributed accuracy over all clients and the distributed loss of the FedOpt aggregation strategy in each server round. The x-axis indicates the server round number, the left y-axis indicates the accuracy, and the second y-axis indicates the losses. We can see that the overall performance of the FL algorithm converged to an accuracy between 60% and 70% with less than 100 losses. Further analysis could be done to understand better how the dataset's class distribution impacted the results, as the framework saves this information in the configuration file of each experiment run. Recall that the purpose of this work is not to present a comprehensive evaluation of FL algorithms but to demonstrate how the proposed framework can aid in doing so. Therefore, further analysis can be subject of future work. §.§ Experiment 2 As shown in Table <ref>, the second experiment was run using the CIFAR-10 dataset with non-IID and unbalanced data distribution, similar to the first experiment. However, we used a pathological class distribution for two classes per client and the FedAvg global model aggregation. The experiment ran with fifteen local epochs and ten server rounds. The results analysis follows the same approach used in Section <ref>. Figure <ref> shows the accuracy of all clients through the server rounds. We can see that clients 0 and 1 can initially classify a good part of the test instances, but most clients cannot classify any cases. As more server rounds are run, however, the performance of clients with high accuracy decreases while the performance of clients with low accuracy increases. As a result, the overall accuracy stabilizes in a medium range between 30% and 60%, i.e., the aggregation model was able to distribute the performance among clients. Figure <ref> presents the macro F1 score of all clients through the server rounds. It shows that clients 0 and 8 initially managed to obtain an excellent performance in the two classes from which they had data. However, the aggregation drastically harmed their performance, reducing it to less than 20%. On the other hand, the aggregation positively affected clients who performed quite poorly initially. Nevertheless, the general performance of all clients, considering all their classes equally important, turned out to be very low. In contrast, Figure <ref> shows the weighted F1 score, suggesting a more erratic behavior over the server rounds. We can note that clients 0 and 8, which had a noteworthy decline in performance in the first round when all classes had the same weight, as shown in Figure <ref>, had a different behavior when considering the importance of each class proportional to the number of instances. The performance of these clients for their majority classes dropped after round 0 to less than 40% but increased after round 1 to between 60% and 70% and then dropped again throughout the server rounds. Deeper analyses must be carried out to investigate such a behavior. Conversely, we also can note that the performance of clients who had poor performance at the beginning had consistent performance improvements over the server rounds. In general, the average performance of all clients stabilized in the range between 30% and 40%. Figure <ref> shows the distributed accuracy over all clients and the aggregated loss of the FedAvg strategy in each server round. The overall performance of the FL algorithm converged to an accuracy between 40% and 50% with almost 300 losses. 
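For reference, the following NumPy sketch illustrates the two label-skew schemes used across these experiments: a Dirichlet distribution with concentration parameter α (as in Experiment 1) and a pathological split with a fixed number of classes per client (as in Experiments 2 and 3). The function names and sampling details are simplifying assumptions, not the framework's actual partitioning code.

# Sketch of the two non-IID partitioning schemes referred to above
# (illustrative only, not the framework's implementation).
import numpy as np

def dirichlet_partition(labels, n_clients=10, alpha=0.1, seed=0):
    # Each class is split across clients with Dirichlet-distributed shares.
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        shares = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, chunk in zip(clients, np.split(idx, cuts)):
            client.extend(chunk.tolist())
    return clients

def pathological_partition(labels, n_clients=10, classes_per_client=2, seed=0):
    # Each client only receives samples from a few randomly chosen classes.
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    clients = []
    for _ in range(n_clients):
        chosen = rng.choice(classes, size=classes_per_client, replace=False)
        idx = np.where(np.isin(labels, chosen))[0]
        clients.append(rng.choice(idx, size=len(idx) // n_clients, replace=False).tolist())
    return clients

labels = np.random.default_rng(1).integers(0, 10, size=50_000)  # CIFAR-10-like labels
print([len(p) for p in dirichlet_partition(labels, alpha=0.1)])

Lower values of α make the class shares more skewed, which is why α = 0.1 produces strongly unbalanced clients in the first experiment.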
§.§ Experiment 3
The last experiment was run using the CIFAR-100 dataset, which also had a non-IID, but this time balanced, data distribution. Similar to the second experiment, a pathological distribution of twenty classes per client was used. The global model aggregation used was FedYogi, and the experiment ran with ten local epochs and fifteen server rounds. Table <ref> shows the parameters and the balanced data distribution used in the experiment. The experiment deals with a classification problem that is fairly difficult in itself, because there are more classes to predict and, originally, relatively few instances available per class. Furthermore, in the distributed context, each client has access to only 20 of the classes and to less than half of the original dataset samples per class. Due to this, we expect poor performance in this scenario. The chart in Figure <ref> shows that the proportion of correctly predicted instances generally increases for all clients, with some exceptions (see between rounds 9 and 13), demonstrating that, in general, aggregation benefits clients over time. Figure <ref> shows the macro F1 score. When all classes are considered with equal importance, the general performance of all clients is poor, not even reaching a macro F1 score of 10%. Even so, the generally beneficial effect of aggregation throughout the rounds is still visible, with some exceptions. Figure <ref>, with the weighted F1 score, shows a scenario similar to that seen in the two previous charts. That is, in general, the performance of most clients improves with aggregation over the rounds, with some exceptions. However, in this metric the classes represented in each client gain more importance, which explains the higher values compared to the macro average. Figure <ref> shows the aggregated accuracy over all the clients and the aggregated loss of the strategy in each server round. The x-axis indicates the server round number, the left y-axis indicates the accuracy, and the second y-axis shows the losses. The overall performance of the FL algorithm converged to an accuracy between 40% and 45%, with a loss close to 800. Curiously, the loss of the strategy decreased at the beginning and afterward increased to a level higher than where it started, reflecting the overall poor performance of the clients.
§ RESOURCE USAGE MONITORING
The PoC solution uses the Grafana application to monitor the use of computing and networking resources during the experiments, allowing the visualization of the data collected by Prometheus. To demonstrate some of the monitoring capabilities of the PoC solution, an analysis of CPU usage and network traffic was made for Experiment 3 (subsection <ref>). It is important, once again, to emphasize that the focus is on demonstrating the capabilities and not on the achieved performance. Similar analyses can be done for all the experiments run, since all the metrics are saved in the Prometheus server database.
§.§ CPU Usage
Figure <ref> shows each client and server's CPU resource usage in the third experiment. The x-axis indicates the timestamp of the metric, the y-axis indicates the CPU usage in percentage, and data samples are collected at 10-second intervals. The figure is quite cluttered, since all ten clients and the server are plotted in it.
However, we can see the execution of each of the fifteen server rounds ran in the experiment and that some of the clients had a CPU usage of 20-30% while others stayed between 1-5%. This can be explained by the client's CPU processing power heterogeneity since some clients run on different hardware profiles. We also noticed that the workload was distributed evenly across the clients since the data distribution was balanced. The choice of using Grafana for experiment data visualization allows us to dynamically select which data points to plot in the chart. So, to better analyze the assumption about the first chart, we can choose some clients of specific hardware and others of another specification to compare them. Figure <ref> demonstrates this selection using the same chart with now only client-5, client-4, and the server CPU usage being plotted. It becomes evident that client-4 (Intel Core i5-3210M CPU @ 2.50GHz) presented higher CPU resource usage and needed more time to process the same number of samples as client-5 (Intel Core i7-5500U CPU @ 2.40GHz). As for the server (orange line in Figure <ref>), the peak of CPU usage is noticeable at the beginning of the experiment as a brief bump (near 1%) before the first training round. This can be explained by the fact that the data distribution created by the server is done at the beginning of the experiment. We can also see that the server's CPU resource usage is almost negligible compared to any client's. The server only uses the CPU in between training rounds to aggregate the parameters received from the clients, which is a pretty simple task compared to the training done by the clients, thus explaining this phenomenon. The hardware profile of the server node is the same as client-5 (Intel Core i7-5500U CPU @ 2.40GHz), thus the amount of CPU resources needed for client and server tasks can be analyzed and understood in perspective. To demonstrate that the analysis done in experiment 3 could be done in other experiments, we present a sample chart from experiment 1. Figure <ref> shows the same chart of figure <ref> plotted with data from the first experiment. It is interesting to see how the data distribution affects the CPU usage of the clients. We can visualize the gaps in between them due to the size of the samples being different for each client, even for those running in similar hardware profiles. Further analysis can be done in the Grafana dashboard to understand the behavior of each one of the clients individually and in any time frame specified. We will not present an exhaustive use of interactions with Grafana charts in this paper for the sake of brevity. §.§ Network Traffic Similar to the CPU usage charts presented in the previous subsection, the same type of analysis can be done to visualize the network traffic between the clients and the server. Prometheus gathers information on traffic flowing in and out of Pods, collecting metrics such as, packets in/out, bytes in/out, and errors/drops, whereas Grafana enables interactive visualization and filtering of these metrics in different types of charts. To demonstrate this we created a simple dashboard with line charts containing bytes in/out of all Pods participating in the experiments. Two of these charts are presented in the following with data from the third experiment over the same time frame previously used in Figures <ref> and <ref>. Figure <ref> shows the received (inward) traffic for each one of the clients involved in the third experiment as stacked areas. 
The X-axis indicates the timestamp of the metric, and the Y-axis indicates the network traffic in Megabytes per second (MBps). Each data sample is also collected with a 10-second interval between them. The first aspect to notice is that peaks of traffic are presented at the beginning and end of each round, which is expected since that is when communication between client and server should occur. The aggregated traffic peak considering all 10 clients was around 80 MBps. We did not show traffic sent (outward) from each client Pod because the communication follows a client-server paradigm, i.e., all the traffic flowing out of clients goes into the server and vice-versa. To better understand client and server behavior, we can filter only one client and the server in the same chart. For example, Figure <ref> shows the traffic received (inward) in the server and client-9 for the third experiment. One can see that the communication pattern is always the server receiving data and, afterward, the client. This is expected since, initially, the clients connect to the server, and the server accepts the connection. During training, the pattern is maintained since the server receives the parameters from the clients, and the client receives the aggregated parameters from the server. It is possible to visualize that inward traffic at the server side (received parameters from all clients) generates traffic peaks at most around 10-12 MBps, which means that in this experiment clients send less data than they receive. In other words, the amount of data required to represent the client parameters is smaller than the aggregated parameters generated at the server. This type of deeper understanding of the resource consumption patterns in the client-server interaction can help developers to better tune their FL-based systems. The Prometheus monitoring application collects many other metrics, such as network packets, memory usage, or any other custom metric from applications configured in the system. For the purposes of demonstration, in this work, we considered only the CPU and the network due to their relevance in FL. Nevertheless, the interested experimenter can easily create new dashboards to monitor their experiment metrics in Grafana as they wish. § CONCLUSION FL is an ML paradigm that is constantly being adopted due to its distributed approach, the necessity of data privacy, and the vast increase in IoT devices. Heterogeneous data distributions/contents of clients proved to be a real challenge and gained great notoriety over the last few years. Proposals, such as DRL algorithms to dynamically learn the weight of the contributions of each client at each round, continue to be developed and will be fundamental to increasing the accuracy and usability of FL in more applications. Testing and assessing this FL algorithm can be a challenging and complex task due to the systems' distributed nature and the possible scenarios. To address this complexity, this work proposed a conceptual framework to facilitate testing federated learning scenarios in distributed computing environments with different types of data distributions. This work achieves the intended proposal by proposing a conceptual framework for testing federated learning scenarios and demonstrating an implementation of those concepts in the PoC solution. 
The solution developed shows that creating an edge-like FL testing framework that can scale to several different types of real-life scenarios using distributed heterogeneous computing and other data distributions is possible, inspiring further development of the concepts and improvement of the PoC solution. To prove the capabilities of the PoC solution, three experiments with three different FL scenarios were conducted. The results showed how it is possible to analyze the impacts of class and data imbalance in a real-life distributed system of FL through the framework via the f1-measures outputted in the experiment results. It was also possible to see the resource usage of the applications via the monitoring solution and to demonstrate the impact of the underlying heterogeneous infrastructure used. The critical point taken from this work is that designing a solution from the beginning that has independence between infrastructure and applications has shown to be very efficient and effective during the development phase. Containers enabled re-usability, isolation, and easy testing of the applications despite the environment in which they were deployed, either locally or in the distributed cluster. Also, this allowed the system to scale horizontally by adding more nodes to the cluster from an infrastructure perspective or by creating more application replicas from an application point of view. Further improvement could be made to generate automated visualizations of each client and the server's aggregated results (section <ref>). For instance, it should be possible to output the results of the FL algorithm's performance to the Prometheus server so that it can be further visualized by Grafana or another visualization tool with access to its database. This work also opens new opportunities to develop experiments with federated learning scenarios. Fault tolerance experiments could also be done in the PoC solution to enable further improvement in the framework's reliability. From an application point of view, one way would be to test limited connectivity scenarios by disconnecting clients between training. Stress testing from an infrastructure point of view would also be interesting to understand the platform's limitations. All of the code is publicly available and has been developed with extensibility in mind. Future works could also extend the datasets, models, and FL strategies supported and better improve the applications as a whole. § ACKNOWLEDGMENTS This study was partially funded by CAPES, Brazil - Finance Code 001. This work was also supported by the São Paulo Research Foundation (FAPESP), Brazil grant 23/00673-7. elsarticle-num
http://arxiv.org/abs/2407.12136v1
20240716194534
Molecular Topological Profile (MOLTOP) -- Simple and Strong Baseline for Molecular Graph Classification
[ "Jakub Adamczyk", "Wojciech Czech" ]
cs.LG
[ "cs.LG", "q-bio.QM" ]
Jakub Adamczyk (ORCID: 0000-0003-4336-4288; corresponding author, email: jadamczy@agh.edu.pl) and Wojciech Czech (ORCID: 0000-0002-1903-8098), Faculty of Computer Science, AGH University of Krakow, Krakow, Poland
§ ABSTRACT
We revisit the effectiveness of topological descriptors for molecular graph classification and design a simple, yet strong baseline. We demonstrate that a simple approach to feature engineering - employing histogram aggregation of edge descriptors and one-hot encoding for atomic numbers and bond types - when combined with a Random Forest classifier, can establish a strong baseline for Graph Neural Networks (GNNs). The novel algorithm, Molecular Topological Profile (MOLTOP), integrates Edge Betweenness Centrality, Adjusted Rand Index and SCAN Structural Similarity score. This approach proves to be remarkably competitive when compared to modern GNNs, while also being simple, fast, low-variance and hyperparameter-free. Our approach is rigorously tested on MoleculeNet datasets using the fair evaluation protocol provided by Open Graph Benchmark. We additionally show out-of-domain generalization capabilities on the peptide classification task from the Long Range Graph Benchmark. The evaluations across eleven benchmark datasets reveal MOLTOP's strong discriminative capabilities, surpassing the 1-WL test and even the 3-WL test for some classes of graphs. Our conclusion is that descriptor-based baselines, such as the one we propose, are still crucial for accurately assessing advancements in the GNN domain.
§ INTRODUCTION
Graph classification has become a crucial type of supervised learning problem, increasingly relevant across various scientific domains. This surge in importance is largely attributed to the expanding quantity of structured datasets that represent pairwise relationships among various types of modeled entities. Graph classification algorithms are utilized in a variety of fields, particularly in chemoinformatics, where their application in Quantitative Structure-Activity Relationship (QSAR) modeling plays a critical role in predicting the functions of biochemically significant molecules <cit.>. In particular, the prediction of ADME (Absorption, Distribution, Metabolism, Excretion) pharmacokinetic properties plays a pivotal role in supporting contemporary in-silico drug design <cit.>. Graph classification confronts a fundamental difficulty: measuring the dissimilarity between objects that are not situated in a metric space. Therefore, graphs, unlike more straightforward tabular or categorical data, require special methods to capture their complexity and relationships. Traditionally, this problem was solved by extracting isomorphism-invariant representations of graphs in the form of feature vectors, also known as graph embeddings, descriptors, or fingerprints. Alternatively, explicit pairwise similarity measures, known as graph kernels, can be constructed to systematically compare graph substructures <cit.>. Both methods remain intrinsically unsupervised or task-independent; however, domain-specific knowledge can be incorporated through careful feature engineering. Although graph descriptors have achieved success in various benchmark classification tasks, more recently they are often surpassed by more advanced graph representation learning models, exemplified by Graph Neural Networks (GNNs). The latter learn task-specific representations and can take advantage of pre-training to reduce the negative effects of limited training data <cit.>.
In developing a universal framework for graph classification, GNN models frequently incorporate descriptors, either as a method of input data augmentation or as supplementary global features in the readout layer <cit.>. Given the computational expense of graph representation learning, the requirement for extensive training data, the challenge of transferring pre-trained knowledge to specialized prediction tasks, and the prevalence of domain-specific graph descriptors, comparing GNNs with traditional methods remains valuable. This is particularly true when descriptor-based methods serve as a baseline, indicating whether GNNs can learn additional, task-specific features. The studies <cit.> and <cit.> have identified significant obstacles hindering progress in the field of machine learning. These include challenges in effectively evaluating models, particularly issues related to non-replicable results and comparisons using inadequate baselines. Besides, the study presented in work <cit.> advocates for statistical rigor, when comparing classifiers across multiple datasets. More specifically, in the graph classification field, the authors of <cit.> describe problems with replicating GNN results caused by lack of strict separation between model selection and model evaluation step. Moreover, they show that under a fair comparison framework, simple structure-agnostic baselines can outperform GNN models such as GIN or GraphSAGE. In <cit.> the authors demonstrate that trivial 1-layer GCN can perform on par with complex GNNs such as DiffPool. The work <cit.> similarly notes the effectiveness of training-free vertex descriptors in link prediction tasks. In the realm of molecular graph classification it was shown that descriptor-based models, particularly those utilizing molecular fingerprints, not only yield better average prediction results than GNN models but also are computationally cheaper by an order of magnitude <cit.>. The clear need of comparable prediction results and maintaining fair leaderboards led to the creation of benchmark datasets and related evaluation protocols such OGB <cit.>, MoleculeNet <cit.> or TDC <cit.>. Motivated by research underscoring the value of robust baselines, and inspired by recent methods utilizing graph topology descriptors <cit.>, we propose Molecular Topological Profile (MOLTOP), a baseline method for molecular graph classification, utilizing both topological descriptors and simple atom and bond features. The resulting baseline, under the fair evaluation protocols offered by modern benchmarks, results in a surprisingly efficient and strong model, able to outperform contemporary GNNs. Our method is fast, scalable, robust in distinguishing graphs, non-parametric, and it exhibits low-variance in prediction tasks. Additionally, we present the studies verifying expressive power and feature importance of the proposed representation. The code is available at https://github.com/j-adamczyk/MOLTOPhttps://github.com/j-adamczyk/MOLTOP. § RELATED WORKS Graph descriptors, which generate isomorphism-invariant vectors representing graphs, exemplify the feature-engineering approach to graph classification. Descriptors are versatile in representing features at different levels – from granular to aggregated, local to global <cit.>, and from purely structural aspects to those including multidimensional labels <cit.>. 
In practical applications, the descriptors from spectral graph theory <cit.> or the ones using histogram aggregation of vertex/edge topological features <cit.> have successfully rivaled more complex methods. The approach of generic graph descriptors was expanded by incorporating domain-specific representations, like molecular fingerprints, which have become widely used in predicting biochemical properties and molecular database search. Typical fingerprints are bit-vectors of a given size, built based on depth-first search explorations from each atom, and incorporating its 2D <cit.> or 3D structure <cit.>. The molecular property prediction based on molecular fingerprints can be highly competitive to GNNs, as shown in <cit.> and evident from OGB leaderboards <cit.>. Bypassing the need for manual feature engineering, GNNs provide an automated method for extracting task-specific graph features and transporting them directly to a trainable readout layer. Starting from early works introducing Graph Convolutional Network (GCN) <cit.> and GraphSAGE <cit.> the field of graph representation learning has evolved significantly, leading to the development of numerous models, as categorized by <cit.>. Some of these models, e.g., Graph Isomorphism Networks (GIN) <cit.> achieved state-of-the-art performance in benchmark graph classification tasks including molecular property prediction. GIN was designed to match the discriminative power of the Weisfeiler-Lehman isomorphism test, thereby offering additional insights into the representational capabilities of GNNs. Subsequently, in <cit.> the authors proposed a hybrid model D-MPNN, which combines edge-centered graph convolutions and molecular descriptors concatenated at the readout layer. That work represents a significant advancement in molecular graph classification, notable not only for its comprehensive and detailed analysis of model efficiency but also for its successful integration of the strengths of both GNNs and descriptors. In their work, <cit.> adopted the graph attention mechanism to develop the AttentiveFP model. This method is capable of utilizing atom and bond features, effectively extracting both local and global properties of a molecule. When operating in a low-resource learning regime, GNNs often struggle to build discriminative representations of graphs. The success of transfer learning in the field of Natural Language Processing (NLP), coupled with the scarcity of training data in molecular property prediction, has inspired researchers to adopt different pre-training strategies tailored for GNNs. In their work, <cit.> introduced a comprehensive framework that employs pre-training techniques like context prediction or attribute masking. This approach enables the transfer of knowledge from large molecular datasets to general-purpose GNNs, enhancing classification accuracy on benchmark tasks. In parallel, the transformer-style architecture GROVER was introduced by <cit.>, reporting notable advancements over existing state-of-the-art methods. It utilized the largest pre-training database at the time, comprising 10 million unlabeled molecules. Graph Contrastive Learning (GraphCL) <cit.> was introduced as another self-supervised learning method, leveraging parameterized graph augmentations and maximizing the mutual information between augmentations sharing the same semantics. GraphCL was further extended by enabling automatic, adaptive selection of augmentation parameters <cit.> (JOAO). 
New pre-training strategies leveraging 2D topological structures extracted by encoders and enriched by 3D views led to development efficient GraphMVP framework <cit.>. More recently, the GEM model <cit.> proposed incorporating 3D molecular properties, based on Merck molecular force field (MMFF) simulations. Combining those geometric features with the GIN model and pretraining on 20 million molecules from the ZINC database led to exceptional performance in a range of graph classification and regression tasks, although at a very high computational cost. The most recent work, <cit.> describes relative molecule self-attention transformer (R-MAT), which uses atom embeddings reflecting relative distance between atoms. R-MAT reports SOTA results of molecular benchmarks, but uses different datasets and data splits than other models, therefore it is difficult to compare to this approach. In contrast to multiple works reporting high efficiency of pre-trained GNN models, many thorough ablation studies, such as <cit.>, provide contrary results. They present important findings on why feature engineering combined with low parameter machine learning can still outperform complex models, and why the pre-training benefits can be diminished in practical property prediction setups. § PRELIMINARIES Molecular graph. Let G = (V, E) denote an undirected graph representing a molecule, where V and E are the sets of vertices (nodes, atoms) and edges (links, bonds), respectively. We also mark G = (A, X_n, X_e), where A is the adjacency matrix, X_n is the node feature matrix and X_e is the edge feature matrix. Graph notation. We denote single vertices as v or u, and edges as two element vertex sets e = {u, v}. 𝒩(v) is the set of neighbors of a node v, deg(v) is the degree of a node v, i.e. the number of its neighbors, deg(v) = |𝒩(v)|. Graph classification. We consider the graph classification task, where we are given a dataset 𝔻 = ( G^(i), Y^(i)), i = 1, 2, ..., N, of Ngraphs and their labels. Class (label) for a given graph Y^(i) is a boolean for single-task datasets. For multitask datasets, it is a binary vector of length T (for T binary classification tasks), and it can have missing labels. § METHOD We propose Molecular Topological Profile (MOLTOP) as a baseline method for benchmarking against GNNs in molecular graph classification tasks. Baselines are simple and computationally cheap methods, expected to provide a reference point for more sophisticated methods. While not a focus point of any paper, they are a necessary part of fair valuation of new algorithms, especially on new datasets. For MOLTOP, given its role as a baseline method, simplicity and speed are just as crucial as classification accuracy. In order to achieve good performance on chemical data, we utilize both topological and molecular features. The method relies on extracting feature vectors from graphs independently, and using Random Forest to classify resulting tabular data. In contrast to previous baselines, either purely topological (e.g. LDP <cit.> and LTP <cit.>), or purely feature-based (molecular fingerprint from <cit.>), it incorporates both graph structure and atoms and bonds features, all of which are crucial in chemistry. The first group of features we consider are vertex degree statistics, to directly summarize the basic topology of a 2-hop neighborhood around each node <cit.>. We denote the multiset of vertex neighbors' degrees as DN(v) = {deg(u) | u ∈𝒩(v) }. 
For each atom, we then calculate the following statistics: deg(v), min(DN(v)), max(DN(v)), mean(DN(v)), std(DN(v)). In order to create graph-level features, they are compactly represented using histograms, a technique akin to the global readout in GNNs, but with higher expressivity than just simple mean or sum <cit.>. For molecular graphs, especially in medicinal chemistry, having a degree higher than 8 is very rare. Using the same number of bins for all features would result in a very large number of all-zero features for many molecules. Therefore, we propose to reduce the number of bins to 11 for deg(v), min(DN(v)) and max(DN(v)). This covers singular hydrogens, covalent bonds, and nearly all atoms with higher degrees than 8 (e.g. due to ionic or metallic bonding) in typical biochemistry. Inspired by previous structural approaches and path-based molecular fingerprints, we add further topological descriptors to enhance this representation. We select features that work well for describing molecular fragments, and that should discriminate well between different scaffolds and functional groups. Concretely, we selected Edge Betweenness Centrality (EBC), Adjusted Rand Index (ARI) and SCAN Structural Similarity score. Each of those descriptors is computed for edges (bonds), but focuses on a different aspect of molecule structure. EBC considers global graph connectivity structure and its shortest path-based properties. ARI uses 3-hop subgraphs and neighborhood connectivity patterns. SCAN also considers local connectivity patterns, but is based on the notion of node clusters and outliers. Therefore, those features should provide complementary information in the feature vector. Another reason for utilizing edge features is that similar edge-focused approaches were successful in improving GNNs for molecular property prediction <cit.>. Each of those features is calculated for all bonds in the molecule and aggregated with a histogram. Edge betweenness centrality (EBC) <cit.> is a centrality measure for edges, defined as a fraction of shortest paths going through that edge: EBC(e) = 2/|V|(|V|-1)∑_u, v ∈ Vσ_u,v(e)/σ_u,v, where σ_u,v is the total number of shortest paths between u and v, and σ_u,v(e) is the number of that paths going through e. The normalization factor before the sum ensures that the values lie in range [0, 1] and are unaffected by graph size. Information about the shortest paths in the graph is well known to be important in chemistry, being used e.g. in Wiener index and Hyper-Wiener index <cit.>, and was also successfully incorporated into multiple GNNs <cit.>. However, the histogram of centralities includes more information than the lengths of shortest paths would, because it shows the actual distribution of critically important edges. If there are bonds with very high EBC values, it indicates the existence of bridge-like subgraphs, such as glycosidic bonds. It can also easily distinguish between linear and polycyclic scaffolds, since the ring-rich topologies will have smaller EBC values in general, while linear structures have many high-EBC bonds. 
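As a rough illustration of how these vertex-degree statistics and the EBC histogram could be assembled, the sketch below uses NetworkX and NumPy rather than the NetworKit-based implementation used in the paper; the histogram value ranges are assumptions made only for this example.

# Simplified sketch of the degree-statistics and EBC histogram features
# described above (NetworkX instead of NetworKit, ranges are assumed).
import networkx as nx
import numpy as np

def degree_ebc_features(G, n_bins=30):
    degs = dict(G.degree())
    dn = {v: [degs[u] for u in G.neighbors(v)] for v in G}
    per_node = {
        "deg": [degs[v] for v in G],
        "dn_min": [min(dn[v]) if dn[v] else 0 for v in G],
        "dn_max": [max(dn[v]) if dn[v] else 0 for v in G],
        "dn_mean": [float(np.mean(dn[v])) if dn[v] else 0.0 for v in G],
        "dn_std": [float(np.std(dn[v])) if dn[v] else 0.0 for v in G],
    }
    ebc = list(nx.edge_betweenness_centrality(G, normalized=True).values())
    feats = []
    # Degree-valued features get 11 bins, as degrees above 8 are rare in molecules.
    for name in ("deg", "dn_min", "dn_max"):
        feats.append(np.histogram(per_node[name], bins=11, range=(0, 11))[0])
    for name in ("dn_mean", "dn_std"):
        feats.append(np.histogram(per_node[name], bins=n_bins, range=(0, 11))[0])
    feats.append(np.histogram(ebc, bins=n_bins, range=(0, 1))[0])
    return np.concatenate(feats)

print(degree_ebc_features(nx.cycle_graph(6)).shape)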
Adjusted Rand Index (ARI) <cit.> is a normalized measure of overlap between neighborhoods of two vertices u and v: ARI(u, v) = 2(ad - bc)/(a + b)(b + d) + (a + c)(c + d), where a is the number of edges to other vertices that u and v have in common, b is the number of edges to other nodes for u that v does not have, c is the number of edges to other nodes for v that u does not have, and d is the number of edges to other nodes that neither u nor v has. Calculated for edge e = {u, v}, it provides information about the subgraphs of radius 3 (from neighbors of v, through edge e, to neighbors of u). Among various neighborhood overlap measures, ARI has a particularly strong statistical interpretation, being equivalent to Kohen's κ defined on incident edge sets of u and v <cit.>. While this measure is typically used for link prediction, it can also be calculated for existing edges. This method has been used for identifying 'incorrect' links, where it surpassed other techniques <cit.>, and a similar approach was also used in LTP <cit.>. Therefore, the histogram of ARI values should work well for existing edges, taking into consideration larger subgraphs than degree features and indicating the general connectivity patterns in a graph. In particular, it is capable of differentiating between star-like graphs (such as spiro compounds or functional groups containing atoms with high coordination number, e.g. phosphate groups) and polycyclic molecules characterized by grid-like subgraphs, like polycyclic aromatic hydrocarbons. SCAN <cit.>, used for node clustering and graph sparsification, defines the structural similarity score for edges as: SCAN(u, v) = |𝒩(u) ∩𝒩(v)| + 1/√((deg(u) + 1)(deg(v) + 1)) SCAN scores were designed to detect the edges critical for graph connectivity, and those corresponding to outliers. In molecules, the distribution of SCAN scores can easily distinguish between linear structures (e.g. alkanes with long carbon chains), where the scores are low in general, and well-connected, ring-rich molecules (e.g. steroids). Molecular graph classification relies heavily on atom and bond features, meaning that baselines utilizing only graph topology are not expressive enough. In fact, purely feature-based molecular fingerprint baseline from <cit.>, using only atom counts (i.e. counts of different chemical elements), can outperform some GNNs. Therefore, MOLTOP incorporates two such features: atomic numbers and bond types. They are the most apparent and consistently available features, universally employed by GNNs for analyzing molecules <cit.>. For atoms, we one-hot encode the atomic numbers up to 89 (with zero marking unknown types). We discard actinides and all further molecules, since they are all radioactive and extremely rarely used. For each chemical element, we compute its mean, standard deviation and sum (total count in the molecule) as graph-level features. We do the same for bonds, with 5 possible types (single, double, triple, aromatic, or miscellaneous). In principle, one could add further features the same way, if they are known to be important for a given problem, e.g. chirality. All features are computed for each graph independently, with the same number of bins n_bins for all histograms. This results in 11 · n_bins + 90 · 3 + 5 · 3. This can result in many all-zero features, especially for atom features. We simply drop all such columns, based on the training data. Therefore, the number of features is often significantly reduced after this step. 
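The two edge descriptors defined above (ARI and SCAN) can be computed directly from their formulas; a minimal NetworkX sketch is given below. How d counts vertices adjacent to neither endpoint is one plausible reading of the definition, so treat the details as assumptions rather than the released implementation.

# Sketch of the per-edge ARI and SCAN scores from the formulas above.
import math
import networkx as nx

def edge_ari_scan(G):
    n = G.number_of_nodes()
    scores = {}
    for u, v in G.edges():
        nu = set(G.neighbors(u)) - {v}
        nv = set(G.neighbors(v)) - {u}
        a = len(nu & nv)          # neighbors u and v have in common
        b = len(nu - nv)          # neighbors only u has
        c = len(nv - nu)          # neighbors only v has
        d = n - len(nu | nv) - 2  # nodes adjacent to neither u nor v
        denom = (a + b) * (b + d) + (a + c) * (c + d)
        ari = 2.0 * (a * d - b * c) / denom if denom else 0.0
        common = len(set(G.neighbors(u)) & set(G.neighbors(v)))
        scan = (common + 1) / math.sqrt((G.degree(u) + 1) * (G.degree(v) + 1))
        scores[(u, v)] = (ari, scan)
    return scores

print(edge_ari_scan(nx.cycle_graph(5)))

In MOLTOP, such per-edge values are then aggregated into histograms, in the same way as the EBC scores.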
The only hyperparameter of the feature extraction is the number of bins for histograms n_bins. In other works <cit.> this is either a hyperparameter, requiring tuning, or just an arbitrarily set number. For MOLTOP, we propose a data-driven solution instead, setting the number of bins equal to the median size of the molecules (i.e. number of atoms) in the dataset. This is motivated by the fact that molecule sizes for drug-like compounds typically follow a right-skewed, single-modal distribution with number of atoms very rarely exceeding 50 (see the supplementary material for plots). This fact is often used in medicinal chemistry, e.g. by Lipinski's rule of 5 <cit.>. Therefore, for the vast majority of data, it would bring little benefit to use a high number of bins. With this addition, MOLTOP does not require any hyperparameter tuning for feature extraction. After feature extraction, we use Random Forest (RF) as a classifier. RF serves as an effective prediction model due to its low computational complexity and high scalability. Moreover, its performance is less sensitive to hyperparameter choices, unlike other commonly used classifiers such as SVMs or boosting methods <cit.>. It also natively supports multitask learning, which is common in molecular graph classification. The dimensionality of our representation is quite high, therefore we use larger number of trees and stronger regularization than default settings in Scikit-learn <cit.>, to better incorporate new features and prevent overfitting. Based on the average results on validation sets of MoleculeNet (detailed in <ref>), MOLTOP uses 1000 trees, the entropy as splitting criterion, and minimum of 10 samples to perform a split. Those reasonable defaults make it a hyperparameter-free method, making it extremely easy to use and computationally cheap, which is important for baseline methods. §.§ Complexity analysis The computational complexity of MOLTOP is the sum of complexities of its features, since they are computed independently. Vertex degree features have complexity O(|E|) <cit.>. Computing EBC has complexity O(|V||E|) <cit.>. Calculation of both ARI and SCAN Structural Similarity scores for all edges is pessimistically O(|V||E|), but the expected complexity for molecular graphs is O(|E|) due to their sparsity (for proofs, see the supplementary material). The total complexity of feature extraction is thus O(|V||E|). § EXPERIMENTS AND RESULTS For the main evaluation of the proposed method, we selected 8 classification datasets from MoleculeNet benchmark <cit.> (described in detail in the supplementary material), the most widely used molecular graph classification benchmark. For fair evaluation, we used deterministic scaffold split, with the splits provided by OGB <cit.>. This setting is much more challenging than random split, which does not enforce out-of-distribution generalization. In addition, 5 of those datasets are multitask, including massively multitask ToxCast dataset with 617 targets. In all cases, we follow the recommendation from <cit.>, training 10 models with different random seeds, and we report mean and standard deviation of AUROC. For implementation of MOLTOP feature extraction, we used PyTorch Geometric <cit.> and NetworKit <cit.>. Those frameworks provide efficient data structures and parallel processing. For Random Forest, we use Scikit-learn <cit.>. Since this implementation does not allow missing labels in the training set, we fill them with zeros. 
This is acceptable, since those tasks are already imbalanced, and such adjustment makes this even a bit more challenging. §.§ Validation set experiments During initial experiments, we verified our modelling choices by using average AUROC on validation sets. This setting was chosen, since we aimed to design a general baseline, that performs well on average for molecular classification. We started with only degree features with 50 bins (inspired by <cit.>), and added proposed improvements one by one. First, we validated that adding other topological descriptors, e.g. other centrality scores than EBC or other neighborhood overlap than ARI, gave results worse or similar to our proposed descriptors. Next, we confirmed that using all proposed statistics of atoms and bonds is crucial. Furthermore, we verified that using median molecule size as the number of bins gave results better or comparable to manual tuning. Lastly, we performed hyperparameter tuning of Random Forest, and resulting values were 1000 trees, entropy splitting criterion, and minimum of 10 samples to perform a split. Those align with our postulate to use more trees and stronger regularization. We summarize the impact of adding described improvements in <ref> (for more detailed tables see the supplementary material). We report average AUROC and standard deviation for test sets of all 8 MoleculeNet datasets. All proposed changes improve the results, in particular the introduction of atom and bonds features in addition to pure topology. This shows that effective baselines for molecular data have to use both structure and domain-relevant features. §.§ MoleculeNet classification We compared MOLTOP to 18 other graph classification methods on MoleculeNet benchmark, with results in <ref>. We compare it to methods from three groups: general-purpose GNNs, GNNs designed specifically for molecular data, and graph classification baselines. This way, we verify not only that MOLTOP improves upon previous baselines, but also achieves strong performance in comparison to sophisticated, domain-specific models. We include 8 general-purpose GNNs: GIN, GCN and GraphSAGE from <cit.>, both with and without context prediction pretraining, as well as recent models based on contrastive learning, GraphCL <cit.> and JOAO <cit.>. For GNNs designed specifically for molecular property prediction, we include multiple recent models utilizing different approaches to incorporating molecular features: D-MPNN <cit.>, AttentiveFP <cit.>, GROVER <cit.> (large variant), GraphMVP <cit.> (regular and contrastive variants), and GEM <cit.>. We also compare to four other baselines: purely topological LDP <cit.> and LTP <cit.>, purely feature-based molecular fingerprint from <cit.> (which uses atom counts as features), and ECFP, the molecular fingerprint commonly used as a baseline for GNNs <cit.> (using default settings). For those baselines, we use Random Forest with 500 trees as a classifier, which follows <cit.> and is a common setting in chemoinformatics. Following best practices for statistical comparison of classifiers from <cit.>, we report average model rank across datasets, in addition to average AUROC. This metric is less influenced by outliers among scores, and therefore better measures how the model really performs on average. In particular, the ClinTox dataset often gives very unstable results <cit.>, and the average rank should be less susceptible to this problem. 
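As a side note, the comparison protocol used here (per-dataset ranks averaged over datasets, followed by a paired Wilcoxon signed-rank test) can be reproduced in a few lines with pandas and SciPy; the AUROC values below are placeholders, not the reported results.

# Illustrative sketch of the statistical comparison protocol (dummy numbers).
import pandas as pd
from scipy.stats import wilcoxon

auroc = pd.DataFrame(
    {"MOLTOP": [0.80, 0.66, 0.75, 0.72], "GNN_A": [0.78, 0.68, 0.74, 0.70]},
    index=["BACE", "BBBP", "HIV", "Tox21"],
)
ranks = auroc.rank(axis=1, ascending=False)  # rank models per dataset, 1 = best
print(ranks.mean())                          # average rank of each model

stat, p_value = wilcoxon(auroc["MOLTOP"], auroc["GNN_A"])  # paired, per-dataset test
print(p_value)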
The main observation is that MOLTOP, under this fair comparison protocol, outperforms the majority of models on average, often by a large margin. In terms of average rank, it exceeds all GNNs without pretraining except for D-MPNN, which has almost identical average rank. It also has results better than most pretrained GNNs, even including recent, complex models like JOAO, GROVER and GraphMVP. This is particularly significant, since MOLTOP does not utilize any external knowledge like those models, nor did it require very costly pretraining on massive datasets. Our results are also notably stable, with low standard deviations, indicating the robustness of this approach. MOLTOP does not require any pretraining, and requires only around 50 minutes for the entire benchmark (with massively multitask ToxCast taking the majority of the time). In addition, it has very low standard deviations, indicating stable and robust behavior. This shows that fair comparison, using strong baselines, remains important even in the era of large pretrained models. Outperforming GNNs can be explained by the global nature of features used by MOLTOP. Those models, while sophisticated, still rely on an inherently local message-passing paradigm, and especially without pretraining it is hard for them to fully understand molecular relations on limited data. The only models that have better average rank than MOLTOP are pretrained GIN, D-MPNN, and GEM. However, using Wilcoxon signed-rank test (recommended by <cit.>) with α=0.05, we determined that difference with GIN and D-MPNN performance is not statistically significant (p-values 0.547 and 0.742). Only GEM outperforms MOLTOP significantly (p-value 0.016), but we note that it has an enormous computational cost, including fine-tuning and even inference, since it requires generation of multiple conformers and Merck molecular force field (MMFF) optimization. Those operations can easily take minutes per molecule, can often fail for molecules with complicated geometries (e.g. highly rotatable bonds), and are simply impossible in many cases, e.g. for compounds with disconnected components like salts. Therefore, MOLTOP always achieves results better or as good as GNNs, except for GEM, which has major practical downsides. MOLTOP also improves upon other baselines by a large margin. Previous approaches like LDP, LTP and molecular fingerprint of <cit.> often fail to beat almost any GNNs, and thus are unsuitable for molecular data. Notably, we even outperform ECFP4 fingerprint, often used to compare again GNNs. This shows that improving upon existing baselines remains important for fair comparison. We present additional comparisons with graph kernels in the supplementary material. We omit them here, because due to OOM errors they couldn't be computed on HIV and MUV datasets, meaning that we can't directly compare their average AUROC and rank to other models in <ref>. §.§ HIV leaderboard results We further evaluate MOLTOP on the HIV dataset featured in OGB leaderboard <cit.>, comparing it to various cutting-edge models that do not provide results on the whole MoleculeNet benchmark. The results are shown in <ref>. While this is the same HIV dataset as used before, here we are not allowed to use the validation data (due to leaderboard rules), even when no hyperparameters are tuned. Since there are currently 34 models on the leaderboard, here we present a few selected ones. 
MOLTOP achieves 14-th rank, outperforming well-known PNA <cit.> and DeeperGCN <cit.>, and coming very close to Graphormer <cit.>, GSAT <cit.> and CIN <cit.>. It is also narrowly better than very powerful Directional GSN <cit.>. If we lift the limitation of not using the validation set, which is quite artificial for hyperparameter-free MOLTOP, it gets 80.8% AUROC and outperforms both GSAT and Graphormer. §.§ Peptides classification In order to further evaluate the out-of-distribution generalization abilities of MOLTOP, we utilize the peptides-func dataset from LRGB benchmark <cit.>, concerning peptide function classification. The characteristics of this data are very different from MoleculeNet, with peptides being much bigger molecules, with larger diameter and long-range dependencies. We do not perform any tuning, requiring the hyperparameter-free baseline to perform reasonably well even on this very different domain. In <ref>, we compare to the results from <cit.>. Remarkably, MOLTOP outperforms all GNNs, including graph transformers specifically designed for this task, e.g. SAN with RWSE embeddings. At the same time, it is much more stable, with very low standard deviation. This shows that it indeed works very well as a baseline, even for novel datasets and molecular domains. §.§ Time efficiency benchmark While MOLTOP has very low feature extraction complexity, as outlined in <ref>, we also measure wall time for both feature extraction and RF training, summarized in <ref>. On four smallest datasets, it requires less than ten seconds for both, and at most about a minute for a further three datasets. In particular, feature extraction takes only 35 seconds on over 15 thousands of peptides, which are very large molecules by the standard of molecular graph classification, which mostly concerns small, drug-like compounds. On MUV, which is by far the largest in terms of the number of molecules, feature extraction still takes only about 2.5 minutes. In general, we note that MOLTOP feature extraction is embarrassingly parallel, and can process almost arbitrary number of molecules, given enough CPUs. Overall, the ToxCast takes the most time, but for the training part, due to being massively multitask. This is most likely the artifact of the implementation, since such dataset are rare and Scikit-learn implementation of RF is not particularly optimized for those cases. In fact, this is the only dataset in molecular property prediction that we are aware of with such huge number of tasks. For comparison with GNNs, we focus on peptides-func dataset, for which <cit.> provides wall times. Computing LapPE or RWSE embeddings alone, which are necessary for feature augmentation to get reasonable performance of GNNs on this dataset (due to long-range dependencies), takes about a minute. This does not even take into consideration the training time of GNN model itself. Finally, since MOLTOP is hyperparameter-free, it does not require time for tuning for new datasets. This is especially advantageous in comparison to GNNs, which require extensive tuning of at least learning rate and regularization parameters for new datasets. This increases their training cost multiple times, while MOLTOP can just be used as-is. §.§ Feature importance analysis We additionally validate the importance of features leveraging Random Forest average decrease in entropy. This metric is effective in identifying the features that are most useful for the model. 
The importance of a feature is the sum of importances of its histograms bins, since each bin is treated as a separate feature for the classifier. Next, we average values obtained from 10 classifiers on each dataset, based on different random seeds. To aggregate this information for the entire benchmark, we further average the importances for all 8 datasets. This is shown in <ref>. The main outcome is that all features are useful, as there are none with very low importance. The most influential feature is EBC, which highlights that global information is particularly important for molecular data, and validates our initial belief. The least useful feature is the maximal degree of neighbors, which is expected, as node degrees are typically very low in chemistry. For additional ablation studies, see the supplementary material. §.§ Expressivity experiments Lastly, we analyzed the expressive power of MOLTOP topological features in distinguishing graphs. It is typically represented using a hierarchy of k-dimensional Weisfeiler-Lehman isomorphism tests <cit.>, but can also be verified by using particular classes of graphs, which are known to be hard to distinguish for computational methods. Typical GNNs are at most as powerful as 1-WL test <cit.>, but with specific extensions, often utilizing topological descriptors in forms of shortest paths or subgraphs counting, GNNs become more powerful <cit.>. We verified the discriminative power of MOLTOP feature vectors using graph8c and sr25 datasets <cit.>. Our topological features are all integers after histogram aggregation, so we deem two graphs to be different if they have different values of any feature. We perform paired comparisons of graphs this way, where the number of pairs is 61M for graph8c and 105 for sr25. We report number of errors, i.e. undistinguished pairs, in <ref>. MOLTOP achieves very good results, showing high power in distinguishing graphs. It outperforms all message-passing GNNs, probably due to usage of features that incorporate more global information. Additionally, it performs almost as well as PPGN <cit.> and <cit.>, which are provably as powerful as 2-FWL test, equivalent to 3-WL test. It also achieves perfect result on sr25, which consists of strongly regular graphs. This is particularly exceptional, as they are 3-WL equivalent <cit.>, which means that MOLTOP can distinguish graphs for which even 3-WL test fails. We provide examples of graphs distinguishable by MOLTOP, but not e.g. by 1-WL test, in the supplementary material. § CONCLUSION We presented a new type of molecular graph embedding, which leverages local and global structural information aggregated from vertex and edge descriptors, as well as basic semantics of bonds and atoms. Combined with low parameter classification using Random Forests, it forms a robust baseline algorithm for molecular property prediction called MOLTOP. The key advantages of MOLTOP are: low computational cost, no hyperparameter tuning required, and high discriminative power, which surpasses 1-WL isomorphism test. Based on fair evaluation protocols and deterministic scaffold splits, we show that MOLTOP is surprisingly competitive with GNNs, including out-of-generalization applications to new datasets. With additional verification of results using Wilcoxon signed-rank test, we show that our proposed model is better or as good as all baselines and GNNs, except for GEM model, which uses computationally expensive and error-prone 3D molecular modelling. 
In the future work, we plan to experiment with incorporating additional features, and adapt this approach to e.g. materials chemistry. We also want to more thoroughly analyze the theoretical aspects of feature descriptors and their discriminative abilities in terms of WL hierarchy. We conclude that strong baselines, such as MOLTOP, are still important to gain deep insights into advances of GNN pre-training and assessing benefits of incorporating spatial or structural information, especially in the experimental setups with limited computational budgets. Research was supported by the funds assigned by Polish Ministry of Science and Higher Education to AGH University of Krakow, and by the grant from Excellence Initiative - Research University (IDUB) for the AGH University of Krakow. We gratefully acknowledge Poland's high-performance Infrastructure PLGrid ACK Cyfronet AGH for providing computer facilities and support within computational grant. We would like to thank Alexandra Elbakyan for her work and support for accessibility of science. Supplementary information § DESCRIPTORS HISTOGRAMS EXAMPLES Here, we visualize the discriminative power of proposed topological descriptors on two example molecules (<ref>): Dipalmitoylphosphatidylcholine (DPCC), a phospholipid used as a pulmonary surfactant, and Paclitaxel, used in cancer treatment. Histograms of EBC, ARI and SCAN (with 5 bins and normalized for readability) are presented in <ref>. It should be noted that those distributions follow chemical intuitions outlined in the Methods section in the main paper. 1in § DATASETS DESCRIPTIONS Here, we present a short description of datasets from MoleculeNet, as well as peptides-func from LRGB <cit.>, including their basic properties. The statistics are summarized in <ref>, and we describe all datasets below. Additionally, in <ref> we present distributions of molecules sizes for HIV and ToxCast datasets. The distributions for the rest of the datasets are very similar, so we omit them for brevity. * BACE <cit.> - binary prediction of binding results for a set of inhibitors of human β-secretase 1 (BACE-1). * BBBP <cit.> - prediction whether a compound is able to penetrate the blood-brain barrier. * HIV <cit.> - prediction whether the molecule can inhibit the HIV replication. * ClinTox <cit.> - database of drugs approved and rejected by FDA for toxicity reasons. Two tasks concern prediction of drug toxicity during clinical trials and during FDA approval process. * MUV <cit.> - the Maximum Unbiased Validation (MUV) has been designed for validation of virtual screening techniques, consisting of 17 tasks based on PubChem BioAssay combined with a refined nearest neighbor analysis. * SIDER <cit.> - the Side Effect Resource database, considering prediction of adverse side effects of drugs on 27 system organ classes. * Tox21 <cit.> - coming from 2014 Tox21 Data Challenge, this dataset concerns prediction of 12 toxicity targets. * ToxCast <cit.> - toxicology measurements for 617 targets from a large scale in vitro high-throughput screening. * peptides-func <cit.> - functions of peptides (small proteins), based on SATPdb data. § FEATURE EXTRACTION PIPELINE VISUALIZATION Here, we present a plot of feature extraction pipeline of MOLTOP, i.e. extracted features and their aggregation. We recall that deg means degree of a node, DN is the multiset of degrees of neighbors, EBC is Edge Betweenness Centrality, ARI is Adjusted Rand Index, and SCAN is the SCAN Structural Similarity score. 
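As a rough, illustrative sketch (not the exact implementation behind the reported results), these per-node and per-edge descriptors could be computed for a single molecular graph with NetworkX as follows; the histogram aggregation applied to them is described next. The handling of the edge endpoints in the ARI neighborhood sets follows one reasonable reading of the definitions given later in this supplement.

import numpy as np
import networkx as nx

def topological_descriptors(G: nx.Graph):
    # Degree of each node and the multiset of its neighbors' degrees (deg, DN).
    deg = dict(G.degree())
    neighbor_degrees = {v: sorted(deg[u] for u in G.neighbors(v)) for v in G.nodes()}
    # Edge Betweenness Centrality (EBC) for every edge.
    ebc = nx.edge_betweenness_centrality(G)
    ari, scan = {}, {}
    n = G.number_of_nodes()
    for u, v in G.edges():
        nu = set(G.neighbors(u)) - {v}
        nv = set(G.neighbors(v)) - {u}
        a = len(nu & nv)                     # neighbors shared by u and v
        b = len(nu - nv)                     # neighbors only u has
        c = len(nv - nu)                     # neighbors only v has
        d = n - len(nu | nv)                 # nodes adjacent to neither endpoint
        denom = (a + b) * (b + d) + (a + c) * (c + d)
        ari[(u, v)] = 2.0 * (a * d - b * c) / denom if denom else 0.0
        shared = len(set(G.neighbors(u)) & set(G.neighbors(v)))
        scan[(u, v)] = (shared + 1) / np.sqrt((deg[u] + 1) * (deg[v] + 1))
    return deg, neighbor_degrees, ebc, ari, scan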
We aggregate topological features with histograms, resulting in integer features. Depending on the feature, we use either 11 bins or the number of bins equal to the median size of the molecules in the training set. For each of the 90 atom types (atomic numbers) and 5 bond types, we compute the sum, mean and standard deviation in the molecule. All features are finally concatenated into the full MOLTOP feature vector. § COMPUTATIONAL COMPLEXITY OF ADJUSTED RAND INDEX AND SCAN SCORES We present the derivation of the computational complexity for Adjusted Rand Index (ARI) and SCAN Structural Similarity scores for the edges in the graph. We denote the highest vertex degree as k. We assume the adjacency sets representation of a graph G = (V, E), and amortized complexity of checking existence of element in a set as O(1). The computational complexity for computing Adjusted Rand Index (ARI) for all existing edges in the graph is O(k |E|), with the worst case complexity O(|V||E|), which occurs for full graphs. The formula for computing ARI for a single edge e = {u, v} is: ARI(u, v) = 2(ad - bc)/(a + b)(b + d) + (a + c)(c + d), It includes computing four values. a is the number of edges to other vertices that u and v have in common, which reduces to set union, with complexity O(k+k) = O(k); b is the number of edges to other nodes for u that v does not have, which reduces to set intersection, with complexity O(k); c is the number of edges to other nodes for v that u does not have, and it also has complexity O(k); d is the number of edges to other nodes that neither u nor v has, and it reduces to computing the difference between total number of vertices |V| and size of neighborhoods' union, which is already calculated for a, therefore it is simply the difference of two integers, with complexity O(1). Total complexity for a single edge is, therefore, O(k + k + k + 1) = O(k). Computing ARI for all edges requires evaluating the expression above for |E| edges. Therefore, the total complexity is O(k |E|). For full graphs, for which all vertices have degree |V|, and therefore k = |V|, the complexity becomes O(|V||E|). The computational complexity for computing SCAN Structural Similarity scores for all existing edges in the graph has complexity O(k |E|), with worst case complexity O(|V||E|), which occurs for full graphs. The formula for computing SCAN score for a single edge e = {u, v} is: SCAN(u, v) = |𝒩(u) ∩𝒩(v)| + 1/√((deg(u) + 1)(deg(v) + 1)) Computing size of neighborhoods' intersection reduces to computing size of set intersection, which is O(k). Degrees of vertices can be computed during the iteration needed for computing set intersection. This way, total complexity for a single edge is O(k). Computing SCAN scores for all edges requires evaluating the expression above for |E| edges. Therefore, the total complexity is O(k |E|). For full graphs, for which all vertices have degree |V|, and therefore k = |V|, the complexity becomes O(|V||E|). We note that molecular graphs are very sparse, i.e. |E| << |V|^2. In particular, the number of bonds very rarely exceeds 10, especially in medicinal chemistry. Therefore, we can treat k as a constant, and this way the expected complexity reduces to O(|E|) for this kind of graphs. Alternatively, triangle counting algorithms can be used for computing neighborhood intersections. They typically have complexity O(a(G) |E|), where a(G) is the arboricity of the graph <cit.>. 
This is particularly useful for molecular graphs, since almost all known molecules (with some exceptions, e.g. for crystals) are planar <cit.>, and planar graphs have arboricity at most 3 <cit.>. Utilizing this constant, the complexity reduces to O(|E|), the same as for neighborhood intersection-based algorithms. § DETAILED RESULTS OF MODEL IMPROVEMENTS Here, we present the more detailed results of proposed model improvements in <ref>. We report test AUROC, i.e. mean and standard deviation across 10 runs, for all datasets. The last row corresponds to MOLTOP, with all improvements, and it achieves the best result in all cases. Additionally, the proposed improvements not only always result in the increase of average AUROC, but also almost always improve results for all datasets. Analyzing the detailed effects of particular changes on different datasets allows us to infer, which features are the most important for a given task. For example, for BBBP dataset, introducing the atoms and bonds features gave the largest improvement of 3.4%, which aligns with chemical insight that particular elements and bonds are well correlated with the ability to penetrate the blood-brain barrier. HIV dataset shows similar behavior, with 3.6% improvement. On the other hand, introducing those features for SIDER and Tox21 results in negligible change of 0.1% and 0.3%, respectively. However, the proposed topological descriptors increase AUROC by 4.9% and 6.7% for those datasets, which highlights that drug side effects and various toxicity targets are more affected by the overall topology of the molecule. § REPRODUCIBILITY AND HARDWARE DETAILS To ensure the full reproducibility of our results, we used Poetry tool <cit.> to pin the exact version of all dependencies in the project, including transitive dependencies of directly used libraries. We distribute the resulting file, as well as file generated from it, along with our source code. This ensures the exact reproducibility of all results that is OS-agnostic and hardware-agnostic. We conduct all experiments on CPU, since some operations on GPU are inherently nondeterministic, e.g. those related to processing sparse matrices in PyTorch Geometric. Due to efficiency of MOLTOP, the usage of GPU is also not necessary. All experiments were run on a machine with Intel Core i7-12700KF 3.61 GHz CPU and 32 GB RAM, running Windows 10 OS. We additionally ran the experiments on a second machine with Intel Core i7-10850H 2.70 GHz CPU and 32 GB RAM, running Linux Ubuntu 22.04 OS. The results were exactly the same in all cases. § EVALUATION PROTOCOLS OF OTHER GNNS Here, we compare the evaluation protocol presented in this paper with alternatives found in the literature. In particular, we focus on the distinction between different types of splits, and the subtle differences between them, which render many direct comparisons unfeasible. Random split, typically used in machine learning, just randomly (or, precisely, pseudorandomly, since we can set the random seed) selects the test set. It is interpolative in nature, i.e. the test set roughly follows the overall distribution of the data. This is not realistic for molecular property prediction, where we are often interested in novel compounds. Those tasks are extrapolative in nature, i.e. it is expected for future molecules to be structurally different from the existing ones. If time information is available, we can use a time split, like for PDBbind dataset in <cit.>. 
However, this is almost never the case, and we use scaffold split instead, also proposed for evaluation of molecular classification in <cit.>. It aims to take the least common groups of structurally similar molecules into the test set, which requires out-of-distribution generalization to achieve a good score. In many cases, this is a good approximation of a time split <cit.>. Firstly, we compute the Bemis-Murcko scaffold <cit.> for each molecule, and then we group molecules by their scaffolds. The subtle differences in the algorithm dividing them into training, validation and test sets determine practical aspects of evaluating classification accuracy. In fact, they are the major source of differences in scores observed in molecular property prediction literature. As described in <cit.>, we put the smallest groups of scaffolds in the test set, until we get the required size, and then we do the same for the validation set. All other scaffolds, which are the most common, constitute the training set. This is a fully deterministic setting, and was used in e.g. <cit.>. Splits provided by OGB <cit.> also follow this protocol. On the other hand, multiple works, such as D-MPNN <cit.> and GROVER <cit.>, explicitly state that they compute scaffold splits multiple times, which indicates a non-deterministic process. This is indeed the case, since <cit.> explicitly describe that they put any scaffold groups larger than half the test size into the training set, and then the remaining groups are put randomly into training, validation and test datasets. This randomness will very likely result in larger scaffold groups in validation and training sets than in the case of deterministic scaffold split. This setting is called balanced scaffold split in <cit.>. This distinction actually makes a very significant difference in scores, as analyzed in detail by <cit.>. Balanced scaffold split achieves much higher results, often by 5% or as much as 20% on BBBP dataset, for multiple models. This is particularly problematic, as this difference is very subtle and not highlighted in the papers at all. GROVER <cit.> mentions that they use three different random-seeded scaffold splits. Checking the official code <cit.>, we found balanced_scaffold in multiple places, confirming that the authors were aware of the difference between scaffold split and balanced scaffold split. This is additionally evidenced by comments in the code and function arguments. For this reason, we conclude that high scores in <cit.> are, at least in some part, the result of this choice. As a consequence of this splitting differences, we cannot compare our results directly with the ones presented in D-MPNN or GROVER papers. The scores for both models, taken from GEM paper <cit.>, which we use for the comparison, are lower and much more in line with results for deterministic scaffold split, as presented in <cit.>. This is, again, the easiest to check with BBBP dataset, on which the difference is about 20% just due to the splitting strategy. We cannot compare to R-MAT <cit.>, because they use scaffold split only for BBBP, and use nonstandard datasets apart from BBBP and ESOL. Additionally, they use random split for other datasets. However, we point that they recalculate GROVER results for BBBP using scaffold split, and get the result that aligns with the one in the GEM paper. As for AttentiveFP <cit.>, the results seem particularly troubling. 
In the paper, the authors state that they use scaffold split for BBBP, BACE and HIV datasets, following <cit.> (later papers generally use scaffold splits for all MoleculeNet datasets). However, checking the official code <cit.>, the word scaffold does not appear anywhere in the code, and verifying the code for those 3 datasets, the random split is used in every case. Additionally, the difference in results between the original paper, and AttentiveFP results in the GEM paper would indicate that this is indeed the case. Because of this, we also do not compare directly to AttentiveFP results from the original paper, but rather from <cit.>. In conclusion, comparison to other papers for molecular property prediction in many cases requires very in-depth verification of both papers, their exact wording, and analyzing the official code. Of course, there is nothing wrong with alternative evaluation protocols and splitting procedures, but the due to differences in terminology this can result in misunderstanding the actual evaluation protocol used. This is an unfortunate situation, and it requires further investigation for other papers. § ESTIMATION OF GEM COMPUTATIONAL COST Here, we provide an estimation of GEM computational cost for pretraining. While the total cost of pretraining on 20 million molecules is not stated in the paper <cit.>, the authors provide a link to the official code on GitHub <cit.>. There, they provide a small subset of 2000 molecules from the ZINC dataset for a demo, with a note The demo data will take several hours to finish in a single V100 GPU card. We make a very conservative assumption, that several hours means 5 hours. The entire pretraining dataset is about 10000 times larger, so we get 50 thousand GPU hours. Assuming 250 NVidia V100 GPUs (to compare to GROVER, which also used 250 V100 GPUs), this gives us 200 hours, or slightly over 8 days. § ABLATION STUDY Here, we present the results of the ablation study. We remove one group of features at a time from MOLTOP, and present results in <ref>. We include the original MOLTOP results in the first row for reference. Removing any part decreases the average AUROC, and often by a large margin, validating our modelling choices. Removing atoms and bonds features results in the largest drop, which is expected, and emphasizes the importance of incorporating those features for molecular graph classification. The smallest drop is for removal of constant features, but the main goal of this step was to remove obviously useless features and reduce computational and memory complexity, so this was expected. In general, our proposed method shows graceful degradation and still performs well, after removal of any feature. Also, the worst result here, after removal of atoms and bonds, is still better than LTP results. Since this leaves only our topological features, this indicates that our chemical intuitions for their choice specifically for molecular data were correct. We additionally check what is the impact of removing the weakest feature, the maximal degree of neighbors. The average AUROC is a bit lower, showing that while this feature may not be as useful as others, it still positively impacts the discriminative ability of MOLTOP. § GRAPH KERNELS EXPERIMENTS Here, we provide results of additional experiments with graph kernels. 
We selected the most widely used kernels, representing various approches: vertex histogram (VH), edge histogram (EH), graphlet kernel, propagation kernel, shortest paths kernel, Weisfeiler-Lehman (WL), and WL Optimal Assignment (WL-OA). For node labels, we use atomic numbers. We tune the inverse regularization strength parameter C, considering values [10^-3, 10^-2, 10^-1, 1, 10, 10^2, 10^3]. We compare results on the same datasets as other models, except for HIV, MUV and peptides-func, for which we got OOM errors due to their size. See Table <ref> for results. MOLTOP comes up on top, getting the best results on 3 datasets and close second on BBBP. The slightly worse results on BACE and BBBP shows that they are very topology-centric, in line with conclusions from ablation study in <ref>. § ADDITIONAL EXPRESSIVITY EXPERIMENTS Here, we provide additional examples and the results of experiments for expressivity of MOLTOP, i.e. its ability to distinguish non-isomorphic graphs. In particular, we show that MOLTOP is able to distinguish graphs on which 1-WL test <cit.> fails. In this section, we consider only topological features, i.e. degree features, EBC, ARI and SCAN histograms. They form a vector of integers, since they are the counts in histogram bins, therefore we deem two graphs distinguished if they differ at any index. In all cases, to better understand where the expressiveness of MOLTOP comes from, we analyze the results for the features independently, and for the full feature vector. We note that degree features, based on LDP, are equivalent to a WL-test with 2 iterations <cit.>, therefore we include them as a control, expecting negative result in all cases. Firstly, we provide examples of pairs of graphs from the previous publications on which various WL tests fail. We treat each pair of graphs as a separate 2-sample dataset, making the number of bins equal to the size of those graphs. We consider only the pairs of the same size, since distinguishing differently sized graphs would be trivial. In <ref>, we show decalin and bicyclopentyl, example from <cit.>. Those molecules are not isomorphic nor regular, but cannot be distinguished by 1-WL test, and, by extension, by all typical message-passing GNNs. However, MOLTOP can distinguish them, due to inclusion of EBC - this is the only one of four topological features that is able to do so. The reason is that bicyclopentyl includes a bridge, which has a very high EBC value, and it does not appear in decalin. This also follows our chemical intuition and motivation for including EBC. To compare the MOLTOP features against raw shortest path information, we present the example from <cit.>, in <ref>. Those two graphs are not distinguishable by 1-WL test, but can be distinguished by using the sets of shortest paths distances, and therefore by the Graphformer. Blue and red nodes have different sets of shortest paths distances in two graphs. All MOLTOP features except for degree features can also distinguish those graphs. EBC utilizes more information than just the lengths of shortest paths, and detects a bridge. ARI and SCAN analyze the neighborhood connectivity structure, and can distinguish regular grid (<ref>a) from the two-communities structure (<ref>b). In the main body, we show MOLTOP achieves perfect result on sr25 dataset, which consists of strongly regular graphs. We also show the example of 3-regular (not strongly regular) molecules from <cit.> in <ref>, decaprismane and dodecahedrane. 
They are not isomorphic, because decaprismane contains a 4-cycle, while dodecahedrane does not. Those graphs are not distinguishable by typical message-passing GNNs, since they cannot distinguish k-regular graphs with the same size and features <cit.>. All MOLTOP features, apart from degree features, can distinguish those graphs, most likely because k-regularity is a local feature and can be verified both by analyzing paths and neighborhood connectivity patterns. There are, however, simple graphs which are not distinguishable by MOLTOP, as shown in <ref>, following example from <cit.>. Those graphs are distinguishable by 3-WL test, since it considers 3-tuple of vertices and can therefore detect disconnectedness in the left graph. MOLTOP, on the other hand, fails because it cannot detect this fact based on any of its features. However, we recall that 3-WL cannot distinguish strongly regular graphs <cit.>, while MOLTOP can, achieving perfect result on sr25. This indicates that, interestingly, it does not fit into the traditional k-WL hierarchy.
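To make the pairwise distinguishability checks above concrete, here is a small, illustrative sketch (not the exact experimental code) that tests whether two graphs are separated by 1-WL, using NetworkX's Weisfeiler-Lehman graph hash, and by a MOLTOP-style comparison of integer feature vectors; the decalin and bicyclopentyl skeletons discussed earlier serve as the example pair.

import networkx as nx

def distinguished_by_1wl(G1, G2, iterations=3):
    # Identical WL hashes mean the 1-WL test cannot tell the two graphs apart.
    h1 = nx.weisfeiler_lehman_graph_hash(G1, iterations=iterations)
    h2 = nx.weisfeiler_lehman_graph_hash(G2, iterations=iterations)
    return h1 != h2

def distinguished_by_features(vec1, vec2):
    # MOLTOP-style check: two graphs differ if any integer feature value differs.
    return tuple(vec1) != tuple(vec2)

# Carbon skeletons of decalin (two fused six-membered rings) and bicyclopentyl
# (two five-membered rings joined by a bridge bond); both have 10 nodes and 11 edges.
decalin = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0),
                    (5, 6), (6, 7), (7, 8), (8, 9), (9, 0)])
bicyclopentyl = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
                          (4, 5),
                          (5, 6), (6, 7), (7, 8), (8, 9), (9, 5)])
print(distinguished_by_1wl(decalin, bicyclopentyl))  # False: 1-WL fails, as noted above
# The EBC histogram, in contrast, separates them: the bridge bond of bicyclopentyl
# has a very high edge betweenness value that decalin lacks.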
http://arxiv.org/abs/2407.12438v1
20240717094911
Semantic-Aware Representation of Multi-Modal Data for Data Ingress: A Literature Review
[ "Pierre Lamart", "Yinan Yu", "Christian Berger" ]
cs.LG
[ "cs.LG" ]
Semantic-Aware Representation of Multi-Modal Data for Data Ingress: A Literature Review Funded by Swedish Research Council (VR), Diarienummer: 2023-03810. Pierre Lamart Computer Science and Engineering University of Gothenburg Gothenburg, Sweden pierre.lamart@gu.se Yinan Yu Computer Science and Engineering Chalmers University of Technology Gothenburg, Sweden yinan@chalmers.se Christian Berger Computer Science and Engineering University of Gothenburg Gothenburg, Sweden christian.berger@gu.se July 22, 2024 ======================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Machine Learning (ML) is continuously permeating a growing amount of application domains. Generative AI such as Large Language Models (LLMs) also sees broad adoption to process multi-modal data such as text, images, audio, and video. While the trend is to use ever-larger datasets for training, managing this data efficiently has become a significant practical challenge in the industry–double as much data is certainly not double as good. Rather the opposite is important since getting an understanding of the inherent quality and diversity of the underlying data lakes is a growing challenge for application-specific ML as well as for fine-tuning foundation models. Furthermore, information retrieval (IR) from expanding data lakes is complicated by the temporal dimension inherent in time-series data which must be considered to determine its semantic value. This study focuses on the different semantic-aware techniques to extract embeddings from mono-modal, multi-modal, and cross-modal data to enhance IR capabilities in a growing data lake. Articles were collected to summarize information about the state-of-the-art techniques focusing on applications of embedding for three different categories of data modalities. data lake, data modality, multi-modal data, information retrieval, embedding, literature review § INTRODUCTION Facing the growing issue of handling large multi-modal datasets, researchers and practitioners try to enhance data management systems by improving the efficiency of specific functions for data ingress (like computing embeddings from data samples) or identifying relevant data for egress as needed for IR. Understanding how such data can be processed and stored is becoming a critical challenge to benefit from a large volume of data. A growingly popular response for data storage is the use of data lakes that allow to store large volumes of raw data from various sources. However, blindly growing the data lake is unsustainable and it results in barely manageable datasets that are hard to use and investigate (cf. Udandarao et al., <cit.>). Making this approach not sufficient and require proper data pre-processing methods prior to data ingress. Moreover, the data collected is increasingly multi-modal, including a variety of formats such as text, images, audio, and video. Each modality comes with its own unique type of metadata, requiring distinct organizational structures. This diversity adds another layer of complexity to data lakes, necessitating metadata management solutions that can efficiently index and retrieve data across different modalities based on their semantic content. 
It is hence valuable to gain insights into the feature extraction and indexing techniques for multi-modal data. The aim of this review is to provide an overview of existing methods and techniques to encode multi/cross-modal, time-series data in a semantically-aware way. This will allow us to identify possible research trends and get an overview of the state-of-the-art approaches for data ingress. We address the following research question: RQ: What are common approaches to prepare mono/multi/cross-modal data for data ingress, considering the temporal aspects in a growing data lake? We structure our paper as follows: Sec. <ref> introduces relevant concepts and related work. Sec. <ref> outlines how we conducted our study. The results are discussed in Sec. <ref>. The conclusions & future work are addressed in Sec. <ref>. § RELATED WORK Recent technologies and applications relying on Machine Learning (ML)-enabled systems requires an increasing amount of data for continuous improvement. Often coming from different sources, the data is likely to include different modalities <cit.>. Continuously collecting data to benefit from this technology results in data management systems, such as data lakes, becoming insufficient and drowning in the collected data. Several reviews covering multi-modal systems and applications are available in the literature. Perez-Martin et al.<cit.> present a comprehensive review of the cross-modal applications and challenges of text and video data. They study the progress of researchers on 26 datasets for “text retrieval from video task and video captioning/description task”. The review highlights that despite all the progress made in that field, there are still many possible improvements to make to extract and describe complex spatiotemporal information within videos. Similarly, Kaur et al.<cit.> studied cross-modal image-text information retrieval. They compared several approaches to identify their strengths and weaknesses. They see possible improvements in algorithms’ performances. Chen et al.<cit.> conducted a review of deep learning models for the same data modalities. They studied different popular structures for uni-directional and bi-directional multi-modal tasks. Although they see several applications of these models, they argue that adding more modalities to the process could allow these technologies to be applied in more scenarios but is still underexplored. In 2020 Zhang et al.<cit.> analyzed work in multi-modal deep learning from three perspectives: “learning multimodal representations, fusing multimodal signals at various levels, and multimodal applications”. The main modalities studied are natural language and computer vision. Most previous work primarily focuses on techniques using curated datasets, which are typically well-defined for machine learning tasks and benchmarking. However, data within a data lake often has a temporal dimension, as data is continuously collected from various sources. This stresses the importance of the time series data modality, which has not received the same level of attention as other data modalities. In this paper, we highlight the time aspects, focusing on the system capability of handling the dynamic and temporal nature of real-world data. § METHODOLOGY Our search for articles was inspired by Petersen et al. (cf. <cit.>).We focused on articles about data embedding and multi-modal fusion to provide an overview about common ways to embed specific modalities and how these modalities can be fused. 
Our study follows a similar structure to the one from Zhang et al.<cit.>. However, we focus on methods that gained attention after 2020, namely, representational and contrastive learning. Representational learning refers to models learning the representation of an input data for a specific task like classification, clustering or, in our case, embedding. Contrastive learning is a recent powerful paradigm that learns to differentiate between similar and dissimilar examples <cit.>. It creates positive and negative pairs, maps the pairs using a non-linear encoder, and optimizes the encoder by minimizing the distance between positive pairs and maximizing the negative pairs <cit.>. Contrastive learning gained a lot of attention past 2020. When searching for articles with the keyword “Contrastive Learning” In Scopus, 30 articles were published before 2020 and more than 5000 after 2020 while in ACM Digital Library, all articles were published in 2020 or later. After piloting our search terms in Scopus, ACM Digital Library, and IEEE xPlore, we refined our query using specific keywords for our subject. The template query for the modality is depicted below: KEY ( representational OR contrastive ) AND KEY ( machine OR deep OR ai ) AND KEY ( learning ) AND TITLE-ABS-KEY ( embedding ) AND TITLE-ABS-KEY ( |time-series| ) We systematically replaced the highlighted keyword with or to find embedding approaches targeting specifically these two data modalities. To limit the result set for the data modality , though, we filtered only on the keywords instead for title, abstract, or keywords. To find articles focusing on fusing multi-modal raw data, we used the following template query to identify relevant papers: KEY ( fusion OR alignment OR coordination OR factorization ) AND KEY ( time-series OR text OR image ) AND TITLE-ABS-KEY ( ( machine OR deep ) AND learning OR ai ) AND TITLE-ABS-KEY ( modal* OR multi-modal* ) AND TITLE-ABS-KEY ( |raw| AND data ) We replaced the highlighted keyword by to search for other type of fusion. We continued our selection process by screening the articles' title and abstract by applying the following inclusion/exclusion criteria: Paper addressing the research question. Non-peer-reviewed articles Publications for which full text is not available. Articles written in an other language than English Duplicate papers and shorter versions of already included publications. Tab. <ref> summarizes how the search results were narrowed down after applying our inclusion/exclusion criteria. Threats to Validity: We report potential threats to the validity of our research following recommendations by Feldt and Magazinius (cf. <cit.>). A potential threat may originate by the internal design or construction of our study, in particular the literature review that may be prone to subjective selection bias with respect to the identified research papers. We aimed to mitigate this threat by documenting transparently the search terms, databases, and filtering criteria. While potential papers may have been filtered out, our resulting tables (cf. Tab. <ref> and <ref>) document major aspects that we have encountered while screening relevant literature. With respect to generalizability of our findings, we state that we searched for relevant research in a way that is agnostic to a particular application domain. However, specific domains may have certain, domain-relevant constraints that may render generic solutions non-applicable. Such constraints, though, are left for future and domain-specific studies. 
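To make the contrastive learning paradigm referenced in this section concrete, the sketch below shows a generic NT-Xent-style objective (a common formulation used here purely for illustration; it is not drawn from any specific reviewed article): embeddings of positive pairs are pulled together while all other items in the batch act as negatives.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    # z1, z2: (batch, dim) embeddings of two views of the same inputs, produced by
    # a non-linear encoder; every other item in the batch serves as a negative.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N x d, unit norm
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))           # ignore self-similarity
    # The positive of item i is item i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)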
§ RESULTS AND DISCUSSION Embedding is a widely adopted semantic encoding technique and hence, the literature review focuses on articles about data embedding and multi-modal fusion. As can be seen in Tab. <ref>, the majority of embedding techniques rely on contrastive learning. Used with different models, the contrastive approach allows for a better semantic representation of the embedded data. Several researchers like Wang et al.<cit.> or Cho et al.<cit.> use the benefits of both embeddings and contrastive learning for efficient classification. These approaches show better results than others before, however, studies like the ones from Liu et at. <cit.> or Xu et al. <cit.> also highlight the lack of scalability to multi-modal data and the need to push research in this direction. To understand how multi-modal data can be handled, we looked into different fusion techniques for embedding several modalities. Tab. <ref> shows that most of the collected articles use early or late fusions and 3 use fusions at a variable stage of the process. Cao et al. <cit.> highlight that while early fusion usually outperforms late fusion thanks to low-level features fusion, making it easier to learn using a large amount of data, late fusion allows for a more detailed feature extraction before fusion. Their model takes advantage of both approaches by fusing features by groups in both early and late stages. Caltagirone et al. <cit.> propose a model optimizing the position of the fusion during the training phase for better results. Chen et al. <cit.> push the flexibility further by implementing a dynamic fusion process that adapts itself according to the input. § CONCLUSIONS & FUTURE WORK Efficient information retrieval from growing data lakes is essential as it allows querying the data lake for semantically relevant information of interest. Furthermore, fast and accurate query processing based on semantic-aware data embedding enables similarity look-ups or determining what information is not present yet in a data lake. Our review of recent literature covers a critical part of data curation: data ingress,i.e., the preparation of data samples before storing them in a data lake, by screening state-of-the-art approaches. This process faces challenges from the type and nature of the various data modalities that need to be efficiently handled in data lakes. While we have already surpassed single modalities, today's challenges originate from handling not only multi-modal data but also their data samples over time. More specifically, our survey has unveiled the recent dominance of constrastive learning techniques in embedding mono-modal data as they are capable of effectively capturing semantic representations across various data types. Temporal aspects are being increasingly integrated into embedding methods to address the dynamic nature of data collection. Fusion techniques for multi-modal data primarily use early and late fusion, while innovative methods such as dynamic and adaptive fusion have been developed to balance flexibility and efficiency in handling the complexity and diversity of multi-modal data, which is an important direction for future research. § ACKNOWLEDGMENTS This research is funded by the Swedish Research Council (Diarienummer: 2024-2028). ieeetr
http://arxiv.org/abs/2407.13209v1
20240718064706
Criticality of global monopole charges in diverse dimensions
[ "Hong-Ming Cui", "Zhong-Ying Fan" ]
hep-th
[ "hep-th", "gr-qc" ]
http://arxiv.org/abs/2407.13722v1
20240718172240
Enhanced $H$-Consistency Bounds
[ "Anqi Mao", "Mehryar Mohri", "Yutao Zhong" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Enhanced H-Consistency Bounds July 22, 2024 § ABSTRACT Recent research has introduced a key notion of H-consistency bounds for surrogate losses. These bounds offer finite-sample guarantees, quantifying the relationship between the zero-one estimation error (or other target loss) and the surrogate loss estimation error for a specific hypothesis set. However, previous bounds were derived under the condition that a lower bound of the surrogate loss conditional regret is given as a convex function of the target conditional regret, without non-constant factors depending on the predictor or input instance. Can we derive finer and more favorable H-consistency bounds? In this work, we relax this condition and present a general framework for establishing enhanced H-consistency bounds based on more general inequalities relating conditional regrets. Our theorems not only subsume existing results as special cases but also enable the derivation of more favorable bounds in various scenarios. These include standard multi-class classification, binary and multi-class classification under Tsybakov noise conditions, and bipartite ranking. § INTRODUCTION The design of accurate and reliable learning algorithms hinges on the choice of surrogate loss functions, since optimizing the true target loss is typically intractable. A key property of these surrogate losses is Bayes-consistency, which guarantees that minimizing the surrogate loss leads to the minimization of the true target loss in the limit. This property has been well-studied for convex margin-based losses in both binary <cit.> and multi-class classification settings <cit.>. However, this classical notion has significant limitations since it only holds asymptotically and for the impractical set of all measurable functions. Thus, it fails to provide guarantees for real-world scenarios where learning is restricted to specific hypothesis sets, such as linear models or neural networks. In fact, Bayes-consistency does not always translate into superior performance, as highlighted by <cit.>. Recent research has addressed these limitations by introducing H-consistency bounds <cit.>. These bounds offer non-asymptotic guarantees, quantifying the relationship between the zero-one estimation error (or other target loss) and the surrogate loss estimation error for a specific hypothesis set. While existing work has characterized the general behavior of these bounds <cit.>, particularly for smooth surrogates in binary and multi-class classification, their derivation has been restricted by certain assumptions. Specifically, previous bounds were derived under the condition that a lower bound of the surrogate loss conditional regret is given as a convex function of the target conditional regret, without non-constant factors depending on the predictor or input instance. Can we derive finer and more favorable H-consistency bounds? In this work, we relax this condition and present a general framework for establishing enhanced H-consistency bounds based on more general inequalities relating conditional regrets. Our theorems not only subsume existing results as special cases but also enable the derivation of tighter bounds in various scenarios. These include standard multi-class classification, binary and multi-class classification under Tsybakov noise conditions, and bipartite ranking.
The remainder of this paper is organized as follows. In Section <ref>, we prove general theorems serving as new fundamental tools for deriving enhanced -consistency bounds. These theorems allow for the presence of non-constant factors α and β which can depend on both the hypothesis h and the input instance x. They include as special cases previous -consistency theorems, where α≡ 1 and β≡ 1. Furthermore, the bounds of these theorems are tight. In Section <ref>, we apply these tools to establish enhanced -consistency bounds for constrained losses in standard multi-class classification. These bounds are enhanced by incorporating a new hypothesis-dependent quantity, Λ(h), not present in previous work. Next, in Section <ref>, we derive a series of new and substantially more favorable -consistency bounds under Tsybakov noise conditions. Our bounds in binary classification (Section <ref>) recover as special cases some past results and even improve upon some. Our bounds for multi-class classification (Section <ref>) are entirely new and do not admit any past counterpart even in special cases. To illustrate the applicability of our results, we instantiate them for common surrogate losses in both binary and multi-class classification (see Appendix <ref>). In Section <ref>, we extend our new fundamental tools to the bipartite ranking setting (Section <ref>) and leverage them to derive novel -consistency bounds relating classification surrogate losses to bipartite ranking surrogate losses. We also identify a necessary condition for loss functions to admit such bounds. We present a remarkable direct upper bound on the estimation error of the RankBoost loss function, expressed in terms of the AdaBoost loss, with a multiplicative factor equal to the classification error of the predictor (Section <ref>). Additionally, we prove another surprising result with a different non-constant factor for logistic regression and its ranking counterpart (Section <ref>). Conversely, we establish negative results for such bounds in the case of the hinge loss (Section <ref>). In Appendix <ref>, we provide novel enhanced generalization bounds. We provide a detailed discussion of related work in Appendix <ref>. We begin by establishing the necessary terminology and definitions. § PRELIMINARIES We consider the standard supervised learning setting. Consider as the input space, as the label space, and as a distribution over ×. Given a sample S = *(x_1, y_1), …, (x_m, y_m) draw i.i.d. according to , our goal is to learn a hypothesis h that maps to a prediction space, denoted by . This hypothesis is chosen from a predefined hypothesis set , which is a subset of the family of all measurable functions, denoted by _all = *h →| h measurable. We denote by ℓ××→_+ the loss function that measures the performance of a hypothesis h on any pair (x, y). Given a loss function ℓ and a hypothesis set , we denote by _ℓ(h) = _(x, y) ∼*ℓ(h, x, y) the generalization error and by ^*_ℓ() = inf_h ∈_ℓ(h) the best-in-class generalization error. We further define the conditional error and the best-in-class condition error as _ℓ(h, x) = _y | x*ℓ(h, x, y) and ^*_ℓ(, x) = inf_h ∈_ℓ(h, x), respectively. Thus, the generalization error can be rewritten as _ℓ(h) = _X *_ℓ(h, x). For convenience, we refer to _ℓ(h) - ^*_ℓ() as the estimation error and to Δ_ℓ, (h, x) _ℓ(h, x) - ^*_ℓ(, x) as the conditional regret. Minimizing the target loss function, as specified by the learning task, is typically NP-hard. Instead, a surrogate loss function is often minimized. 
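Before specializing to particular settings, here is a small numerical sketch (a toy discrete example of our own, with made-up probabilities, purely for illustration) of the quantities just defined: conditional errors, conditional regrets, the estimation error, and the minimizability gap, for the zero-one loss and a finite hypothesis set over two points.

import itertools

# Toy example (hypothetical numbers): two inputs, labels in {-1, +1}, and a finite
# hypothesis set containing every sign pattern over the two inputs.
X = ["x1", "x2"]
p_x = {"x1": 0.5, "x2": 0.5}
p_pos = {"x1": 0.8, "x2": 0.4}        # conditional probability of label +1 given x
H = [dict(zip(X, signs)) for signs in itertools.product([+1, -1], repeat=2)]

def conditional_error(h, x):
    # C_l(h, x) = E_{y|x}[ l(h, x, y) ] for the zero-one loss.
    return p_pos[x] * (h[x] != +1) + (1 - p_pos[x]) * (h[x] != -1)

def generalization_error(h):
    return sum(p_x[x] * conditional_error(h, x) for x in X)

best_conditional = {x: min(conditional_error(h, x) for h in H) for x in X}
best_in_class = min(generalization_error(h) for h in H)

for h in H:
    est_err = generalization_error(h) - best_in_class
    regrets = {x: conditional_error(h, x) - best_conditional[x] for x in X}
    print(h, "estimation error:", round(est_err, 3), "conditional regrets:", regrets)

# Minimizability gap: best-in-class error minus the expected best conditional error.
gap = best_in_class - sum(p_x[x] * best_conditional[x] for x in X)
print("minimizability gap:", gap)      # zero here, since H realizes every sign pattern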
This paper investigates how minimizing surrogate losses can guarantee the minimization of the target loss function. We are especially interested in three applications: binary classification, multi-class classification, and bipartite ranking, although our general results are applicable to any supervised learning framework. Binary classification. Here, the label space is = * 1, 1, and the prediction space is =. The target loss function is the binary zero-one loss, defined by ℓ^bi_0-1(h, x, y) = 1_(h(x)) ≠ y, where (t) = 1 if t ≥ 0 and -1 otherwise. Let η(x) = ℙ (Y = 1 | X = x) be the conditional probability of Y = 1 given X = x. The condition error can be expressed explicitly as _ℓ(h, x) = η(x) ℓ(h, x, +1) + *1 - η(x)ℓ(h, x, -1). Common surrogate loss functions include the margin-based loss functions ℓ_Φ(h, x, y) = Φ(yh(x)), for some function Φ that is non-negative and non-increasing. Multi-class classification. Here, the label space is [n] *1, …, n, and the prediction space is = ^n for some n ∈ℤ_+. Let h(x, y) denote the y-th element of h(x), where y ∈ [n]. The target loss function is the multi-class zero-one loss, defined by ℓ_0-1(h, x, y) = 1_(x) ≠ y, where (x) = _y ∈ h(x, y). An arbitrary but fixed deterministic strategy is used for breaking ties. For simplicity, we fix this strategy to select the label with the highest index under the natural ordering of labels. Let p(y | x) = ℙ (Y = y | X = x) be the conditional probability of Y = y given X = x. The condition error can be explicitly expressed as _ℓ(h, x) = ∑_y ∈ p(y | x) ℓ(h, x, y). Common surrogate loss functions include the max losses <cit.>, constrained losses <cit.>, and comp-sum losses <cit.>. Bipartite ranking. Here, the label space is = * 1, 1, and the prediction space is =. Unlike the previous two settings, the goal here is to minimize the bipartite misranking loss _0-1, defined for any two pairs (x, y) and (x', y') drawn i.i.d. according to , and a hypothesis h: _0-1(h, x, x', y, y') = 1_(y - y')(h(x) - h(x')) < 0+ 1/2 1_(h(x) = h(x')) ∧ (y≠ y'). Let η(x) = ℙ (Y = 1 | X = x) be the conditional probability of Y = 1 given X = x. Given a loss function ××××→_+ and a hypothesis set , the generalization error and the condition error can be defined accordingly as _(h) = _(x, y) ∼, (x', y') ∼*(h, x, x', y, y'), _(h, x, x') = η(x)(1 - η(x')) (h, x, x', 1, 1) + η(x')(1 - η(x)) (h, x, x', 1, 1). The best-in-class generalization error and best-in-class condition error can be expressed as ^*_() = inf_h ∈_(h) and ^*_(, x, x') = inf_h ∈_(h, x, x'), respectively. The estimation error and conditional regret can be written as _(h) - ^*_() and Δ_, (h, x, x') = _(h, x, x') - ^*_(, x, x'), respectively. Common bipartite ranking surrogate loss functions typically take the following form: _Φ(h, x, x', y, y') = Φ*(y - y') *h(x) - h(x')/2 1_y ≠ y', for some function Φ that is non-negative and non-increasing. Another choice is to use the margin-based loss ℓ_Φ(h, x, y) = Φ(y h(x)) in binary classification as a surrogate loss. We will specifically be interested in the guarantees of minimizing ℓ_Φ with respect to the minimization of _Φ. -Consistency bounds. A desirable guarantee for a surrogate loss is Bayes-consistency <cit.>. This means that, asymptotically, the minimization of a surrogate loss ℓ_1 over the family of all measurable functions, _all, implies the minimization of the target loss function ℓ_2 over that same family: _ℓ_1(h_n) - ^*_ℓ_1(_all) 0 _ℓ_2(h_n) - ^*_ℓ_2(_all) 0. 
However, Bayes-consistency is an asymptotic guarantee, which cannot provide any guarantee for approximated minimizers. Moreover, it only applies to the family of all measurable functions, which is not relevant when a hypothesis set is adopted in practice. Instead, <cit.> proposed a novel consistency guarantee called -consistency bounds, which are upper bounds of the target estimation error in terms of the surrogate estimation error, for some concave function Γ≥ 0: _ℓ_2(h) - ^*_ℓ_2() + _ℓ_2()≤Γ*_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1(), where for a given hypothesis set and a loss function ℓ, _ℓ() = ^*_ℓ() - _X *^*_ℓ(, x)≥ 0 measures the distance between the best-in-class generalization error and the expected best-in-class conditional error, and is referred to as minimizability gap. This also adapts to the bipartite ranking setting, where _() = ^*_() - _(x, x')*^*_(, x, x'). When = _all or ^*_ℓ() = ^*_ℓ(_all), the minimizability gaps vanish <cit.>, and -consistency bounds imply Bayes-consistency by taking the limit on both sides of (<ref>). However, in general, minimizability gaps are non-zero and represent an inherent quantity depending on the distribution and hypothesis set that we cannot hope to minimize. It is upper bounded by the approximation error and is generally a finer quantity <cit.>. Thus, -consistency bounds provide a stronger and more informative guarantee than Bayes-consistency, since they are both non-asymptotic and specific to the hypothesis set used. By the sub-additivity of a concave function Γ≥ 0, an -consistency bound also implies that _ℓ_2(h) - ^*_ℓ_2() ≤Γ*_ℓ_1(h) - ^*_ℓ_1() + Γ*_ℓ_1() - _ℓ_2(), where Γ*_ℓ_1() - _ℓ_2() is an inherent constant depending on the hypothesis set and distribution, which cannot be minimized. § NEW FUNDAMENTAL TOOLS To derive -consistency bounds, previous work has provided several general tools, including <cit.>, <cit.>, and <cit.>. These general tools indicate that if there exists a convex function Ψ_+→ or a concave function Γ_+→ such that the following holds for all h ∈ and x ∈: Ψ*Δ_ℓ_2, (h, x)≤Δ_ℓ_1, (h, x) or Δ_ℓ_2, (h, x) ≤Γ*Δ_ℓ_1, (h, x), then, for any hypothesis h ∈, Ψ*_ℓ_2(h) - _ℓ_2^*() + _ℓ_2()≤_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1() or _ℓ_2(h) - _ℓ_2^*() + _ℓ_2() ≤Γ*_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1(), respectively. Here, we present new tools that are more general than those in previous work. theoremNewBoundConvex Assume that there exist a convex function Ψ_+ → and two positive functions α×→^*_+ and β×→^*_+ with sup_x ∈α(h, x) < +∞ and _x ∈*β(h, x) < +∞ for all h ∈ such that the following holds for all h∈ and x∈: Ψ*Δ_ℓ_2,(h, x) _X*β(h, x)/β(h, x)≤α(h, x) Δ_ℓ_1,(h, x). Then, the following inequality holds for any hypothesis h ∈: Ψ*_ℓ_2(h) - _ℓ_2^*() + _ℓ_2()≤γ(h) *_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1(). with γ(h) = *sup_x ∈α(h, x) β(h, x)/_X*β(h, x). If, additionally, is a subset of ^n and, for any h ∈, x ↦Δ_ℓ_1,(h, x) is non-decreasing and x ↦α(h, x) β(h, x) is non-increasing, or vice-versa, then, the inequality holds with γ(h) = _X*α(h, x) β(h, x)/_X*β(h, x). theoremNewBoundConcave Assume that there exist a concave function Γ_+ → and two positive functions α×→^*_+ and β×→^*_+ with sup_x ∈α(h, x) < +∞ and _x ∈*β(h, x) < +∞ for all h ∈ such that the following holds for all h∈ and x∈: Δ_ℓ_2,(h, x) _X*β(h, x)/β(h, x)≤Γ*α(h, x) Δ_ℓ_1,(h, x). Then, the following inequality holds for any hypothesis h ∈: _ℓ_2(h) - _ℓ_2^*() + _ℓ_2() ≤Γ*γ(h) *_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1(), with γ(h) = *sup_x ∈α(h, x) β(h, x)/_X*β(h, x). 
If, additionally, is a subset of ^n and, for any h ∈, x ↦Δ_ℓ_1,(h, x) is non-decreasing and x ↦α(h, x) β(h, x) is non-increasing, or vice-versa, then, the inequality holds with γ(h) = _X*α(h, x) β(h, x)/_X*β(h, x). We refer to Theorems <ref> and <ref> as new fundamental tools since they incorporate additional factors α and β which depend on both the hypothesis h and instance x. They include previous general tools <cit.> given in the special case where α≡ 1 and β≡ 1. Compared to previous ones, these new tools can provide more precise -consistency bounds in familiar settings or extend -consistency bounds to new scenarios where previous tools fall short. We will demonstrate their applications in both contexts. Additionally, the bounds derived using these tools are tight. lemmaNewBoundTight The bounds of Theorems <ref> and <ref> are tight in the following sense: for some distributions, Inequality (<ref>) (respectively Inequality (<ref>)) is the tightest possible -consistency bound that can be derived under the assumption of Theorem <ref> (respectively Theorem <ref>). Note that, when Γ(0) = 0, which is the case for most concave functions Γ considered, Γ is sub-additive over _+ and the theorem implies the following inequality: _ℓ_2(h) - ^*_ℓ_2() + _ℓ_2() ≤Γ*γ(h) *_ℓ_1(h) - ^*_ℓ_1() + Γ*γ(h) _ℓ_1() , with γ(h) = _X*α(h, x) β(h, x)/_X*β(h, x). The bound implies that if the surrogate estimation loss of a predictor h is reduced to , then the target estimation loss is bounded by Γ(γ(h) ) + Γ*γ(h) _ℓ_1() - _ℓ_2(). When the minimizability gaps are zero, for example when the problem is realizable, the upper bound simplifies to Γ(γ(h) ). In the special case of Ψ(x) = x^s or equivalently, Γ(x) = x^1/s, for some s ≥ 1 with conjugate number t ≥ 1, that is 1/s + 1/t = 1, we can further obtain the following result. theoremNewBoundPower Assume that there exist two positive functions α×→^*_+ and β×→^*_+ with sup_x ∈α(h, x) < +∞ and _x ∈*β(h, x) < +∞ for all h ∈ such that the following holds for all h∈ and x∈: Δ_ℓ_2,(h, x) _X*β(h, x)/β(h, x)≤*α(h, x) Δ_ℓ_1,(h, x)^1/s, for some s ≥ 1 with conjugate number t ≥ 1, that is 1/s + 1/t = 1. Then, for γ(h) = _X *α^t/s(h, x) β^t(h, x)/_X*β(h, x)^t^1/t, the following inequality holds for any h ∈: _ℓ_2(h) - _ℓ_2^*() + _ℓ_2() ≤γ(h) *_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1()^1/s. As above, by the sub-additivity of x ↦ x^1/s over _+, the bound implies _ℓ_2(h) - _ℓ_2^*() + _ℓ_2() ≤γ(h) **_ℓ_1(h) - ^*_ℓ_1()^1/s + *_ℓ_1()^1/s. The proofs of Theorems <ref>, <ref>, <ref>, and <ref> are presented in Appendix <ref>. These proofs are more complex than their counterparts for earlier results in the literature due to the presence of the functions α and β. Our proof technique involves a refined application of Jensen's inequality tailored to the β function, the use of Hölder's inequality adapted for the α function, and the application of the FKG Inequality in the second part of both Theorems <ref> and <ref>. The proof of Theorem <ref> also leverages Hölder's Inequality. For cases where Ψ or Γ is linear, our proof shows that the resulting bounds are essentially optimal, modulo the use of Hölder's inequality. As we shall see in Section <ref>, Ψ and Γ are linear when Massart's noise assumption holds. Building upon these theorems, we proceed to derive finer -consistency bounds than existing ones. § STANDARD MULTI-CLASS CLASSIFICATION We first apply our new tools to establish enhanced -consistency bounds in standard multi-class classification. 
We will consider the constrained losses <cit.>, defined as Φ^cstnd(h, x, y) = ∑_y'≠ yΦ*-h(x, y') subject to ∑_y∈ h(x, y) = 0, where Φ is a non-increasing and non-negative function. We will specifically consider Φ(u) = e^-u, Φ(u) = max*0, 1 - u, and Φ(u) = (1 - u)^2 1_u ≤ 1, corresponding to the constrained exponential loss, constrained hinge loss, and constrained squared hinge loss, respectively. By applying Theorems <ref> or <ref>, we obtain the following enhanced -consistency bounds. We say that a hypothesis set is symmetric if there exists a family of functions f mapping from to such that **h(x, 1), …, h(x, n + 1) h ∈ = **f_1(x),…, f_n + 1(x) f_1, …, f_n + 1∈, for any x ∈. We say that a hypothesis set is complete if for any (x, y) ∈×, the set of scores generated by it spans across the real numbers: *h(x, y) | h ∈ =. [Enhanced -consistency bounds for constrained losses] theoremBoundLeeFiner Assume that is symmetric and complete. Then, the following inequality holds for any hypothesis h ∈: _ℓ_0-1(h) - ^*_ℓ_0-1() + _ℓ_0-1() ≤Γ*_Φ^cstnd(h) - ^*_Φ^cstnd() + _Φ^cstnd(, where Γ(x) = √(2) x^1/2/*e^Λ(h)^1/2 for Φ(u) = e^-u, Γ(x) = x/1 + Λ(h) for Φ(u) = max*0, 1 - u, and Γ(x) = x^1/2/1 + Λ(h) for Φ(u) = (1 - u)^2 1_u ≤ 1. Additionally, Λ(h) = inf_x ∈max_y∈ h(x,y). The proof is included in Appendix <ref>. These -consistency bounds are referred to as enhanced -consistency bounds because they incorporate a hypothesis-dependent quantity, Λ(h), unlike the previous -consistency bounds derived for the constrained losses in <cit.>. Since ∑_y ∈ h(x, y) = 0, there must be non-negative scores. Consequently, Λ(h) must be greater than or equal to 0. Given that Γ is non-decreasing, the -consistency bounds in Theorem <ref> are finer than the previous ones, where Λ(h) is replaced by zero. § CLASSIFICATION UNDER LOW-NOISE CONDITIONS The previous section demonstrated the usefulness of our new fundamental tools in deriving enhanced -consistency bounds within standard classification settings. In this section, we leverage them to establish novel -consistency bounds under low-noise conditions for both binary and multi-class classification problems. §.§ Binary classification Here, we first consider the binary classification setting under the Tsybakov noise condition <cit.>, that is there exist B > 0 and α∈ [0, 1) such that ∀ t > 0, [*η(x) - 1/2≤ t] ≤ B t^α/1 - α. Note that as α→ 1, t^α/1 - α→ 0, corresponding to Massart’s noise condition. When α = 0, the condition is void. This condition is equivalent to assuming the existence of a universal constant c > 0 and α∈ [0, 1) such that for all h ∈, the following inequality holds <cit.>: [1_(X) ≠^*(X)] ≤ c *_ℓ^bi_0-1(h) - _ℓ^bi_0-1(h^*)^α. where h^* is the Bayes-classifier. We also assume that there is no approximation error and that _ℓ^bi_0-1() = 0. We refer to this as the Tsybakov noise assumption in binary classification. theoremTsybakovBinary Consider a binary classification setting where the Tsybakov noise assumption holds. Assume that the following holds for all h ∈ and x ∈: Δ_ℓ^bi_0-1,(h, x) ≤Γ*Δ_ℓ,(h, x), with Γ(x) = x^1/s, for some s ≥ 1. Then, for any h ∈, _ℓ^bi_0-1(h) - ^*_ℓ^bi_0-1() ≤ c^s - 1/s - α(s - 1)*_ℓ(h) - ^*_ℓ() + _ℓ()^1/s - α(s - 1). The theorem offers a substantially more favorable -consistency guarantee for binary classification. 
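As a quick numerical check of the rate in this bound (the improvement is discussed further in the next paragraph), the sketch below evaluates the exponent 1/(s - alpha(s - 1)) appearing in the theorem for the smooth-surrogate case s = 2.

def tsybakov_exponent(s: float, alpha: float) -> float:
    # Exponent of the surrogate estimation error in the bound of the theorem.
    return 1.0 / (s - alpha * (s - 1))

for alpha in [0.0, 0.25, 0.5, 0.75, 0.99]:
    print(f"s=2, alpha={alpha:>4}: exponent = {tsybakov_exponent(2, alpha):.3f}")
# alpha = 0 recovers the standard square-root rate (exponent 0.5); as alpha -> 1
# (Massart's noise), the exponent approaches 1, i.e. a linear dependence on the
# surrogate estimation error.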
While standard -consistency bounds for smooth loss functions rely on a square-root dependency (s = 2), this work establishes a linear dependence when Massart's noise condition holds (α→ 1), and an intermediate rate between linear and square-root for other values of α within the range (0, 1). Our result is general and admits as special cases previous related bounds. In particular, setting s = 2 and α = 1 recovers the -consistency bounds of <cit.> under Massart's noise. Additionally, with = _all, it recovers the excess bounds under the Tsybakov noise condition of <cit.>, but with a more favorable factor of one instead of 2^s/s - α (s - 1), which is always greater than one. Table <ref> illustrates several specific instances of our bounds for margin-based losses. The proof is given in Appendix <ref>. It consists of defining β(h, x) = 1_h(x) ≠ h^*(x) + for a fixed > 0 and proving the inequality Δ_ℓ^bi_0-1,(h, x) _X*β(h, x)/β(h, x)≤*α(h, x) Δ_ℓ,(h, x)^1/s, where α(h, x) = _X[β(h, x)]^s. The result then follows the application of our new tools Theorem <ref>. Note that our proof is novel and that previous general tools for deriving -consistency bounds in <cit.> cannot be applied here since α and β are not constants. §.§ Multi-class classification The original definition of the Tsybakov noise <cit.> was given and analyzed in the binary classification setting. Here, we give a natural extension of this definition and analyze its properties in the general multi-class classification setting. We denote by y_max = _y ∈ p(y | x). Define the minimal margin for a point x ∈ as follows: γ(x) = (y_max | x) - sup_y ≠ y_max(y | x). The Tsybakov noise model assumes that the probability of a small margin occurring is relatively low, that is there exist B > 0 and α∈ [0, 1) such that ∀ t > 0, [γ(X) ≤ t] ≤ B t^α/1 - α. In the binary classification setting, where γ(x) = 2 η(x) - 1, this recovers the condition described in Section <ref>. For α→ 1, t^α/1 - α→ 0, this corresponds to Massart's noise condition in multi-class classification. When α = 0, the condition becomes void. Similar to the binary classification setting, we can establish an equivalence assumption for the Tsybakov noise model as follows. We denote the Bayes classifier by h^*. lemmaTsybakov The Tsybakov noise assumption implies that there exists a constant c such that the following inequalities hold for any h ∈: [1_(x) ≠^*(x)] ≤ c [γ(X) 1_(x) ≠^*(x)]^α≤ c [_ℓ_0-1(h) - _ℓ_0-1(h^*)]^α. lemmaTsybakovEquiv Assume that for any h ∈_all, we have [(X) ≠^*(X)] ≤ c [γ(X) 1_γ(X) ≤ t ]^α. Then, the Tsybakov noise condition holds, that is, there exists a constant B > 0, such that ∀ t > 0, [γ(X) ≤ t] ≤ B t^α/1 - α. The proofs of Lemma <ref> and Lemma <ref> are included in Appendix <ref>. To the best of our knowledge, there are no previous results that formally analyze these properties of the Tsybakov noise in the general multi-class classification setting, although the similar result in the binary setting is well-known. Next, we assume that there exists a universal constant c > 0 and α∈ [0, 1) such that for all h ∈, the following Tsybakov noise inequality holds: [1_(x) ≠^*(x)] ≤ c *_ℓ_0-1(h) - ^*_ℓ_0-1(h^*)^α. where h^* is the Bayes-classifier. We also assume that there is no approximation error and that _ℓ_0-1() = 0. We refer to this as the Tsybakov noise assumption in multi-class classification. theoremTsybakovMulti Consider a multi-class classification setting where the Tsybakov noise assumption holds. 
Assume that the following holds for all h ∈ and x ∈: Δ_ℓ_0-1,(h, x) ≤Γ*Δ_ℓ,(h, x), with Γ(x) = x^1/s, for some s ≥ 1. Then, for any h ∈, _ℓ_0-1(h) - ^*_ℓ_0-1() ≤ c^s - 1/s - α(s - 1)*_ℓ(h) - ^*_ℓ() + _ℓ()^1/s - α(s - 1). To our knowledge, these are the first multi-class classification -consistency bounds, and even excess error bounds (a special case where = _all) established under the Tsybakov noise assumption. Here too, this theorem offers a significantly improved -consistency guarantee for multi-class classification. For smooth loss functions, standard -consistency bounds rely on a square-root dependence (s = 2). This dependence is improved to a linear rate when the Massart noise condition holds (α→ 1), or to an intermediate rate between linear and square-root for other values of α within the range (0, 1). The proof is given in Appendix <ref>. Illustrative examples of these bounds for constrained losses and comp-sum losses are presented in Tables <ref> and <ref> within Appendix <ref>. It is worth noting that deriving -consistency bounds for any concave Γ under low-noise conditions in both binary and multi-class classification remains a potential avenue for future work. § BIPARTITE RANKING In preceding sections, we demonstrated how our new tools enable the derivation of enhanced -consistency bounds in various classification scenarios: standard multi-class classification and low-noise regimes of both binary and multi-class classification. Here, we extend the applicability of our refined tools to the bipartite ranking setting. We illustrate how they facilitate the establishment of more favorable -consistency bounds for classification surrogate losses ℓ_Φ with respect to the bipartite ranking surrogate losses _Φ. The loss functions ℓ_Φ and _Φ are defined as follows: ℓ_Φ(h, x, y) = Φ(yh(x)), _Φ(h, x, x', y, y') = Φ[](y - y') *h(x) - h(x')2 1_y ≠ y', where Φ is a non-negative and non-increasing function. We will say that ℓ_Φ admits an -consistency bound with respect to _Φ, if there exists a concave function Γ_+→_+ with Γ(0) = 0 such that the following inequality holds: __Φ(h) - ^*__Φ() + __Φ()≤Γ*_ℓ_Φ(h) - ^*_ℓ_Φ() + _ℓ_Φ(), where _ℓ_Φ() = ^*_ℓ_Φ() - _X *^*_ℓ_Φ(, x) and __Φ() = ^*__Φ() - _(x, x')*^*__Φ(, x,x') represent the minimizability gaps for ℓ_Φ and _Φ, respectively. §.§ Fundamental tools for bipartite ranking We first extend our new fundamental tools to the bipartite ranking setting. theoremNewBoundConcaveRanking Assume that there exist two concave functions Γ_1 _+ → and Γ_2 _+ →, and two positive functions α_1 ×→^*_+ and α_2 ×→^*_+ with _x ∈*α_1(h, x) < +∞ and _x ∈*α_2(h, x) < +∞ for all h ∈ such that the following holds for all h∈ and (x, x') ∈×: Δ_, (h, x, x') ≤Γ_1 *α_1(h, x') Δ_ℓ,(h, x) + Γ_2 *α_2(h, x) Δ_ℓ,(h, x'). Then, the following inequality holds for any hypothesis h ∈: _(h) - _^*() + _() ≤Γ_1 *γ_1(h) _ℓ(h) + Γ_2 *γ_2(h) _ℓ(h). with γ_1(h) = _x ∈*α_1(h, x), γ_2(h) = _x ∈*α_2(h, x), and _ℓ(h) = _ℓ(h) - ^*_ℓ() + _ℓ(). The proof, detailed in Appendix <ref>, leverages the fact that in the bipartite ranking setting, two pairs (x, y) and (x', y') are drawn i.i.d. according to the distribution . As in the classification setting, Theorem <ref> is a fundamental tool for establishing enhanced -consistency bounds. This is achieved incorporating the additional terms α_1 and α_2, which can depend on both the hypothesis h and the instances x or x', thereby offering greater flexibility. Note that such enhanced -consistency bounds are meaningful only when Γ_1(0) + Γ_2(0) = 0. 
This ensures that when the minimizability gaps vanish (e.g., in the case where = _all or in more generally realizable cases), the estimation error of classification losses _ℓ(h) - ^*_ℓ() is zero implies that the estimation error of bipartite ranking losses _(h) - _^*() is also zero. This requires that there exists Γ_1 and Γ_2 such that Γ_1(0) + Γ_2(0) = 0 and Δ_, (h, x, x') ≤Γ_1 *α_1(h, x') Δ_ℓ,(h, x) + Γ_2 *α_2(h, x) Δ_ℓ,(h, x'), for all h∈ and (x, x') ∈×. Note that a necessary condition for this requirement is calibration: we say that a classification loss ℓ is calibrated with respect to a bipartite ranking loss , if for all h∈_all and (x, x') ∈×: Δ_ℓ, _all(h, x) = 0 and Δ_ℓ, _all(h, x') = 0 Δ_, _all(h, x, x') = 0. We now introduce a family of auxiliary functions that are differentiable and that admit a property facilitating the calibration between ℓ_Φ and _Φ. theoremNewBoundConcaveRankingGeneral Assume that Φ is convex and differentiable, and satisfies Φ'(t) < 0 for all t ∈, and Φ'(t)/Φ'(-t) = e^-ν t for some ν > 0. Then, ℓ_Φ is calibrated with respect to _Φ. The proof can be found in Appendix <ref>. Theorem <ref> identifies a family of functions Φ for which ℓ_Φ is calibrated with respect to _Φ. This inclues the exponential loss and the logistic loss, which fulfill the properties outlined in Theorem <ref>. For the exponential loss, Φ(u) = Φ_exp(u) = e^-u, we have Φ'_exp(t)/Φ'_exp(-t) = -e^-t/-e^t = e^-2t. Similarly, for the logistic loss, Φ(u) = Φ_log(u) = log(1 + e^-u), we have Φ'_log(t)/Φ'_log(-t) = -1/e^t + 1/-1/e^-t + 1 = e^-t. In the next sections, we will prove -consistency bounds in these two cases. §.§ Exponential loss We first consider the exponential loss, where Φ(u) = Φ_exp(u) = e^-u. In the bipartite ranking setting, a hypothesis set is said to be complete if for any x ∈, *h(x) h ∈ spans . theoremNewBoundConcaveRankingExp Assume that is complete. Then, the following inequality holds for the exponential loss Φ_exp: Δ__Φ_exp, (h, x, x') ≤_ℓ_Φ_exp(h, x') Δ_ℓ_Φ_exp,(h, x) + _ℓ_Φ_exp(h, x) Δ_ℓ_Φ_exp,(h, x'). Additionally, for any hypothesis h ∈, we have __Φ_exp(h) - __Φ_exp^*() + __Φ_exp() ≤ 2 _ℓ_Φ_exp(h) *_ℓ_Φ_exp(h) - ^*_ℓ_Φ_exp() + _ℓ_Φ_exp(). See Appendix <ref> for the proof. The proof leverages our new tool, Theorem <ref>, in conjunction with the specific form of the conditional regrets for the exponential function and the convexity of squared function. This result is remarkable since it directly bounds the estimation error of the RankBoost loss function by that of AdaBoost. The observation that AdaBoost often exhibits favorable ranking accuracy, often approaching that of RankBoost, was first highlighted by <cit.>. Later, <cit.> introduced a coordinate descent version of RankBoost and demonstrated that, when incorporating the constant weak classifier, AdaBoost asymptotically achieves the same ranking accuracy as coordinate descent RankBoost. Here, we present a stronger non-asymptotic result for the estimation losses of these algorithms. We show that when the estimation error of the AdaBoost predictor h is reduced to , the corresponding RankBoost loss is bounded by 2 _ℓ_Φ_exp(h) * + _ℓ_Φ_exp() - __Φ_exp(). This provides a stronger guarantee for the ranking quality of AdaBoost. In the nearly realizable case, where minimizability gaps are negligible, this upper bound approximates to 2 _ℓ_Φ_exp(h), aligning with the results of <cit.> for excess errors, where is assumed to be the family of all measurable functions. 
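The pointwise inequality of the theorem for the exponential loss can also be checked numerically. The sketch below (synthetic values of η(x), η(x') and of the scores, with hypothetical function names; not part of the original analysis) evaluates both sides of the inequality on random draws.

```python
# Numerical check of: Delta_rank(h, x, x') <= E_exp(h, x') Delta_exp(h, x) + E_exp(h, x) Delta_exp(h, x')
import numpy as np

rng = np.random.default_rng(0)

def cond_err_exp(eta, h):            # E_{l_exp}(h, x) = eta e^{-h} + (1 - eta) e^{h}
    return eta * np.exp(-h) + (1 - eta) * np.exp(h)

def regret_exp(eta, h):              # Delta_{l_exp}(h, x); the infimum equals 2 sqrt(eta (1 - eta))
    return cond_err_exp(eta, h) - 2 * np.sqrt(eta * (1 - eta))

def regret_rank_exp(eta, etap, h, hp):
    val = eta * (1 - etap) * np.exp(-h + hp) + etap * (1 - eta) * np.exp(h - hp)
    best = 2 * np.sqrt(eta * (1 - etap) * etap * (1 - eta))
    return val - best

eta, etap = rng.uniform(0, 1, 10_000), rng.uniform(0, 1, 10_000)
h, hp = rng.normal(size=10_000), rng.normal(size=10_000)
lhs = regret_rank_exp(eta, etap, h, hp)
rhs = cond_err_exp(etap, hp) * regret_exp(eta, h) + cond_err_exp(eta, h) * regret_exp(etap, hp)
print(np.all(lhs <= rhs + 1e-10))    # expected: True
```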
§.§ Logistic loss Here, we consider the logistic loss, where Φ(u) = Φ_log(u) = log(1 + e^-u). theoremNewBoundConcaveRankingLog Assume that is complete. For any x, define u(x) = max*η(x), 1 - η(x). Then, the following inequality holds for the logistic loss Φ_log: Δ__Φ_log, (h, x, x') ≤ u(x') Δ_ℓ_Φ_log, (h, x) + u(x) Δ_ℓ_Φ_log, (h, x'). Furthermore, for any hypothesis h ∈, we have __Φ_log(h) - __Φ_log^*() + __Φ_log() ≤ 2[u(X)] *_ℓ_Φ_log(h) - ^*_ℓ_Φ_log() + _ℓ_Φ_log(). Note that the term [u(X)] can be expressed as 1 - [min*η(X), (1 - η(X))], and coincides with the accuracy of the Bayes classifier. In particular, in the deterministic case, we have [u(X)] = 1. The proof is given in Appendix <ref>. In the first part of the proof, we establish and leverage the sub-additivity of Φ_log: Φ_log(h - h') ≤Φ_log(h) + Φ_log(-h'), to derive an upper bound for Δ__Φ_log, (h, x, x') in terms of Δ_ℓ_Φ_log, (h, x) and Δ_ℓ_Φ_log, (h, x'). Next, we apply our new tool, Theorem <ref>, with α_1(h, x') = max*η(x'), 1 - η(x') and α_2(h, x) = max*η(x), 1 - η(x). Both our result and its proof are entirely novel. Significantly, this result implies a parallel finding for logistic regression analogous to that of AdaBoost: If h is the predictor obtained by minimizing the logistic loss estimation error to , then the _Φ_log-estimation loss of h for ranking is bounded above by 2 [u(X)] ( + _ℓ_Φ_log()) - __Φ_log(). When minimizability gaps are small, such as in realizable cases, this bound further simplifies to 2 [u(X)], suggesting a favorable ranking property for logistic regression. This result is surprising, as the favorable ranking property of AdaBoost and its connection to RankBoost were thought to stem from the specific properties of the exponential loss, particularly its morphism property, which directly links the loss functions of AdaBoost and RankBoost. This direct connection does not exist for the logistic loss, making our proof and result particularly remarkable. In both cases, our new tools facilitated the derivation of non-trivial inequalities where the factor plays a crucial role. The exploration of enhanced -consistency bounds for other functions Φ is an interesting question for future research that we have initiated. In the next section, we prove negative results for the hinge loss. §.§ Hinge loss Here, we show that when Φ is the hinge loss, ℓ_Φ_hinge is not calibrated with respect to _Φ_hinge, and that there are no meaningful -consistency bounds. The hinge loss ℓ_Φ_hinge is the loss function minimized by the support vector machines (SVM) <cit.> and _Φ_hinge is the loss function optimized by the RankSVM algorithm <cit.> (which in fact coincides with SVM <cit.>). However, the relationships observed for AdaBoost and RankBoost, or Logistic Regression and its ranking counterpart, do not hold here. Instead, we present the following two negative results. theoremNewBoundConcaveRankingHingeCalibration For the hinge loss, ℓ_Φ_hinge is not calibrated with respect to _Φ_hinge. [Negative result for hinge losses]theoremNewBoundConcaveRankingHinge Assume that contains the constant function 1. For the hinge loss, if there exists a function pair (Γ_1, Γ_2) such that the following holds for all h∈ and (x, x') ∈×, with some positive functions α_1 ×→^*_+ and α_2 ×→^*_+: Δ__Φ_hinge, (h, x, x') ≤Γ_1 *α_1(h, x') Δ_ℓ_Φ_hinge, (h, x) + Γ_2 *α_2(h, x) Δ_ℓ_Φ_hinge, (h, x'), then, we have Γ_1(0) + Γ_2(0) ≥1/2. See Appendix <ref> for the proof. 
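The counterexample behind the non-calibration result can be reproduced numerically. The following sketch (hypothetical values η(x_0) = 0.9 and η(x'_0) = 0.7, with a grid search in place of the exact infimum) shows that the constant score h_0 = 1 has vanishing classification conditional regrets at both points while its pairwise ranking conditional regret equals η(x_0) - η(x'_0).

```python
# Hinge loss: zero classification regrets at h0 = 1, yet a nonzero ranking regret.
import numpy as np

eta0, eta1 = 0.9, 0.7                 # eta(x0) > eta(x0') > 1/2 (illustrative values)
h_grid = np.linspace(-5, 5, 10_001)

def hinge_cond_err(eta, h):           # eta * max(0, 1 - h) + (1 - eta) * max(0, 1 + h)
    return eta * np.maximum(0, 1 - h) + (1 - eta) * np.maximum(0, 1 + h)

h0 = 1.0
for eta in (eta0, eta1):
    regret = hinge_cond_err(eta, h0) - hinge_cond_err(eta, h_grid).min()
    print(f"classification conditional regret at h0 = 1 (eta = {eta}): {regret:.6f}")   # ~ 0

def rank_hinge_cond_err(eta, etap, diff):   # diff = h(x0) - h(x0')
    return eta * (1 - etap) * max(0, 1 - diff) + etap * (1 - eta) * max(0, 1 + diff)

best = min(rank_hinge_cond_err(eta0, eta1, d) for d in np.linspace(-10, 10, 20_001))
rank_regret = rank_hinge_cond_err(eta0, eta1, 0.0) - best   # constant h0 => diff = 0
print(f"ranking conditional regret at h0 = 1: {rank_regret:.6f} "
      f"(eta(x0) - eta(x0') = {eta0 - eta1:.2f})")
```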
Theorem <ref> implies that there are no meaningful -consistency bounds for ℓ_Φ_hinge with respect to _Φ_hinge with common hypothesis sets. In Appendix <ref>, we show that all our derived enhanced -consistency bounds can be used to provide novel enhanced generalization bounds in their respective settings. A natural question arises: which functions Φ should we consider instead? We show below that differentiability can help. § CONCLUSION We introduced novel tools for deriving enhanced -consistency bounds in various learning settings, including multi-class classification, low-noise regimes, and bipartite ranking. Remarkably, we established substantially more favorable guarantees for several settings and demonstrated unexpected connections between classification and bipartite ranking performances for the exponential and logistic losses. Our tools are likely to be useful in the analysis of -consistency bounds for a wide range of other scenarios. toc § RELATED WORK Bayes-consistency has been well studied in a wide range of learning scenarios, including binary classification <cit.>, multi-class classification <cit.>, multi-label learning <cit.>, learning with rejection <cit.>, learning to defer <cit.> , ranking <cit.>, cost sensitive learning <cit.>, structured prediction <cit.>, general embedding framework <cit.>, Top-k classification <cit.>, hierarchical classification <cit.>, ordinal regression <cit.>, and learning from noisy labels <cit.>. However, this classical notion has significant limitations since it only holds asymptotically and for the impractical set of all measurable functions. Thus, it fails to provide guarantees for real-world scenarios where learning is restricted to specific hypothesis sets, such as linear models or neural networks. In fact, Bayes-consistency does not always translate into superior performance, as highlighted by <cit.> (see also <cit.>). <cit.> proposed the key notion of -consistency bounds for binary classification. These novel non-asymptotic learning guarantees for binary classification account for the hypothesis set adopted and are more significant and informative than existing Bayes-consistency guarantees. They provided general tools for deriving such bounds and used them to establish a series of -consistency bounds in both standard binary classification and binary classification under Massart's noise condition. <cit.> and <cit.> further generalized those general tools to standard multi-class classification and used them to establish multi-class -consistency bounds. Specifically, <cit.> presented a comprehensive analysis of -consistency bounds for the three most commonly used families of multi-class surrogate losses: max losses <cit.>, sum losses <cit.>, and constrained losses <cit.>. They showed negative results for max losses, while providing positive results for sum losses and constrained losses. Additionally, <cit.> used these general tools in multi-class classification to derive -consistency bounds for the (multinomial) logistic loss <cit.>. Meanwhile, <cit.> presented a theoretical analysis of -consistency bounds for a broader family of loss functions, termed comp-sum losses, which includes sum losses and cross-entropy (or logistic loss) as special cases, and also includes generalized cross-entropy <cit.>, mean absolute error <cit.>, and other cross-entropy-like loss functions. In all these works, determining whether -consistency bounds hold and deriving these bounds have required specific proofs and analyses for each surrogate loss. 
<cit.> complemented these efforts by providing both a general characterization and an extension of -consistency bounds for multi-class classification, based on the error transformation functions they defined for comp-sum losses and constrained losses. Recently, <cit.> further applied these error transformations to characterize the general behavior of these bounds, showing that the universal growth rate of -consistency bounds for smooth surrogate losses in both binary and multi-class classification is square-root. -consistency bounds have also been studied in other learning scenarios including pairwise ranking <cit.>, learning with rejection <cit.>, learning to defer <cit.>, top-k classification <cit.>, adversarial robustness <cit.>, bounded regression <cit.>, and structured prediction <cit.>. All previous bounds in the aforementioned work were derived under the condition that a lower bound of the surrogate loss conditional regret is given as a convex function of the target conditional regret, without non-constant factors depending on the predictor or input instance. In this work, we relax this condition and present a general framework for establishing enhanced -consistency bounds based on more general inequalities relating conditional regrets, leading to finer and more favorable -consistency bounds. § PROOF OF NEW FUNDAMENTAL TOOLS (THEOREM <REF>, THEOREM <REF>, THEOREM <REF> AND THEOREM <REF>) * For any h∈, we can write Ψ*_ℓ_2(h) - ^*_ℓ_2() + _ℓ_2() = Ψ*_X*Δ_ℓ_2,(h, x) = Ψ*_X*β(h, x)/_X*β(h, x)Δ_ℓ_2,(h, x) _X*β(h, x)/β(h, x) ≤_X*β(h, x)/_X*β(h, x)Ψ*Δ_ℓ_2,(h, x) _X*β(h, x)/β(h, x)Jensen's ineq. ≤_X*α(h, x) β(h, x)/_X*β(h, x)Δ_ℓ_1,(h, x)assumption ≤*sup_x ∈α(h, x) β(h, x)/_X*β(h, x)_X *Δ_ℓ_1,(h, x)Hölder’s ineq. = []sup_x ∈α(h, x) β(h, x)/_X*β(h, x)*_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1()def. of _X *Δ_ℓ_1,(h, x). If, additionally, is a subset of ^n and, for any h ∈, x ↦Δ_ℓ_1,(h, x) is non-decreasing and x ↦α(h, x) β(h, x) is non-increasing, or vice-versa, then, by the FKG inequality <cit.>, we have Ψ*_ℓ_2(h) - ^*_ℓ_2() + _ℓ_2() ≤_X*α(h, x) β(h, x)/_X*β(h, x)Δ_ℓ_1,(h, x) ≤_X*α(h, x) β(h, x)/_X*β(h, x)_X*Δ_ℓ_1,(h, x) ≤_X*α(h, x) β(h, x)/_X*β(h, x)*_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1(), which completes the proof. * For any h∈, we can write _ℓ_2(h) - ^*_ℓ_2() + _ℓ_2() = _X*Δ_ℓ_2,(h, x) ≤_X*β(h, x)/_X*β(h, x)Γ*α(h, x) Δ_ℓ_1,(h, x)assumption ≤Γ*_X*α(h, x) β(h, x)/_X*β(h, x)Δ_ℓ_1,(h, x) Jensen's ineq. ≤Γ**sup_x ∈α(h, x) β(h, x)/_X*β(h, x)_X *Δ_ℓ_1,(h, x)Hölder’s ineq. = Γ*[]sup_x ∈α(h, x) β(h, x)/_X*β(h, x)*_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1()def. of _X *Δ_ℓ_1,(h, x). If, additionally, is a subset of ^n and, for any h ∈, x ↦Δ_ℓ_1,(h, x) is non-decreasing and x ↦α(h, x) β(h, x) is non-increasing, or vice-versa, then, by the FKG inequality <cit.>, we have _ℓ_2(h) - ^*_ℓ_2() + _ℓ_2() ≤Γ*_X*α(h, x) β(h, x)/_X*β(h, x)Δ_ℓ_1,(h, x) ≤Γ*_X*α(h, x) β(h, x)/_X*β(h, x)_X*Δ_ℓ_1,(h, x) ≤Γ*_X*α(h, x) β(h, x)/_X*β(h, x)*_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1(), which completes the proof. * Take h and x such that sup_x ∈α(h, x) β(h, x) is achieved. Consider the distribution concentrates on that x. Then, the bounds given by (<ref>) and (<ref>) reduce to the following forms: Ψ*Δ_ℓ_2,(h, x) _X*β(h, x)/β(h, x) ≤α(h, x) Δ_ℓ_1,(h, x) Δ_ℓ_2,(h, x) _X*β(h, x)/β(h, x) ≤Γ*α(h, x) Δ_ℓ_1,(h, x) where we used the fact that β(h, x)/_X*β(h, x) = 1 in this case. They exactly match the assumptions in Theorems <ref> and <ref>, which are the tightest inequalities that can be obtained. 
The -consistency bound is in fact an equality in the same cases when the assumption holds with the best choices of α and β. * For any h∈, we can write _ℓ_2(h) - ^*_ℓ_2() + _ℓ_2() = _X*Δ_ℓ_2,(h, x) ≤_X*β(h, x)/_X*β(h, x)α^1/s(h, x) Δ^1/s_ℓ_1,(h, x)assumption ≤_X *α^t/s(h, x) β^t(h, x)/_X*β(h, x)^t^1/t_X *Δ_ℓ_1,(h, x)^1/sHölder’s ineq. = _X *α^t/s(h, x) β^t(h, x)/_X*β(h, x)^t^1/t*_ℓ_1(h) - ^*_ℓ_1() + _ℓ_1()^1/sdef. of _X *Δ_ℓ_1,(h, x). This completes the proof. § PROOF OF ENHANCED H-CONSISTENCY BOUNDS IN MULTI-CLASS CLASSIFICATION (THEOREM <REF>) To begin the proof, we first introduce the following result from <cit.>, which characterizes the conditional regret of the multi-class zero-one loss. For completeness, we include the proof here. lemmaExplicitAssumption Assume that is symmetric and complete. For any x ∈, the best-in-class conditional error and the conditional regret for ℓ_0-1 can be expressed as follows: ^*_ℓ_0-1, (x) = 1 - max_y ∈ p(y | x) Δ_ℓ_0-1, (h, x) = max_y ∈ p(y | x) - p((x) | x) The conditional error for ℓ_0-1 can be expressed as: _ℓ_0-1(h, x) = ∑_y ∈ p(y | x) 1_(x) ≠ y = 1 - p((x) | y). Since is symmetric and complete, we have *(x) h ∈ =. Therefore, ^*_ℓ_0-1, (x) = inf_h ∈_ℓ_0-1(h, x) = 1 - max_y ∈ p(y | x) Δ_ℓ_0-1, (h, x) = _ℓ_0-1(h, x) - ^*_ℓ_0-1, (x) = max_y ∈ p(y | x) - p((x) | x). This completes the proof. * The conditional error for constrained losses can be expressed as follows: _Φ^cstnd(h, x) = ∑_y ∈ p(y | x) ∑_y'≠ yΦ*-h(x, y') = ∑_y ∈*1 - p(y | x)Φ*-h(x, y). Next, we will provide the proof for each case individually. We denote by y_max = _y ∈ p(y | x). Constrained exponential loss with Φ(u) = e^-u. When (x) = y_max, we have Δ_ℓ_0-1, (h, x) = 0. Let h ∈ be a hypothesis such that (x) ≠ y_max. In this case, the conditional error can be written as: _Φ^cstnd(h, x) = ∑_y∈*1 - p(y | x) e^h(x, y) = ∑_y∈*y_max, (x)*1 - p(y | x) e^h(x, y) + ∑_y ∉*y_max, (x) e^h(x, y). For any x∈, define the hypothesis h_μ∈ by h_μ(x, y) = h(x, y) if y ∉*y_max, (x) h(x, y_max) + μ if y = (x) h(x, (x)) - μ if y = y_max for any μ∈. By the completeness of , we have h_μ∈ and ∑_y ∈ h_μ(x, y)=0. Thus, Δ_Φ^cstnd, (h, x) = _Φ^cstnd(h, x) - ^*_Φ^cstnd(, x) ≥_Φ^cstnd(h, x) - inf_μ∈_Φ^cstnd(h_μ, x) = *√((1 - p((x) | x)) e^h(x, (x))) - √((1 - p(y_max| x)) e^h(x, y_max))^2 ≥ e^h(x, (x))*√((1 - p((x) | x))) - √((1 - p(y_max| x)))^2 e^h(x, (x))≥ e^h(x, y_max) and p((x) | x)≤ p(y_max| x) = e^h(x, (x))*p(y_max| x) - p((x) | x)/√((1 - p((x) | x))) + √((1 - p(y_max| x)))^2 ≥e^h(x, (x))/2*max_y ∈ p(x, y) - p((x) | x)^2 0 ≤ p(y_max| x) + p((x) | x)≤ 1 ≥e^Λ(h) /2*Δ_ℓ_0-1, (h, x)^2. Therefore, by Theorems <ref> or <ref>, the following inequality holds for any hypothesis h ∈: _ℓ_0-1(h) - _ℓ_0-1^*() + _ℓ_0-1() ≤√(2)/*e^Λ(h)^1/2*_Φ^cstnd(h) - _Φ^cstnd^*() + _Φ^cstnd()^1/2. Constrained hinge loss with Φ(u) = max*1 - u, 0. When (x) = y_max, we have Δ_ℓ_0-1, (h, x) = 0. Let h ∈ be a hypothesis such that (x) ≠ y_max. In this case, the conditional error can be written as: _Φ^cstnd(h, x) = ∑_y∈*1 - p(y | x)max*1 + h(x, y), 0 = ∑_y∈*y_max, (x)*1 - p(y | x)max*1 + h(x, y), 0 + ∑_y ∉*y_max, (x)max*1 + h(x, y), 0. For any x∈, define the hypothesis h_μ∈ by h_μ(x, y) = h(x, y) if y ∉*y_max, (x) h(x, y_max) + μ if y = (x) h(x, (x)) - μ if y = y_max for any μ∈. By the completeness of , we have h_μ∈ and ∑_y ∈ h_μ(x, y)=0. Thus, Δ_Φ^cstnd, (h, x) = _Φ^cstnd(h, x) - ^*_Φ^cstnd(, x) ≥_Φ^cstnd(h, x) - inf_μ∈_Φ^cstnd(h_μ, x) ≥*1 + h(x, (x))*p(y_max| x) - p((x) | x) ≥1 + Λ(h)*Δ_ℓ_0-1, (h, x). 
Therefore, by Theorems <ref> or <ref>, the following inequality holds for any hypothesis h ∈: _ℓ_0-1(h) - _ℓ_0-1^*() + _ℓ_0-1() ≤1/1 + Λ(h)*_Φ^cstnd(h) - _Φ^cstnd^*() + _Φ^cstnd(). Constrained squared hinge loss with Φ(u) = (1 - u)^2 1_u ≤ 1. When (x) = y_max, we have Δ_ℓ_0-1, (h, x) = 0. Let h ∈ be a hypothesis such that (x) ≠ y_max. In this case, the conditional error can be written as: _Φ^cstnd(h, x) = ∑_y∈*1 - p(y | x)max*1 + h(x, y), 0^2 = ∑_y∈*y_max, (x)*1 - p(y | x)max*1 + h(x, y), 0^2 + ∑_y ∉*y_max, (x)max*1 + h(x, y), 0^2. For any x∈, define the hypothesis h_μ∈ by h_μ(x, y) = h(x, y) if y ∉*y_max, (x) h(x, y_max) + μ if y = (x) h(x, (x)) - μ if y = y_max for any μ∈. By the completeness of , we have h_μ∈ and ∑_y ∈ h_μ(x, y)=0. Thus, Δ_Φ^cstnd, (h, x) = _Φ^cstnd(h, x) - ^*_Φ^cstnd(, x) ≥_Φ^cstnd(h, x) - inf_μ∈_Φ^cstnd(h_μ, x) ≥*1 + h(x, (x))^2 *p(y_max| x) - p((x) | x)^2 ≥1 + Λ(h)^2 *Δ_ℓ_0-1, (h, x)^2. Therefore, by Theorems <ref> or <ref>, the following inequality holds for any hypothesis h ∈: _ℓ_0-1(h) - _ℓ_0-1^*() + _ℓ_0-1() ≤1/1 + Λ(h)*_Φ^cstnd(h) - _Φ^cstnd^*() + _Φ^cstnd()^1/2. § PROOF OF ENHANCED H-CONSISTENCY BOUNDS UNDER LOW-NOISE CONDITIONS §.§ Proof of Theorem <ref> * Fix > 0 and define β(h, x) = 1_h(x) ≠ h^*(x) +. Since Δ_ℓ^bi_0-1,(h, x) = |2 η(x) - 1 | 1_h(x) ≠ h^*(x), we have Δ_ℓ^bi_0-1,(h, x) _X[β(h, x)]/β(h, x)≤Δ_ℓ^bi_0-1,(h, x) _X[β(h, x)], thus the following inequality holds Δ_ℓ^bi_0-1,(h, x) _X[β(h, x)]/β(h, x)≤_X[β(h, x)] Δ^1/s_ℓ,(h, x). By Theorem <ref>, with α(h, x) = _X[β(h, x)]^s, we have _ℓ^bi_0-1(h) - ^*_ℓ^bi_0-1() ≤_X[β^t(h, x)]^1/t*_ℓ(h) - ^*_ℓ() + _ℓ()^1/s. Since the inequality holds for any > 0, it implies: _ℓ^bi_0-1(h) - ^*_ℓ^bi_0-1() ≤_X**1_h(x) ≠ h^*(x)^t^1/t*_ℓ(h) - ^*_ℓ() + _ℓ()^1/s = _X[1_h(x) ≠ h^*(x)]^1/t*_ℓ(h) - ^*_ℓ() + _ℓ()^1/s*1_h(x) ≠ h^*(x)^t = 1_h(x) ≠ h^*(x) ≤ c^1/t*_ℓ^bi_0-1(h) - ^*_ℓ^bi_0-1()^α/t*_ℓ(h) - ^*_ℓ() + _ℓ()^1/sTsybakov noise assumption The result follows after dividing both sides by *_ℓ^bi_0-1(h) - ^*_ℓ^bi_0-1()^α/t. §.§ Proof of Lemma <ref> and Lemma <ref> * We prove the first inequality, the second one follows immediately the definition of the margin. By definition of the expectation and the Lebesgue integral, for any u > 0, we can write [γ(X) 1_(X) ≠^*(X)] = ∫_0^+∞[γ(X) 1_(X) ≠^*(X) > t] dt ≥∫_0^u[γ(X) 1_(X) ≠^*(X) > t] dt = ∫_0^u[1_γ(X) > t 1_(X) ≠^*(X)] dt = ∫_0^u*[1_γ(X) > t] - [1_γ(X) > t 1_(X) = ^*(X)] dt ≥∫_0^u*[1_γ(X) > t] - [1_(X) = ^*(X)] dt = ∫_0^u*[(X) ≠^*(X)] - [γ(X) ≤ t] dt ≥ u [(X) ≠^*(X)] - ∫_0^u B t^α/1 - α dt = u [(X) ≠^*(X)] - B (1 - α) u^1/1 - α. Taking the derivative and choosing u to maximize the above gives u = *[(X) ≠^*(X)]/B^1- α/α. Plugging in this choice of u gives [γ(X) 1_(X) ≠^*(X)] ≥*1/B^1 - α/αα[(X) ≠^*(X)]^1/α, which can be rewritten as [(X) ≠^*(X)] ≤ c [γ(X) 1_(X) ≠^*(X)]^α for c = B^1 - α/α^α. * Fix t > 0 and consider the event *γ(X) ≤ t. Since h can be chosen to be any measurable function, there exists h such 1_γ(X) ≤ t = 1_(X) ≠^*(X). In view of that, we can write [γ(X) ≤ t] = [1_γ(X) ≤ t] ≤ c [γ(X) 1_γ(X) ≤ t]^α≤ c t^α[1_γ(X) ≤ t]^α. Comparing the left- and right-hand sides gives immediately [γ(X) ≤ t] ≤ c^1/1 - α t^α/1 - α. Choosing B = c^1/1 - α completes the proof. §.§ Proof of Theorem <ref> * Fix > 0 and define β(h, x) = 1_(x) ≠^*(x) +. 
By Lemma <ref>, Δ_ℓ_0-1,(h, x) = max_y ∈p(y | x) - p((x) | x) = p(^*(x) | x) - p((x) | x), we have Δ_ℓ_0-1,(h, x) _X[β(h, x)]/β(h, x)≤Δ_ℓ_0-1,(h, x) _X[β(h, x)], thus the following inequality holds Δ_ℓ_0-1,(h, x) _X[β(h, x)]/β(h, x)≤_X[β(h, x)] Δ^1/s_ℓ,(h, x). By Theorem <ref>, with α(h, x) = _X[β(h, x)]^s, we have _ℓ_0-1(h) - ^*_ℓ_0-1() ≤_X[β^t(h, x)]^1/t*_ℓ(h) - ^*_ℓ() + _ℓ()^1/s. Since the inequality holds for any > 0, it implies: _ℓ_0-1(h) - ^*_ℓ_0-1() ≤_X**1_(X) ≠^*(X)^t^1/t*_ℓ(h) - ^*_ℓ() + _ℓ()^1/s = _X[1_(X) ≠^*(X)]^1/t*_ℓ(h) - ^*_ℓ() + _ℓ()^1/s*1_(X) ≠^*(X)^t = 1_(x) ≠^*(x) ≤ c^1/t*_ℓ_0-1(h) - ^*_ℓ_0-1()^α/t*_ℓ(h) - ^*_ℓ() + _ℓ()^1/sTsybakov noise assumption The result follows after dividing both sides by *_ℓ_0-1(h) - ^*_ℓ_0-1()^α/t. § EXAMPLES OF ENHANCED H-CONSISTENCY BOUNDS UNDER LOW-NOISE CONDITIONS §.§ Binary classification Here we consider complete hypothesis sets in binary classification satisfying ∀ x ∈, *h(x) h ∈ =. We consider margin-based loss functions ℓ(h, x, y) = Φ(yh(x)), including the hinge loss, logistic loss, exponential loss, squared-hinge loss, sigmoid loss, and ρ-margin loss. As shown by <cit.>, their corresponding Γ functions are either linear or squared functions. Table <ref> presents enhanced -consistency bounds for them under the Tsybakov noise assumption as provided by Theorem <ref>. §.§ Multi-class classification Here we consider symmetric and complete hypothesis sets in multi-class classification. We consider constrained losses <cit.> and comp-sum losses <cit.>. As shown by <cit.> and <cit.>, their corresponding Γ functions are either linear or squared functions. Tables <ref> and <ref> presents enhanced -consistency bounds for constrained losses and comp-sum losses under the Tsybakov noise assumption as provided by Theorem <ref>. § PROOF OF ENHANCED H-CONSISTENCY BOUNDS IN BIPARTITE RANKING §.§ Proof of Theorem <ref> * For any h∈, we can write _(h) - ^*_() + _() = _X,X'*Δ_,(h, x, x') ≤_X,X'*Γ_1 *α_1(h, x') Δ_ℓ,(h, x) + Γ_2 *α_2(h, x) Δ_ℓ,(h, x')assumption ≤Γ_1 *_X'*α_1(h, x')_X*Δ_ℓ,(h, x) + Γ_2 *_X*α_2(h, x)_X'*Δ_ℓ,(h, x') Jensen's ineq. ≤Γ_1 *_x ∈*α_1(h, x) *_ℓ(h) - ^*_ℓ() + _ℓ() + Γ_2 *_x ∈*α_2(h, x) *_ℓ(h) - ^*_ℓ() + _ℓ()def. of _X *Δ_ℓ,(h, x), which completes the proof. §.§ Proof of Theorem <ref> * To simplify the notation, we will drop the dependency on x. Specifically, we use η to denote η(x), η' to denote η(x'), h to denote h(x), and h' to denote h(x'). Thus, we can write: Δ__Φ_exp, (h, x, x') = η (1 - η') e^-h + h' + η' (1 - η) e^-h' + h - 2 √(η (1 - η') η' (1 - η)) _ℓ_Φ_exp(h, x) = η e^-h + (1 - η) e^h Δ_ℓ_Φ_exp,(h, x) = η e^-h + (1 - η) e^h - 2 √(η (1 - η)). For any A, B ∈, we have (A + B)^2 ≤ 2(A^2 + B^2). To prove this inequality, observe that the function x ↦ x^2 is convex. Therefore, we have: (A + B)^2 = 4 *A + B/2^2 ≤ 4 *A^2 + B^2/2 = 2 (A^2 + B^2). In light of this inequality, we can write Δ__Φ_exp, (h, x, x') = *√(η (1 - η') e^-h + h') - √(η' (1 - η) e^-h' + h)^2 = *√((1 - η') e^h')*√(η e^-h) - √((1 - η) e^h) + √((1 - η) e^h)*√((1 - η') e^h') - √(η' e^-h')^2 ≤ 2*(1 - η') e^h'*√(η e^-h) - √((1 - η) e^h)^2 + 2*(1 - η) e^h*√(η' e^-h') - √((1 - η') e^h')^2 (A + B)^2 ≤ 2 (A^2 + B^2) Δ__Φ_exp, (h, x, x') = *√(η' (1 - η) e^-h' + h) - √(η (1 - η') e^-h + h')^2 = *√(η' e^-h')*√((1 - η) e^h) - √(η e^-h) + √(η e^-h)*√(η' e^-h') - √((1 - η') e^h')^2 ≤ 2*η' e^-h'*√(η e^-h) - √((1 - η) e^h)^2 + 2*η e^-h*√(η' e^-h') - √((1 - η') e^h')^2 (A + B)^2 ≤ 2 (A^2 + B^2). 
Thus, by taking the mean of the two inequalities, we obtain: Δ__Φ_exp, (h, x, x') ≤*η' e^-h' + (1 - η') e^h'*√(η e^-h) - √((1 - η) e^h)^2 + *η e^-h + (1 - η) e^h*√(η' e^-h') - √((1 - η') e^h')^2. Therefore, we have Δ__Φ_exp, (h, x, x') ≤_ℓ_Φ_exp(h, x') Δ_ℓ_Φ_exp,(h, x) + _ℓ_Φ_exp(h, x) Δ_ℓ_Φ_exp,(h, x'). By Theorem <ref>, we obtain __Φ_exp(h) - __Φ_exp^*() + __Φ_exp() ≤ 2 _ℓ_Φ_exp(h) *_ℓ_Φ_exp(h) - ^*_ℓ_Φ_exp() + _ℓ_Φ_exp(). §.§ Proof of Theorem <ref> * By the definition, we have _Φ(h, x) = η(x) Φ(h(x)) + *1 - η(x)Φ(-h(x)) _Φ(h, x') = η(x') Φ(h(x')) + *1 - η(x')Φ(-h(x')) __Φ(h, x, x') = η(x)*1 - η(x')Φ(h(x) - h(x')) + η(x')*1 - η(x)Φ(-h(x) + h(x')). Therefore, by taking the derivative, we have Δ_Φ,(h, x) = 0 η(x) Φ'(h(x)) = *1 - η(x)Φ'(-h(x)) h(x) = 1/νlog*η(x)/1 - η(x) Δ_Φ,(h, x') = 0 η(x') Φ'(h(x')) = *1 - η(x')Φ'(-h(x')) h(x') = 1/νlog*η(x')/1 - η(x'). Therefore, h(x) - h(x') = 1/νlog*η(x) (1 - η(x'))/η(x') (1 - η(x)). This satisfies that η(x)*1 - η(x')Φ'(h(x) - h(x')) = η(x')*1 - η(x)Φ'(-h(x) + h(x')), which implies that Δ__Φ, (h, x, x') = 0 by taking the derivative. §.§ Proof of Theorem <ref> * __Φ_log(h, x, x') = η(x)(1 - η(x')) Φ_log(h(x) - h(x')) + η(x')(1 - η(x))Φ_log(h(x') - h(x)) = η(x)(1 - η(x'))log_2*1 + e^-h(x) + h(x') + η(x')(1 - η(x))log_2*1 + e^h(x) - h(x') ^*__Φ_log,_all(x, x') = inf_h ∈_all__Φ_log(h, x, x') = -η(x)(1 - η(x'))log_2(η(x)(1 - η(x'))) - η(x')(1 - η(x))log_2(η(x')(1 - η(x))). _Φ_log(h, x) = η(x) Φ_log(h(x)) + (1 - η(x))Φ_log(-h(x)) = η(x) log_2*1 + e^-h(x) + (1 - t) log_2*1 + e^h(x). inf_h∈_all_Φ_log(h, x) = -η(x) log_2(η(x)) - (1 - η(x)) log_2(1 - η(x)) To simplify the notation, we will drop the dependency on x. Specifically, we use η to denote η(x), η' to denote η(x'), h to denote h(x), and h' to denote h(x'). Thus, we can write: Δ__Φ_log, (h, x, x') ≤η (1 - η') log*1 + e^-h + h' + η' (1 - η) log*1 + e^-h' + h + η (1 - η') log*η (1 - η') + η' (1 - η) log*η' (1 - η) Δ_ℓ_Φ_log, (h, x) = ηlog*1 + e^-h + (1 - η) log*1 + e^h + ηlog*η + (1 - η) log*(1 - η). Let Φ_log denote the logistic loss. Φ_log is a convex function and for any x, we have Φ_log(2x) ≤ 2 Φ_log(x). To prove this last inequality, observe that for any x ∈, we have Φ_log(2x) = log (1 + e^-2x) ≤log (1 + 2e^-x + e^-2x) = log ((1 + e^-x)^2) = 2 log ((1 + e^-x)) = 2 Φ_log(x). Thus, we can write Φ_log(h - h') = Φ_log(2h/2 - 2h'/2) ≤1/2 (Φ_log(2h) + Φ_log(-2h')) ≤Φ_log(h) + Φ_log(-h'). In light of this inequality, we can write Δ__Φ_log, (h, x, x') ≤η (1 - η') (Φ_log(h) + Φ_log(- h')) + η' (1 - η) (Φ_log(h') + Φ_log(- h)) + η (1 - η') log*η (1 - η') + η' (1 - η) log*η' (1 - η) = η (1 - η') (Φ_log(h) + Φ_log(- h')) + η' (1 - η) (Φ_log(h') + Φ_log(- h)) + η (1 - η') *logη + log (1 - η') + η' (1 - η) *logη' + log (1 - η). Therefore, we have Δ__Φ_log, (h, x, x') ≤max*η', 1 - η'Δ_ℓ_Φ_log,(h, x) + max*η, 1 - ηΔ_ℓ_Φ_log,(h, x') . By Theorem <ref>, we obtain __Φ_log(h) - __Φ_log^*() + __Φ_log() ≤ 2[max*η(x), (1 - η(x))] *_ℓ_Φ_log(h) - ^*_ℓ_Φ_log() + _ℓ_Φ_log(). §.§ Proof of Theorem <ref> * Consider the distribution that supports on *(x_0, x_0'). Let 1 ≥η(x_0) > η(x'_0) > 1/2, and h_0 = 1 ∈. Then, for any h ∈, _ℓ_Φ_hinge(h, x_0) = η(x_0) max*0, 1 - h(x_0) + (1 - η(x_0)) max*0, 1 + h(x_0)≥ 2 (1 - η(x_0)) _ℓ_Φ_hinge(h, x'_0) = η(x'_0) max*0, 1 - h(x'_0) + (1 - η(x'_0)) max*0, 1 + h(x'_0)≥ 2 (1 - η(x'_0)), where both equality can be achieved by h_0 = 1. 
Furthermore, __Φ_hinge(h_0, x_0, x'_0) = η(x_0) (1 - η(x'_0)) max*0, 1 - h_0(x_0) + h_0(x'_0) + η(x'_0) (1 - η(x_0)) max*1 -h_0(x'_0) + h_0(x_0) = η(x_0) (1 - η(x'_0)) + η(x'_0) (1 - η(x_0)) Δ__Φ_hinge, (h_0, x_0, x'_0) = η(x_0) (1 - η(x'_0)) + η(x'_0) (1 - η(x_0)) - 2min*η(x_0) (1 - η(x'_0)), η(x'_0) (1 - η(x_0)) = η(x_0) - η(x'_0). Therefore, Δ_ℓ_Φ_hinge, (h_0, x_0) = Δ_ℓ_Φ_hinge, (h_0, x'_0) = 0, but Δ__Φ_hinge, (h_0, x_0, x'_0) ≠ 0, which implies that ℓ_Φ_hinge is not -calibrated with respect to _Φ_hinge. Suppose that for all h∈, the following holds: Δ__Φ_hinge, (h, x_0, x'_0) ≤Γ_1 *α_1(h, x'_0) Δ_ℓ_Φ_hinge, (h, x_0) + Γ_2 *α_2(h, x_0) Δ_ℓ_Φ_hinge, (h, x'_0). Let h = h_0, then, for any 1 ≥η(x_0) > η(x'_0) > 1/2, the following inequality holds: η(x_0) - η(x'_0) ≤Γ_1(0) + Γ_2(0). This implies that Γ_1(0) + Γ_2(0) ≥1/2. § GENERALIZATION BOUNDS Here, we show that all our derived enhanced -consistency bounds can be used to provide novel enhanced generalization bounds in their respective settings. §.§ Standard multi-class classification Let S = *(x_1, y_1), …, (x_m, y_m) be a finite sample drawn from ^m. We denote by h_S the minimizer of the empirical loss within with respect to the constrained loss Φ^cstnd: h_S = _h ∈_Φ^cstnd, S(h) = _h ∈1/m∑_i = 1^m Φ^cstnd(h, x_i,y _i). Next, by using enhanced -consistency bounds for constrained losses Φ^cstnd in Theorem <ref>, we derive novel generalization bounds for the multi-class zero-one loss by upper bounding the surrogate estimation error _Φ^cstnd( h_S) - _Φ^cstnd^*() with the complexity (e.g. the Rademacher complexity) of the family of functions associated with Φ^cstnd and : _Φ^cstnd=*(x, y) ↦Φ^cstnd(h, x, y) h ∈. Let _m^Φ^cstnd() be the Rademacher complexity of _Φ^cstnd and B_Φ^cstnd an upper bound of the constrained loss Φ^cstnd. The following generalization bound for the multi-class zero-one loss holds. [Enhanced generalization bound with constrained losses] theoremBoundLeeFinerG Assume that is symmetric and complete. Then, the following generalization bound holds for h_S: for any δ > 0, with probability at least 1-δ over the draw of an i.i.d sample S of size m: _ℓ_0-1( h_S) - ^*_ℓ_0-1() + _ℓ_0-1() ≤Γ*4 _m^Φ^cstnd() + 2 B_Φ^cstnd√(log2/δ2m) + _Φ^cstnd(), where Γ(x) = √(2) x^1/2/*e^Λ( h_S)^1/2 for Φ(u) = e^-u, Γ(x) = x/1 + Λ( h_S) for Φ(u) = max*0, 1 - u, and Γ(x) = x^1/2/1 + Λ( h_S) for Φ(u) = (1 - u)^2 1_u ≤ 1. Additionally, Λ( h_S) = inf_x ∈max_y∈ h_S(x,y). By using the standard Rademacher complexity bounds <cit.>, for any δ>0, with probability at least 1 - δ, the following holds for all h ∈: *_Φ^cstnd(h) - _Φ^cstnd, S(h)≤ 2 _m^Φ^cstnd() + B_Φ^cstnd√(log (2/δ)2m). Fix > 0. By the definition of the infimum, there exists h^* ∈ such that _Φ^cstnd(h^*) ≤_Φ^cstnd^*() +. By definition of h_S, we have _Φ^cstnd( h_S) - _Φ^cstnd^*() = _Φ^cstnd( h_S) - _Φ^cstnd, S( h_S) + _Φ^cstnd, S( h_S) - _Φ^cstnd^*() ≤_Φ^cstnd( h_S) - _Φ^cstnd, S( h_S) + _Φ^cstnd, S(h^*) - _Φ^cstnd^*() ≤_Φ^cstnd( h_S) - _Φ^cstnd, S( h_S) + _Φ^cstnd, S(h^*) - _Φ^cstnd^*(h^*) + ≤ 2 *2 _m^Φ^cstnd() + B_Φ^cstnd√(log (2/δ)2m) + . Since the inequality holds for all > 0, it implies: _Φ^cstnd( h_S) - _Φ^cstnd^*() ≤ 4 _m^Φ^cstnd() + 2 B_Φ^cstnd√(log (2/δ)2m). Plugging in this inequality in the bounds of Theorem <ref> completes the proof. 
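Since the bound is stated in terms of the Rademacher complexity of the loss class, the following sketch illustrates how the empirical Rademacher complexity term can be estimated by Monte Carlo over random sign vectors. The toy finite "hypothesis set" of random, sum-to-zero score tables and all names are hypothetical; this is an illustration, not part of the paper.

```python
# Monte Carlo estimate of the empirical Rademacher complexity of a finite loss class.
import numpy as np

rng = np.random.default_rng(0)
m, n_classes, n_hyp, n_trials = 200, 3, 50, 500

y = rng.integers(0, n_classes, size=m)
H = rng.normal(size=(n_hyp, m, n_classes))     # toy finite set of score tables
H -= H.mean(axis=2, keepdims=True)             # enforce the sum-to-zero constraint

def cstnd_exp_loss(scores, labels):
    # constrained exponential loss: sum_{y' != y} exp(h(x, y'))
    e = np.exp(scores)
    return e.sum(axis=1) - e[np.arange(len(labels)), labels]

L = np.stack([cstnd_exp_loss(H[k], y) for k in range(n_hyp)])  # shape (n_hyp, m)

def empirical_rademacher(L, n_trials, rng):
    vals = []
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=L.shape[1])       # Rademacher signs
        vals.append(np.max(L @ sigma) / L.shape[1])            # sup_h (1/m) sum_i sigma_i loss_i
    return float(np.mean(vals))

print("Monte Carlo estimate of the empirical Rademacher complexity:",
      empirical_rademacher(L, n_trials, rng))
```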
To the best of our knowledge, Theorem <ref> provides the first enhanced finite-sample guarantees, expressed in terms of minimizability gaps, for the estimation error of the minimizer of constrained losses with respect to the multi-class zero-one loss, incorporating a quantity Λ( h_S) depending on h_S. The proof uses our enhanced -consistency bounds for constrained losses (Theorem <ref>), as well as standard Rademacher complexity guarantees. §.§ Classification under low-noise conditions Let S = *(x_1, y_1), …, (x_m, y_m) be a finite sample drawn from ^m. We denote by h_S the minimizer of the empirical loss within with respect to a surrogate loss ℓ: h_S = _h ∈_ℓ, S(h) = _h ∈1/m∑_i = 1^m ℓ(h, x_i,y _i). Next, by using enhanced -consistency bounds for surrogate losses ℓ in Theorems <ref> and <ref>, we derive novel generalization bounds for the binary and multi-class zero-one loss under low-noise conditions, by upper bounding the surrogate estimation error _ℓ( h_S) - _ℓ^*() with the complexity (e.g. the Rademacher complexity) of the family of functions associated with ℓ and : _ℓ=*(x, y) ↦ℓ(h, x, y) h ∈. Let _m^ℓ() be the Rademacher complexity of _ℓ and B_ℓ an upper bound of the surrogate loss ℓ. The following generalization bounds for the binary and multi-class zero-one loss hold. [Enhanced binary generalization bound under the Tsybakov noise assumption]theoremTsybakovBinaryG Consider a binary classification setting where the Tsybakov noise assumption holds. Assume that the following holds for all h ∈ and x ∈: Δ_ℓ^bi_0-1,(h, x) ≤Γ*Δ_ℓ,(h, x), with Γ(x) = x^1/s, for some s ≥ 1. Then, for any h ∈, _ℓ^bi_0-1( h_S) - ^*_ℓ^bi_0-1() ≤ c^s - 1/s - α(s - 1)*4 _m^ℓ() + 2 B_ℓ√(log (2/δ)2m) + _ℓ()^1/s - α(s - 1). By using the standard Rademacher complexity bounds <cit.>, for any δ>0, with probability at least 1 - δ, the following holds for all h ∈: *_ℓ(h) - _ℓ, S(h)≤ 2 _m^ℓ() + B_ℓ√(log (2/δ)2m). Fix > 0. By the definition of the infimum, there exists h^* ∈ such that _ℓ(h^*) ≤_ℓ^*() +. By definition of h_S, we have _ℓ( h_S) - _ℓ^*() = _ℓ( h_S) - _ℓ, S( h_S) + _ℓ, S( h_S) - _ℓ^*() ≤_ℓ( h_S) - _ℓ, S( h_S) + _ℓ, S(h^*) - _ℓ^*() ≤_ℓ( h_S) - _ℓ, S( h_S) + _ℓ, S(h^*) - _ℓ^*(h^*) + ≤ 2 *2 _m^ℓ() + B_ℓ√(log (2/δ)2m) + . Since the inequality holds for all > 0, it implies: _ℓ( h_S) - _ℓ^*() ≤ 4 _m^ℓ() + 2 B_ℓ√(log (2/δ)2m). Plugging in this inequality in the bounds of Theorem <ref> completes the proof. [Enhanced multi-class generalization bound under the Tsybakov noise assumption]theoremTsybakovMultiG Consider a multi-class classification setting where the Tsybakov noise assumption holds. Assume that the following holds for all h ∈ and x ∈: Δ_ℓ_0-1,(h, x) ≤Γ*Δ_ℓ,(h, x), with Γ(x) = x^1/s, for some s ≥ 1. Then, for any h ∈, _ℓ_0-1( h_S) - ^*_ℓ_0-1() ≤ c^s - 1/s - α(s - 1)*4 _m^ℓ() + 2 B_ℓ√(log (2/δ)2m) + _ℓ()^1/s - α(s - 1). By using the standard Rademacher complexity bounds <cit.>, for any δ>0, with probability at least 1 - δ, the following holds for all h ∈: *_ℓ(h) - _ℓ, S(h)≤ 2 _m^ℓ() + B_ℓ√(log (2/δ)2m). Fix > 0. By the definition of the infimum, there exists h^* ∈ such that _ℓ(h^*) ≤_ℓ^*() +. By definition of h_S, we have _ℓ( h_S) - _ℓ^*() = _ℓ( h_S) - _ℓ, S( h_S) + _ℓ, S( h_S) - _ℓ^*() ≤_ℓ( h_S) - _ℓ, S( h_S) + _ℓ, S(h^*) - _ℓ^*() ≤_ℓ( h_S) - _ℓ, S( h_S) + _ℓ, S(h^*) - _ℓ^*(h^*) + ≤ 2 *2 _m^ℓ() + B_ℓ√(log (2/δ)2m) + . Since the inequality holds for all > 0, it implies: _ℓ( h_S) - _ℓ^*() ≤ 4 _m^ℓ() + 2 B_ℓ√(log (2/δ)2m). Plugging in this inequality in the bounds of Theorem <ref> completes the proof. 
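For a rough sense of scale, the sketch below evaluates the right-hand side of the multi-class bound above as a plug-in formula, with s = 2 as for smooth surrogates; all numerical values are hypothetical.

```python
# Plug-in evaluation of the low-noise generalization bound (hypothetical inputs).
import math

def tsybakov_gen_bound(R_m, B, m, delta, M_gap, c, alpha, s=2):
    inner = 4 * R_m + 2 * B * math.sqrt(math.log(2 / delta) / (2 * m)) + M_gap
    expo = 1.0 / (s - alpha * (s - 1))
    return c ** ((s - 1) * expo) * inner ** expo

# complexity estimate, loss bound, sample size, confidence, minimizability gap,
# Tsybakov constant c and exponent alpha (all illustrative)
print(tsybakov_gen_bound(R_m=0.05, B=4.0, m=10_000, delta=0.05,
                         M_gap=0.01, c=2.0, alpha=0.8))
```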
To the best of our knowledge, Theorems <ref> and <ref> provide the first enhanced finite-sample guarantees, expressed in terms of minimizability gaps, for the estimation error of the minimizer of surrogate losses with respect to the binary and multi-class zero-one loss, under the Tsybakov noise assumption. The proofs use our enhanced -consistency bounds (Theorems <ref> and <ref>), as well as standard Rademacher complexity guarantees. Note that our enhanced -consistency bounds can also be combined with standard bounds of _ℓ( h_S) - _ℓ^*() in the case of Tsybakov noise, which can yield a fast rate. For example, our enhanced -consistency bounds in binary classification (Theorems <ref>) can be combined with the fast estimation rates described in <cit.>, which can lead to enhanced finite-sample guarantees in the case of Tsybakov noise. This is even true in the case of = _all, with a more favorable factor of one instead of 2^s/s - α (s - 1), which is always greater than one. §.§ Bipartite ranking Let S = *(x_1, y_1), (x'_1, y'_1), …, (x_m, y_m), (x'_m, y'_m) be a finite sample drawn from (×)^m. We denote by h_S the minimizer of the empirical loss within with respect to a classification surrogate loss ℓ_Φ: h_S = _h ∈_ℓ_Φ, S(h) = _h ∈1/m∑_i = 1^m ℓ_Φ(h, x_i,y _i). Next, by using enhanced -consistency bounds for classification surrogate losses ℓ_Φ in Theorems <ref> and <ref>, we derive novel generalization bounds for bipartite ranking surrogate losses _Φ, by upper bounding the surrogate estimation error _ℓ_Φ( h_S) - _ℓ_Φ^*() with the complexity (e.g. the Rademacher complexity) of the family of functions associated with ℓ_Φ and : _ℓ_Φ=*(x, y) ↦ℓ_Φ(h, x, y) h ∈. Let _m^ℓ_Φ() be the Rademacher complexity of _ℓ_Φ and B_ℓ_Φ an upper bound of the classification surrogate ℓ_Φ. The following generalization bounds for the bipartite ranking surrogate losses _Φ hold. [Enhanced generalization bound with AdaBoost]theoremNewBoundConcaveRankingExpG Assume that is complete. Then, for any hypothesis h ∈, we have __Φ_exp( h_S) - __Φ_exp^*() + __Φ_exp() ≤ 2 _ℓ_Φ_exp( h_S) *4 _m^ℓ_Φ_exp() + 2 B_ℓ_Φ_exp√(log (2/δ)2m) + _ℓ_Φ_exp(). By using the standard Rademacher complexity bounds <cit.>, for any δ>0, with probability at least 1 - δ, the following holds for all h ∈: *_ℓ_Φ_exp(h) - _ℓ_Φ_exp, S(h)≤ 2 _m^ℓ_Φ_exp() + B_ℓ_Φ_exp√(log (2/δ)2m). Fix > 0. By the definition of the infimum, there exists h^* ∈ such that _ℓ_Φ_exp(h^*) ≤_ℓ_Φ_exp^*() +. By definition of h_S, we have _ℓ_Φ_exp( h_S) - _ℓ_Φ_exp^*() = _ℓ_Φ_exp( h_S) - _ℓ_Φ_exp, S( h_S) + _ℓ_Φ_exp, S( h_S) - _ℓ_Φ_exp^*() ≤_ℓ_Φ_exp( h_S) - _ℓ_Φ_exp, S( h_S) + _ℓ_Φ_exp, S(h^*) - _ℓ_Φ_exp^*() ≤_ℓ_Φ_exp( h_S) - _ℓ_Φ_exp, S( h_S) + _ℓ_Φ_exp, S(h^*) - _ℓ_Φ_exp^*(h^*) + ≤ 2 *2 _m^ℓ_Φ_exp() + B_ℓ_Φ_exp√(log (2/δ)2m) + . Since the inequality holds for all > 0, it implies: _ℓ_Φ_exp( h_S) - _ℓ_Φ_exp^*() ≤ 4 _m^ℓ_Φ_exp() + 2 B_ℓ_Φ_exp√(log (2/δ)2m). Plugging in this inequality in the bounds of Theorem <ref> completes the proof. [Enhanced generalization bound with logistic regression]theoremNewBoundConcaveRankingLogG Assume that is complete. For any x, define u(x) = max*η(x), 1 - η(x). Then, for any hypothesis h ∈, we have __Φ_log( h_S) - __Φ_log^*() + __Φ_log() ≤ 2[u(X)] *4 _m^ℓ_Φ_log() + 2 B_ℓ_Φ_log√(log (2/δ)2m) + _ℓ_Φ_log(). By using the standard Rademacher complexity bounds <cit.>, for any δ>0, with probability at least 1 - δ, the following holds for all h ∈: *_ℓ_Φ_log(h) - _ℓ_Φ_log, S(h)≤ 2 _m^ℓ_Φ_log() + B_ℓ_Φ_log√(log (2/δ)2m). Fix > 0. 
By the definition of the infimum, there exists h^* ∈ such that _ℓ_Φ_log(h^*) ≤_ℓ_Φ_log^*() +. By definition of h_S, we have _ℓ_Φ_log( h_S) - _ℓ_Φ_log^*() = _ℓ_Φ_log( h_S) - _ℓ_Φ_log, S( h_S) + _ℓ_Φ_log, S( h_S) - _ℓ_Φ_log^*() ≤_ℓ_Φ_log( h_S) - _ℓ_Φ_log, S( h_S) + _ℓ_Φ_log, S(h^*) - _ℓ_Φ_log^*() ≤_ℓ_Φ_log( h_S) - _ℓ_Φ_log, S( h_S) + _ℓ_Φ_log, S(h^*) - _ℓ_Φ_log^*(h^*) + ≤ 2 *2 _m^ℓ_Φ_log() + B_ℓ_Φ_log√(log (2/δ)2m) + . Since the inequality holds for all > 0, it implies: _ℓ_Φ_log( h_S) - _ℓ_Φ_log^*() ≤ 4 _m^ℓ_Φ_log() + 2 B_ℓ_Φ_log√(log (2/δ)2m). Plugging in this inequality in the bounds of Theorem <ref> completes the proof. To the best of our knowledge, Theorems <ref> and <ref> provide the first enhanced finite-sample guarantees, expressed in terms of minimizability gaps, for the estimation error of the minimizer of classification surrogate losses with respect to the bipartite ranking surrogate losses. Theorem <ref> is remarkable since it provide finite simple bounds of the estimation error of the RankBoost loss function by that of AdaBoost. Significantly, Theorems <ref> implies a parallel finding for logistic regression analogous to that of AdaBoost. The proofs use our enhanced -consistency bounds (Theorems <ref> and <ref>), as well as standard Rademacher complexity guarantees. § FUTURE WORK While we presented a general framework for establishing enhanced -consistency bounds that enables the derivation of more favorable bounds in various scenarios—including standard multi-class classification, binary and multi-class classification under Tsybakov noise conditions, and bipartite ranking—extending our framework to other learning scenarios, such as non-i.i.d. settings, would be an interesting future research question. Moreover, although our work is theoretical in nature, further empirical analysis, such as verifying a favorable ranking property for logistic regression, is an interesting study that we have initiated.
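A possible starting point for the empirical analysis mentioned above is to measure the ranking quality (AUC) of a plain logistic-loss minimizer on synthetic data. The sketch below assumes scikit-learn is available and is not part of the paper's experiments.

```python
# Ranking quality of a predictor trained only with the logistic (classification) loss.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # minimizes the logistic loss
scores = clf.decision_function(X_te)                      # h(x), used directly for ranking
print("test AUC of the logistic-loss minimizer:", roc_auc_score(y_te, scores))
```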
http://arxiv.org/abs/2407.12113v1
20240716185124
A Graph-based Adversarial Imitation Learning Framework for Reliable & Realtime Fleet Scheduling in Urban Air Mobility
[ "Prithvi Poddar", "Steve Paul", "Souma Chowdhury" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.MA" ]
A Graph-based Adversarial Imitation Learning Framework for Reliable & Realtime Fleet Scheduling in Urban Air Mobility Prithvi Poddar, Steve Paul, Souma Chowdhury July 22, 2024 =============================================================================================================================================================================== § ABSTRACT The advent of Urban Air Mobility (UAM) presents the scope for a transformative shift in the domain of urban transportation. However, its widespread adoption and economic viability depend in part on the ability to optimally schedule the fleet of aircraft across vertiports in a UAM network, under uncertainties attributed to airspace congestion, changing weather conditions, and varying demands. This paper presents a comprehensive optimization formulation of the fleet scheduling problem, while also identifying the need for alternate solution approaches, since directly solving the resulting integer nonlinear programming (INLP) problem is computationally prohibitive for daily fleet scheduling. Previous work has shown the effectiveness of using (graph) reinforcement learning (RL) approaches to train real-time executable policy models for fleet scheduling. However, such policies can often be brittle on out-of-distribution scenarios or edge cases. Moreover, training performance also deteriorates as the complexity (e.g., number of constraints) of the problem increases. To address these issues, this paper presents an imitation learning approach where the RL-based policy exploits expert demonstrations yielded by solving the exact optimization using a Genetic Algorithm. The policy model comprises Graph Neural Network (GNN) based encoders that embed the space of vertiports and aircraft, Transformer networks to encode demand, passenger fare, and transport cost profiles, and a Multi-head attention (MHA) based decoder. Expert demonstrations are used through the Generative Adversarial Imitation Learning (GAIL) algorithm. Interfaced with a UAM simulation environment involving 8 vertiports and 40 aircraft, the new imitative approach achieves better mean performance in terms of the daily profits earned, and a remarkable improvement on unseen worst-case scenarios, compared to pure RL results. § INTRODUCTION In the modern landscape of urban transportation, the emergence of Urban Air Mobility (UAM) introduces a revolutionary dimension to the concept of commuting. As cities grapple with increasing populations and traffic congestion, the concept of UAM proposes the use of electric vertical take-off and landing (eVTOL) aircraft <cit.> as an alternate form of automated air transportation. With a projected market size of $1.5 trillion by 2040 <cit.>, its economic viability will be driven by the ability to operate a sufficiently large number of eVTOLs in any given market (high-penetration), placing the domain of UAM fleet scheduling at the forefront of innovations. Developing an optimal scheduling policy is challenging because it must respect airspace and aircraft safety constraints while remaining robust to dynamic environmental changes, mitigating the energy footprint, and maximizing profitability.
The complexities inherent in UAM fleet scheduling are characterized by: Dynamic environments, where real-time factors such as airspace congestion, changing weather conditions, demand uncertainty driven by dynamic traffic patterns, and regulatory constraints demand intelligent scheduling solutions that can swiftly adapt to real-time challenges <cit.>. State information sharing among all vertiports and eVTOLs is necessary for trajectory and speed adjustments to ensure the safety of the operations <cit.>. Scarce resources such as vertiport parking lots, charging stations, and air corridors must be optimally allocated to prevent unnecessary delays and conflicts <cit.>. Contemporary solutions to such scheduling problems take the form of complex nonlinear Combinatorial Optimization (CO) problems <cit.>, which can be addressed through classical optimization, heuristic search, and learning-based approaches. While these approaches provide locally optimal solutions for small UAM fleets <cit.>, they often present computational complexities that render them impractical for online decision-making and larger fleets. Furthermore, despite the computational expense, these methods would still be viable options if we disregarded uncertainties in the eVTOLs' operations (e.g., occasional failures of the aircraft) and environmental constraints (such as the closure of air corridors due to bad weather conditions). However, these factors cannot be disregarded because of the safety concerns they impose, making it especially challenging to use such approaches. In the pursuit of addressing these complexities and presenting an efficient online scheduler, we extend our previous work in <cit.> by proposing a specialized Graph Neural Network (GNN) <cit.> based adversarial imitation learning framework that learns from the optimal schedules (expert data) generated by classical optimization algorithms and builds upon existing multi-agent task allocation methods for scaling up to larger UAM fleets. We begin by posing the UAM fleet scheduling problem as an Integer Non-Linear Programming (INLP) problem with a small number of eVTOLs and vertiports and generate optimal solutions using an elitist Genetic Algorithm. These form the expert demonstrations for the imitation learning policy. We use the Generative Adversarial Imitation Learning (GAIL) <cit.> algorithm to train the GNN policy. Then, we establish how a graph representation efficiently encodes all the state-space information of the vertiports and the eVTOLs, thus motivating the use of GNNs over regular neural networks. We also present a comparative study between a pure reinforcement learning-based method and a Genetic Algorithm that demonstrates edge cases where standard RL methodologies fail to generate an optimal solution and require access to expert data to train an optimal policy. Finally, we compare our results with standard RL-based and Genetic Algorithm based solutions and demonstrate that our method performs better than RL-based methods in terms of profitability, and that while it performs on par with the Genetic Algorithm, it is significantly faster in terms of computation time. Fig.<ref> presents a diagrammatic representation of the UAM fleet management problem. § RELATED WORKS The existing body of work in Urban Air Mobility (UAM) fleet planning has witnessed notable growth, propelling the optimization and learning formalism to advance this emerging concept.
However, a considerable portion of this literature neglects crucial guidelines outlined by the FAA <cit.> concerning UAM airspace integration, particularly in aspects such as air corridors, range/battery constraints, and unforeseen events like route closures due to adverse weather or eVTOL malfunctions <cit.>. Traditional methods, including Integer Linear Programming (ILP) and metaheuristics, prove inadequate for efficiently solving related NP-hard fleet scheduling problems <cit.>. For context, we focus on hour-ahead planning to allow flexibility in adapting to fluctuating demand and route conditions affected by weather and unforeseen aircraft downtime. While learning-based methods have shown promise in generating policies for combinatorial optimization problems with relatable characteristics <cit.>, their current forms often oversimplify complexities or tackle less intricate problem scenarios. In response, our paper introduces a centralized learning-based approach tailored to address the complexities and safety guidelines inherent in UAM fleet scheduling. Our proposed approach involves the creation of a simulation environment that encapsulates these complexities, facilitating the training of a policy network for generating hour-ahead sequential actions for eVTOLs across a generic 12-hour operational day. The policy network integrates a Graph Capsule Convolutional Neural Network (GCACPN) <cit.> for encoding vertiport and eVTOL state information presented as graphs, a Transformer encoder network for assimilating time-series data on demand and fare, and a feedforward network for encoding passenger transportation cost. Additionally, a Multi-head Attention mechanism is employed to fuse the encoded information and problem-specific context, enhancing sequential decision-making <cit.>. § PROBLEM DESCRIPTION AND FORMULATION §.§ UAM Fleet Scheduling as an Integer Non-Linear Programming Problem We begin by posing the UAM fleet scheduling problem as an optimization problem <cit.>, enabling us to use a Genetic Algorithm to generate the expert solution. Consider a UAM network comprising N vertiports and N_K number of eVTOLs, each with a maximum passenger carrying capacity of C(=4). Let V and K denote the set of all vertiports and eVTOLs, respectively, with each vertiport i∈ V capable of accommodating a maximum of C_max^park(=10) number of eVTOLs while also having charging stations for C_max^charge(=6) eVTOLS. Some vertiports might not have charging stations and are called vertistops (V_S⊂ V). We consider there to be 4 air corridors between two vertiports with two corridors for each direction. Each vertiport i ∈ V has an expected take-off delay T_i^TOD that affects every eVTOL taking off, and we consider this estimated take-off delay to be less than 6 minutes at each vertiport. The probability of route closure between any two vertiports i and j is considered to be P_i,j^closure(≤ 0.05) while the probability of an eVTOL k becoming dysfunctional is considered to be P_k^fail(≤ 0.005). Every episode considers a randomly assigned take-off delay (≤ 30 mins) that is drawn from a Gaussian distribution with mean T_i^TOD and a standard deviation of 6 minutes. Additional assumptions that are made regarding the problem setup are: 1) The decision-making is done by a central agent that has access to the full observation of the states of the eVTOLs and the vertiports, and 2) An eVTOL can travel between any two vertiports given it has sufficient battery charge and the resistive loss of batteries is negligible. 
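For reference, the problem constants stated above (together with the fleet size used in the experiments, 8 vertiports and 40 eVTOLs) can be collected into a single configuration object; the sketch below uses hypothetical field names and is only a convenience for later simulation code.

```python
# Hypothetical configuration object summarizing the constants of the problem setup.
from dataclasses import dataclass

@dataclass(frozen=True)
class UAMConfig:
    n_vertiports: int = 8                          # N (simulation setup)
    n_evtols: int = 40                             # N_K (simulation setup)
    seat_capacity: int = 4                         # C
    max_parked: int = 10                           # C_max^park per vertiport
    max_charging: int = 6                          # C_max^charge per vertiport (0 at vertistops)
    corridors_per_pair: int = 4                    # two corridors per direction
    max_expected_takeoff_delay_min: float = 6.0    # upper bound on T_i^TOD
    max_route_closure_prob: float = 0.05           # upper bound on P_ij^closure
    max_evtol_failure_prob: float = 0.005          # upper bound on P_k^fail

cfg = UAMConfig()
print(cfg)
```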
We define R_i,j as the operational cost of transporting a passenger from vertiport i to j, while F_i,j,t^passenger is the price a passenger is charged for traveling from i to j at time t, which is based on passenger demand. We follow the demand model described in our previous work <cit.>. Let Q(i,j,t) represent the demand between vertiports i and j at time t. A subset of vertiports V_B⊂ V are considered to be high-demand vertiports, and we account for two peak hours of operation: 8:00-9:00 am (T^peak1) and 4:00-5:00 pm (T^peak2), such that V_B experience high demands during both peak hours. The demand is high from V-V_B to V_B during T^peak1 and vice-versa during T^peak2. The passenger fare is computed by adding a variable fare F^passenger to a fixed base fare of F^base=$5. The variable fare between two vertiports i and j depends on the passenger demand Q(i,j,t) and the operational cost R_i,j and is computed as F_i,j,t^passenger=R_i,j× Q_factor(i,j,t), where Q_factor(i,j,t)=max(log (Q(i,j,t)/10),1). A constant electricity price, Price^elec=$0.2/kWh, is used <cit.>. The eVTOL vehicle model is considered to be the City Airbus eVTOL <cit.> with a battery model similar to the one described in <cit.>. The aircraft has a maximum cruising speed of 74.5 mph and a passenger seating capacity of 4. The operating cost of this vehicle is $0.64 per mile <cit.>. It has a maximum battery capacity of B_max=110 kWh. Let B_t^k represent the charge left in eVTOL k at time t; if the eVTOL travels from vertiport i to j, then the charge at the next time step is B_t+1^k=B_t^k-B_i,j^charge, where B_i,j^charge is the charge required to travel from i to j. We consider an operating time horizon of T hours (with start time T^start and end time T^end), and the aim of the UAM scheduling problem is to maximize the profits earned during these T hours. This is achieved by smartly assigning (based on the demand, battery charge, operational cost, etc.) a journey to an eVTOL at each decision-making instance t∈ T. A journey is defined as the commute of an eVTOL between two vertiports i and j. If i=j, the eVTOL has to wait for T^wait(=15) minutes at the vertiport before it can be assigned a journey at the next decision-making instance. We assume that each eVTOL can take off at any time within the horizon T, and we consider the schedule for a single-day operation where the operational time begins at 6:00 AM (T^start) and ends at 6:00 PM (T^end). Optimization formulation: Based on the notation described above, the optimization problem is formulated as follows (similar to our previous work <cit.>). For every eVTOL k∈ K, let S_k^jour be the set of journeys taken during the time period T and N_i,l^passengers be the number of passengers transported during trip l∈ S_k^jour by eVTOL k. Let B_l^k be the battery charge of eVTOL k just before its l'th journey, V_k,l^start and V_k,l^end be the respective start and end vertiports of eVTOL k during its l'th journey, and T_k,l^takeoff and T_k,l^landing be the corresponding takeoff and landing times. Then the objective of the optimization problem is to maximize the net profit z, which can be computed by subtracting the operational costs C^O and the cost of electricity C^E from the revenue generated.
These values are computed as: C^O = ∑_k∈ K∑_l∈ S_k^jour N_i,l^passengers× R_i,j, i=V_k,l^start, j=V_k,l^end C^E = ∑_k∈ K∑_l∈ S_k^jour Price^elec× B_i,j^charge, i=V_k,l^start, j=V_k,l^end Revenue = ∑_k∈ K∑_l∈ S_k^jour N_i,l^passengers× F_i,j,t^passenger, i=V_k,l^start, j=V_k,l^end Therefore, the objective function can be formulated as: max z=Revenue-C^O-C^E with the following constraints: N_i,l^passengers=min(C,Q_act(i,j,t)), i=V_k,l^start, j=V_k,l^end, t=T_k,l^takeoff, ∀ l∈ S_k^jour B_T_k,l^takeoff^k>B_i,j^charge, i=V_k,l^start, j=V_k,l^end, ∀ k∈ K, ∀ l∈ S_k^jour C_i,t^park≤ C_max^park, ∀ i∈ V, ∀ t ∈ [T^start,T^end] V_k,l^end≠ i if A_V_k,l^start,i=0, ∀ i∈ V, ∀ k ∈ K, ∀ l ∈ S_k^jour where Q_act(i,j,t) is the actual demand, and A is a matrix that represents the availability of routes between vertiports: A_i,j=A_j,i=1 if the route between i and j is open, and A_i,j=A_j,i=0 if the route is closed. §.§ Generating the Expert Demonstrations Given the INLP problem formulation in Sec.<ref>, we use an elitist Genetic Algorithm (GA) to generate the expert solutions for a small number of eVTOLs (N_K = 40) and vertiports (N = 8). The GA uses a population size of 100, a maximum of 100 iterations, a mutation probability of 0.1, an elite ratio of 0.01, and a cross-over probability of 0.5. The GA is implemented in batches of 60 decision variables, which are then simulated sequentially and the objective functions are computed. Upon simulating a batch of the GA solution, we obtain an updated state of the environment, which is then used to compute the next 60 decision variables. This process is repeated until the episode is over. Each decision variable takes an integer value between 1 and N. We generate the GA solutions for 100 different scenarios (for each scenario, the locations of the vertiports stay fixed), which are then used as expert demonstrations for the imitation learning algorithm.
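As an illustration of how the fare rule and the objective z are evaluated for a candidate schedule, the following minimal Python sketch accumulates revenue, operational cost, and electricity cost over the journeys of each eVTOL; the data structures and helper names are hypothetical stand-ins for the simulator's internal state, not the authors' implementation.

```python
import math
from typing import NamedTuple, List, Dict

F_BASE = 5.0        # fixed base fare ($)
PRICE_ELEC = 0.2    # electricity price ($/kWh)

def q_factor(demand: float) -> float:
    """Demand multiplier Q_factor(i,j,t) = max(log(Q/10), 1)."""
    return max(math.log(demand / 10.0), 1.0)

def passenger_fare(op_cost_ij: float, demand_ijt: float) -> float:
    """Ticket price: base fare plus the demand-dependent variable fare R_ij * Q_factor."""
    return F_BASE + op_cost_ij * q_factor(demand_ijt)

class Journey(NamedTuple):
    passengers: int    # N^passengers for this trip (capped at C and the actual demand)
    fare: float        # variable fare F^passenger charged per passenger
    op_cost: float     # operational cost R_ij per passenger
    charge_kwh: float  # battery charge B_ij^charge consumed

def episode_profit(schedule: Dict[int, List[Journey]]) -> float:
    """Objective z = Revenue - C^O - C^E, summed over eVTOLs k and journeys l."""
    revenue = op_cost = elec_cost = 0.0
    for journeys in schedule.values():   # loop over eVTOLs k
        for j in journeys:               # loop over journeys l in S_k^jour
            revenue += j.passengers * j.fare
            op_cost += j.passengers * j.op_cost
            elec_cost += PRICE_ELEC * j.charge_kwh
    return revenue - op_cost - elec_cost
```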
§.§ MDP Formulation Having expressed the UAM fleet scheduling problem as an INLP problem, we now define it as a Markov Decision Process (MDP) that sequentially computes the action for each eVTOL at the decision-making time instant t∈ T. The state, action, reward, and transitions are described below: State Space: The state space at time t consists of information about 1) the vertiports, represented as a graph G_V, 2) the eVTOLs, represented as a graph G_e, 3) the passenger demand model Q, 4) the passenger fare F^passenger, 5) the operational costs R (computed based on the per-mile operational cost of the eVTOL, the passenger demands, and the price of electricity), and 6) the times at which it is safe to launch an eVTOL into the corridors, T^cor∈ℝ^N× N× 2. At any decision-making time step, the state information is processed by a transformer-aided multi-head-attention graph capsule convolutional neural network (presented in <cit.>) that acts as the policy network for the imitation learning algorithm. Details about the policy network are discussed in Sec.<ref>. Further details on the state space are as follows: Graph Representation of the Vertiport Network: G_V=(V_v, E_v, A_v) represents the vertiport network as a graph, where V_v (=V) is the set of vertiports, E_v represents the set of edges/routes between the vertiports, and A_v represents the adjacency matrix of the graph. To account for the route closure probability, a weighted adjacency matrix is computed as A_v = (1_N × N - P^closure) × A. The node properties δ^t_i of vertiport i∈ V_v at time t are defined as δ_i^t=[x_i,y_i,C_i,t^park, T_i^charge, T^TOD_i, I_i^vstop], where (x_i,y_i) are the x-y coordinates of the vertiport, C_i,t^park is the number of eVTOLs currently parked, T_i^charge is the earliest time when a charging station will be free, T^TOD_i is the expected take-off delay, and I_i^vstop is a variable that takes the value 1 if the node is a vertistop and 0 if the node is a vertiport. Graph Representation of the eVTOLs: G_e = (V_e, E_e, A_e) represents the graph that encodes the information about the eVTOL network. V_e represents the set of eVTOLs, E_e represents the set of edges, and A_e represents the adjacency matrix. G_e is considered to be fully connected, and each eVTOL k ∈ V_e is represented by its properties, i.e., ψ_k^t = [ x_k^d, y_k^d, B_t^k, T_k^flight, T_k^dec, P_k^fail], ψ_k^t ∈ℝ^6. Here, (x_k^d, y_k^d) represent the coordinates of the destination vertiport, B_t^k is the current battery level, T_k^flight is the next flight time, T_k^dec is the next decision-making time, and P_k^fail is the probability of failure. Action Space: At each decision-making time instance, each agent takes an action from the available action space. The action space consists of all the available vertiports; therefore, the action space is of size N. During a decision-making step, if an agent chooses the vertiport at which it currently is, then it waits for 15 minutes at that vertiport until it makes a new decision. Reward: We consider a delayed reward function where the agent gets a reward only at the end of an episode. The reward is the ratio of the profit earned to the maximum possible profit in an episode, i.e., ∑_i∈ V,j∈ V,t∈[T^start,T^end](Q(i,j,t)× F^passenger_i,j,t). Transition Dynamics: Since demand and electricity pricing can vary from the forecasted values, the state transitions are considered stochastic. The transitions are event-triggered: an event is defined as the condition that an eVTOL is ready for takeoff. Environmental uncertainties beyond those modeled above and communication issues (and thus partial observability) are not considered in this paper. § PROPOSED GRAPH-BASED ADVERSARIAL IMITATION LEARNING APPROACH We propose an adversarial imitation learning formulation to train a transformer-aided GNN-based policy network called CapTAIN <cit.> that assigns actions to the eVTOLs. The policy network takes the state information from all the eVTOLs and vertiports as input and assigns an action to the eVTOLs that are ready for take-off. The policy is trained using the generative adversarial imitation learning (GAIL) <cit.> algorithm, which uses the solutions generated by the GA as the expert demonstrations. The following subsections present further information about the GAIL algorithm and the state encoding and decoding. §.§ Policy Network We use the Capsule Transformer Attention-mechanism Integrated Network (CapTAIN) <cit.> as the policy network for GAIL. CapTAIN combines graph neural networks and transformers to encode the state-space information and uses a multi-head attention-based decoder to generate the actions. The information from the vertiports and eVTOLs (G_V and G_e) is first passed through a Graph Capsule Convolutional Network (GCAPCN), introduced in <cit.>, which takes the graphs as inputs and generates the feature embeddings for the vertiports and eVTOLs.
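Before being fed to the GCAPCN encoder, the two graph inputs can be assembled as in the following NumPy sketch; the array layouts and function names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def vertiport_graph(xy, parked, t_charge, tod, is_vstop, adjacency, p_closure):
    """Stack node features delta_i^t and weight the adjacency by route-closure risk."""
    node_feats = np.column_stack([xy[:, 0], xy[:, 1], parked, t_charge, tod, is_vstop])
    weighted_adj = (1.0 - p_closure) * adjacency   # down-weight risky routes
    return node_feats, weighted_adj

def evtol_graph(dest_xy, battery, t_flight, t_dec, p_fail):
    """Stack eVTOL features psi_k^t; the eVTOL graph is fully connected."""
    n_k = len(battery)
    node_feats = np.column_stack([dest_xy[:, 0], dest_xy[:, 1], battery, t_flight, t_dec, p_fail])
    adjacency = np.ones((n_k, n_k)) - np.eye(n_k)
    return node_feats, adjacency
```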
In parallel, a transformer architecture is used to compute learnable feature vectors for the passenger demand model and the passenger fares (information that can be represented as time-series data). Additionally, a simple feedforward network computes the embeddings for the passenger transportation cost R, which can be represented as an N× N matrix, and another feedforward network processes the corridor availability. We keep track of the time at which it is safe for a new eVTOL to enter a corridor, subject to various safety restrictions and minimum separation, using an N× N× 2 tensor, T^cor, which is flattened and fed into the feedforward network. The outputs from the transformer, the GCAPCN, and the feedforward networks form the encoded embedding, which is then passed through a multi-head attention <cit.> decoder to generate the probabilities of choosing each action for the decision-making agent. Further technical details about CapTAIN are discussed in <cit.>. §.§ Training with Generative Adversarial Imitation Learning The policy network π is trained on the expert demonstrations generated by the Genetic Algorithm using Generative Adversarial Imitation Learning (GAIL), an imitation learning approach introduced in <cit.>. The policy is learned through a two-player zero-sum game, which can be defined as: min_πmax_D 𝔼_π[logD(s,a)]+ 𝔼_π_E[log(1-D(s,a))]-λ H(π), where π_E is the expert policy underlying the demonstrations and D is a discriminator that solves a binary classification problem D:𝒮×𝒜→ (0,1), where 𝒮 represents the state space and 𝒜 represents the action space. D tries to distinguish the distribution of data generated by π from the expert data generated by π_E. When D cannot distinguish data generated by the policy π from the true data, then π has successfully matched the true data. H(π)=𝔼_π[ -logπ(a|s)] is the entropy. Figure <ref> shows the overall learning framework. The discriminator (approximated by a neural network) tries to maximize the objective, while the policy π tries to minimize it. GAIL alternates between an Adam <cit.> gradient step on the parameters w of D_w with the gradient: 𝔼̂_τ_i[∇_wlogD_w(s,a)]+𝔼̂_𝒟[∇_wlog(1-D_w(s,a))] and a Trust Region Policy Optimization (TRPO) <cit.> gradient step on the parameters θ of π_θ, which minimizes the cost function c(s, a) = log D_w_i+1(s, a) with the gradient: 𝔼̂_τ_i[∇_θlogπ_θ(a|s)Q(s,a)]-λ∇_θ H(π_θ).
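To make the alternating updates above concrete, the following self-contained PyTorch sketch shows a generic discriminator step and the shaped cost passed to the policy optimizer; it is a simplified stand-in (an MLP discriminator trained with Adam, with the policy update delegated to an external TRPO/PPO routine), not the CapTAIN training code.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """D_w : (s, a) -> (0, 1), approximated by a small MLP."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def discriminator_step(disc, opt, policy_batch, expert_batch):
    """One Adam ascent step on E_pi[log D] + E_expert[log(1 - D)]."""
    d_pi = disc(*policy_batch)      # (s, a) samples from the current policy pi
    d_exp = disc(*expert_batch)     # (s, a) samples from the GA expert demonstrations
    loss = -(torch.log(d_pi + 1e-8).mean() + torch.log(1.0 - d_exp + 1e-8).mean())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def policy_cost(disc, obs, act):
    """Cost c(s, a) = log D(s, a); the policy optimizer (TRPO/PPO) minimizes it."""
    with torch.no_grad():
        return torch.log(disc(obs, act) + 1e-8)

disc = Discriminator(obs_dim=32, act_dim=8)              # hypothetical dimensions
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
```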
§ EXPERIMENTS AND RESULTS §.§ Simulation Details We use the simulation environment presented in <cit.>, which is implemented in Python and uses the OpenAI Gym environment interface. We consider a hypothetical city covering an area of 50×50 sq. miles with 8 vertiports (2 of which are vertistops) and 40 eVTOLs. The locations of the vertiports remain the same throughout training and testing the algorithm. The rest of the operational details stay the same, as discussed above. To train the CapTAIN policy network, we use the GAIL implementation from imitation <cit.>, a Python package for imitation learning algorithms, and PPO <cit.> from stable-baselines3 <cit.>. §.§ Training Details We begin by training the CapTAIN policy using just PPO, for 1.5 million steps. Next, we generate the expert demonstrations for 100 scenarios (unseen by CapTAIN during training) using a standard elitist Genetic Algorithm (GA) (with the parameters described in Sec.<ref>) and compare the performance of CapTAIN against GA. These test scenarios are generated by fixing the seed values for the random number generators in the simulation. Finally, we train the imitation learning policy (CapTAIN-GAIL) on the expert demonstrations, with a soft start obtained by beginning the training with the pre-trained weights from CapTAIN, and compare the performance of CapTAIN-GAIL against CapTAIN and GA. All the training, experiments, and evaluations are computed on a workstation running Ubuntu 22.04.4 LTS and equipped with an Intel Core i9-12900K processor and an Nvidia GeForce RTX 3080 Ti graphics processor. Further evaluation details and results are discussed in the sections ahead. §.§ Comparing CapTAIN vs. Genetic Algorithm We begin by testing the performance of a policy trained purely using reinforcement learning (i.e., CapTAIN trained using PPO) against GA. The daily profit earned by both algorithms in each test scenario is used to compare their performance, and Fig.<ref> plots the difference in the profits earned by GA and CapTAIN in each of the 100 test scenarios. It can be seen that GA generates more profit in the majority of scenarios (GA performs better in 66 out of 100 cases), with GA generating an average profit of $17603 while CapTAIN generates an average profit of $15727. To test the significance of this difference, we perform a statistical T-test with the null hypothesis being that both methods generate the same amount of profit. The p-value of the test turns out to be 3.08× 10^-5(<0.05), which means that GA has a statistically significant advantage over CapTAIN. This motivates the use of imitation learning for training a policy that can mimic GA while being significantly more efficient than GA in terms of computational time. §.§ Evaluating CapTAIN-GAIL We train the imitation learning policy with a soft start, i.e., we use the pre-trained CapTAIN policy from Sec.<ref> as the initial policy for GAIL. The generator policy in GAIL is trained for 20 iterations with 20000 steps in each iteration. The trained imitation learning policy is then tested on the 100 test cases, and its performance is compared against CapTAIN and GA. We further test the generalizability of CapTAIN-GAIL on a new set of 100 unseen test cases that were not a part of the expert demonstrations. §.§.§ Average Profits Fig.<ref> plots the average profits earned by all three methods in the test scenarios that were a part of the expert demonstrations. It can be noticed that CapTAIN generates the least profit, while CapTAIN-GAIL and GA earn almost equal profits on average. A more detailed analysis of the performance of the three methods is discussed below. §.§.§ CapTAIN-GAIL vs. CapTAIN Fig.<ref> compares the difference in the profits earned by CapTAIN-GAIL and CapTAIN. CapTAIN-GAIL generates more profit than CapTAIN in 73 out of 100 cases, with the average profit of CapTAIN-GAIL being $17587 while the average profit of CapTAIN is $15727. To statistically evaluate the significance of this difference, we conduct a T-test with the null hypothesis being that both algorithms generate equal average profits. With a p-value of 6.55× 10^-5(<0.05), we have statistical evidence that CapTAIN-GAIL performs significantly better than CapTAIN. §.§.§ CapTAIN-GAIL vs. GA Next, we evaluate the performance of CapTAIN-GAIL against GA. With Fig.<ref> plotting the difference in the profits earned by these two methods, we can see that CapTAIN-GAIL performs better than GA in 55 out of the 100 cases, which is roughly half the number of test cases. The average profit earned by GA is $17603 while the average profit earned by CapTAIN-GAIL is $17587.
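The significance tests reported in this section can be reproduced with a few lines of SciPy; the sketch below assumes the per-scenario profits are stored in two arrays and uses a paired t-test, since each method is evaluated on the same 100 scenarios (the exact test variant used here is not stated, so this is an assumption), and the file names are hypothetical.

```python
import numpy as np
from scipy import stats

# Daily profits of two methods over the same 100 test scenarios (hypothetical files)
profits_ga = np.load("profits_ga.npy")
profits_captain = np.load("profits_captain.npy")

t_stat, p_value = stats.ttest_rel(profits_ga, profits_captain)
print(f"mean GA = {profits_ga.mean():.0f}, mean CapTAIN = {profits_captain.mean():.0f}")
print(f"paired t-test p-value = {p_value:.2e}")   # reject H0 of equal means if p < 0.05
```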
Upon performing a T-test with the null hypothesis being that both methods earn the same average profit, we get a p-value of 0.97(>0.05), which indicates that, statistically, both methods have similar performance. This result matches our expectations, since the imitation learning policy should at least perform similarly to the expert when tested on the expert demonstrations. Even though CapTAIN-GAIL and GA perform similarly in terms of the average profits they generate, it should be noted in Fig.<ref> that the differences in the profit values are quite large. This indicates that in the cases where GA performs better, it outperforms CapTAIN-GAIL by a large margin, while the opposite is also true, i.e., CapTAIN-GAIL outperforms GA by a large margin in the cases where it performs better. One would ideally expect the difference in the profits to be close to 0, given that both methods generate similar amounts of profit, and this discrepancy in the performance of the two algorithms will remain a subject of future research. §.§.§ Generalizability Analysis To test the generalizability of CapTAIN-GAIL, we test it on a new set of 100 unseen scenarios that were not a part of the expert demonstrations on which it was trained, and compare its performance against CapTAIN. Fig.<ref> plots the difference in the profits earned by CapTAIN-GAIL and CapTAIN. When compared to Fig.<ref>, we notice that CapTAIN-GAIL shows improvement in the cases where CapTAIN previously generated more profit (which can be seen in the reduced negative values in Fig.<ref>). Further analyzing the mean profits earned by both methods in the unseen scenarios (as shown in Fig.<ref>) and comparing them to Fig.<ref>, we notice that CapTAIN-GAIL achieves better mean performance and remarkably better bounding of the unseen worst-case scenarios (seen in the reduced standard deviation between Fig.<ref> and Fig.<ref>) when compared to CapTAIN. In the unseen scenarios, CapTAIN-GAIL generates an average profit of $17813, as compared to $17587 in the expert demonstration cases, while CapTAIN generates $15902. Additionally, the standard deviation of the profits earned goes down from $3363 in the expert demonstration cases to $3161 in the unseen cases. This shows that CapTAIN-GAIL generalizes across scenarios that are unseen in the expert demonstrations. Conducting a T-test with the null hypothesis being that both methods generate the same average profits gives a p-value of 3.09× 10^-5(<0.05), which means that CapTAIN-GAIL performs statistically better than CapTAIN. §.§.§ Further Analysis To further analyze the performance differences among the three methods, we look at their computation times and track the average number of idle decisions and flight decisions taken by the three methods. The computing time for GA is calculated by adding up the total time required to generate the solutions for an entire episode. Similarly, the computation times for CapTAIN and CapTAIN-GAIL are calculated based on the total forward-propagation time for the entire episode. It was found that GA took an average of 783.48 ± 267.54 seconds per episode, while CapTAIN and CapTAIN-GAIL took 2.57 ± 2.24 seconds and 1.86 ± 2.25 seconds, respectively. Thus, there is a significant advantage in using a policy trained with imitation learning, compared to an optimization solver like GA, in terms of the computational time required.
Additionally, Fig.<ref> shows the average idle decisions and flight decisions taken by all three algorithms in the scenarios present in the expert demonstrations. It can be noted that GA takes the most flight decisions and the fewest idle decisions, whereas CapTAIN takes the fewest flight decisions and the most idle decisions. Meanwhile, the decisions taken by CapTAIN-GAIL closely resemble those taken by GA, which is expected given that we have already seen that CapTAIN-GAIL performs similarly to GA. In comparison, Fig.<ref> shows the decisions taken by CapTAIN-GAIL and CapTAIN in the unseen test cases. It can be noted that the decisions remain fairly similar for both algorithms. Additionally, the number of outliers in the unseen cases for CapTAIN-GAIL is lower than that for the expert demonstration cases. On average, CapTAIN-GAIL takes 51.84 idle decisions and 187.29 flight decisions in the seen cases and 52.36 idle decisions and 189.48 flight decisions in the unseen cases, while CapTAIN takes 72.11 idle decisions and 179.96 flight decisions in the seen cases and 71.8 idle decisions and 178.39 flight decisions in the unseen cases. This further demonstrates the generalizability of CapTAIN-GAIL. § CONCLUSION AND FUTURE WORK In this paper, we developed CapTAIN-GAIL, a graph neural network based policy for Urban Air Mobility fleet scheduling, which presents itself as a challenging combinatorial optimization problem. CapTAIN-GAIL is trained using generative adversarial imitation learning (GAIL), with (offline) expert demonstrations generated by a Genetic Algorithm or GA (which is impractical to use directly/online due to its high computational cost). While prior work had shown that graph RL is uniquely capable of learning policies with performance clearly better than standard RL or other online approaches, there is still a significant optimality gap when compared with partly converged results from a global optimizer such as GA. The imitation learning extension presented here (CapTAIN-GAIL) was found to successfully reduce this optimality gap, with the initial policy seeded by the earlier CapTAIN results. Our case studies involved scheduling the flights of 40 aircraft across 8 vertiports. When tested over 100 seen scenarios, CapTAIN-GAIL generated solutions with performance that is statistically similar to the solutions generated by the GA. Across both seen (during training) and unseen scenarios, the new CapTAIN-GAIL results (in terms of the profit metric) are found to be significantly better than those of CapTAIN, thereby supporting our hypothesis that taking an imitation learning approach reduces the optimality gap and improves performance. Notably, CapTAIN-GAIL learned to take more flight decisions (rather than staying idle) for the UAM aircraft compared to CapTAIN, which might be partly responsible for the improved performance. Interestingly, when comparing the performance of CapTAIN-GAIL against GA, for the small set of cases where their performance differed, the difference was quite high, the cause of which needs to be further explored in the future. Moreover, to allow more efficient use of costly expert demonstrations from GA, future work should look at approaches that can identify in situ when and where imitation is needed, similar to the concept of adaptive sequential sampling in optimization, as opposed to pre-computing a fixed set of expert demonstrations.
§ ACKNOWLEDGMENTS This work was supported by the Office of Naval Research (ONR) award N00014-21-1-2530 and the National Science Foundation (NSF) award CMMI 2048020. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the ONR or the NSF.
http://arxiv.org/abs/2407.13213v1
20240718065350
Leveraging Machine Learning for High-Dimensional Option Pricing within the Uncertain Volatility Model
[ "Ludovic Goudenege", "Andrea Molent", "Antonino Zanette" ]
q-fin.CP
[ "q-fin.CP" ]
Abstract This paper explores the application of Machine Learning techniques for pricing high-dimensional options within the framework of the Uncertain Volatility Model (UVM). The UVM is a robust framework that accounts for the inherent unpredictability of market volatility by setting upper and lower bounds on volatility and the correlation among underlying assets. By leveraging historical data and extreme values of estimated volatilities and correlations, the model establishes a confidence interval for future volatility and correlations, thus providing a more realistic approach to option pricing. By integrating advanced Machine Learning algorithms, we aim to enhance the accuracy and efficiency of option pricing under the UVM, especially when the option price depends on a large number of variables, such as in basket or path-dependent options. Our approach evolves backward in time, dynamically selecting at each time step the most expensive volatility and correlation for each market state. Specifically, it identifies the particular values of volatility and correlation that maximize the expected option value at the next time step. This is achieved through the use of Gaussian Process regression, the computation of expectations via a single step of a multidimensional tree and the Sequential Quadratic Programming optimization algorithm. The numerical results demonstrate that the proposed approach can significantly improve the precision of option pricing and risk management strategies compared with methods already in the literature, particularly in high-dimensional contexts. Keywords: UVM, option pricing, GPR, Tree method, SQP. § INTRODUCTION The Uncertain Volatility Model represents a significant advancement in financial modeling, specifically for the pricing and hedging of derivative securities in markets characterized by uncertain volatility. Developed initially by Avellaneda et al. <cit.>, the UVM addresses the limitations of traditional models like the Black-Scholes, which assume constant volatility over the life of an option. Instead, the UVM assumes that volatility fluctuates within a known range, providing a more realistic framework for financial markets where volatility is inherently unpredictable. The core idea of the UVM is to set upper and lower bounds on volatility, denoted as σ^min and σ^max. These bounds can be inferred from historical data or extreme values of implied volatilities from liquid options. The model uses these bounds to establish a confidence interval within which the future volatility is expected to lie. This approach acknowledges the inherent uncertainty in predicting future volatility and provides a robust framework for pricing options under this uncertainty <cit.>. One of the seminal contributions of Avellaneda et al. <cit.> was the derivation of the Black-Scholes-Barenblatt (BSB) equation, a non-linear partial differential equation (PDE) that extends the classical Black-Scholes equation by incorporating the variable volatility bounds. This PDE dynamically selects the pricing volatility from the two extreme values based on the convexity of the value function. For instance, if the second derivative (Gamma) of the option price is positive, the model uses σ^max, otherwise, it uses σ^min.
The UVM framework is particularly effective for superhedging strategies, where traders aim to hedge against the worst-case scenarios of volatility. By ensuring that the option pricing incorporates the highest potential volatility, traders can guarantee non-negative profit and loss outcomes even when actual realized volatility deviates significantly from the expected values. This is especially useful in markets with high volatility or during periods of market stress, as pointed out by Martini and Jacquier <cit.>. In addition to its theoretical foundations, the UVM has practical implications for risk management. Pooley et al. <cit.> provided a detailed numerical analysis of the UVM, highlighting its advantages in constructing efficient hedges and managing derivatives positions in environments with uncertain volatility. Their work emphasized the importance of diversification in managing volatility risk and provided algorithms for implementing the UVM in real-world trading systems. Several extensions and practical applications of the UVM have been explored in subsequent research. For instance, Windcliff et al. <cit.> use methods based on partial differential equations (PDEs) to evaluate cliquet options in the context of the uncertain volatility model. The paper explores how to solve the PDEs associated with these financial instruments by comparing different numerical and grid construction techniques. It is shown that the PDE-based approach is effective in capturing the complexities of the UVM model, providing a more accurate evaluation of cliquet options than other traditional methods. Martini and Jacquier <cit.> elaborated on the superhedging strategies within the UVM framework, discussing the implications for derivative desks where risk aversion is a critical factor. They demonstrated that the UVM could be applied to a range of option types, including those with non-convex payoffs, by employing more sophisticated numerical methods such as trinomial trees and finite-difference schemes. More recently, Guyon and Labordère <cit.> introduced two innovative Monte Carlo methods to price financial derivatives under the UVM. These methods address the challenges posed by high dimensionality, where traditional finite-difference schemes fall short. Moreover, the paper considers a path-dependent option, which is particularly useful in the context of the UVM. This is beneficial because path-dependent options can capture the effects of the asset's historical prices, providing a more accurate valuation under uncertainty and aligning well with the UVM's focus on modeling worst-case scenarios. Although one may appreciate the innovative approach presented in the paper, it is worth noting that the numerical results are occasionally suboptimal, and the authors do not extend their numerical analysis to derivatives with more than two underlyings. In this paper, we propose an innovative approach to the problem of derivative evaluation within the UVM. Specifically, the proposed method is based on the GPR-Tree pricing algorithm, which we introduced in a previous paper <cit.> for the valuation of options dependent on multiple underlyings. Consequently, we refer to this method as GTU, which stands for GPR-Tree method for UVM. Our algorithm involves calculating the price in the UVM by working backward in time and determining, for selected market states, the maximum price by leveraging an enhanced tree method and a suitable optimization algorithm.
This calculation accounts for varying volatilities and correlations between the underlyings. The obtained prices for specific typical market states are then generalized to all possible market conditions by using Gaussian Process Regression (GPR), a powerful machine learning technique for multidimensional regression from scattered data. The proposed method not only serves as a viable alternative to existing methods for handling problems in lower dimensions but also demonstrates exceptional capability in valuing derivatives in high-dimensional settings, including scenarios with dozens of underlyings. To the best of our knowledge, this is the first instance where such a high-dimensional problem has been addressed effectively. Moreover, the proposed method is particularly effective for problems where the high dimension arises not from the number of underlyings, but from the impact of the entire process history on the derivative's final value. This is especially relevant for path-dependent options, which our algorithm can evaluate proficiently. The paper includes the results of several numerical experiments, which show that the GTU algorithm is reliable and fast. In particular, we observe that the results returned by GTU are very stable and very close to the benchmark prices when these are available. The remainder of the paper is organized as follows. Section 2 introduces the UVM and the pricing principles relevant to this context. Section 3 reviews the fundamentals of the GPR-Tree algorithm. Section 4 demonstrates the evolution of the GPR-Tree algorithm into the proposed GTU method. Section 5 provides detailed numerical results from key experiments. Finally, Section 6 concludes the paper. § THE MULTIDIMENSIONAL UNCERTAIN VOLATILITY MODEL Let 𝐒=(𝐒_t)_t∈[0,T] denote the d-dimensional underlying process, which evolves randomly according to the multidimensional Black-Scholes model. Under the risk-neutral probability measure ℚ, the dynamics of this model are given by the stochastic differential equation: dS_t^i=(r-η_i)S_t^i dt+σ_iS_t^i dW_t^i, i=1,…,d, where 𝐒_0=(s_0^1,…,s_0^d)∈ℝ_+^d is the initial spot price vector, r is the constant risk-free interest rate, η=(η_1,…,η_d)^⊤ is the vector of dividend yields, σ=(σ_1,…,σ_d)^⊤ is the vector of volatilities, 𝐖 is a d-dimensional correlated Wiener process, and ρ_i,j is the instantaneous correlation coefficient between W^i and W^j. We also term {ℱ_t} _t≥0 the natural filtration generated by 𝐖. Equation (<ref>) can be rewritten as dS_t^i=S_t^i((r-η_i)dt+σ_iΣ_id𝐁_t), where 𝐁 is a d-dimensional uncorrelated Wiener process and Σ_i is the i-th row of the matrix Σ=Σ(ρ_1,2,…,ρ_d-1,d), defined as a square root of the correlation matrix Γ, given by Γ=Γ(ρ_1,2,…,ρ_d-1,d)=[ 1 ρ_1,2 ρ_1,d; ρ_1,2 1 ⋱ ⋮; ⋮ ⋱ ⋱ ρ_d-1,d; ρ_1,d ρ_d-1,d 1 ]. Moreover, one can write 𝐒_t=𝐒_0∘exp((r-η-σ^2/2)t+σ∘(Σ𝐁_t)), with 𝐯∘𝐰 the Hadamard product between the vectors 𝐯 and 𝐰, that is, the element-wise product, and with the exponential function in (<ref>) applied to each component of the vector (r-η-σ^2/2)t+σ∘(Σ𝐁_t).
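As an illustration of the lognormal map above, the following NumPy sketch simulates S_t from correlated Brownian increments under assumed inputs; it is not code from this paper, whose implementation is in MATLAB.

```python
import numpy as np
from scipy.linalg import sqrtm

def simulate_S(S0, r, eta, sigma, Gamma, t, n_paths, rng):
    """Draw n_paths samples of S_t = S_0 o exp((r - eta - sigma^2/2) t + sigma o (Sigma B_t))."""
    d = len(S0)
    Sigma = np.real(sqrtm(Gamma))                          # a square root of the correlation matrix
    B_t = np.sqrt(t) * rng.standard_normal((n_paths, d))   # uncorrelated Brownian values at time t
    drift = (r - eta - 0.5 * sigma**2) * t
    diffusion = sigma * (B_t @ Sigma.T)                    # sigma o (Sigma B_t), row by row
    return S0 * np.exp(drift + diffusion)

rng = np.random.default_rng(0)
Gamma = np.array([[1.0, 0.5], [0.5, 1.0]])
S_T = simulate_S(np.array([100.0, 100.0]), r=0.05, eta=np.zeros(2),
                 sigma=np.array([0.2, 0.3]), Gamma=Gamma, t=1.0, n_paths=10000, rng=rng)
```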
Instead of assuming a single constant volatility, the UVM considers a range of possible volatilities. Specifically, the vector of volatilities σ is not known, but it can change over time as a function of the time and of the underlyings, so we write σ=σ(t,𝐒_t)=(σ_1(t,𝐒_t),…,σ_d(t,𝐒_t))^⊤. Moreover, the explicit form of σ(t,𝐒_t) is not specified in the model; we only know that, for each i=1,…,d, there exist two bounds for σ_i, that is σ_i^min≤σ_i(t,𝐒_t)≤σ_i^max. Such a degree of uncertainty can be extended to the correlations between the Wiener processes. Specifically, ρ_i,j=ρ_i,j(t,𝐒_t) may change over time based on the underlying values, but the particular law is not a model datum. Instead, we only know that, for each couple of indices i=1,…,d-1 and j=i+1,…,d, there exist two bounds for ρ_i,j, that is ρ_i,j^min≤ρ_i,j(t,𝐒_t)≤ρ_i,j^max. To summarise, in the UVM, the dynamics of each underlying S^i can be written as dS_t^i=(r-η_i)S_t^i dt+σ_i(t,𝐒_t)S_t^i dW_t^i, i=1,…,d, with σ_i(t,𝐒_t):[0,T]×ℝ_+^d→[σ_i^min,σ_i^max] a function. For every couple of indices i=1,…,d-1 and j=i+1,…,d we have d⟨ W^i,W^j⟩_t =ρ_i,j(t,𝐒_t) dt, with ρ_i,j(t,𝐒_t):[0,T]×ℝ_+^d→[ρ_i,j^min,ρ_i,j^max] a function. Finally, in order to preserve the interpretability of the ρ_i,j coefficients as correlations, we require the matrix Γ(t,𝐒_t)=Γ(ρ_1,2(t,𝐒_t),…,ρ_d-1,d(t,𝐒_t)) to be positive semidefinite, which we denote by Γ(t,𝐒_t)≽0. The price of a financial derivative in the UVM can be formulated as a stochastic control problem. To this aim, we denote by P=P(t,𝐒_t)=(σ_1(t,𝐒_t),…,σ_d(t,𝐒_t),ρ_1,2(t,𝐒_t),…,ρ_d-1,d(t,𝐒_t))^⊤ the vector of all uncertain model parameters as a function of t and 𝐒_t, and by 𝒜 the set of all admissible values of P, that is, those which satisfy relations (<ref>), (<ref>), and (<ref>). In the UVM, the pricing of derivatives is approached as an optimization problem due to the uncertainty in the volatility and correlation of the underlying assets. The model aims to find the worst-case scenario within this range to ensure robust and conservative pricing. Specifically, we seek to maximize the expected payoff of the derivative over all admissible scenarios for the volatilities and correlations, that is, over all P in 𝒜. Let V(t,𝐒_t) represent the value of the derivative at time t and asset price 𝐒_t in the UVM. The pricing problem is then given by: V(t,𝐒_t)=sup_P∈𝒜𝔼[e^-r(T-t)Ψ(𝐒_T)|𝐒_t], where Ψ is the payoff function at maturity T, the dynamics of 𝐒_t are given by (<ref>) for P∈𝒜, and 𝔼 denotes the expectation under the risk-neutral measure ℚ. This optimization ensures that the derivative's price accounts for the most adverse volatility-correlation scenario, providing a conservative estimate that is robust to volatility fluctuations. By solving this maximization problem, we obtain a price that safeguards against the uncertainty in the market, making it a reliable valuation method under uncertain volatility conditions. § THE GPR-TREE METHOD The proposed GTU method is based on the GPR-Tree algorithm. Here we give some details about this algorithm and refer the interested reader to <cit.> for further details. The GPR-Tree algorithm exploits Gaussian Process Regression and a multidimensional binomial tree to compute the price of multidimensional American options. GPR is a non-parametric, kernel-based probabilistic model used for regression tasks. It is particularly useful for making predictions about complex datasets in high dimensions. GPR models the observations as samples from a Gaussian process, which provides a probabilistic framework for prediction. Let X={𝐱_p,p=1,…,P}⊂ℝ^d be a set of d-dimensional predictors and Y={ y_p,p=1,…,P}⊂ℝ be the corresponding set of scalar outputs. These observations are modeled as: y_p=f(𝐱_p)+ϵ_p, where f is a Gaussian process and ϵ_p is Gaussian noise with variance σ_n^2. A Gaussian process f(𝐱)∼𝒢𝒫(m(𝐱),k(𝐱,𝐱')) is fully specified by its mean function m(𝐱) and covariance function k(𝐱,𝐱').
The mean function is often assumed to be zero, m(𝐱)=0, and the covariance function (kernel) defines the relationship between different points in the input space. A common choice for the kernel is the Matern 3/2 (M3/2) kernel, given by: k_M3/2(𝐱,𝐱')=σ_f^2(1+√(3)/σ_l‖𝐱-𝐱'‖_2)exp(-√(3)/σ_l‖𝐱-𝐱'‖_2), where σ_f^2 is the signal variance and σ_l is the length scale. Given a new input 𝐱^*∈ℝ^d, the predicted mean and variance of the corresponding output are given by: μ_*=K(𝐱^*,X)[K(X,X)+σ_n^2I_P]^-1Y, σ_*^2=K(𝐱^*,𝐱^*)-K(𝐱^*,X)[K(X,X)+σ_n^2I_P]^-1K(X,𝐱^*), where K(X,X), with K(X,X)_i,j=k(𝐱_i,𝐱_j), is the covariance matrix computed using the kernel function for all pairs of training inputs, σ_n^2 is the noise variance and I_P is the P× P identity matrix. The parameters σ_f, σ_l and σ_n are referred to as hyperparameters and are estimated through log-likelihood maximization. The value f(𝐱^*) is predicted by μ_*, while σ_*^2 can be exploited to create confidence intervals for such a prediction. Therefore, GPR is an effective tool for performing multidimensional regressions on sparse sets of input data points. The GPR-Tree method proceeds by approximating the price of an American option with a Bermudan option on the same basket. At each exercise date t_n, the option value is computed as the maximum between the exercise value and the continuation value, which is approximated using GPR. Let us consider a d-dimensional Black-Scholes model, described by equations (<ref>). Let N be the number of time steps, Δ t=T/N, and t_n=nΔ t. At any exercise date t_n, the value of the option is: V(t_n,𝐒_t_n)=max(Ψ(𝐒_t_n),C(t_n,𝐒_t_n)), where C(t_n,𝐒_t_n) denotes the continuation value, given by: C(t_n,𝐒_t_n)=𝔼[e^-rΔ tV(t_n+1,𝐒_t_n+1)|𝐒_t_n]. To approximate the continuation value C(t_n,𝐒_t_n), the GPR-Tree method uses a set X^n of predetermined points, whose elements represent certain possible values for the underlyings 𝐒: X^n={𝐱^n,p=(x_1^n,p,…,x_d^n,p)^⊤,p=1,…,P}⊂]0,+∞[^d. The elements of X^n are obtained through a quasi-random simulation of 𝐒_t_n based on the Halton sequence, which allows one to cover the region of interest, avoiding leaving some areas uncovered or creating useless clusters of points (other low-discrepancy sequences may be considered, such as Sobol's one). Specifically, 𝐱_i^n,p=𝐒_0^iexp((r-η_i-σ_i^2/2)t_n+σ_i√(t_n)Σ_iΦ^-1(𝐡^p)), with 𝐡^p the p-th point (column vector) of the Halton sequence in [0,1]^d and Φ^-1 the inverse of the cumulative distribution function of a standard Gaussian variable, applied to each component of 𝐡^p. The GPR-Tree method assesses the continuation value through one step of the binomial tree proposed by Ekvall <cit.>. For each point 𝐱^n,p, we consider a set X̃^n,p of M=2^d possible values for 𝐒_t_n+1|𝐒_t_n=𝐱^n,p, which are computed as follows: 𝐱̃_i^n,p,m=𝐱_i^n,pexp((r-η_i-σ_i^2/2)Δ t+σ_i√(Δ t)Σ_i𝐆_m), with 𝐆_m the m-th point of the set { -1,+1} ^d. It is worth noticing that, as pointed out in <cit.>, the elements of X̃^n,p are equally likely, and this simplifies the evaluation of the expected value to the computation of the arithmetic mean of the future values. Generally speaking, given Ṽ_n+1(·), a suitable approximation of the option value V(t_n+1,·), the price function at time t_n for 𝐒_t_n=𝐱^n,p is approximated by V_n^Tree(𝐱^n,p)=max(Ψ(𝐱^n,p),e^-rΔ t/2^d∑_m=1^2^dṼ_n+1(𝐱̃^n,p,m)). For a way to reduce the computational complexity of (<ref>) when d is high, see Remark 2 in Section <ref>.
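The GPR regression used in the steps that follow can be illustrated with the short NumPy sketch below (again an illustration, not the authors' MATLAB code), which evaluates the Matérn 3/2 kernel and the predictive mean and variance at a new point.

```python
import numpy as np

def matern32(A, B, sigma_f, sigma_l):
    """Matern 3/2 kernel matrix between row-wise point sets A and B."""
    dist = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    s = np.sqrt(3.0) * dist / sigma_l
    return sigma_f**2 * (1.0 + s) * np.exp(-s)

def gpr_predict(X, Y, x_star, sigma_f, sigma_l, sigma_n):
    """Posterior mean mu_* and variance sigma_*^2 at x_star, given training data (X, Y)."""
    K = matern32(X, X, sigma_f, sigma_l) + sigma_n**2 * np.eye(len(X))
    k_star = matern32(x_star[None, :], X, sigma_f, sigma_l)   # 1 x P
    alpha = np.linalg.solve(K, Y)
    mu = k_star @ alpha
    v = np.linalg.solve(K, k_star.ravel())
    var = sigma_f**2 - k_star @ v     # k(x*, x*) = sigma_f^2 for the Matern kernel
    return float(mu), float(var)
```

In practice, the hyperparameters (σ_f, σ_l, σ_n) would be chosen by maximizing the log-likelihood, as described above.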
Then, GPR is employed to achieve an approximation of V(t_n,·) from the set X^n of the predictors and the set { V_n^Tree(𝐱^n,p),p=1,…,P} of the corresponding outputs. This procedure is repeated backward in time until time t=0 is reached. § THE GTU METHOD Avellaneda et al. <cit.> propose solving the dynamic optimization problem (<ref>) by dynamic programming, and here we take the same backward approach. First of all, we consider a finite number of time steps N, a time interval Δ t=T/N, and a discrete set of times { t_n=n·Δ t,n=0,…,N}. The option price at contract inception, V(0,𝐒_0), can be computed by backward induction, starting from t_N=T. In particular, the terminal condition is given by V(T,𝐒_T)=Ψ(𝐒_T). Now, let Ṽ_n+1(·) be a suitable approximation of V(t_n+1,·) and let 𝐱 be a fixed point of ℝ^d. Then one can define Ṽ_n(𝐱), a suitable approximation of V(t_n,·) at 𝐱, as Ṽ_n(𝐱)=sup_P∈𝒜𝔼[e^-rΔ tṼ_n+1(𝐒_t_n+1)|𝐒_t_n=𝐱]. Formally, the model parameters P can vary between t_n and t_n+1; however, following Avellaneda et al. <cit.>, the optimal solution to the problem (<ref>) can be approximated by considering a constant function with respect to both time and the underlying values. So, one considers the problem Ṽ_n(𝐱)=sup_P∈ℬ𝔼[e^-rΔ tṼ_n+1(𝐒_t_n+1)|𝐒_t_n=𝐱], with ℬ the set of parameter functions which are constant. This is particularly interesting because a constant function is identified by the value it takes. For a particular 𝐱, we can consider the following constant functions σ_1(t,𝐒_t)=σ̂_1,𝐱,…,σ_d(t,𝐒_t)=σ̂_d,𝐱,ρ_1,2(t,𝐒_t)=ρ̂_1,2,𝐱,…,ρ_d-1,d(t,𝐒_t)=ρ̂_d-1,d,𝐱, a total of d(d+1)/2 parameters, and we set D=[σ_1^min,σ_1^max]×…×[σ_d^min,σ_d^max]×[ρ_1,2^min,ρ_1,2^max]×…×[ρ_d-1,d^min,ρ_d-1,d^max], C={ c_𝐱=(σ̂_1,𝐱,…,σ̂_d,𝐱,ρ̂_1,2,𝐱,…,ρ̂_d-1,d,𝐱)∈ D s.t. Γ(ρ̂_1,2,𝐱,…,ρ̂_d-1,d,𝐱)≽0}, Ṽ_n(𝐱)=sup_c_𝐱∈ C𝔼[e^-rΔ tṼ_n+1(𝐱∘exp((r-η-σ^2/2)Δ t+(σ̂_1,𝐱,…,σ̂_d,𝐱)^⊤∘(Σ(ρ̂_1,2,𝐱,…,ρ̂_d-1,d,𝐱)(𝐁_t_n+1-𝐁_t_n))))]. To address problem (<ref>), we must overcome two key challenges: computing the expected value and maximizing it. Our proposed method, the GTU algorithm, involves calculating the expected value through a single tree step, following the same approach as described in the previous Section <ref>. Note that this choice is particularly effective in that, once a set of parameters c_𝐱 is fixed, this calculation can be handled as for the standard multidimensional Black-Scholes model: 1/2^d∑_m=1^2^de^-rΔ tṼ_n+1(𝐱∘exp((r-η-σ^2/2)Δ t+(σ̂_1,𝐱,…,σ̂_d,𝐱)^⊤∘(Σ(ρ̂_1,2,𝐱,…,ρ̂_d-1,d,𝐱)𝐆_m)√(Δ t))), with 𝐆_m the m-th point of the set { -1,+1} ^d. Once we are able to compute the expected value for any fixed c_𝐱∈ C, we can tackle the optimization problem: this is a multidimensional optimization problem with inequality constraints that are both linear (the bounds on the parameters) and non-linear (the positive semidefiniteness condition on the Γ matrix). This is done by means of the Sequential Quadratic Programming (SQP) algorithm, a powerful and widely used method for solving nonlinear optimization problems, particularly those with constraints. SQP methods solve a sequence of optimization subproblems, each of which approximates the original nonlinear problem by a quadratic programming problem. Drawing on the research by Biggs <cit.>, Han <cit.>, and Powell <cit.>, this approach enables a close simulation of Newton's method for constrained optimization, similar to its application in unconstrained optimization. Sequential Quadratic Programming methods are considered the pinnacle of nonlinear programming techniques. For instance, Schittkowski <cit.> developed and evaluated a version that surpasses all other tested methods in efficiency, accuracy, and success rate across a wide array of test problems. For this reason, we propose to use the SQP algorithm to handle problem (<ref>).
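The sketch below illustrates one GTU optimization at a single state x, using SciPy's SLSQP solver as a stand-in for the SQP routine and a smallest-eigenvalue constraint to enforce Γ≽0; the function names and the continuation-value surrogate V_next are illustrative assumptions, since the authors' implementation is in MATLAB.

```python
import itertools
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import minimize

def corr_matrix(rho_flat, d):
    """Assemble Gamma from the d(d-1)/2 upper-triangular correlations."""
    G = np.eye(d)
    iu = np.triu_indices(d, k=1)
    G[iu] = rho_flat
    G[(iu[1], iu[0])] = rho_flat
    return G

def tree_expectation(c, x, V_next, r, eta, dt, d):
    """One tree step: discounted average of V_next over the 2^d equally likely nodes."""
    sig, rho = c[:d], c[d:]
    Sigma = np.real(sqrtm(corr_matrix(rho, d)))
    G_all = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))   # {-1,+1}^d
    drift = (r - eta - 0.5 * sig**2) * dt
    nodes = x * np.exp(drift + sig * (G_all @ Sigma.T) * np.sqrt(dt))
    return np.exp(-r * dt) * np.mean([V_next(s) for s in nodes])

def gtu_value(x, V_next, r, eta, dt, sig_lo, sig_hi, rho_lo, rho_hi):
    """Worst-case (maximal) one-step value at state x over the admissible (sigma, rho)."""
    d = len(x)
    n_rho = d * (d - 1) // 2
    bounds = list(zip(sig_lo, sig_hi)) + [(rho_lo, rho_hi)] * n_rho
    c0 = np.concatenate([(np.asarray(sig_lo) + np.asarray(sig_hi)) / 2,
                         np.full(n_rho, (rho_lo + rho_hi) / 2)])
    psd = {"type": "ineq", "fun": lambda c: np.linalg.eigvalsh(corr_matrix(c[d:], d)).min()}
    res = minimize(lambda c: -tree_expectation(c, x, V_next, r, eta, dt, d),
                   c0, method="SLSQP", bounds=bounds, constraints=[psd])
    return -res.fun
```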
The procedure described above allows us to calculate the UVM price for a particular value 𝐱 of 𝐒_t_n, at time t_n, knowing (an approximation of) the price function Ṽ_n+1 at time t_n+1. In order to implement this procedure so as to calculate the price at the initial time t=0, one problem remains to be solved: obtaining an analytical expression of Ṽ_n from observations Ṽ_n(𝐱) of the same function at certain points. To do this, we use GPR regression. As described in Section <ref>, at each time step n=1,…,N, we first create a mesh of points X^n as in (<ref>), but in this case the points 𝐱^n,p are defined through 𝐱_i^n,p=𝐒_0^iexp((r-η_i-(σ_i^avg)^2/2)t_n+σ_i^avg√(t_n)Σ_i^avgΦ^-1(𝐡^p)), where σ_i^avg=(σ_i^min+σ_i^max)/2 and Σ_i^avg is the i-th row of a square root of the correlation matrix Γ^avg, given by Γ^avg=Γ(ρ_1,2^avg,…,ρ_d-1,d^avg), with ρ_i,j^avg=(ρ_i,j^min+ρ_i,j^max)/2. Then, we estimate the price function by using the approximated solution of (<ref>), and we use GPR to estimate the function Ṽ_n outside this set of points. Moving backwards, it is finally possible to calculate an estimate of the option price at time t=0 by considering X^0={𝐒_0}. Unlike neural networks, GPR requires few observations to produce good estimates of the function to be regressed, typically a few thousand. This feature combines well with the fact that, for each time step n and for each point in X^n, an optimization problem must be solved. The summation that appears in formula (<ref>) contains a number of addends that grows exponentially with respect to the dimension d. For large dimensions, one therefore has to consider the arithmetic mean of a large number of addends. A possible approach to limit this quantity is to consider only a certain number of addends M≤2^d, sampled randomly among the 2^d addends, and to let M tend towards 2^d. Furthermore, to improve the predictive power of this reduced sample, assuming M is an even number, we sample the G_m values needed to calculate the addends in (<ref>) by antithetic variables. This means that if we include in the summation the value obtained from a certain G_m, we also include the value generated from -G_m. This expedient improves the convergence of the procedure. As demonstrated in Section <ref>, in the numerical experiments considered, it is usually sufficient to consider M of the order of a few thousand to obtain results that are very stable and close to those that would be obtained with M=2^d. This observation is very useful when considering high dimensions (indicatively d>10) as it allows one to reduce the computational time that would otherwise tend to explode with the dimension d. Finally, it is important to note that, at each time step t_n, the calculations of the solutions to problems (<ref>) for the different values 𝐱^n,p∈ X^n are independent of one another. Therefore, these problems can be addressed using parallel computing, which significantly reduces the overall computational time.
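Putting these ingredients together, a mesh X^n of GPR predictors distributed according to the "average" parameters can be generated as in the following Python sketch; SciPy's Halton generator and inverse normal CDF are used here as convenient stand-ins for the corresponding MATLAB tools.

```python
import numpy as np
from scipy.stats import norm, qmc
from scipy.linalg import sqrtm

def build_mesh(S0, r, eta, sigma_avg, Gamma_avg, t_n, P, seed=0):
    """Quasi-random points x^{n,p} distributed like S_{t_n} under the average parameters."""
    d = len(S0)
    H = qmc.Halton(d=d, scramble=True, seed=seed).random(P)   # P points in (0,1)^d
    Z = norm.ppf(H)                                           # map to standard normals
    Sigma_avg = np.real(sqrtm(Gamma_avg))
    drift = (r - eta - 0.5 * sigma_avg**2) * t_n
    return S0 * np.exp(drift + np.sqrt(t_n) * sigma_avg * (Z @ Sigma_avg.T))
```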
The Γ^avg matrix defined in (<ref>), which is used to define the sets X^n for n=1,…,N, may not be positive semidefinite. In this particular case, certain techniques can be employed to determine the closest positive semidefinite matrix to Γ^avg according to the spectral norm. One of these techniques is called Projection onto the Cone of Positive Semidefinite Matrices (see e.g. Calafiore and El Ghaoui <cit.>). For the record, in the cases we analyzed in the numerical experiments of Section <ref>, this situation never occurred. § NUMERICAL EXPERIMENTS In this Section we report some numerical results in order to investigate the effectiveness of the proposed GTU algorithm for pricing European options in high dimension. The numerical tests are divided into 3 categories. In the first group of tests, we assume that there is no uncertainty about the correlation. In the second battery of tests, the correlation between the various underlyings is uncertain. In the third set of tests we instead consider a path-dependent option written on a single underlying asset. While in the first two cases the high dimensionality of the problem arises because the option payoff depends on a large number of underlying assets, in the last case the high dimensionality originates because the payoff depends on the values assumed by the single underlying asset at numerous previous times. The parameters employed for the tests are shown in Table <ref>. The options chosen for these tests, along with their numerical parameters, are identical to those analyzed by Guyon and Labordère <cit.>. This allows us to directly compare our results with theirs. The algorithm has been implemented in MATLAB, and computations have been performed on a server which employs a 2.40 GHz Intel Xeon processor (Gold 6148, Skylake) and 20 GB of RAM. To better evaluate the computational time of the algorithm, parallelization was not used for solving the optimization problems (<ref>). §.§ Model with no uncertainty on correlation In this first battery of tests, we assume that the correlation between the various Brownian motions is known, which is equivalent to imposing ρ_i,j^min=ρ_i,j^max. In particular, it can be proven that if all correlations are equal and non-negative, then the correlation matrix is positive semidefinite, ensuring that this parameter set is well-defined. The level of correlation adopted in the various tests is specified in the following tables. §.§.§ Outperformer option The Outperformer option is a financial derivative whose payoff equals the positive part of the difference between two asset values at maturity. Specifically, the payoff is given by Ψ(𝐒_T)=(S_T^2-S_T^1)_+. Thus, the valuation of this option is a 2-dimensional problem, which can be further simplified to a problem in dimension 1 when ρ_1,2≤0 by employing the change of numéraire technique, as described by Guyon and Labordère <cit.>. Therefore, although the dimension of this problem is not large, it is interesting in that we have a reliable benchmark (computed, for example, with the PDE method by Windcliff et al. <cit.>) as well as the results reported in the paper by Guyon and Labordère <cit.>. Table <ref> provides the numerical results. The results obtained with GTU closely match the benchmark, even for small values of N and P. The computational time is minimal. §.§.§ Outperformer spread option Similar to the Outperformer option discussed in Subsection <ref>, the Outperformer spread option is another example of a 2-dimensional option which has been investigated by Guyon and Labordère <cit.> and for which, when ρ_1,2≤0, a benchmark is available by using the change of numéraire technique.
Specifically, the payoff of such an option reads Ψ(𝐒_T)=(S_T^2-0.9S_T^1)_+-(S_T^2-1.1S_T^1)_+. The results presented in Table <ref> are very close to the benchmark. §.§.§ Geo-Call spread The Geo-Call spread option generalizes the Call spread option discussed in Subsection <ref> to a multi-asset option. Specifically, the payoff is given by the difference between the payoffs of two call options written on the geometric mean of the underlyings considered: Ψ(𝐒_T)=((∏_k=1^dS_T^k)^1/d-K_1)_+-((∏_k=1^dS_T^k)^1/d-K_2)_+. The use of the geometric mean is of particular interest since, if the underlyings follow (<ref>) and ρ_i,j=0, then this quantity has the same probability distribution as a single underlying, as shown below: (∏_k=1^dS_T^k)^1/d∼(∏_k=1^dS_0^k)^1/dexp((r-η̂-1/2(1/d√(∑_k=1^dσ_k^2))^2)T+(1/d√(∑_k=1^dσ_k^2))B_T), with B a Brownian motion and η̂ the modified dividend yield, defined as η̂=1/d∑_k=1^dη_k+(d-1)/(2d^2)∑_k=1^dσ_k^2. Therefore, the problem can be traced back to dimension 1, in which the volatility σ̂=1/d√(∑_k=1^dσ_k^2) ranges between 1/d√(∑_k=1^d(σ_k^min)^2)=min_σ_k∈[σ_k^min,σ_k^max](1/d√(∑_k=1^dσ_k^2))≤σ̂≤max_σ_k∈[σ_k^min,σ_k^max](1/d√(∑_k=1^dσ_k^2))=1/d√(∑_k=1^d(σ_k^max)^2) and the dividend η̂ is equal to η̂=1/d∑_k=1^dη_k+(d-1)/(2d^2)∑_k=1^dσ_k^2=1/d∑_k=1^dη_k+(d-1)/2 σ̂^2. This case is a generalization of the UVM model in that the dividend is not constant but changes in relation to the value of the volatility. The binomial tree method proposed by Avellaneda et al. <cit.> for the 1d UVM model can be used to calculate the price in this model, taking care to change the dividend in relation to the volatility. In this regard, it should be noted that while in the one-dimensional UVM model the optimal σ value is always σ^min or σ^max (this feature is known as “the bang bang condition”), in this model, characterized by a variable dividend yield η̂, this property no longer applies. Therefore, at each node of the tree, the choice of the optimal volatility is to be pursued not merely by comparing the objective function at the extreme values of σ, but by optimizing the option price via an appropriate algorithm, such as the Sequential Quadratic Programming algorithm.
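For clarity, the reduced one-dimensional quantities used to build this benchmark can be computed as in the short Python sketch below (an illustration of the formulas above with hypothetical bounds, not the benchmark code itself).

```python
import numpy as np

def sigma_hat(sig):
    """Aggregate volatility of the geometric mean: (1/d) * sqrt(sum sigma_k^2)."""
    sig = np.asarray(sig)
    return np.sqrt(np.sum(sig**2)) / len(sig)

def eta_hat(eta, sig):
    """Modified dividend yield: mean(eta) + (d-1)/(2 d^2) * sum sigma_k^2."""
    eta, sig = np.asarray(eta), np.asarray(sig)
    d = len(sig)
    return eta.mean() + (d - 1) / (2 * d**2) * np.sum(sig**2)

# Range of the aggregate volatility over hypothetical UVM bounds for d = 4
sig_min, sig_max = np.full(4, 0.1), np.full(4, 0.2)
sigma_lo, sigma_hi = sigma_hat(sig_min), sigma_hat(sig_max)
```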
The results are shown in Table <ref>. As shown, the values obtained for varying N and P are very close to the benchmark for the considered values of d. The largest deviation is observed for d=5, with a difference of 0.04 between the GTU price and the benchmark. However, this difference is minor, being less than 0.5% of the benchmark. Additionally, we observe that the computational time increases linearly with the number of time steps N and approximately linearly with the number of points P. This is particularly interesting since the cost of the GPR regression is cubic in P, indicating that most of the computational time is spent on solving the optimization problems rather than on the regression itself. Lastly, the computational time grows exponentially with the problem dimension, as the number of values to be evaluated with a tree step is 2^d. Despite this, we are able to handle high dimensions, such as d=10, with reasonable computational effort. To prevent the computational time from increasing exponentially, we can reduce the number of terms in the sum by randomly selecting a sample of M elements. This procedure is justifiable in that, as M approaches 2^d, the calculated price converges to the value we would get without this simplification. The results, shown in Table <ref>, indicate that even with just a few thousand samples, the outcomes are very stable. In addition, we can reach a high dimension, d=40, with reasonable computational time. The results shown in Tables <ref> and <ref> are for uncorrelated underlying assets. To demonstrate that the GTU algorithm also works well with correlated underlyings, we assumed all correlations equal to 0.50 or 0.75. The results are shown in Table <ref>. In this case, we do not have a benchmark for comparison. However, the obtained values demonstrate a high degree of stability, suggesting their accuracy. §.§ Model with uncertainty on correlation We now test the GTU algorithm under the assumption that the correlation coefficients ρ_i,j are also unknown, letting them vary between the two bounds ρ_i,j^min=-0.5 and ρ_i,j^max=0.5. Evaluating an option in this scenario is more complex for two reasons. Firstly, it is necessary to verify that the correlation matrix Γ, resulting from varying the coefficients ρ_i,j during the optimization procedure, remains positive semidefinite. This verification is implemented directly by using a built-in MATLAB function. Secondly, the number of these coefficients increases quadratically with the dimension, leading to a rapid increase in the computational cost and making it challenging to obtain results in high dimensions. §.§.§ Outperformer spread option with variable correlation The payoff of this option is given in (<ref>). We only consider the case d=2, since this is the only case considered by Guyon and Labordère <cit.>. Moreover, in this particular case it is unnecessary to verify that the correlation matrix Γ is positive semidefinite, as this holds true for any permissible value of ρ_1,2. We propose this particular case because it has also been studied by Guyon and Labordère <cit.>, and thus it is possible to test our algorithm against theirs. In this regard, we can observe that our results reported in Table 7 are slightly higher than the reference value in <cit.>. This slight difference probably results from the fact that our optimization procedure performs better than that of Guyon and Labordère in <cit.>, and therefore it is better able to approach the optimal price in the UVM. We also note that the price obtained with a variable correlation parameter, specifically -0.5≤ρ_1,2≤0.5, is significantly higher than the prices obtained considering a fixed correlation of ρ_1,2=0.5, ρ_1,2=0 or ρ_1,2=-0.5, shown in Table <ref>. Specifically, the highest result for a fixed correlation is achieved for ρ_1,2=-0.5 and is equal to 11.39, while for a variable correlation we reach 12.77. To conclude, it is crucial to emphasize the importance of considering a robust model for the correlation among the underlying assets, as pricing results can vary significantly. Therefore, variable correlation is a risk factor that practitioners should not overlook. §.§.§ Geo-Outperformer option with variable correlation In Subsection <ref> we introduced the Outperformer option, in which the payoff (<ref>) depends only on 2 underlyings. In this Subsection, we extend the definition of such an option to an option on multiple underlyings. Specifically, we replace the second underlying with the geometric mean of all the underlyings except the first one, so the payoff function reads: Ψ(𝐒_T)=((∏_k=2^dS_T^k)^1/(d-1)-S_T^1)_+. In this particular case, the optimal combination of the ρ_i,j coefficients is easily found: maximize the correlation between all the underlyings except the first one, and minimize the correlation between the first underlying and the remaining ones.
In our numerical setting, this implies ρ_i,j= 1 if i=j -0.5 if i and 0.5 otherwise.min(i,j)=1 This solutions, in fact, maximizes the volatility of the difference in (<ref>) and thus the expected value of the payoff. This remark allows us to compute a benchmark by means of the GTU algorithm for correlation fixed according to (<ref>). In particular, we compute the Benchmark by using N=128 time steps and P=1000 points. The results presented in Table <ref> are generally consistent with the benchmark, albeit slightly lower. This discrepancy is likely due to the method used to construct the sets X_n required for GPR. Specifically, as far as the benchmark is considered, these sets are constructed using optimal correlation in (<ref>), whereas for the general problem with variable correlation, they are determined by using the average correlation (<ref>), which is sub-optimal. Consequently, more points are needed, compared to the fixed correlation model, to achieve results of equivalent quality. §.§ Path dependent option: the Call Sharpe option We conclude our numerical analysis by discussing an option known as Call Sharpe, previously analyzed by Guyon and Labordre in <cit.>. This option is particularly interesting as it exemplifies a path-dependent option, where the value depends not only on the current price of the underlying asset but also on the previously realized volatility. Specifically, the payoff at maturity, which depends on a single underlying asset, and on its realized volatility which is computed on a monthly base. To this aim, we assume that the maturity time T is a multiple of 1/12, defining N_m=12T∈ℕ as the number of months comprising the life of the contract. The realized volatility computed using monthly returns is given by V_T=1/T∑_l=1^N_m(ln(S_τ_l/S_τ_l-1))^2, with τ_l=l/12, and the payoff at maturity is Ψ(S_T)=(S_T-K)^+/√(V_T). Following Guyon and Labordre <cit.>, at time t, the option value depends on and on the two path-dependent variables A_t^1 =∑_{l|τ_l≤ t}(ln(S_τ_l/S_τ_l-1))^2, A_t^2 =S_sup_{l|τ_l≤ t}τ_l. Moreover, we stress out that, if t=τ_l for a certain l value, than A_t^2=S_t, so we can discard this variate when t is a multiple of 1/12, that is 12t∈ℕ. Since this option is path dependent, some adaptations are required to use GTU. Let us begin by pointing out that the number of time steps N must be a multiple of N_m, so that the monitoring dates τ_l are included in the time grid. Next, we note that the elements of the sets X_n, used to define the points at which to evaluate the contract value, are determined through Monte Carlo simulations. Specifically, first we simulate P randoms paths of the underlying stock price: { S_t_n^p=S_t_n-1^pe^(r-η-(σ^avg)^2/2)Δ t+σ^avg√(Δ t)Z^n,p,n=1,…,N,p=1,…,P} , with S_0^p=S_0, Δ t=T/N and Z^n,p∼𝒩(0,1). Then, the elements 𝐱^n,p of X_n are defined as 𝐱^n,p=(S_t_n^p,A_t_n^1,p) if 12t_n∈ℕ, (S_t_n^p,A_t_n^1,p,A_t_n^2,p) otherwise, with A_t_n^1,p=∑_{l|τ_l≤ t_n}(ln(S_τ_l^p/S_τ_l-1^p))^2, A_t_n^2,p=S_sup_{l|τ_l≤ t}τ_l^p. We stress out that, as far as this path dependent option is considered, we opt for the Monte Carlo method over the quasi-Monte Carlo method to generate the points 𝐱^n,p of X_n in (<ref>), as the dimension of random simulations equals N_m, to much to make quasi-Monte Carlo be effective with only a few hundreds of samples. 
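The Monte Carlo construction of the augmented states used for the Call Sharpe option can be sketched as follows, assuming the paths are generated at a fixed average volatility σ^avg as described above. The sketch only builds the sets X_n and evaluates the terminal payoff (S_T-K)^+/√(V_T); the optimization over σ at each node and the GPR step are not reproduced here, and the function name, seed handling and parameter values in the usage line are illustrative.

```python
import numpy as np

def simulate_call_sharpe_states(S0, r, eta, sigma_avg, T, N, P, K, seed=0):
    """Simulate P paths of the single underlying at the average volatility sigma_avg on
    N time steps, track the path-dependent variables
      A1_t = sum of squared monthly log-returns observed up to t,
      A2_t = price at the most recent monthly monitoring date,
    store the augmented states x^{n,p} used for the sets X_n, and return the Call Sharpe
    payoff (S_T - K)^+ / sqrt(V_T) with V_T = A1_T / T."""
    Nm = int(round(12 * T))                # number of monthly monitoring dates
    assert N % Nm == 0, "N must be a multiple of the number of months"
    steps_per_month = N // Nm
    dt = T / N
    rng = np.random.default_rng(seed)

    S = np.full(P, float(S0))
    A1 = np.zeros(P)                       # realized sum of squared monthly log-returns
    A2 = np.full(P, float(S0))             # price at the last monitoring date (tau_0 = 0)
    states = []

    for n in range(1, N + 1):
        Z = rng.standard_normal(P)
        S = S * np.exp((r - eta - 0.5 * sigma_avg ** 2) * dt + sigma_avg * np.sqrt(dt) * Z)
        if n % steps_per_month == 0:       # t_n is a monitoring date: 12 t_n is an integer
            A1 = A1 + np.log(S / A2) ** 2
            A2 = S.copy()
            states.append(np.column_stack([S, A1]))      # A2 = S here, so it is dropped
        else:
            states.append(np.column_stack([S, A1, A2]))

    V_T = A1 / T
    payoff = np.maximum(S - K, 0.0) / np.sqrt(V_T)
    return states, payoff

# Illustrative run: T = 1 year (12 monitoring dates), N = 96 time steps, P = 2000 paths.
states, payoff = simulate_call_sharpe_states(
    S0=100.0, r=0.05, eta=0.0, sigma_avg=0.2, T=1.0, N=96, P=2000, K=100.0)
print(len(states), payoff.mean())
```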
For a particular value σ̂ for the volatility, determined during the optimization procedure, the future points for 𝐱^n,p are only 2, namely 𝐱̃_i^n,p,1 and 𝐱̃_i^n,p,2, defined as 𝐱̃_i^n,p,m=(S_t_n^p,m,A_t_n^1,p+(ln(S_t_n^p,m/A_t_n^2))^2) if 12t_n+1∈ℕ, (S_t_n^p,m,A_t_n^1,p+(ln(S_t_n^p,m/A_t_n^2))^2,A_t_n^2,p) otherwise. for m=1,2 and with S_t_n^p,m=S_t_n^pe^(r-η-(σ̂)^2/2)Δ t+σ̂√(Δ t)(-1)^m. Moreover, as far as this path dependent option is considered, the elements that identify the market state, namely S,A^1 and A^2, represent very different quantities from each other. For this reason, as remarked in a similar framework by Goudnege et al. <cit.>, it is beneficial to use a kernel that considers a different length scale for each predictor, such as the ARD Matern 3/2 kernel k_M3/2^ARD which is defines as follows: k_M3/2^ARD(𝐱,𝐱')=σ_f^2(1+√(3∑(𝐱_i-𝐱'_i/σ_l,i)^2))exp(-√(3)/σ_l√(3∑(𝐱_i-𝐱'_i/σ_l,i)^2)), where σ_l,i is the length scale for the i-th predictor. In the specific case study considered in our numerical test, we consider T=1, so N_m=12. Results are reported in Table (<ref>). The benchmark value, computed through a PDE approach described in <cit.>, is 58.4. In comparison, our more reliable price – obtained for N=384 and P=2000 – yields a value of 57.93, much closer to the benchmark than the best price by Guyon and Labordre <cit.>, which is worth 55.55. Additionally, we emphasize that the relative deviation between our price and the benchmark is less than 1%, indicating a very small error. To conclude, we observe that, in this case, a large number of N time steps is necessary to achieve accurate results. This requirement for many time steps necessitates the use of an algorithm with low computational cost per time step. Our tree method, which in this case requires the evaluation of only two future nodes, proves to be an appropriate choice. § CONCLUSIONS In this study, we demonstrated the efficacy of Machine Learning in enhancing the Uncertain Volatility Model for high-dimensional option pricing. Our findings suggest that Gaussian Process Regression algorithm can effectively handle the complexity and variability inherent in financial markets, providing more accurate and reliable pricing under the UVM framework. The integration of Machine Learning not only improves the precision of the optimal volatility and correlation estimates but also optimizes computational efficiency, making it a valuable tool for traders and risk managers. Furthermore, our approach offers a robust method for superhedging strategies, ensuring non-negative profit and loss outcomes even in highly volatile market conditions. Future research can extend this work by exploring other ML techniques and their applications in different financial models, further bridging the gap between theoretical advancements and practical implementations in the field of quantitative finance. plain
http://arxiv.org/abs/2407.13229v1
20240718073020
Disturbance Observer for Estimating Coupled Disturbances
[ "Jindou Jia", "Yuhang Liu", "Kexin Guo", "Xiang Yu", "Lihua Xie", "Lei Guo" ]
cs.RO
[ "cs.RO", "cs.SY", "eess.SY" ]
Disturbance Observer for Estimating Coupled Disturbances Jindou Jia, Yuhang Liu, Kexin Guo, Xiang Yu, Lihua Xie, Fellow, IEEE, and Lei Guo, Fellow, IEEE This work was supported in part by the Defense Industrial Technology Development Program (Grant Number JCKY2020601C016), the National Natural Science Foundation of China (Grant Numbers 61833013, 61973012, 62388101, 62273023), the National Key Research and Development Program of China (Grant Number 2022YFB4701301), the Key Research and Development Program of Zhejiang (Grant Number 2021C03158), the Major Science and Technology Innovation Program of Hangzhou (Grant Number 2022AIZD0137), the Beijing Nova Program (Grant Number 20230484266) and the Outstanding Research Project of Shen Yuan Honors College, BUAA (Grant Number 230122104). (Corresponding author: Kexin Guo.) J. J. Jia, X. Yu, and L. Guo are with the School of Automation Science and Electrical Engineering, Beihang University, 100191, Beijing, China (e-mail: {jdjia, xiangyu_buaa, lguo}@buaa.edu.cn). J. J. Jia is also with Shenyuan Honors College, Beihang University, 100191, Beijing, China. Y. H. Liu and K. X. Guo are with the School of Aeronautic Science and Engineering, Beihang University, 100191, Beijing, China (e-mail: {lyhbuaa, kxguo}@buaa.edu.cn). L. H. Xie is with the School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore (e-mail: elhxie@ntu.edu.sg).
§ ABSTRACT High-precision control for nonlinear systems is impeded by the low-fidelity dynamical model and external disturbance. In particular, the intricate coupling between internal uncertainty and external disturbance is usually difficult to model explicitly. Here we show an effective and convergent algorithm enabling accurate estimation of the coupled disturbance via combining control and learning philosophies. Specifically, by resorting to Chebyshev series expansion, the coupled disturbance is first decomposed into an unknown parameter matrix and two known structures depending on system state and external disturbance respectively.
A Regularized Least Squares (RLS) algorithm is subsequently formalized to learn the parameter matrix by using historical time-series data. Finally, a higher-order disturbance observer (HODO) is developed to achieve a high-precision estimation of the coupled disturbance by utilizing the learned portion. The efficiency of the proposed algorithm is evaluated through extensive simulations. We believe this work can offer a new option to merge learning schemes into the control framework for addressing existing intractable control problems. Disturbance observer, coupled disturbance, learning for control. § INTRODUCTION High-precision control is crucial for nonlinear systems where model uncertainty and external disturbance are pervasive. A plethora of advanced schemes have been proposed to address model uncertainties and external disturbances, separately. Yet, for systems in which model uncertainty and external disturbance are coupled, such as the aerodynamic drag of quadrotor, which depends on not only the external wind speed but also the system attitude <cit.>, fewer schemes with theoretical guarantees have been developed. In the control community, many studies attempt to estimate the coupled disturbance with a bounded derivative assumption, such as Extended State Observer (ESO) <cit.> and Nonlinear Disturbance Observer (NDO) <cit.>. This bounded derivative assumption has limitations from a theoretical perspective. This assumption demands that the state of the system should be bounded even before we talk about whether the system is stable with subsequent anti-disturbance strategy <cit.>, which causes a causality dilemma. Moreover, this assumption usually results in a bounded disturbance estimation error. A smaller derivative bound of the coupled disturbance is required for a small estimation error, which may not be satisfied. Disturbance observer-based approaches have yet to achieve a zero-error estimation of coupled disturbance. Benefiting from the growing computing power, the availability of training massive data, and the improvement of learning algorithms, data-driven learning approaches appear to be an alternative for handling the coupled disturbance. In the data-driven paragram, some approaches attempt to learn the unknown structure and parameters of the coupled disturbance <cit.>. However, a major challenge is that the external time-varying disturbance (learning input) cannot be sampled, even offline. Nowadays, with the assistance of meta-learning philosophies, several works try to establish a bi-level optimization to handle coupled disturbances <cit.>. Merged with online adaptive control, meta-learning can remarkably improve the control performance, but these have yet to result in a zero-error estimation of the coupled disturbance. Moreover, offline training for these methods is labor intensive. §.§ Contributions In this work, by integrating data-driven learning and control-theoretic techniques, a convergent estimation algorithm is proposed for coupled disturbances. A learning algorithm is employed to learn the latent invariable structure of the disturbance offline, while an adaptive observer is used to estimate the time-varying part of the disturbance online <cit.>. The main contributions of this article are summarized as follows: * A variable separation principle (Theorem <ref>) is established to decompose the coupled disturbance into an unknown parameter matrix, a system-state-related matrix, and an external-disturbance-related vector, with an arbitrarily small residual. 
* With an analytic assumption on external disturbance, a corollary (Corollary <ref>) is further developed, which enables the unknown parameter matrix to be learned in a supervised way. Afterward, the learning objective is formalized as a Regularized Least Squares (RLS) problem with a closed-form solution. * By leveraging the learned knowledge, a higher-order disturbance observer (HODO) is finally designed, which can achieve zero-error estimation of the coupled disturbance (Theorem <ref>). In the proposed framework, 1) there is no need to manually model complex disturbance, 2) the bounded derivative assumption on the coupled disturbance in <cit.> and the constant assumption on the external disturbance in <cit.> can be avoided, and 3) the implemented learning strategy is explainable and lightweight compared with Deep Neural Networks (DNNs) based methods in <cit.>. Multiple numerical tests are conducted to verify the efficiency of the proposed method. Simulation code can be found at <https://github.com/JIAjindou/Coupled_disturbance.git>. §.§ Organization and Notation The article is organized as follows. Section <ref> surveys related works. Section <ref> formulates the disturbance estimation problem for general control affine systems. Section <ref> presents the main theoretical result. The applicability of the proposed algorithm is demonstrated in Section <ref>. Section <ref> concludes this article and indicates future work. Notation. Throughout the paper, ℝ denotes the real number set; ℤ^+ denotes the non-negative integer set; | x| denotes the absolute value of a scalar x; x denotes the 2-norm of a vector x; A_F denotes the Frobenius-norm of a matrix A; x_i denotes the i-th element of a vector x; X_ij denotes the i-th row, j-th column element of a matrix X; X( i,:) denotes the i-th row vector of a matrix X; ·⃗ denotes a unit right shift operator; λ_m(·) represents the minimum eigenvalue of a matrix; I and 0 represent the identity and zero matrices with appropriate sizes, respectively. Moreover, Mean Absolute Error (MAE) is defined as MAE = 1/n_d∑_i = 1^n_dx_i - x_d,i to evaluate simulation results, where n_d denotes the size of collected data, x_i and x_d,i denote i-th evaluated variable and its desired value, respectively. § RELATED WORK In this section, we review key areas related to this work. We begin by discussing recent research in the well-studied area of disturbance observers. As our proposed method falls into the realm of the scheme combining control and data-driven learning, the related advanced research is also reviewed. The connections between existing approaches and our contribution are emphasized. §.§ Analytical Disturbance Estimation The basic idea of the disturbance estimation approach is to design an ad-hoc observer to estimate the disturbance by utilizing its influence on the system <cit.>. The estimation method is a two-degree-of-freedom control framework <cit.>, which can achieve tracking and anti-disturbance performances simultaneously. For most disturbance observers, like Frequency Domain Disturbance Observer (FDDO) <cit.>, ESO <cit.>, Unknown Input Observer (UIO) <cit.>, Generalized Proportional Integral Observer (GPIO) <cit.>, and Time Domain Nonlinear Disturbance Observer (TDNDO) <cit.>, zero-error estimation can be usually achieved in the event of constant disturbances. For more complicated time-varying disturbances, accurate estimation usually requires a priori knowledge of disturbance features. 
For example, UIO <cit.> and TDNDO <cit.> can accurately estimate the harmonic disturbance if its frequency is known. GPIO <cit.> and higher-order TDNDO <cit.> can achieve an asymptotic estimation of the disturbance represented by a high-order polynomial of time series. More recently, for multi-disturbance with limited a priori information, the simultaneous attenuation and compensation approach appears to be a nascent solution <cit.>. Most disturbance observers are limited to external disturbances and show unsatisfactory performance for inner model uncertainty. Some researchers attempt to estimate a coupled disturbance with a bounded derivative assumption, such as ESO <cit.> and NDO <cit.>. This bounded derivative assumption has limitations from a theoretical perspective because it demands that the system state is bounded in advance <cit.>. Moreover, a large derivative bound can result in a large estimation error. A two-stage Active Disturbance Rejection Control (ADRC) strategy <cit.> is designed in order to avoid the requirement of bounded derivative assumption on system states. The controller in the first stage guarantees the boundness of the system state by a special auxiliary function, and a linear ESO in the second stage is employed to estimate the total disturbance. However, the existence of the auxiliary function is not discussed. Another solution is to utilize a priori disturbance structure. Focusing on wind disturbance, a refined disturbance observer is proposed in <cit.> to directly estimate the wind speed instead of the whole wind disturbance. By this means, not only the bounded derivative assumption of the coupled disturbance is avoided, but also the bound of estimation error is reduced. However, this scheme is limited to the case with an explicitly known disturbance coupling structure. §.§ Combining Analytical Control and Data-Driven Learning Nowadays, the interest in combining control-theoretic approaches with data-driven learning techniques is thriving for achieving stable, high-precision control. In <cit.>, DNNs are utilized to synthesize control certificates such as Lyapunov functions and barrier functions to guarantee the safety and stability of the learned control system. In <cit.>, DNNs are employed to learn the mass matrix and the potential energy in Lagrangian mechanics and Hamiltonian mechanics. Compared to naive black-box model learning, a more interpretable and plausible model that conserves energy can be obtained. With respect to the uncertainty satisfying Gaussian distribution, a Gaussian belief propagation method is designed in <cit.> to compute the uncertainty, which is finally utilized to tighten constraints of Model Predictive Control (MPC). <cit.> finds that a higher-order nonlinear system controller by the Reinforcement Learning (RL) policy behaves like a linear system. The stability of the RL policy can be analyzed by the identified linear closed-loop system with the pole-zero method. <cit.> combines a robust control and Echo State Networks (ESN) to control nonlinear systems, where ESN is employed to learn the inverse dynamics and to help mitigate disturbance. However, the bounds of disturbance and learning output need to be known. Even with these advances, for nonlinear systems perturbed by external time-varying disturbances that cannot be accurately sampled, data-driven supervised learning methods would no longer be applicable. Several works are proposed to handle the coupled disturbance by establishing a bi-level optimization problem <cit.>. 
Within the framework of adaptive control, the nonlinear features depending on the system state are learned via meta-learning offline in <cit.>. This work breaks through the assumption that the unknown dynamics are linearly parameterizable in the traditional adaptive control method. <cit.> develops a control-oriented meta-learning framework, which uses the adaptive controller as the base learner to attune learning to the downstream control objective. Both methods attribute the effect of external disturbance in the last layer of the neural networks, which is estimated adaptively online. However, the above scheme ensures zero-error convergence only when the external disturbance is constant. Moreover, laborious offline training is needed. In this work, the coupled disturbance can be accurately estimated by merging the data-driven learning with an analytical disturbance observer. Not only the bounded derivative assumption in estimation methods <cit.> can be avoided, but also the requirement of the external disturbance being a constant in learning methods <cit.> can be relaxed. § PROBLEM FORMULATION Consider a general control affine system of the form ẋ = f_x( x)+ f_u( x)u+ Δ( x, d), where x⊂𝒳∈ℝ^n and u⊂𝒰∈ℝ^o denote the state and the control input, respectively; 𝒳 and 𝒰 are the state and control spaces of dimensionality n and o, respectively; f_x( ·)∈ℝ^n and f_u( ·)∈ℝ^n× o are nonlinear mappings, which are continuously differentiable; Δ∈ℝ^n represents the coupled disturbance, which is analytic. It depends on the system state x and the external disturbance d⊂𝒟∈ℝ^m with 𝒟 being the disturbance space of dimensionality m. Δ( x, d) can encompass a wide variety of disturbances, such as the wind disturbance for the quadrotor and the unwanted base moving disturbance for the manipulator. Specifically, the wind disturbance for the quadrotor depends on not only the external wind speed but also the internal system attitude <cit.>, and the unwanted base moving disturbance for the manipulator depends on not only the external base variation but also the internal position of the end-effector <cit.>. Problem Statement: Consider system (<ref>). The objective is to develop an algorithm to accurately estimate the coupled disturbance Δ( x, d) only using control input u and measurable system state x. Previous works <cit.> usually estimate the coupled disturbance Δ with a bounded derivative assumption, i.e., there exists an unknown positive value γ_Δ such that Δ̇( x, d) ≤γ_Δ. Three limitations exist in this assumption. (L-1) The bounded assumption on Δ demands that x should be bounded even before we talk about whether the system is stable with subsequent anti-disturbance strategy <cit.>. (L-2) The evolution of x will change after the estimated disturbance is compensated. There is no guarantee that this assumption will always be satisfied. (L-3) The final disturbance estimation error usually depends on γ_Δ, which may be large. Our Solution: The core idea is to 1) decompose the coupled disturbance Δ( x, d) into an unknown parameter matrix, a x-related matrix, and a d-related vector, with an arbitrarily small residual, 2) offline learn the unknown parameter from past data, and 3) online estimate the remaining d-related portion convergently. The whole process is schematized in Fig. <ref>. By resorting to the proposed HODO, the limitations (L-1)-(L-3) can be breached. 
* Variable Separation: Divide the lumped disturbance as Δ( x, d) ≈Θℬ(x)ξ(d) by employing Chebyshev polyniminal basis, where Θ∈ℝ^n × s_1 represents unknown parameter matrix; ℬ(x) ∈ℝ^s_1 × s_2 and ξ(d) ∈ℝ^s_2 are made up of Chebyshev polynomial basis. * Offline Meta-Learning: The training data is collected under different degrees of external disturbance. A bilevel optimization problem is established to learn the parameter matrix Θ with an analytic gradient. * Online Estimation: With the learned Θ, a refined disturbance observer is finally proposed to directly estimate ξ(d). Furthermore, a higher-order refined disturbance observer is designed if d can be represented or fitted by a higher-order polynomial of time series. § METHOD §.§ Decomposition of the Coupled Disturbance Before introducing the variable separation theorem for the coupled disturbance Δ(x, d), a preliminary lemma from <cit.> is reviewed for a scalar Δ_i(x, d) firstly. For the sake of simplification, we consider the case on [x,d] ∈[-1, 1]^n ×[-1, 1]^m. By normalization, the following results can be generalized to the case on [x,d] ∈𝒳×𝒟. <cit.> Assume an analytic function Δ_i(x, d)∈ℝ for all [x,d] ∈[-1, 1]^n ×[-1, 1]^m. For any small value ϵ > 0, there always exist p = O(log( 1/ε)/√(n + m)) ∈ℤ^+, ϕ_i(x) ∈ℝ^1 × s consisting of Chebyshev polynomials and unknown constant parameters, ξ(d) ∈ℝ^s consisting only of Chebyshev polynomials such that sup_[x,d] ∈[ - 1,1]^n + m| Δ_i(x, d) - ϕ_i(x) ξ(d)| ≤ϵ, and s = (p+1)^m = O(log(1/ϵ)^m). ϕ_i(x) ξ(d) is a compact product form of the truncated Chebyshev expansions presented in (<ref>). b^i_k_1, ⋯ ,k_n,l_1, ⋯ ,l_m∈ℝ represents the polynomial coefficient. Later in the article, b^i_k_1, ⋯ ,k_n,l_1, ⋯ ,l_m is simplified as b^i_h_k,h_l by letting h_k = ∑_i = 1^n k_i( p + 1)^i - 1 and h_l = ∑_i = 1^m l_i( p + 1)^i - 1. T_i represents the i-th order Chebyshev polynominal. (<ref>) and (<ref>) detail the architectures of ϕ_i(x) and ξ(d) in a suitable form respectively, for the convenience of using in the remainder of this article. ϕ_i(x) ξ(d) = ∑_k_1 = 0^p⋯∑_k_n = 0^p∑_l_1 = 0^p⋯∑_l_m = 0^pb^i_k_1, ⋯ ,k_n,l_1, ⋯ ,l_m ·T_k_1( x_1 ) ⋯T_k_n(x_n )T_l_1( d_1 ) ⋯T_l_m( d_m ). 0.5.4pt ϕ_i(x)^T ξ(d) = ∑_k_1 = 0^p⋯∑_k_n = 0^p∑_l_1 = 0^p⋯∑_l_m = 0^pb^i_k_1, ⋯ ,k_n,l_1, ⋯ ,l_mT_k_1( x_1 ) ⋯T_k_n(x_n )T_l_1( d_1 ) ⋯T_l_m( d_m ), 0.5.4pt ϕ_i(x) = [ ∑_k_1 = 0^p⋯∑_k_n = 0^pb^i_h_k,h_lT_k_1(x_1) ⋯T_k_n(x_n)|_h_l=0 ∑_k_1 = 0^p⋯∑_k_n = 0^pb^i_h_k,h_lT_k_1(x_1) ⋯T_k_n(x_n)|_h_l=1 ⋮ ∑_k_1 = 0^p⋯ ∑_k_n = 0^pb^i_h_k,h_lT_k_1(x_1) ⋯T_k_n(x_n)|_h_l=(p+1)^m -1]^T. ξ(d) = [ T_l_1(d_1) ⋯T_l_m(d_m)|_h_l=0 T_l_1(d_1) ⋯T_l_m(d_m)|_h_l=1 ⋮ T_l_1(d_1) ⋯T_l_m(d_m)|_h_l=(p+1)^m -1]. ξ_i(x) = [ T_l_1(b) ⋯T_l_m(b)|_l_1, ⋯ ,l_m = 0, ⋯ ,0 T_l_1(b) ⋯T_l_m(b)|_l_1, ⋯ ,l_m = 1, ⋯ ,0 ⋮ T_l_1(b) ⋯T_l_m(b)|_l_1, ⋯ ,l_m = p, ⋯ ,p]. Lemma <ref> concludes that the analytic coupled disturbance can be decoupled to a x-related portion and a d-related portion with an arbitrarily small residual. Intuitively, it will be helpful to estimate the d-related portion if the knowledge of x-related portion can be exploited beforehand. In <cit.>, DNNs is adopted to learn the x-related portion, which needs laborious offline training and lacks interpretability. A more lightweight and stable learning strategy is pursued here. To achieve that, we need to exploit Lemma <ref> to drive a more explicit separation form for the coupled disturbance Δ. Δ_i(x, d) is a function satisfying the assumptions in Lemma <ref>, for all i ∈[1, 2, ⋯, n]. 
For any small value ϵ' > 0, there always exist s_1∈ℤ^+; s_2∈ℤ^+; an unknown constant parameter matrix Θ∈ℝ^n × s_1, two functions ℬ(x) ∈ℝ^s_1 × s_2 and ξ(d) ∈ℝ^s_2 that both consist only of Chebyshev polynomials such that sup_[x,d] ∈[ - 1,1]^n + mΔ( x, d) - Θℬ(x)ξ(d) ≤ϵ', where s_1 = (p+1)^m+n = O(log(√(n)/ϵ')^m+n) and s_2 = (p+1)^m = O(log(√(n)/ϵ')^m). Denote the j-th column of ϕ_i(x) in (<ref>) as ϕ_ij(x). By further splitting ϕ_ij(x), it can be obtained that ϕ_ij(x) = ∑_k_1 = 0^p⋯∑_k_n = 0^pb^i_h_k,h_lT_k_1(x_1) ⋯T_k_n(x_n)|_h_l=j-1 = b^ij·Π(x). where b^ij = [b_h_k,h_l^i|_h_l = j - 1^h_k = 0, b_h_k,h_l^i|_h_l = j - 1^h_k = 1, ⋯,. .b_h_k,h_l^i|_h_l = j - 1^h_k = ( p + 1)^n - 1] ∈ℝ^1×(p+1)^n, and Π(x) = [ [ T_k_1(x_1) ⋯T_k_n(x_n)|_h_k = 0; T_k_1(x_1) ⋯T_k_n(x_n)|_h_k = 1; ⋮; T_k_1(x_1) ⋯T_k_n(x_n)|_h_k = ( p + 1)^n - 1 ]] ∈ℝ^(p+1)^n. Denote the j-th column of ℬ(x) as ℬ_j(x), and it is constructed as ℬ_j( x) = [ 0, ⋯, 0_(j-1)(p+1)^n,Π(x) ^T,0, ⋯ ,0_((p+1)^m+n-j(p+1)^n)]^T, with s_1 = (p+1)^m+n and s_2 = (p+1)^m. Denote the i-th row of Θ as Θ _i, and it is constructed as Θ _i = [ b^i1,b^i2, ⋯ ,b^i( p + 1)^m] ∈ℝ^1 ×( p + 1)^m + n. It can be proven that ϕ_i(x) = Θ _i·ℬ(x). Let C_i(x, d) represent the i-th row of C(x, d)∈ℝ^n and define C_i(x, d) = ϕ_i(x)ξ(d), resulting in C(x, d) = ϕ(x)ξ(d)=Θℬ(x)ξ(d). Set ϵ_i ≤(ϵ'/√(n)). From Lemma <ref>, there exist s_2^i = O(log(1/ϵ)^m) ∈ℤ^+ such that sup_[x,d] ∈[ - 1,1]^n + m| Δ_i(x, d) - C_i(x, d)| ≤ϵ_i. Choose s_2 = max{s_2^1,s_2^2, ⋯ s_2^n}, it can be implies that sup_[x,d] ∈[ - 1,1]^n + mΔ(x, d) - C(x, d)≤√(∑_i = 1^n ϵ _i^2)≤ϵ', with s_1 = O(log(√(n)/ϵ')^m+n) and s_2 = O(log(√(n)/ϵ')^m). Theorem <ref> extends the result of Lemma <ref> to the multidimensional case and obtains a more explicit decomposed structure. It is proven that all unknown constant parameters of the coupled disturbance can be gathered into a matrix, which enables the coupled disturbance to be learned in a more explainable way compared with DNNs-based methods <cit.>. Due to that d cannot be sampled in most cases, the traditional supervised learning strategy cannot be directly applied to learn the unknown parameter matrix. In <cit.>, the meta-learning strategy is employed. However, in such a paradigm, the training data under different tasks (i.e., different constant d) are required, which may not be available in some cases. Moreover, the global convergence of the formalized bi-level optimization algorithm lacks rigorous analysis. In order to reliably implement an explicit learning procedure, it is further assumed that the external disturbance d(t) is analytic with respect to t. The following corollary can be obtained subsequently. Δ_i(x, d(t)) is a function satisfying the assumptions in Lemma <ref>, for all i ∈[1, 2, ⋯, n]. Assume d(t) is analytic with respect to t. For any small value ϵ' > 0, there always exist s_1∈ℤ^+; s_2∈ℤ^+; an unknown constant parameter matrix Θ∈ℝ^n × s_1, two functions ℬ(x) ∈ℝ^s_1 × s_2 and ξ(t) ∈ℝ^s_2 that both consist only of Chebyshev polynomials such that sup_[x,t] ∈[ - 1,1]^n + 1Δ( x, d(t)) - Θℬ(x)ξ(t) ≤ϵ', where s_1 = (p+1)^n+1 = O(log(√(n)/ϵ')^n+1), s_2 = p+1 = O(log(√(n)/ϵ')), and ξ(t ) = [ [ T_0( t ) T_1( t ) ⋯ T_p( t ) ]]^T. The proof procedure is similar to Theorem <ref> by replacing the argument d with t. Based on Theorem <ref>, Corollary <ref> further resorts the unsamplable external disturbance d to the samplable feature t, which allows the unknown parameter matrix to be learned in a supervised way. 
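As an illustration of the separation in Corollary <ref>, a minimal NumPy sketch of the feature construction is given below: Π(x) collects the products of Chebyshev polynomials of the (normalized) state components, ℬ(x) stacks Π(x) block-wise, and ξ(t) = [T_0(t),…,T_p(t)]^T, so that Δ(x,d(t)) ≈ Θℬ(x)ξ(t). The last function indicates how Θ could then be fitted from samples (t_k, x_k, Δ_k) by a plain ridge regression, standing in for the supervised learning step that the corollary enables. The Kronecker-product ordering of Π(x) and all function names are our own choices, not notation from the paper.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander

def cheb_products(x, p):
    """Pi(x): all products T_{k_1}(x_1)...T_{k_n}(x_n) with k_i = 0..p, for a state x
    normalized to [-1, 1]^n; shape (p+1)^n."""
    Pi = np.ones(1)
    for xi in np.atleast_1d(x):
        Ti = chebvander(xi, p).ravel()           # [T_0(x_i), ..., T_p(x_i)]
        Pi = np.kron(Ti, Pi)                     # one ordering choice for the index h_k
    return Pi

def B_of_x(x, p):
    """B(x) in R^{s1 x s2}: Pi(x) stacked block-wise, s1 = (p+1)^(n+1), s2 = p+1."""
    return np.kron(np.eye(p + 1), cheb_products(x, p).reshape(-1, 1))

def xi_of_t(t, p):
    """xi(t) = [T_0(t), ..., T_p(t)]^T for normalized time t in [-1, 1]."""
    return chebvander(t, p).ravel()

def features(x, t, p):
    """Regression features B(x) xi(t), so that Delta(x, d(t)) ~= Theta @ features(x, t, p)."""
    return B_of_x(x, p) @ xi_of_t(t, p)

def fit_theta(ts, xs, deltas, p, reg=1e-2):
    """Fit Theta from samples {(t_k, x_k, Delta_k)} by ridge regression, a simple
    stand-in for the supervised learning step the corollary makes possible."""
    Phi = np.stack([features(x, t, p) for x, t in zip(xs, ts)])     # (N, s1)
    D = np.asarray(deltas, dtype=float).reshape(len(ts), -1)        # (N, n)
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ D).T
```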
The external disturbance d depends on some constant parameters in Θ and a known t-related structure ξ(t). Although d is changing with time, the parameter matrix Θ remains unchanged. The change of d is revealed on the change of ξ(t). A practical example is given here to instantiate the decomposition (<ref>). In <cit.>, the wind disturbance for the quadrotor is modeled as Δ = RDR^Tv_w, where D∈ℝ^3×3 represents drag coefficients, v_w∈ℝ^3 is the unknown external wind speed, and R∈ℝ^3×3 denotes the rotation matrix from body frame to inertial frame. By regarding R as x and v_w as d respectively, it can be seen that (<ref>) has already been a decomposed form. However, the linear model (<ref>) lacks accuracy as the high-order aerodynamics are not captured. By resorting to the decomposition (<ref>), high-order aerodynamics can be included. Moreover, it is better to characterize the unknown time-varying wind speed as a polynomial function of t with an appropriate order than to treat it as a constant value like in <cit.>. §.§ Learning the Parameter Matrix In this part, a RLS optimization framework is established to learn parameter matrix Θ. Construct the training dataset 𝒟 ={( t_f^𝔫, x^𝔫 , Δ^𝔫) | 𝔫 = 1,2,⋯,N } with N samples. t_f represents the time in the offline training dataset. Note that Δ^𝔫 can be calculated by using Δ^𝔫 = ẋ^𝔫 - f_x( x^𝔫) - f_u( x^𝔫)u^𝔫, where ẋ^𝔫 can be obtained by offline high-order polynomial fitting. The learning objective is formalized as Θ ^ * = min_Θ1/2[∑_𝔫 = 1^N Δ^𝔫 - Θℬ_𝔫ξ_𝔫^2 +δΘ^2_F], where ℬ_𝔫 := ℬ(x^𝔫), ξ_𝔫 := ξ(t_f^𝔫) and δ regularizes Θ. Fortunately, the problem (<ref>) has the closed-form solution Θ ^ * = ∑_𝔫= 1^NΔ^𝔫ξ_𝔫^Tℬ_𝔫^T· (∑_𝔫= 1^N ℬ_𝔫ξ_𝔫ξ_𝔫^Tℬ_𝔫^T + δI)^-1. Until now, the x-related portion of Δ( x, d) has been separated and the unknown constant parameters Θ can be learned from the historical data. Denote t_l as the online time. Note that the offline learning phase and online estimating phase are in different time domains. In other words, the relationship between t_f and t_l is unknown. Thus Δ cannot be directly obtained by Θℬ(x)ξ(t) online. The remaining difficulty is to estimate ξ(t) online from control input u and measured x. By resorting to the HODO to be designed, ξ(t) can be exponentially estimated. §.§.§ Data Noise Handling The closed-form solution enables online learning of the parameter matrix Θ. Note that the online calculation requires the truth value of Δ. Δ can be calculated from the current state by using Δ = ẋ - f_x( x) - f_u( x)u, where ẋ can be obtained by a real-time robust differentiator <cit.>. The precise differential calculus inevitably produces serious noise. The obtained differential result can be further filtered by a low-pass filter. In this process, we can tolerate a millisecond signal delay. After synchronizing delays of other signals (t and x), the filter would not affect the learning performance. §.§.§ Infinite Time Avoiding As a result of the constraint on the computational accuracy of real processors, it will be intractable to execute the learning strategy as t →∞. To circumvent this issue, the feature t is reset at a certain period. At the end of each period, the parameter matrix is updated via (<ref>). §.§ Estimation via a Higher-order Disturbance Observer Before proceeding, ξ(t) can be further decomposed due to the structure of Chebyshev polynomials. It can be rendered that ξ(t) = 𝒟ς(t), where 𝒟∈ℝ^s_2× s_2, 𝒟( i,:) = {[ [ [ 1 0 ⋯ 0 ]], i = 1,; [ [ 0 1 ⋯ 0 ]], i = 2,; 2D⃗( i-1,:) - D( i - 2,:), 2 < i ≤s_2, ]. 
and ς( t ) consists of polynomial basis functions, i.e., ς( t ) = [[ 1 t ⋯ t^p ]]^T∈ℝ^s_2 . From (<ref>) and (<ref>), the coupled disturbance Δ( x,d) is finally represented as {[ ς̇( t ) = 𝒜ς( t ),; Δ =Θℬ(x)𝒟ς( t ), ]. where 𝒜∈ℝ^s_2× s_2, 𝒜( i,j) = {[ j, i = j + 1,; 0, i j + 1. ]. Define ς̂ and ς̃ as the estimation of ς(t) and the estimation error ς̃ = ς(t) - ς̂, respectively. The expected objective of the subsequent disturbance observer is to achieve ς̇̃̇ = Λ_h(x)ς̃, where Λ_h(x)∈ℝ^s_2 × s_2 denotes the observer gain matrix which is designed to ensure the error dynamics (<ref>) exponentially stable. To achieve (<ref>), the HODO is designed as ż_h = 𝒜ς̂ - Γ_h(f_x( x) + f_u( x)u+Θℬ(x)𝒟ς̂), ς̂ = z_h + Γ _hx, Δ̂ = Θℬ(x)𝒟ς̂, where z_h∈ℝ^s_2 is an auxiliary variable and Γ_h∈ℝ^s_2 × n is designed such that Λ_h(x) = 𝒜-Γ_hΘℬ(x)𝒟. Consider the nonlinear system (<ref>). Under the designed HODO (<ref>), the estimation error Δ̃ will converge to zero exponentially if Γ_h can be chosen to make the error dynamics ς̇̃̇= (𝒜-Γ_hΘℬ(x)𝒟) ς̃, exponentially stable. From (<ref>), differentiate the estimation error ς̃. It can be implied that ς̇̃̇ = ς̇(t) -ς̇̂̇ = ς̇(t) - ż_h - Γ _hẋ = ς̇(t) - 𝒜ς̂ + Γ_h(f_x( x) + f_u( x)u+Θℬ(x)𝒟ς̂)- Γ _hẋ = ^(g) (𝒜-Γ_hΘℬ(x)𝒟) ς̃, where (g) is obtained by substituting (<ref>). It can be seen that the estimation error ς̃ will converge to zero exponentially if Γ_h can be chosen to make ς̇̃̇= (𝒜-Γ_hΘℬ(x)𝒟) ς̃ exponentially stable. Finally, the estimated coupled disturbance can converge to the truth value exponentially as ς̃→ 0 because of Δ̃ = Θℬ(x)𝒟ς̃. According to (<ref>), the design of Γ_h for HODO (<ref>) is equivalent to the design of the state observer gain for the disturbance system (<ref>). The existence of Γ_h depends on the observability of the linear time-varying system (<ref>). The methods of state observer design for linear time-varying systems have been developed in many previous works, such as the least-squares-based observer <cit.>, the extended linear observer <cit.>, and the block-input/block-output model-based observer <cit.>. The method proposed in <cit.> is employed here to decide Γ_h online, whose computational burden is mainly concentrated on the inverse of the observability matrix. § EVALUATIONS §.§ Learning Performance From Corollary <ref>, the coupled disturbance Δ( x, d) can be decomposed into an unknown parameter matrix Θ and two known functions ℬ(x) and ξ(t) with arbitrarily small residual error. Based on Corollary <ref>, a supervised learning strategy is synthesized in Section <ref> to learn the unknown parameter matrix Θ. In this part, the learning performance is exemplified and analyzed by three nonlinear functions as follows Δ( x,d) = sin(x)sin( d), d= t, Δ( x,d) = x - 1/12x^3 - 1/4d, d = t^2, Δ( x,d) = - 1/9sin( x)d, d = t^3. The surface diagrams of these functions are depicted in Figures. <ref>(A)-(C). §.§.§ Setup The learning dataset is constructed from x∈[-2, 2] and t ∈[0, 4]. 10000 samples are collected and scrambled, where 5000 for training and 5000 for testing. The hyperparameter δ used for training is set as 0.01. Moreover, the influences of measurement noise and the selection of p are also analyzed. The state x in the training dataset is corrupted by noise 𝒩( 0,σ_x^2). The learning performance under different σ_x^2 and p is tested. §.§.§ Results The learning errors of the proposed supervised one under different noise variance σ_x^2 and parameter p in MAE are presented in Fig. <ref>(D)-(F). Two phenomena can be observed. 
On the one hand, the learning performance degrades as the noise variance increases. On the other hand, proper p can achieve decent learning performance, since small and large p can lead to underfitting and overfitting problems, respectively. §.§ Estimation Performance In this part, the estimation performance of the proposed HODO is demonstrated. Considering a second-order Newton system perturbated by a coupled disturbance {[ η̇ = v, v̇ = a,; ma = u + Δ(v, d), ]. with position η∈ℝ, velocity v∈ℝ, acceleration a∈ℝ, mass m∈ℝ, control input u∈ℝ, and coupled disturbance Δ(v, d)∈ℝ. Note that the measured v is corrupted by noise 𝒩( 0,σ _v^2). The coupled disturbance used in the simulation is modeled as Δ (v,d(t)) = - v^2 + 50 - 10t - 0.5t^2. The truth value of Θ of (<ref>) can be derived, i.e., Θ = [49.75, 0, -0.5, -10, 0, 0, 0.25, 0, 0]. The objective is to design control input u so as to ensure that η tracks the desired state η_d. The baseline controller adopts the proportional-derivative (PD) control, and the estimated disturbance Δ̂ by the proposed HODO is compensated via feedforward. The controller is designed as u = K_ηe_η + K_v e_v - Δ̂, with positive definite gain matrices K_η∈ℝ and K_v∈ℝ, and tracking errors e_η = η_d-η and e_v = v_d-v. §.§.§ Setup The learning dataset is constructed from v∈[-10, 10] and t ∈[0, 100]. 10000 samples are collected. The hyperparameter δ used for training is set as 0.01. p in Theorem <ref> is chosen as 2. The desired tracking trajectory is set as η_d = sin(1/2t). The variance σ _v^2 of imposed noise is set as 0.1. The baseline controller gains K_η and K_v are tuned as 10 and 25, respectively. The traditional disturbance observer-based controller <cit.> and the baseline controller (without the compensation of estimated disturbance) are taken as comparisons. For the sake of fairness, the observer gains (in charge of the convergence speed) of the traditional disturbance observer <cit.> and the proposed HODO (<ref>) are set to be the same. Here, all eigenvalues of Λ_h(x) are set as 0.4. §.§.§ Results Denote Θ̂ as the learning result of Θ. By employing the proposed learning strategy designed in Section <ref>, the learning error Θ-Θ̂^2_2 is finally 1.3163 × 10^-6, which demonstrates the effectiveness of the proposed learning strategy in Section <ref>. Fig. <ref>(A) presents the tracking results.The tracking performance of the traditional disturbance observer and proposed HODO-enhanced controllers outperform the baseline one, as the result of the compensation effect. However, as the imposed disturbance denoted by the black dotted line in Fig. <ref>(B) increases, the tracking performance of the traditional disturbance observer in Fig. <ref>(A) becomes worse. Focusing on the traditional disturbance observer <cit.>, there is always an estimated lag from Fig. <ref>(B). Since the learned knowledge of the coupled disturbance is utilized, the proposed HODO (<ref>) can accurately capture the evolution of the coupled disturbance (<ref>). After the estimated disturbance of HODO is compensated, it can be seen from the yellow line in Fig. <ref>(A) that the tracking performance is dramatically improved. It is revealed that data-driven learning is instrumental for the downstream online estimation. § CONCLUSION In this article, we propose a data-driven disturbance observer for nonlinear systems perpetuated by a coupled disturbance. The considered coupled disturbance is difficult to model explicitly. 
Firstly, a variable separation principle is presented by leveraging the Chebyshev series expansion. A RLS-based learning strategy is subsequently developed to learn the separated unknown parameter matrix using historical data, which maintains a low computational complexity. Finally, HODO is developed by utilizing the learned structure, which can achieve zero-error estimation of the coupled disturbance. The learning and estimation performance of the proposed method is demonstrated by several simulation examples. Future works: Although arbitrarily small approximation accuracy can be obtained theoretically by employing the proposed variable separation principle, there still exists a small learning residual error with a small bound when applied to real systems. Future work will pursue the integration of robust control schemes to attenuate the learning residual error, like in <cit.>. Moreover, the closed-form solution (<ref>) enables online learning of the parameter matrix for the case with limited computing power. Two challenges impede the online implementation. One is the online calculation of sample Δ^𝔫 (a certain amount of delay is allowed), and the other is the online ergodic dataset construction which directly affects the learning performance. Future work will attempt to find preferable solutions. IEEEtran
http://arxiv.org/abs/2407.13423v1
20240718114743
Jerk-limited Traversal of One-dimensional Paths and its Application to Multi-dimensional Path Tracking
[ "Jonas C. Kiemel", "Torsten Kröger" ]
cs.RO
[ "cs.RO" ]
Jerk-limited Traversal of One-dimensional Paths and its Application to Multi-dimensional Path Tracking Jonas C. Kiemel and Torsten Kröger
§ ABSTRACT In this paper, we present an iterative method to quickly traverse multi-dimensional paths considering jerk constraints. As a first step, we analyze the traversal of each individual path dimension. We derive a range of feasible target accelerations for each intermediate waypoint of a one-dimensional path using a binary search algorithm. Computing a trajectory from waypoint to waypoint leads to the fastest progress on the path when selecting the highest feasible target acceleration. Similarly, it is possible to calculate a trajectory that leads to minimum progress along the path. This insight allows us to control the traversal of a one-dimensional path in such a way that a reference path length of a multi-dimensional path is approximately tracked over time. In order to improve the tracking accuracy, we propose an iterative scheme to adjust the temporal course of the selected reference path length. More precisely, the temporal region causing the largest position deviation is identified and updated at each iteration. In our evaluation, we thoroughly analyze the performance of our method using seven-dimensional reference paths with different path characteristics. We show that our method manages to quickly traverse the reference paths and compare the required traversing time and the resulting path accuracy with other state-of-the-art approaches.
§ INTRODUCTION One of the most common tasks in industrial robotics is to follow a given reference path. The problem of traversing a reference path in a time-optimal manner, while respecting the kinematic limits of the robot joints, is known as time-optimal path parameterization (TOPP). For velocity and acceleration constraints, the problem has been studied extensively, and open source implementations are readily available <cit.>. However, constraints on the derivative of the acceleration, commonly known as jerk, are often ignored. By considering jerk constraints, wear and tear on the mechanical components can be reduced, which not only increases the reliability of a robotic system but also contributes to its overall cost-effectiveness. While TOPP with jerk constraints is still an ongoing research topic, progress has been made in addressing a related problem. Specifically, it is possible to compute a time-optimal trajectory considering jerk constraints when provided with an initial kinematic state and a desired target kinematic state of a robot joint <cit.>. These kinematic states consist of a position p, velocity v and acceleration a. With this in mind, we analyze the problem of traversing a one-dimensional path subject to jerk constraints. As shown in Fig. <ref>, a one-dimensional path can be described by its initial position p_0, its final position p_III and intermediate positions (p_I and p_II) at which the direction of the path changes. The corresponding velocities v_0, v_I, v_II, v_III need to be zero.
The same applies to the initial acceleration a_0 and the final acceleration a_III. To fully specify kinematic target states for the intermediate points p_I and p_II, appropriate accelerations a_I and a_II need to be found. The main contributions of this paper can be * We show how a binary search can be used to compute a range of feasible target accelerations for each intermediate point of a one-dimensional path. Selecting the highest feasible acceleration of each range results in a time-optimized traversal of the reference path (see a_max_I and a_max_II in Fig. <ref>). * Based on these results, we compute trajectories leading to minimum and maximum progress along the one-dimensional path. * Using these trajectories, we propose an iterative scheme to quickly traverse a multi-dimensional path and benchmark the resulting traversing time and path accuracy with other state-of-the-art methods. § RELATED WORK Early research on time-optimal path parameterization (TOPP) for multi-dimensional paths dates back to the 1980s <cit.>. Nowadays, common TOPP algorithms are designed to handle both first-order and second-order constraints, ensuring that the joint velocity and joint acceleration stay within predefined limits. An exemplary implementation of such an algorithm based on reachability analysis is provided by the open-source library TOPP-RA <cit.>, which serves as a benchmark for our evaluation. In order to consider jerk constraints, a TOPP algorithms supporting third-order constraints is required. While several partial solutions based on numerical integration have been proposed <cit.>, an efficient algorithm for the general problem is still the subject of ongoing research. Singularities pose a major challenge in this context, as they make numerical integration difficult. TOPP with first-order constraints and second-order constraints can also be effectively handled by convex optimization <cit.>. However, the inclusion of third-order constraints leads to a non-convex optimization problem <cit.>. To address this issue, a convex approximation based on linear constraints is proposed in <cit.>. As an alternative, third-order constraints can be approximated by adding a penalty term to the objective function of the optimization problem <cit.>. While TOPP is typically performed offline, there are also online methods capable of considering jerk constraints. In <cit.> and <cit.>, a jerk-limited path tracking technique is proposed, however, without focusing on the traversing time. As mentioned in the introduction, it is also possible to compute a time-optimal trajectory from one kinematic state to another considering jerk constraints <cit.>. The Reflexxes motion library <cit.> can process a combination of a target position and a target velocity as a target state. A more recent development, the Ruckig library <cit.>, goes a step further by also supporting target accelerations. We use the open-source community version of Ruckig as a backend for our calculations. There is also a commercial closed-source version called Ruckig pro, which additionally supports intermediate waypoints. By densely sampling waypoints from a given path, Ruckig pro can be used to generate an approximate path parameterization. In our evaluation, we consider the results of this approximation as a benchmark. In <cit.>, it is shown that an upper and a lower trajectory can be computed for each joint such that position, velocity, acceleration and jerk constraints are not violated. 
This insight is used to construct a continuous action space for reinforcement learning in which each action leads to a feasible trajectory. Using this action space, it is possible to train a neural network to follow a path without violating jerk constraints <cit.>. As a reference, we also provide the results of this method in our evaluation section. § APPROACH §.§ Overview In a first step, we analyze the traversal of a one-dimensional path subject to the following constraints: v_min ≤ ṗ ≤ v_max, a_min ≤ p̈ ≤ a_max, j_min ≤ ⃛p ≤ j_max, where p, v, a and j stand for position, velocity, acceleration and jerk, respectively. As shown in Fig. <ref> and explained in (<ref>), a range of feasible target accelerations can be computed for each intermediate waypoint of a one-dimensional path. Given a feasible target acceleration, the Ruckig library <cit.> can be used to compute a time-optimal trajectory from one waypoint to another without violating the constraints (<ref>) - (<ref>). In section (<ref>), we explain how the traversal on the path can be controlled by repeatedly computing an upper and a lower trajectory. In section (<ref>), we finally introduce an iterative scheme to closely follow a multi-dimensional path. §.§ Computing feasible target accelerations In this section, we analyze feasible kinematic states for each waypoint p_0, p_I, p_II, p_III of a one-dimensional path shown in Fig. <ref>. While our exemplary path has two intermediate waypoints p_I and p_II, the same principle can be applied to arbitrary one-dimensional paths. It is evident that the velocity and acceleration of the first waypoint p_0 and the last waypoint p_III must be zero. The intermediate waypoints p_I and p_II are local extrema of the path. Consequently, their corresponding velocities also need to be zero. In order to compute feasible target accelerations for the intermediate waypoints, we first consider the movement from p_0 to p_I. Since p_I is a local minimum of the path, the target acceleration must be greater than or equal to zero. Starting from stillstand, a target acceleration of zero is always possible as it results in a normal point-to-point motion. The range of potential target accelerations a_out_I is continuous. For that reason, all potential target accelerations are known, once the maximum target acceleration a_out_max_I is found. However, it could happen that a_out_max_I is greater than the maximum input acceleration of the next section a_in_max_II. Consequently, the maximum target acceleration a_max_I is calculated as follows: a_max_I = min(a_out_max_I, a_in_max_II) Both, a_out_max_I and a_in_max_II are determined by a binary search. In Fig. <ref>, the principle is illustrated for the maximum input acceleration a_in_max. In a first step (a), a_in_max is assumed to be a_max. Using Ruckig, a trajectory from the current waypoint p_current to the next waypoint p_next is computed, selecting both the target velocity and the target acceleration to be zero. If the resulting trajectory is valid, the maximum input acceleration is a_max. If not, another test (b) is performed, assuming a_in_max = 0.5 · a_max. A trajectory is considered as valid if: * The kinematic limits (<ref>) - (<ref>) are not violated. * The velocity is never negative if p_next > p_current or never positive if p_next < p_current. In case of (a), the position overshoots the target. Consequently, the trajectory is considered as invalid as the velocity must become negative to compensate for the overshoot. 
In step (b) and step (c), the resulting trajectory is considered as valid. As a result, a_in_max must be greater than or equal to 0.75 · a_max but less than a_max. The binary search is continued until a_in_max is approximated sufficiently well. In order to compute a_in_max_II for the intermediate section II, we assume a target acceleration of zero. To compute a_out_max_II, we assume an input acceleration of zero. As shown in Fig. <ref>, it is nevertheless possible to compute a valid trajectory choosing an input acceleration of a_in_max_II and a target acceleration of a_out_max_II. In fact, any combination of an input acceleration α· a_in_max and a target acceleration β· a_out_max leads to a valid trajectory, with α, β∈ [0.0, 1.0]. The higher the value of α or β, the faster the traversal. The smallest traversal time t_min results from selecting . Thus, a time-optimized traversal of the path shown in Fig. <ref> and Fig. <ref> is achieved by selecting a_I =a_max_I and a_II =a_max_II. As highlighted by a circle in Fig. <ref>, the acceleration of the fastest trajectory, shown in pink, slightly goes up before reaching a_out_max. Thus, a slightly higher target acceleration could be selected, leading to a slightly smaller traversal time. However, this target acceleration could no longer be reached from an input acceleration of zero. In practice, we found the loss of time caused by including zero as a valid input acceleration and a valid target acceleration to be small. §.§ Controlling the traversal of a one-dimensional path In Fig. <ref>, a path with one intermediate waypoint p_I is traversed. As explained before, a time-optimized path traversal is composed of the following two parts: A trajectory from (p_0, 0, 0) to (p_I, 0, a_max_I), followed by a trajectory from to (p_II, 0, 0). We call the resulting trajectory upper trajectory. In contrast, the smallest feasible progress on the path is attained through a so-called lower trajectory. At the beginning of a section, the lower trajectory corresponds to a simple braking trajectory. The braking trajectory can be calculated with Ruckig by specifying a target velocity and a target acceleration of zero. However, if the resulting braking trajectory does not stay on the desired path, the calculation is adjusted. More precisely, when traversing section I, a target state (p_I, 0, a_min_I) is selected. The acceleration a_min_I is less than or equal to a_max_I and is computed in a similiar way using a binary search. As a next step, we discretize the time t into small time steps with a time distance of Δ t and compute the kinematic states (p_lower_Δ t, v_lower_Δ t, a_lower_Δ t) and (p_upper_Δ t, v_upper_Δ t, a_upper_Δ t) at the following time step. As shown in Fig. <ref>, a position and its corresponding section can be uniquely mapped to a path length s. We map p_lower_Δ t and p_upper_Δ t to s_lower_Δ t and s_upper_Δ t and define: s_desired_Δ t = s_lower_Δ t + m · (s_upper_Δ t - s_lower_Δ t), with m being a mapping factor ∈ [0.0, 1.0]. For small time distances Δ t, an intermediate trajectory leading to a path length close to s_desired_Δ t can be found. Consequently, the mapping factor allows us to control the traversal of the path. After each time step, the lower and the upper trajectory are recomputed. As shown in Fig. <ref>, both trajectories are almost identical prior to a section change. Thus, the traversal can hardly be influenced during this phase, which motivates the following iterative scheme. 
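A minimal sketch of the binary search for the maximum input acceleration described above is given below, assuming the open-source community version of the Ruckig Python bindings and their offline calculate/at_time interface. The kinematic limits, tolerances and helper names are illustrative, and the validity test only checks the sign of the velocity, since the kinematic limits (<ref>) - (<ref>) are enforced by Ruckig itself.

```python
import numpy as np
from ruckig import Ruckig, InputParameter, Trajectory, Result

V_MAX, A_MAX, J_MAX = 1.5, 4.0, 50.0          # illustrative per-joint limits

def plan_segment(p0, v0, a0, p1, v1, a1):
    """Time-optimal 1-DoF segment from (p0, v0, a0) to (p1, v1, a1); None if rejected."""
    inp = InputParameter(1)
    inp.current_position, inp.current_velocity, inp.current_acceleration = [p0], [v0], [a0]
    inp.target_position, inp.target_velocity, inp.target_acceleration = [p1], [v1], [a1]
    inp.max_velocity, inp.max_acceleration, inp.max_jerk = [V_MAX], [A_MAX], [J_MAX]
    traj = Trajectory(1)
    result = Ruckig(1).calculate(inp, traj)
    return traj if result in (Result.Working, Result.Finished) else None

def stays_on_path(traj, direction, dt=1e-3):
    """Validity criterion 2: the velocity must never oppose the direction of travel."""
    for t in np.arange(0.0, traj.duration, dt):
        _, velocity, _ = traj.at_time(t)
        if direction * velocity[0] < -1e-6:
            return False
    return True

def is_valid(p_current, p_next, a_in, direction):
    traj = plan_segment(p_current, 0.0, a_in, p_next, 0.0, 0.0)
    return traj is not None and stays_on_path(traj, direction)

def max_input_acceleration(p_current, p_next, tol=1e-3):
    """Binary search (Fig. <ref>) for the largest input acceleration from which p_next
    is still reachable with zero target velocity and zero target acceleration."""
    direction = 1.0 if p_next >= p_current else -1.0
    if is_valid(p_current, p_next, direction * A_MAX, direction):   # step (a)
        return direction * A_MAX
    lo, hi = 0.0, A_MAX
    while hi - lo > tol:
        a_in = 0.5 * (lo + hi)                                      # steps (b), (c), ...
        if is_valid(p_current, p_next, direction * a_in, direction):
            lo = a_in          # valid -> a_in_max is at least a_in
        else:
            hi = a_in          # overshoot -> shrink the search interval
    return direction * lo

print(max_input_acceleration(0.0, 0.5))
```

An analogous search with a zero input acceleration and a variable target acceleration would yield a_out_max for the preceding section, in line with the assumptions stated above for a_in_max_II and a_out_max_II.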
§.§ An iterative scheme to track multi-dimensional paths In this section, we present an iterative scheme to select the mapping factors such that a multi-dimensional path is approximately tracked. Fig. <ref> visualizes the first two iterations of the method for a two-dimensional path. As an initial step, we compute the time-optimized traversal of each individual path dimension and identify the slowest dimension. For the slowest dimension, we determine the path length s(t) over time corresponding to the time-optimized traversal. The path length s(t) of an individual dimension can be uniquely mapped to a path length u(t) of the multi-dimensional path. The corresponding u(t) of the slowest dimension serves as a reference u_ref_1 for the first iteration of our method. The basic idea of the next step is to track this reference with the other path dimensions by selecting suitable mapping factors. To do so, we define a small range around u_ref where the other dimensions should stay in. The deviation from the reference u_ref is denoted as Δ u_ref. Likewise, the corresponding position deviation is denoted as Δ p_ref. Looking at Δ u_ref_1 in Fig. <ref>, it can be seen that the mapping factors m_1 are selected such that dimension 2 oscillates within the desired Δ u_ref range. However, if it is not possible to keep the dimension within the desired range, the mapping factors are selected such that the lower bound of the range is undershot. The corresponding areas and their resulting position deviation are hatched in red. We now look for the area outside the desired Δ u_ref range that causes the largest integrated position deviation of all dimensions. As indicated by a dashed black line in Fig. <ref>, we update the reference u_ref_1 in the selected area such that dimension 2 stays within the desired Δ u_ref range in the following iteration. As a consequence, the resulting position deviation after the second iteration Δ p_ref_2 is significantly reduced compared to Δ p_ref_1. By repeating this iterative procedure, the tracking accuracy can be further increased. We note that the updated reference can decrease the tracking accuracy of other path dimensions. In practice, we repeat the procedure for a fixed number of iterations and choose the iteration with the best overall tracking performance as the final path parameterization. § EVALUATION For our evaluation, we use a KUKA iiwa robot with seven degrees of freedom. The selected kinematic limits for each joint are shown in TABLE <ref>. If not noted otherwise, the jerk limits from the table apply. However, for a more thorough evaluation, we additionally perform experiments with different jerk limits. In these cases, we report a so-called jerk limit factor that is multiplied with the jerk limits given in TABLE <ref>. §.§ Time-optimized traversal of one-dimensional paths As a first step, we evaluate the traversal of one-dimensional paths. For each of the seven robot joints, we generate 200 one-dimensional paths with randomly selected waypoints. To minimize the trajectory duration, we select the maximum acceleration for each intermediate waypoint as derived in section (<ref>). As a benchmark, we report the results of the closed-source library Ruckig pro. TABLE <ref> shows the resulting average trajectory duration and tracking accuracy for several jerk limit factors. As expected, the trajectory duration decreases if the jerk limits are increased. However, the relative impact on the trajectory duration decreases for higher jerk limit factors. 
Compared to Ruckig pro, the results of our method are almost identical. §.§ Tracking of geometric shapes As shown in Fig. <ref>, we apply our iterative tracking method to seven-dimensional paths that resemble geometric shapes in Cartesian space. It can be seen that the resulting Cartesian paths shown in red stay close to the reference paths shown in green. For further visualization, we refer to our accompanying video. A quantitative evaluation can be found in TABLE <ref>. Using Δ t = 2.5 as the time step, 20 iterations were performed as described in section <ref>. Finally, the path with the lowest average deviation from the reference path was selected. The path deviation is computed by densely sampling waypoints along the reference path and the generated path. Next, waypoints sampled at the same path length u are compared by calculating the Euclidean distance between them. TABLE <ref> shows the mean and the maximum distance calculated in this way. We also provide results obtained with TOPP-RA <cit.> and Ruckig pro. With TOPP-RA, the path tracking is very accurate but the jerk limits are ignored. Ruckig pro supports jerk limits but is designed to accurately track intermediate waypoints rather than a full path. As a reference, we provide the results for an equidistant sampling of waypoints from the reference path using a distance of 0.1 rad (A) and 0.2 rad (B). Overall, our method managed to generate fast trajectories for all of the geometric shapes shown in Fig. <ref>. §.§ Tracking of paths with different characteristics In the following, we evaluate our method using three datasets with different path characteristics. Exemplary paths for each dataset are shown in Fig. <ref>. Dataset A contains a wide range of semicircles and straight lines. The paths in dataset B are generated by selecting random joint accelerations. Dataset C is composed of paths obtained by moving between randomly sampled Cartesian target points. In TABLE <ref>, we report the results for each dataset and compare them with <cit.>, a method that uses neural networks trained via reinforcement learning to track the reference paths. The indicated trajectory duration of the slowest individual path dimension serves as a lower limit for the resulting trajectory duration when considering the multi-dimensional path. It can be seen that the presented method generates faster trajectories than <cit.> for all datasets. In addition, the tracking accuracy for dataset A and dataset C is higher. In TABLE <ref>, we also specify the average iteration chosen for our evaluation. TABLE <ref> shows additional results obtained with TOPP-RA and Ruckig pro. For dataset A, TOPP-RA generates the fastest trajectories with the highest path accuracy, however, without considering jerk limits. Using Ruckig pro with a waypoint distance of 0.1 rad leads to slower trajectories, but a higher tracking accuracy compared to our method. In contrast, selecting a waypoint distance of 0.15 rad or 0.2 rad leads to faster trajectories but a less accurate path tracking. For dataset B and dataset C, the resulting trajectories with TOPP-RA are slower than with our method but the tracking accuracy is higher. In TABLE <ref>, we finally analyze the impact of the selected jerk limits on the resulting trajectory duration and path deviation. As expected, higher jerk limits lead to a faster traversal of the reference paths. Moreover, the accuracy of the tracking improves. Thus, we conclude that higher jerk limits simplify the tracking of a desired reference path length u_ref.
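The path-deviation metric described above (dense resampling, comparison at equal path length u) can be sketched as follows; this is our own numpy illustration, with the resampling resolution chosen arbitrarily.

```python
import numpy as np

def path_deviation(reference, generated, num_samples=2000):
    """Mean/max Euclidean deviation between two joint-space paths.

    reference, generated: arrays of shape (N, dofs) with densely sampled
    waypoints. Both are compared at equal normalized path length u in [0, 1].
    """
    def resample(path):
        seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
        u = np.concatenate(([0.0], np.cumsum(seg))) / max(seg.sum(), 1e-12)
        u_eval = np.linspace(0.0, 1.0, num_samples)
        return np.column_stack([np.interp(u_eval, u, path[:, d])
                                for d in range(path.shape[1])])

    ref, gen = resample(np.asarray(reference)), resample(np.asarray(generated))
    dist = np.linalg.norm(ref - gen, axis=1)
    return dist.mean(), dist.max()
```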
§ CONCLUSION AND FUTURE WORK We presented a method to approximately track multi-dimensional paths considering jerk limits. As a first step, we analyzed each path dimension individually. For every intermediate waypoint of a one-dimensional path, we computed a range of feasible accelerations using a binary search algorithm. We then computed a lower and an upper trajectory to achieve minimum and maximum progress on the one-dimensional path, respectively. This allowed us to control the traversal on the path in such a way that a selected reference path length of a multi-dimensional path could be approximately tracked over time. We then applied an iterative scheme, where the reference path length was adjusted at each iteration such that the largest occurring position deviation diminished. Our evaluation on geometric shapes and datasets with different characteristics showed that our method succeeded in quickly traversing the reference paths while keeping the path deviation low. In future work, we would like to further improve our method, e.g., by searching for continuous points in time to switch between the upper and lower trajectory.
http://arxiv.org/abs/2407.13187v1
20240718055123
Growing tissue sheets on substrates: buds, buckles, and pores
[ "Hiroshi Noguchi", "Jens Elgeti" ]
physics.bio-ph
[ "physics.bio-ph", "cond-mat.soft" ]
noguchi@issp.u-tokyo.ac.jp Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan Theoretical Physics of Living Matter, Institute of Biological Information Processing and Institute for Advanced Simulation, Forschungszentrum Jülich, 52425 Jülich, Germany § ABSTRACT Many tissues take the form of thin sheets, being only a single cell thick, but millions of cells wide. These tissue sheets can bend and buckle in the third dimension. In this work, we investigated the growth of suspended and supported tissue sheets using particle-based simulations. We combine particle-based tissue growth and meshless membrane models to simulate the growth of tissue sheets with mechanical feedback. Freely suspended growing tissues exhibit wrinkling when growth is sufficiently fast. Conversely, tissues on a substrate form buds when the adhesion to the substrate is weak and/or when the friction with the substrate is strong. These buds undergo a membrane-mediated attraction and subsequently fuse. The complete detachment of tissues from the substrate and straight buckled bump formation are also obtained at very weak adhesion and/or fast growth rates. Tissue pores grow via Ostwald ripening and coalescence. The reported dynamics can also be applied in research on the detachment dynamics of different tissues with weakened adhesion. § INTRODUCTION Epithelia are thin tissues that serve as barriers around organs and to the outside world. While the most famous epithelium, the skin, is several cell layers thick, most epithelia are actually single-layered or just a few cells thick. All epithelia are much more extended in the lateral than in the perpendicular directions, motivating a two-dimensional (2D) model. However, a 2D model hides important tissue deformations in the third dimension. For example, the intestinal epithelium that lines the gut exhibits characteristic finger-like protrusions (called villi) and invaginations (called crypts) <cit.>. A second prominent example of quasi-two-dimensional tissue growth leading to a three-dimensional (3D) structure occurs in the brain cortex: the differential growth hypothesis states that it is the lateral growth of the gray matter that leads to buckling and thus to the folding of the brain <cit.>. The theoretical modeling of quasi-2D sheets growing and embedded in 3D space has proven to be challenging. Analytical modeling is usually limited to small deformations, thus one has to resort to numerical simulations. Simulations of continuum models (e.g., finite element method) are useful for investigating the mechanical responses of tissues. However, the discrete nature of cells has important consequences, particularly in the fluctuations, evolution, or the probability of jackpot events <cit.>, since cell division plays an essential role. Therefore, three types of cell-based models have been developed for tissue mechanics involving cell deformation and proliferation: triangulated cell models <cit.>, cell vertex models <cit.>, and two-particle cell models <cit.>. Triangulated cell models can describe detailed cell shapes and internal structures but are not suitable for large-scale simulations due to their high numerical costs. Cell vertex models describe a cell as a polygon <cit.> or polyhedron <cit.> and are widely used for large-scale tissue growth and deformation.
Overdamped Langevin equations are typically used for cell motions so that the hydrodynamic interactions are neglected. In cell-centered models <cit.> (variants of the cell vertex models), the cell-center positions are used as the degrees of freedom, and the cell vertices are generated by the Voronoi tessellation; the same potential energy is used as in the vertex model. In this study, we use the third type of cell models, the two-particle cell models <cit.>. This type of model was developed to explore the role of mechanical feedback on growing matter. The basic idea is that cells can grow (i.e., expand in volume) under a finite force and fluctuations, and the cells divide only when a critical size is reached <cit.>. Owing to their simplicity, these models have been applied to various phenomena, such as fluidization of tissues <cit.>, growth response of cancer spheroids to pressure <cit.>, and competition and evolution of tissues <cit.>. Thanks to the particle-based nature, it was easily extended to include cellular motility <cit.>, stem cell dynamics <cit.>, or even to match bacterial growth <cit.>. In this work, we combine the two-particle growth model with a particle-based meshless membrane model <cit.> to create a meshless, particle-based model to simulate tissue sheets. The meshless membrane models were originally developed to simulate membrane dynamics accompanied by topological changes <cit.>, and were applied to the vesicle formation <cit.>, the membrane tubulation induced by curvature-inducing proteins <cit.>, the membrane detachment from a substrate <cit.>, and the assembly by inter-membrane entropic repulsion <cit.>. The aim of this study is to simulate the tissue dynamics on an adhesive substrate and characterize the growth of quasi-2D tissue sheets in 3D space. We use a solid flat substrate to model a gel sheet in cultured-tissue experiments or hard tissue in a living body. In the skin, blisters are formed by various diseases (e.g., varicella <cit.>) and wounding (e.g., burns and rubbing <cit.>). We investigate the essential detachment mechanism using simple geometry and conditions. We focus our analyses on how the stress in tissue proliferation induces local or entire detachment and how detachment develops. Previously, Okuda et al. <cit.> simulated the formation of multiple cylindrical buds from growing spots in a spherical tissue; however, they did not observe bud fusion. Here, we demonstrate that buds fuse through an attractive interaction generated by tissue bending. The simulation model and method are described in Sec. <ref>. The dynamics of freely suspended tissue sheets are presented in Sec. <ref>. The dynamics of growing tissues on a substrate with constant and height-dependent division are presented and discussed in Secs. <ref> and <ref>, respectively. The pore formation in flat tissues is investigated in Sec. <ref>. Finally, the conclusions are presented in Sec. <ref>. § MODEL AND METHOD The cell growth mechanism is the same as that of the original 3D model <cit.>. Each cell consists of two particles and grows by a repulsive force of magnitude f_div between the two particles; for the k-th cell, the two particles push each other as ± f_div𝐫̂_cl,k, where 𝐫̂_cl,k= 𝐫_cl,k/r_cl,k, r_cl,k=|𝐫_cl,k|, 𝐫_cl,k = 𝐫_i-𝐫_j, and 𝐫_i and 𝐫_j are the positions of the two particles in the cell.
When the distance r_cl,k exceeds the threshold value r_div, the k-th cell exhibits division into two daughter cells; the centers of the two daughter cells are located at the original positions of the particles of the mother cell. The two particles in each daughter cell are placed at a distance of 0.001r_div from the cell center in a random direction [see Fig. <ref>(a)]. The two particles of each daughter cell have the same velocity as the divided particle in the mother cell. Cells are eliminated at the rate k_a as cell death by either apoptosis or necrosis. We set the thermal energy k_BT=ε_0=1, the division length r_div=1, and particle mass m=1, so that τ_0=r_div(m/k_BT)^1/2=1 is the time unit. Here, we consider the effective temperature, including non-thermal distortion in tissues; hence, T is higher than room temperature. For cell–cell volume exclusion, we use the short-range repulsion for r_ij<r_cr as in Ref. <cit.>: U_rep = ε_rep∑_i,j[(r_cr/r_ij)^4 + 4r_ij/r_cr - 5]Θ(r_cr-r_ij), where Θ(r) denotes the unit step function. In this study, ε_rep=1 and r_cr=1.2 are used. To construct a one-layer tissue sheet, we add attractive and curvature potentials (U_att and U_α) developed for a meshless membrane model <cit.>. The potential U_att is a function of the local particle density ρ_i as U_att = ε_att∑_i 1/4 ln[1+exp{-4(ρ_i-ρ^*)}]- C, ρ_i = ∑_j≠ i exp[A(1+1/{(r_ij/r_ca)^12 -1})]Θ(r_ca-r_ij), where C= (1/4)ln{1+exp(4ρ^*)} and A=ln(2) {(r_ca/r_half)^12-1}. The summations in Eq. (<ref>) and in the curvature potential U_α are taken over all other particles, including the other particle in the same cell. Here, ρ_i denotes the number of particles in a sphere whose radius is approximately r_half. This multibody potential acts as a pair potential U_att≃ -ρ_i with a cutoff at ρ_i≃ρ^*, which can stabilize the fluid phase of a 2D assembly over a wide range of parameter sets <cit.>. The curvature potential is given by U_α = ε_α∑_i α_pl( r_i), α_pl = 9λ_1λ_2λ_3/[(λ_1+λ_2+λ_3) (λ_1λ_2+λ_2λ_3+λ_3λ_1)], where λ_1, λ_2, and λ_3 are the eigenvalues of the weighted gyration tensor, a_αβ= ∑_j (α_j-α_ Gw) (β_j-β_ Gw)w_mls(r_ij), where α, β=x,y,z and r_ Gw=∑_j r_jw_mls(r_ij)/∑_j w_mls(r_ij). The shape parameter aplanarity α_pl represents the degree of deviation from a plane and is proportional to λ_1 for λ_1 ≪λ_2, λ_3 <cit.>. A Gaussian function with a C^∞ cutoff <cit.> is employed as the weight function w_mls(r_ij)= exp[(r_ij/r_ga)^2/{(r_ij/r_cg)^12 -1}]Θ(r_cg-r_ij). In this study, ε_att=2, ρ^*=15, r_ca=2.1, r_half=1.8, ε_α=10, and r_ga=r_cg/2 = 3 are used. To represent the adhesion of the tissue to a flat substrate, we use the Lennard-Jones (LJ) potential, U_LJ=∑_i 4ε_wall[(r_div/z_i)^12- (r_div/z_i)^6], for the particles in all cells [see Fig. <ref>(b)]. The cell particles are moved by Newton's equation with Langevin and dissipative particle dynamics (DPD) thermostats: m d 𝐯_i/dt = - ∂ U/∂𝐫_i -w_wall(z_i)𝐯_i + w_wall(z_i)^1/2ξ_i(t) + ∑_j≠i{-w_tiss(r_ij)𝐯_ij·𝐫̂_ij + w_tiss(r_ij)^1/2ξ_ij(t)}𝐫̂_ij. The second and third terms are the friction and noise from the substrate, respectively, which act near the substrate surface. Therefore, w_wall(z_i) = ζ_wall(R_LT - z_i)Θ(R_LT - z_i) is used. The fourth and fifth terms are the DPD thermostat (a Langevin thermostat applied to neighboring particle pairs that conserves the translational and angular momenta of the pair) <cit.> with w_tiss(r_ij)=ζ_DPD(R_DPD - r_ij)Θ(R_DPD - r_ij). These thermostats obey the fluctuation-dissipation theorem.
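To illustrate the growth, division, and death rules of the two-particle cell model in code form, the following numpy sketch performs one bookkeeping step; the array layout, function names, and the interpretation that the daughter-cell particles are separated by 0.001 r_div are our own choices, and the potentials and thermostats of the equation of motion above are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
R_DIV, F_DIV, K_A, DT_DIV = 1.0, 5.0, 0.01, 0.04   # illustrative parameters

def growth_forces(pos_a, pos_b):
    """Repulsive growth force +/- f_div along the intra-cell axis (arrays (N, 3))."""
    r_cl = pos_b - pos_a
    d = np.linalg.norm(r_cl, axis=1, keepdims=True)
    unit = r_cl / np.maximum(d, 1e-12)
    return -F_DIV * unit, F_DIV * unit               # forces on particles a and b

def divide_and_kill(pos_a, pos_b, vel_a, vel_b):
    """Division when the intra-cell distance exceeds r_div; death at rate k_a."""
    d = np.linalg.norm(pos_b - pos_a, axis=1)
    keep = rng.random(len(d)) > K_A * DT_DIV         # survivors of random death
    split = keep & (d > R_DIV)
    stay = keep & ~split
    new_a, new_b = [pos_a[stay]], [pos_b[stay]]
    new_va, new_vb = [vel_a[stay]], [vel_b[stay]]
    for parent_pos, parent_vel in ((pos_a[split], vel_a[split]),
                                   (pos_b[split], vel_b[split])):
        # each particle of the mother becomes the center of a daughter cell,
        # whose two particles are separated by 0.001 r_div in a random direction
        offset = rng.normal(size=parent_pos.shape)
        offset *= 0.0005 * R_DIV / np.linalg.norm(offset, axis=1, keepdims=True)
        new_a.append(parent_pos - offset); new_b.append(parent_pos + offset)
        new_va.append(parent_vel);         new_vb.append(parent_vel)
    return (np.concatenate(new_a), np.concatenate(new_b),
            np.concatenate(new_va), np.concatenate(new_vb))
```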
In this study, R_LT=1.5, R_DPD=2.1, and ζ_DPD=1 are fixed, and ζ_wall is varied to investigate the friction effects. Note that the cell division and death do not conserve the translational and angular momenta, although the DPD thermostat conserves them. The cell motion is numerically solved by Shardlow's S1 splitting algorithm <cit.> with Δ t_MD=0.002, and the cell division and death are performed every Δ t_div=0.04. To generate the initial states and calculate the thermal properties, tissues are equilibrated in the absence of cell division and death with an additional potential U_init = ∑_kexp[20(r_cl,k-r_div)], to keep a distance of less than r_div. The tissue sheet is in the fluid phase and has a bending rigidity κ≃ 50ε_0, edge line tension Γ≃ 13ε_0/r_div, and membrane viscosity η = 8ε_0 τ_0 r_div^-2 (calculation details are given in <ref>). The simulation units can be mapped to real scale using these quantities. The bending rigidity and membrane viscosity of tissues depend on inter-cell adhesion energy and tissue thickness. The bending rigidity is estimated as κ∼ 10^-15 – 10^-11J <cit.>. The 3D viscosity of the aggregate of mouse embryonic cells is measured as 4× 10^5Pa·s <cit.>. When we consider a tissue sheet with a thickness of 10 μm, κ∼ 5× 10^-14J, and η∼ 4Pa·s·m, the simulation units correspond to r_div∼ 10 μm, ε_0 ∼ 10^-15J, and τ_0 ∼ 10h on a real scale. Statistical errors are calculated from three (or more) and ten independent runs for steady states and time evolutions, respectively. § FREELY SUSPENDED TISSUE SHEET Before considering the interaction with the substrate, we investigate the dynamics of freely suspended tissues in the absence of a substrate. The tissue is initially in a tensionless state equilibrated in the absence of cell division and death with U_init under the periodic boundary conditions at L_x=L_y=200 and N_cell=33776 [see Fig. <ref>(a)]. As the cells start dividing without death (k_a=0), the tissue forms wrinkles [see Fig. <ref>(b) and Movie S1]. Neighboring small wrinkles fuse, but large Ω-shaped wrinkles do not, since the contact of the foot regions is prevented by the convex regions. As the death rate k_a increases, the growth becomes slower, and the number of wrinkles decreases through more frequent fusion, since the tissues can relax into a shape of lower bending energy. Eventually, a single buckle is formed (Fig. <ref>(c)), since it has the lowest bending energy (see <ref>). With a further increase in k_a, the tissue shrinks and disappears by forming a pore, which subsequently grows [Figs. <ref>(d) and (e)]. At the threshold condition k_a=0.016, the tissue forms a pore but subsequently grows and closes the pore. This initial slow growth is due to the initial relaxation of r_cl,k by the removal of U_init. These dynamics contrast with previous simulations by the cell vertex <cit.> and cell-centered models <cit.>, which obtained straight or winding bumps instead of buds. We speculate that the difference comes from the suppression of the vertical growth in their simulations owing to the harmonic constraint potential for vertical cell positions. Here, we use a DPD thermostat, such that the viscous tissue is placed in a vapor of negligible viscosity. Therefore, the tissue can move more easily in the vertical than in the lateral direction. When we use the Langevin thermostat instead, the tissue can move equally in both directions, so that more wrinkles are formed owing to less frequent wrinkle fusion.
The viscosity ratio exerts similar effects in lipid membranes <cit.>. § GROWING TISSUE ON SUBSTRATE WITH CONSTANT DIVISION FORCE We investigate the tissue growth on a substrate with constant f_div; the cell environment, such as the nutrient concentration, is assumed to be uniform, including places far from the substrate. The initial state is a disk-shaped tissue equilibrated in the absence of cell division and death with U_init at N_cell=1600. The tissues spread on the substrate and subsequently form buds [see Fig. <ref>(a) and Movie S2]. The number of cells N_cell increases exponentially. As k_a increases, the growth speed becomes lower, and budding begins at larger tissue sizes [see Figs. <ref>(b) and (c)]. For quantification, we consider cells at z_cc,k≥ 2 to be detached from the substrate, where z_cc,k = (z_i + z_j)/2 is the height of the cell center, with z_i and z_j the heights of the two particles in the k-th cell. Figure <ref> shows the dynamic phase diagrams for ε_wall vs. f_div at four ζ_wall values. Buds are formed under weaker adhesion (low ε_wall) and/or faster growth (high f_div) with stronger substrate friction ζ_wall. We consider that a tissue forms buds when the ratio of detached cells is more than 0.1 at N_cell=100 000 for ε_wall≥ 0.1 and 0.25 for ε_wall=0.05. For a weak adhesion of ε_wall=0.05, 10% of cells are occasionally detached under thermal fluctuations; eventually, the growing tissues can be entirely detached from the substrate. In the absence of substrate friction (ζ_wall=0), budding does not occur except for the weakest adhesion, ε_wall=0.05 [see Fig. <ref>(d)]. This suggests that friction plays an essential role in the budding. To clarify this, we calculate the local density ρ_xy projected on the xy plane in tissues spreading on the substrate without budding, as shown in Fig. <ref>. In the central region of the tissue, ρ_xy increases with tissue growth at ζ_wall=2, whereas it remains constant at ζ_wall=0. Hence, the cells can move sufficiently to maintain their preferred density at ζ_wall=0 during the exponential growth [see the red line in Fig. <ref>(a)]. Conversely, friction slows relaxation, resulting in high density and high stress in the central region, since the friction increases as the growth speed increases with time (the velocity of the tissue radius dR/dt∝exp(at/2) for N_cell=π R^2ρ_xy=N_0exp(at) with constant ρ_xy). Budding occurs when this high stress overcomes substrate adhesion. Note that a similar retardation in growth speed with increasing friction was reported in an overdamped Langevin simulation using a 2D vertex model <cit.>. Therefore, this mechanism is likely generic for tissue growth. § GROWING TISSUE ON SUBSTRATE WITH HEIGHT-DEPENDENT DIVISION FORCE In cultured-tissue experiments, nutrients are typically supplied from a gel substrate, and in epithelia, growth factors come from the stroma below. Hence, tissues may not be able to grow at a place far from the substrate because of a lack of nutrients. The apoptosis rate also depends on environmental conditions. In anoikis, the cell detachment from the extracellular matrix induces apoptosis <cit.>. Under the former and latter conditions, the division force f_div and death rate k_a are functions of the cell height, respectively. For simplicity, we set f_div as a function of the height z_cc,k of the cell center while keeping k_a constant: f_div = f_da/{1 + exp[2(z_cc,k -3)]}. The force strength decreases from f_div≃ f_da to zero around z_cc,k∼ 3 [see Fig. <ref>(c)].
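A short helper evaluating this height-dependent division force is given below; the parameter names are ours, and the printed values simply illustrate that the force is roughly f_da near the substrate, f_da/2 at z_cc,k = 3, and nearly zero a few r_div higher.

```python
import numpy as np

def division_force(z_cc, f_da=5.0, z_half=3.0, steepness=2.0):
    """Height-dependent growth force f_div = f_da / (1 + exp[2 (z - 3)])."""
    return f_da / (1.0 + np.exp(steepness * (z_cc - z_half)))

print(division_force(np.array([0.0, 3.0, 6.0])))   # ~[4.99, 2.5, 0.012]
```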
We use f_da= 4 to 5, k_a=0.006 to 0.012, and ζ_wall=2 at L_x=L_y=200. The tissue grows on the substrate but shrinks at z_cc,k≳ 3. Figure <ref> shows examples of tissue dynamics during steady bud evolution. Tubular bud elongation is arrested at a certain height, neighboring buds move closer and subsequently fuse into one bud, and new buds are formed in an open space [see Fig. <ref>(a) and Movie S3]. Bud formation and fusion occur repeatedly, so that the cell density ρ_xy, the ratio of detached cells N_deta/N_cell, and the number of buds N_bud fluctuate around certain values [see Figs. <ref>(b)–(d)]. The number of buds is counted as the number of cell clusters at 5<z_cc,k <8 (foot of the red regions in snapshots) to exclude cells pinched off from the tissue. Bud fusion is caused by the membrane-mediated attraction, which reduces the bending energy of the adhered tissue (cyan and green regions in snapshots). A similar membrane-mediated attraction has been observed in tubules generated by curvature-inducing proteins <cit.>. The dynamic phase diagrams are shown in Figs. <ref>(g) and (h). Similar to the case of constant f_div, the mean number of buds decreases with decreasing f_da and increasing k_a, and eventually, no buds are formed from the flat tissue in the region of upward-pointing triangles in Figs. <ref>(f)–(h). However, when a budded state is set as the initial conformation, a single bud remains in a steady state after bud fusion [blue-triangle points in Fig. <ref>(e)]. Hence, two types of final states are formed depending on the initial states. At a low death rate of k_a=0.006, vesicle division often occurs in long tubular buds via tube pinch-off (like droplet formation by the Plateau-Rayleigh instability <cit.>) and tube branching during bud fusion [see Fig. <ref>(d)]. For the weak adhesion of ε_wall=0.2, entire tissues are often detached from the substrate [see Fig. <ref>(a) and Movie S4]. In particular, the entire detachment occurs more frequently when a flat tissue [as in Fig. <ref>(f)] is used as the initial state. In addition, a straight buckled bump [like the buckling of the equilibrated membrane shown in Fig. <ref>(c)] is occasionally formed with the weak adhesion and/or low death rate [see Fig. <ref>(b) and Movie S5]. Tubular buds move towards this bump and fuse into it. When two buckled bumps are set as the initial state, the two bumps fuse into a single one (see Movie S6), indicating that the bumps, like the tubular buds, also experience the membrane-mediated attraction. Note that with a buckled state set as the initial state, the buckle remains in most of the region of red crosses in the diagrams. Thus, different tissue shapes can be obtained depending on the initial state. § PORE FORMATION IN FLAT TISSUES ON SUBSTRATE The formation of pores (or wounds) in tissue sheets and their healing are important for tissue maintenance <cit.>. Recently, Lv et al. <cit.> observed the coalescence of pores in epithelioid tissues on a hydrogel bed and simulated it using a cell-vertex-based model. However, the effects of physical conditions such as friction were not investigated. Here, we simulate the pore formation under various conditions of friction and cell death. The temporal evolution of tissue pores is shown in Fig. <ref> and Movie S7. The initial state is a flat tissue relaxed at f_div=5, ε_wall=0.4, and k_a=0.02. The other parameters are the same as those described in Sec. <ref>.
First, many pores open simultaneously; subsequently, larger pores grow while small pores shrink through Ostwald ripening <cit.> (compare the top two snapshots in Fig. <ref>(a)). Coalescence of the pores also occurs (see the bottom two snapshots in Fig. <ref>(a) and Movie S7). Ostwald ripening occurs more frequently at lower friction levels. At ζ_wall=0, a single pore remains for a long period but the others disappear via pore shrinkage, similar to the case of freely suspended tissues (see Fig. <ref>(d)). This is because the tissues can easily move laterally in the absence of substrate friction. As the friction increases, coalescence occurs more frequently than Ostwald ripening. In contrast, the tissue shrinkage is only slightly dependent on the friction, as shown in Fig. <ref>(b). With an increasing death rate k_a, the tissues exhibit faster shrinkage (see Fig. <ref>(c)) and more frequent pore coalescence. Thus, the pore growth dynamics are strongly dependent on substrate friction and pore growth rate. § CONCLUSIONS AND OUTLOOK By combining the two-particle growth model with a meshless membrane model, we constructed a particle-based simulation technique for tissue sheet growth and shrinkage. Using this combined model, we have studied the growth and shrinkage of tissue sheets. Freely suspended sheets form wrinkles for rapid tissue growth. The number of wrinkles decreases with decreasing growth rate. On the substrate, the growing tissue forms buds when the growth rate is too high to maintain the 2D density. When the induced stress overcomes the adhesion strength, local tissue detachment results in the formation of cylindrical buds. With stronger friction and/or weaker adhesion to the substrate, buds are formed more frequently. Neighboring buds fuse through a membrane-mediated attraction. The detachment of entire tissues and straight buckled bump formation also occur under weak adhesion conditions. Moreover, tissue pores (wounds) grow through Ostwald ripening and coalescence. Coalescence occurs more frequently with higher friction and/or death rates. Our simulations revealed that adhesion and friction with the substrate are important factors in controlling the detachment of growing tissues. Experimentally, the adhesion energy to a substrate can be changed by varying the protein concentrations for ligand-receptor binding and surface charges. These variations and surface roughness change the substrate friction. Therefore, our findings can be examined experimentally. Similar detachment dynamics can be observed in various epithelial tissues with adhesion weakened by wounds and diseases. The skin and brain cortex typically consist of several layers of tissue sheets with different elasticities. It is known that such a difference can generate wrinkling <cit.>. The proposed model can be extended to multiple sheets with different rigidities. The growth dynamics of multi-layer tissues is one direction for further study. § PHYSICAL PROPERTIES OF TISSUES The tissue membrane properties are calculated in the absence of cell division or death. The bending rigidity κ of membranes can be calculated using several methods <cit.>. Here, we calculated κ from the buckling of membranes <cit.>, since its condition (positive surface stress) is closer to the present simulation conditions. A flat membrane with N_cell=800 set along the xy plane is pushed along the x axis by changing L_x while keeping L_y=22.
After the membrane buckling, the stresses γ_x and γ_y along the x and y axes are different owing to the increase in the bending energy. The bending rigidity κ can be estimated from γ_x as γ_x = π^2κ/4L^2(5 - L_x/L)^2 + O((1-L_x/L)^2), = 4π^2κ/L^2[1+ 1/2(1 - L_x/L) + 9/32(1 - L_x/L)^2 +21/128(1 - L_x/L)^3] + O((1-L_x/L)^4), in the leading- <cit.> and third-order <cit.> approximations, respectively, where the tensionless membrane area is A=LL_y. Both approximations can give excellent fits to the γ_x curve in Fig. <ref>(c), and κ is estimated as shown in Fig. <ref>(a). The edge line tension Γ is calculated from the stress of a membrane strip with N_cell=400. Since the edge energy per length is given by Γ, the edge tension is expressed as Γ = -P_xxL_yL_z/2, where P_xx is the x component of the pressure calculated by the virial tensor <cit.>. The obtained values of κ and Γ exhibit only a weak dependence on f_div [see Figs. <ref>(a) and (b)], so that we consider κ≃ 50 and Γ≃ 13 even in the presence of cell division and death. These values are sufficiently large to maintain a flat tissue sheet with a minimum edge length (circular edge for disk-shaped tissue). The membrane viscosity η of the 2D tissue (i.e., ε_wall→∞) is calculated from the tissue stress under simple shear flow at L_x=40 and L_y=10 <cit.>. The viscosity η increases with increasing cell density ρ [see Fig. <ref>(d)]. Since the typical density is 0.85 in our simulations (see Fig. <ref>), we consider η = 8 as the typical viscosity of the tissue. This work was supported by JSPS KAKENHI Grant Number JP24K06973.
http://arxiv.org/abs/2407.12722v1
20240717164030
"Halfway to Rayleigh" and other Insights to the Rossby Wave Instability
[ "Eonho Chang", "Andrew N. Youdin" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.SR", "physics.flu-dyn" ]
Eonho Chang Graduate Interdisciplinary Program in Applied Mathematics, University of Arizona, Tucson, AZ 85721, USA Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721, USA Andrew N. Youdin Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721, USA Lunar and Planetary Laboratory, University of Arizona, Tucson, AZ 85721, USA § ABSTRACT The Rossby wave instability (RWI) is the fundamental non-axisymmetric radial shear instability in disks. The RWI can facilitate disk accretion, set the shape of planetary gaps and produce large vortices. It arises from density and/or temperature features, such as radial gaps, bumps or steps. A general, sufficient condition to trigger the RWI is lacking, which we address by studying the linear RWI in a suite of simplified models, including incompressible and compressible shearing sheets and global, cylindrical disks. We focus on enthalpy amplitude and width as the fundamental properties of disk features with various shapes. We find analytic results for the RWI boundary and growth rates across a wide parameter space, in some cases with exact derivations and in others as a description of numerical results. Features wider than a scale-height generally become unstable about halfway to Rayleigh instability, i.e. when the squared epicyclic frequency is about half the Keplerian value, reinforcing our previous finding. RWI growth rates approximately scale as enthalpy amplitude to the 1/3 power, with a weak dependence on width, across much of parameter space. Global disk curvature affects wide planetary gaps, making the outer gap edge more susceptible to the RWI. Our simplified models are barotropic and height-integrated, but the main results should carry over to more complex and realistic scenarios. § INTRODUCTION The Rossby wave instability arises when radial disk structures, such as bumps or gaps, induce strong pressure gradients and non-Keplerian radial shear <cit.>. The RWI can generate large vortices <cit.>, for instance at the edges of planetary gaps <cit.>, which affects planet migration <cit.>. The RWI also helps transport matter falling onto accretion disks <cit.>. Dust is trapped in both RWI-produced vortices and the rings that trigger the RWI, in agreement with the disk structures observed by ALMA <cit.>. The RWI thus constrains observable rings and vortices <cit.>, for instance by regulating planet-carved gaps <cit.>. Dust trapped in such rings and vortices can trigger planet formation <cit.>. These significant consequences arise from simple considerations. The RWI does not require vertical motions, baroclinicity or cooling, in contrast to the vertical shear instability <cit.> and other thermal disk instabilities <cit.>. The RWI can be triggered by zonal flows arising from these hydrodynamic <cit.>, or magnetohydrodynamic <cit.>, instabilities. RWI analyses that include 3D motions <cit.>, cooling <cit.>, dust feedback <cit.> and non-ideal MHD <cit.> are crucial for a complete understanding, and generally find modest corrections to idealized cases. Even for simple cases, a general criterion for the onset of the RWI has been elusive. <cit.> found that the RWI was triggered partway between the Lovelace and Rayleigh criteria, for a variety of barotropic disk features. The Lovelace criterion, equivalent to a vortensity extremum in isentropic disks, is necessary but insufficient for the RWI <cit.>.
The Rayleigh criterion gives axisymmetric instability for disks with radially decreasing angular momentum somewhere, i.e. negative squared epicyclic frequency, κ^2. <cit.> found that disk bumps (barotropic and baroclinic) triggered RWI when κ^2 was locally reduced to ∼60% of the Keplerian value. We colloquially refer to this criterion as “halfway to Rayleigh" instability. This work aims to develop a more fundamental understanding of the RWI boundary and growth rates, including the “halfway to Rayleigh" criterion. We develop scaling relations using the strength and width of disk features. We start with simplified shearing sheet models and test against global disk models. This approach is motivated by previous shearing sheet models studying incompressible <cit.> and compressible <cit.> shear instability, linear Rossby modes <cit.> and non-linear RWI with cooling <cit.>. We present our method for studying the RWI with shearing sheet models in <ref>. Sections <ref> and <ref> present our results for the incompressible and compressible sheets, respectively. We compare to global disks in <ref>. A suggesting starting point is the summary of our main results in <ref>. § SHEARING SHEET RWI MODELS §.§ The Compressible Shearing Sheet The shearing sheet models a disk patch centered at radius R_c, rotating at the local Keplerian frequency, , with cartesian x,y,z coordinates oriented radially, azimuthally and vertically. Vertical averaging gives the equations of motion <cit.> D/Dt =∇·v (D/Dt+2ẑ×)v =3^2xx̂-1/∇ P D(P/^γ)/Dt =0 for fluid velocity v, surface density , and (height-averaged) pressure P, with D/Dt=∂/∂ t+v·∇. An ideal gas with adiabatic index γ, adiabatic motions and no self-gravity or viscosity are assumed. Combinging Eqs. (<ref>, <ref>), Dq/Dt =∇×∇P/^3·ẑ , shows that vortensity, q≡(2+ẑ·∇×v)/, is conserved in the absence of baroclinic effects. We consider an axisymmetric equlibrium with linear perturbations (using 0, 1 subscripts, repectively) as =_0(x)+_1, P=P_0(x)+P_1, v=v_0(x)ŷ+u_1x̂+v_1ŷ. Perturbed quantities have a Fourier dependence ∝exp[(k_yy-ωt)], and x-dependent amplitudes. The equilibrium orbital motion is v_0 =-3/2x+Δv_0=-3/2x+1/2d_0/dx where _0, the equilibrium enthalpy, =∫dP/ gives the non-Keplerian motion, Δv_0. The equilibrium vortensity, q_0=κ^2/(2_0), depends on the squared epicyclic frequency: κ^2 =2(2+dv_0/dx)=^2+d^2_0/dx^2 . The linear equations of motion for the Fourier amplitudes (given the same symbols as perturbed quantities for simplicity) are -Δω_1 =-d/dx(_0u_1)- k_y_0v_1 -Δωu_1-2 v_1 =-1/_0dP_1/dx+_1/_0^2dP_0/dx -Δωv_1+κ^2/2u_1 =-k_yP_1/_0 -Δω(P_1/P_0-γ_1/_0) =-u_1dln(P_0/_0^γ)/dx with Doppler shifted frequency, Δω≡ω-v_0(x)k_y. We define a squared sound speed c_0^2≡γP_0/_0, scale-height H_0≡ c_0/, (inverse) entropy lengthscale L_S^-1≡1/γdln(P_0/_0^γ)/dx and radial buoyancy frequency N^2 ≡-1/γ_0dP_0/dxdln(P_0/_0^γ)/dx=-c^2/γL_Sdln(P_0)/dx . Manipulations yield an ODE for Ψ≡P_1/_0, Ψ”+B(x)Ψ' =C(x)Ψ, the shearing sheet version of Eq. (15) in <cit.> with primes for x-derivatives, B≡dlnℱ/dx and ℱ ≡_0^2/κ^2+N^2-Δω^2, C ≡k_y^2+_0/ℱ H_0^2+2 k_yB/Δω+C_2, C_2 ≡1-L_S'/L_S^2+B/L_S+4k_y/ΔωL_S-k_y^2N^2/Δω^2. This work considers isentropic equilibria with C_2=1/L_S=N^2=0. The corotation resonance at Δω=0 defines a corotation radius, x_c, where [Δω(x_c)]=0. At the Lindblad resonances, where Δω^2=κ^2+N^2 and 1/ℱ=0, B is singular. The Schrödinger form of Eq. (<ref>) uses Ξ=√(ℱ)Ψ to obtain <cit.>: Ξ” =D(x)Ξ D =B'/2+B^2/4+C . We solve Eq. 
(<ref>), since Ξ is singular at Linblad resonances, but D is a useful effective potential. §.§ The Incompressible Shearing Sheet For the incompressible shearing sheet <cit.> we take the limit γ→∞, so that Eqs. (<ref>, <ref>) give ∇·v=0. We replace ∇P/=∇ in equation (<ref>). The equilibrium is set by the choice of _0(x), from which v_0(x) and κ^2 follow Eqs. (<ref>, <ref>). The perturbed flow obeys a stream function, ψ, as u_1=-k_yψ, v_1=ψ'. The vorticity ζ=(∇×v)·ẑ, with equilibrium ζ_0=v_0'(x) and perturbation ζ_1=ψ”-k_y^2ψ is conserved Dζ/Dt=0. Thus -Δωζ_1 =-ζ_0'u_1 which gives ψ”=(k_y^2+v_0”/v_0-ω/k_y)ψ≡D_inc(x)ψ , the famous Rayleigh equation for non-rotating incompressible shear flows. Coriolis forces set v_0, but rotation is otherwise absent <cit.>. There is a vast literature on this equation <cit.>. Relevant results include Rayleigh's theorem that a vorticity extrema, ζ_0'(x)=0, is required for instability. Fjørtoft's theorem further states that this inflection point must be a maximum in |ζ_0(x)|. Since disks have ζ_0(x)<0, instability requires a (signed) vorticitiy minimum. Fjørtoft's theorem agrees with the interpretation of D_inc as a potential, since for corotation at a vorticity minimum (D_inc)<0 near corotation for long wavelengths, k_y→ 0. We further see that long wavelengths are the most unstable. When applied to compressible, barotropic disks, a vortensity minimum is required for instability. Comparing to the compressible case, we might expect D→D_inc in some incompressible limit. Despite a shared k_y^2 term, we find that for k_yW≪1, the incompressible limit has ℱ∝1/κ^2 and 2B→-4v_0”. Thus the compressbile corotation term is 4 times larger. This surprising result is possible since D has additional relevant terms and is a potential for a different fluid quantity. Despite this difference, our compressible results have a well-behaved incompressible limit (<ref>). §.§ Disk Features To understand the universal features of RWI, we consider various disk structures, including bumps, gaps and step. Our compressible and incompressible models share a common equilibrium enthalpy _0(x) and thus v_0(x). Our parameterization _0(x)=_b+ΔS(x/W) has two constants, the reference value _b (which only affects compressible models) and amplitude Δ>0. This work considers the shapes: S(X)=G(X) bump 1-G(X) gap 1-tanh(X)2 drop 1+tanh(X)2 jump for G(X)=exp(-X^2/2) and scaled distance X≡x/W. All shapes vary from 0 to 1 over a radial width ∼W. Since _b=min(_0(x))>0, all _0(x)>0. We describe some properties of our shape functions next, then apply them to compressible models in <ref>. §.§.§ Shape Functions Figure <ref> plots (in the top row) our enthalpy features. The scaled amplitude 𝒥≡Δ/(W)^2 measures a feature's vorticity amplitude. The middle row of Figure <ref> plots κ^2. The location of vorticity (and κ^2) minima is x_m=0 for bumps, ±√(3)W for gaps—which have a pair of vorticity minima—and Wln(2±√(3))/2≃±1.32W, for jumps and drops, respectively. The location of vortensity minima, relevant for compressible flows, will be slightly shifted. Inner and outer gap edges are symmetrically equivalent in the shearing sheet. So are the drop and jump cases. Henceforth the “step” case refers to both. Rayleigh instability occurs for min(κ^2)<0 and requires vertical motions, absent from our model. The Rayleigh instability is still highly relevant and occurs for 𝒥>𝒥_κ≳1. Specifically, from Equations (<ref>, <ref>) κ^2/^2 =1+𝒥d^2S/dX^2 . and 𝒥_κ≡1/max(-d^2S/dX^2)=1 for bumps, 2.241 for gaps, and 2.598 for steps. 
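The quoted thresholds can be recovered numerically from the shape functions alone. The short sketch below (Python with numpy; the finite-difference grid is an implementation choice, and the omitted reference frequency in the scaled epicyclic expression is taken to be the Keplerian Ω of the sheet) evaluates 𝒥_κ = 1/max(−S'') for each shape:

```python
import numpy as np

# Shape functions S(X), X = x/W, as defined above.
shapes = {
    "bump": lambda X: np.exp(-X**2 / 2),
    "gap":  lambda X: 1 - np.exp(-X**2 / 2),
    "step": lambda X: 0.5 * (1 + np.tanh(X)),   # jump; the drop is its mirror image
}

X = np.linspace(-10, 10, 200001)
dX = X[1] - X[0]
for name, S in shapes.items():
    Spp = np.gradient(np.gradient(S(X), dX), dX)   # d^2 S / dX^2
    J_kappa = 1.0 / np.max(-Spp)                   # Rayleigh threshold
    print(f"{name:4s}  J_kappa = {J_kappa:.3f}")   # ~1.000, 2.241, 2.598
    # for a given amplitude J, the scaled epicyclic frequency follows as
    # kappa2 = 1 + J * Spp, so min(kappa2) = 1 - J / J_kappa
```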
Thus min(κ^2)=(1-𝒥/𝒥_κ)^2 in the shearing sheet. For global models, κ^2 also depends on W/R_c (<ref>). This dependence vanishes in the shearing sheet limit, W/R_c≪1. The bottom row of Figure <ref> plots the incompressible effective potential as D_inc-k_y^2 (Eq. <ref>). The corotation radius is at a vorticity minimum, x_m, with phase speed c_ω=ω/k_y=v_0(x_m). This choice removes the corotation singularity and gives (consistent with Fjørtoft's theorem) a negative potential well for trapped modes. The compressible potential D behaves similarly, but waves also propagate exterior to Lindblad resonances, where D<0 (see Fig. <ref>). §.§.§ Compressible Shearing Sheet Features The compressible shearing sheet model requires not just _0'(x) but also _0 and P_0. We consider polytropic models with P_0/P_b=(_0/_b)^Γ, with reference values _b, P_b. The structure index Γ could differ from the adiabatic index γ (but doesn't here, see below). The polytropic enthalpy _0 =∫dP_0/_0=Γ/Γ-1P_0/_0 matches Eq. (<ref>) for _0 =_b[1+Δ/_bS(x/W)]^1/Γ-1 , and _b=ΓP_b/[(Γ-1)_b]. This compressible polytropic model requires 3 additional parameters, besides k_yW and 𝒥: γ, Γ and H≡c/ with c^2≡γP_b/_b=γ(Γ-1)_b/Γ . We drop b subscripts from reference H and c values for convenience. We don't need _b or P_b independently, as Eq. (<ref>) only depends on logarithmic derivatives of _0 and P_0. To reduce parameter space, we fix Γ=γ=4/3 for an adiabatic sheet with N^2=1/L_S=0. A diatomic gas with γ_3D=7/5 corresponds to our height integrated γ=(3γ_3D-1)/(γ_3D+1)=4/3 <cit.>. Thus H is the only additional free parameter our compressible models. The limits Γ→1,∞ describe constant temperature and _0 features, respectively <cit.>. For completeness, the Γ→1 limit of Eq. (<ref>) is _0 =γP_0/c^2=_bexp[γΔ/c^2S(x/W)] . with _b(Γ-1)→c^2/γ remaining finite. §.§ Boundary Conditions and Solution Methods Solving our second order ODEs requires a pair of boundary conditions, applied at large distances |x|≫W, 1/k_y, and (for the compressible case) |x|≫H. For the incompressible case (Eq. <ref>), D_inc→k_y^2 at large |x|. Physical solutions decay exponentially, with boundary conditions, ψ'=±k_yψ , at large ∓|x|. For the compressible case, boundary conditions exterior to the Lindblad resonances should match onto outgoing density waves. We seek WKB solutions of the form Ψ∼A(x)exp(∫^xk_x(χ)dχ). Compared to previous works <cit.> who used just k_x, we find A(x) to lowest order, which improves some numerical results. First we confirm that outgoing waves have k_x(x)>0. The large |x|, Keplerian limit gives Δω→3k_yx/2, ℱ/_0→-^2/Δω^2, B→-3k_y/Δω→-2/x and C≃D→-(Δω/c_0)^2→[3k_yx/(2H_0)]^2. To lowest WKB order, Eq. (<ref>) gives k_x=±√(-C), i.e. Δω^2=(k_xc_0)^2. The group velocity ∂ω/∂k_x =∂Δω/∂k_x≈k_xc^2/Δω≈2k_xH_0/3k_yxc confirms that k_x>0 for outgoing waves (as k_y>0 by convention). For more accuracy, we adopt the physical optics solution to Eq. (<ref>), Ξ ∼c_Ξ/√(k_x,D)exp(∫^xk_x,D(χ)dχ) with k_x,D=√(-D) (the desired positive root) and c_Ξ an arbitrary (complex) constant. Taking the derivative gives the boundary condition Ξ' =(√(-D)-1/4DdD/dx)Ξ . The desired boundary condition for Ψ=Ξ/√(ℱ) follows as Ψ' =(√(-D)-B/2-1/4DdD/dx)Ψ . At large |x|, dln(D)/dx/4→1/(2x), so that |Ξ|∝1/|x|^1/2 and |Ψ|∝|x|^1/2, in agreement with our numerical solutions. Our numerical solutions use the shooting method. At the inner boundary, x_i, we pick an arbitrary ψ(x_i) or Ψ(x_i) and set the derivative with the boundary condition, Eq. (<ref>) or Eq. (<ref>). 
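For the incompressible problem this shooting procedure is especially compact, since the far-field condition is just ψ' = ±k_yψ. The sketch below assumes a Gaussian enthalpy bump, uses scipy's DOP853 integrator, and replaces a dedicated complex root finder with a plain secant update on the eigenvalue ω; it is meant to show the structure of the calculation rather than reproduce any particular result, and convergence depends on the initial guesses.

```python
import numpy as np
from scipy.integrate import solve_ivp

Om, W, dPi, ky = 1.0, 1.0, 2.0, 0.3        # Omega, width, enthalpy amplitude, k_y (code units)

def v0(x):     # equilibrium flow for a Gaussian enthalpy bump, v0 = -1.5*Om*x + Pi0'/(2*Om)
    return -1.5 * Om * x + (-x / W**2) * dPi * np.exp(-x**2 / (2 * W**2)) / (2 * Om)

def v0pp(x):   # d^2 v0 / dx^2 = Pi0'''/(2*Om), analytic for the same bump
    g = np.exp(-x**2 / (2 * W**2))
    dPi3 = dPi * (3 * x / W**4 - x**3 / W**6) * g
    return dPi3 / (2 * Om)

def residual(omega, xi=-8.0, xo=8.0):
    """Integrate the Rayleigh equation psi'' = [ky^2 + v0''/(v0 - omega/ky)] psi
    from xi with psi' = +ky*psi; return the mismatch with psi' = -ky*psi at xo."""
    def rhs(x, y):
        psi, dpsi = y[0] + 1j * y[1], y[2] + 1j * y[3]
        D = ky**2 + v0pp(x) / (v0(x) - omega / ky)
        ddpsi = D * psi
        return [dpsi.real, dpsi.imag, ddpsi.real, ddpsi.imag]
    y0 = [1.0, 0.0, ky, 0.0]
    sol = solve_ivp(rhs, (xi, xo), y0, method="DOP853", rtol=1e-8, atol=1e-10)
    psi = sol.y[0, -1] + 1j * sol.y[1, -1]
    dpsi = sol.y[2, -1] + 1j * sol.y[3, -1]
    return dpsi + ky * psi

# Secant iteration on the complex eigenvalue omega = omega_r + i*s
# (a simple stand-in for a more careful complex root finder).
w0, w1 = 0.0 + 0.05j, 0.01 + 0.08j
for _ in range(40):
    r0, r1 = residual(w0), residual(w1)
    if abs(r1 - r0) < 1e-14:
        break
    w0, w1 = w1, w1 - r1 * (w1 - w0) / (r1 - r0)
print("omega =", w1)   # Im(omega) > 0 indicates a growing mode
```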
We integrate with the Dormand-Prince method (“DOP853”) implemented in . The integrated solution deviates from the outer boundary condition. Using Muller's method, we minimize the residual error and find the complex eigenvalue ω≡ω_r+s. The shooting method requires good initial guesses. We use known solutions to gradually explore parameter space. For global models, we apply the same method but solve Eq. (15) in <cit.> instead of Eq. (<ref>). We have validated our numerical result several ways, including adjusting the outer boundary positions, finding the incompressible limit of compressible results and using different methods for the RWI stability boundary (below). Similar to <cit.>, we derive an energy equation from Eq. (<ref>), which after azimuthal averaging (denoted by brackets) is ∂/∂t[_0/2(⟨|v_1|^2⟩+⟨Ψ^2⟩/c^2)] = -dv_0/dx_0⟨u_1v_1⟩-d/dx⟨P_1u_1⟩ We verified that growth rates and eigenfunctions found by our numerical method satisfy this relation, over a range of parameters 𝒥, W/H, and k_yH. §.§.§ Locating the Stability Boundary We find marginally stable modes using a simplified method <cit.>. With s=0, we fix the corotation radius, x_c, to vorticity (or vortensity) minima for incompressible (or compressible) models, which sets ω=k_y v_0(x_c). With this choice, the ODE has real coefficients, and no corotation singularity, as shown in Figure <ref> for the incompressible case. One physical parameter, usually k_yW or W/H, varies as the shooting parameter (and eigenvalue). With other parameters held fixed, this method finds marginally stable solutions. For the incompressible model, this method uses ψ(x) purely real. For the compressible models, Ψ(x) has a complex boundary condition (Eqs. <ref>, <ref>). Thus the eigenvalue (W/H) can acquire an imaginary part, which is unphysical. Usually this imaginary part is negligibly small (≲10^-3 of the real part), which validates the method. Fig. <ref> shows example solutions obtained with this method. Growth rates away from the stability boundary are also mapped, using the usual method. For more extreme parameters—near Rayleigh instability and for k_yW≃1 (placing Lindblad resonances in the Rossby zone)—this method can fail, as it does (for different reasons) in baroclinic disks <cit.>. In these cases, we simply measure where s drops to small values. It is numerically difficult to find growth rates with s/Ω≲10^-3. Since both methods agree on the stability boundary location (when this simplified method works), the stability boundary is relatively sharp. § INCOMPRESSIBLE RESULTS Figure <ref> maps RWI growth rates for various shapes in the incompressible shearing sheet. For a given shape, the incompressible RWI is completely described by the parameters for amplitude, 𝒥=Δ/( W)^2, and width (scaled to wavenumber), k_yW. We describe the incompressible stability boundary, growth rates, and eigenfunctions below. §.§ Incompressible stability boundary The dotted yellow curves in Figure <ref> show the stability boundary, found as described in section <ref>. RWI occurs for larger 𝒥 or smaller k_yW than this boundary, and no modes (stable, unstable or damped) exist on the other side. The stability boundary is best understood as smoothly connected 𝒥≪1 and J≫1 limits. For 𝒥≪1, the stability boundary follows 𝒥≃f_MSk_yW, or Δ =f_MS^2k_yW^3 with f_MS≃1.20, 2.39, 2.65 for the bumps, gaps and steps, respectively. For 𝒥≫1, the stability boundary is simply k_yW=g_MS, with g_MS=√(2),1.05,2.0 for the bump, gap and step cases, respectively. 
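Taken together, the two limits just quoted already bracket the unstable region and are simple enough to use as a first screening test. The helper below is a rough check only; the true boundary interpolates smoothly between the asymptotes, so parameters near either curve need the full eigenvalue calculation.

```python
# Rough incompressible RWI boundary from the two quoted limits.
F_MS = {"bump": 1.20, "gap": 2.39, "step": 2.65}   # J << 1 limit: J ~ f_MS * ky*W
G_MS = {"bump": 2**0.5, "gap": 1.05, "step": 2.0}  # J >> 1 limit: ky*W = g_MS

def roughly_unstable(J, kyW, shape="bump"):
    """Crude screening test combining both asymptotic limits."""
    return (J > F_MS[shape] * kyW) and (kyW < G_MS[shape])

print(roughly_unstable(0.5, 0.1), roughly_unstable(0.5, 1.5))  # True, False for a bump
```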
While large 𝒥 values are Rayleigh unstable, this limiting behavior explains why the stability curve steepens for 𝒥≳1. These limiting behaviors can be understood several ways, described below. §.§.§ Intuitive Explanations The 𝒥≫1 instability condition, k_yW<g_MS follows the idea that counter-propagating Rossby waves (CRWs) drive shear instability <cit.>. For simplicity, we consider bumps and examine the approximate condition for CRWs at x≃±W to maintain stationary phase, with phase speed c_ω=ω/k_y=0, as illustrated in <ref>. With a ψ(x)∝exp(k_xx) WKB approximation, Eq. (<ref>) gives c_ω=v_0+v_0”/(k_x^2+k_y^2). At x=±W, v_0/( W)∼∓(1+𝒥) roughly accounts for Keplerian and non-Keplerian flow, and v_0”∼±𝒥/W. Taking k_xW≃1 matches the local wave packet to feature size, giving v_0”/k_x^2+k_y^2 ∼±𝒥/1+(k_yW)^2(W) . Thus c_ω=0 requires 𝒥∼(1+𝒥)(1+(k_yW)^2). For 𝒥≫1 this rough analysis requires 1∼1+(k_yW)^2 or k_yW≲1 for phase matching and instability, as desired. For 𝒥≪1, this analysis fails. Instead, for the 𝒥≪1 boundary, another WKB analysis applies. Since k_yW≪1, waves have a shallow decay at large |x|/W, as ψ∝exp(-k_y|x|). To match onto this decay, the slope across the Rossby zone must change sign, but only change magnitude by a small amount, ΔΦ≡Wψ'|_-W^W/ψ∼-k_yW. Across corotation, the slope change from WKB oscillations, ψ∝exp(√(-D_inc)x), is ΔΦ =W∫_-W^Wψ”dx/ψ∼D_incW^2∼-𝒥, Where the depth of the potential near corotation D_inc≃-𝒥/W^2 (Fig. <ref>). A trapped mode thus requires 𝒥∼k_yW, in agreement with the stability boundary. The small change in wave phase √(-D_inc) W∼√(𝒥)≪1 explains the failure of standard WKB theory for 𝒥≪1, as noted above. For a more physical explanation of the 𝒥≪1 stability boundary, we briefly summarize the analysis of shearing waves by <cit.>. Shearing waves interact with axisymmetric disk features of width W and vorticity amplitude Δζ_0≃ΔΠ/(W^2). A leading wave with initial radial wavenumber k_x(t=0)≃-1/W and fixed k_y>0 swings through a radial orientation, k_x(t_sw)=0, in time t_sw=(2k_x(0)/(3k_y)≃1/(k_yW), since for 𝒥≪1 the flow is nearly Keplerian <cit.>. While swinging, the wave couples to the disk feature and spawns a new leading wave. The amplitude of successive waves increases if Δζ_0 t_sw≳1 or Δ≳^2k_yW^3, reproducing the 𝒥≪1 instability criterion. §.§.§ More quantitative explanations The above arguments can be made more rigorous. For 𝒥≪1, <cit.> couples the physical argument (Δζ t_sw≳1) to a stability boundary given by the integral k_y =1/3∫_-∞^∞dζ_0/dx/x-x_c dx with the vorticity minimum at x_c. This result reproduces Eq. (<ref>), and, for our shapes, precisely gives f_MS=6/I_MS with I_MS=∫_-∞^∞S”'(X)/X-X_c dX where X_c=x_c/W. Integrating I_MS reproduces our numerical results. For bumps, f_MS=3/√(2π), and gaps, f_MS=6/√(2π). The numerically integrated I_MS for steps is also consistent. For 𝒥≫1, the stability boundary k_yW=√(2) for the bump case can be derived exactly. The 𝒥→∞ limit gives a parabolic potential well D_incW^2→(k_yW)^2+3-(x/W)^2. This potential has quantized bound states of “energy" E=3-(k_yW)^2=2n+1 for n=0,1,... <cit.>. For (k_yW)^2>0 only the n=0 bound state exists, which demonstates the lack of RWI modes with higher radial order. This bound state has k_yW=√(2), as claimed. For all shapes, a necessary condition for RWI follows from the requirement that D_inc<0. This necessary condition is only close to the stability boundary for 𝒥≳1. For 𝒥≫1 this necessary condition is k_yW<√(3) for gaps and bumps and k_yW<2 for steps. 
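The marginal-stability integral above is easy to check directly. In the sketch below (scipy's quad; the removable singularity at X = X_c is cancelled analytically before integrating), the bump and gap cases return f_MS ≈ 1.20 and 2.39, i.e. 3/√(2π) and 6/√(2π):

```python
import numpy as np
from scipy.integrate import quad

def f_MS_bump():
    # S = exp(-X^2/2), S''' = (3X - X^3) exp(-X^2/2), X_c = 0;
    # the integrand S'''/(X - X_c) = (3 - X^2) exp(-X^2/2) is regular at X = 0.
    I, _ = quad(lambda X: (3 - X**2) * np.exp(-X**2 / 2), -np.inf, np.inf)
    return 6.0 / I

def f_MS_gap():
    # S = 1 - exp(-X^2/2), S''' = (X^3 - 3X) exp(-X^2/2), X_c = +sqrt(3);
    # the factor (X - sqrt(3)) cancels, leaving X (X + sqrt(3)) exp(-X^2/2).
    I, _ = quad(lambda X: X * (X + np.sqrt(3)) * np.exp(-X**2 / 2), -np.inf, np.inf)
    return 6.0 / I

print(round(f_MS_bump(), 3), round(f_MS_gap(), 3))   # ~1.197, ~2.394
```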
These simple necessary conditions are close to, but less strict than, the sufficient conditions for instability. §.§ Incompressible growth rates Figure <ref> shows that RWI growth rates increase with 𝒥 and with k_yW, except near marginal stability. While it is easier to trigger RWI for long wavelengths (small k_yW), growth rates well into the unstable region are faster for smaller wavelengths (large k_yW). The smooth variations in growth rates, especially away from the stability boundary, suggest an analytic scaling. Figure <ref> plots growth rates scaled by the characteristic rate s_inc ≡(k_y^2Δ)^1/3 . This approximation is reasonably good, aside from the rapid decay near the stability boundary. In the k_yW≪1 limit, <cit.> derives the growth rate s/=(3/2)αk_yW, where α follows from the complex integral constraint on α and β: k_y =1/3W∫_-∞^∞dζ_0/dX/(X-β)^2+α^2(X-β+α) dX. This result reduces to Eq. (<ref>) for marginal stability, where β→X_c. Analysis is simplest for bumps, where symmetry about the vorticity minimum gives β=0, and the imaginary part of the integral vanishes. Our parameterization, with S_B for the bump shape, gives: k_yW/𝒥 =1/f_US(α)≡1/6∫_-∞^∞XS_B”'(X)/X^2+α^2 dX, The function f_US(α) for unstable modes, gives f_US(0)=f_MS at marginal stability. Eq. (<ref>) only gives simple expressions in limiting cases. For α≪1, πα/2→1/f_MS-k_yW/𝒥 gives the rise in growth rates near the stability boundary as s/ ≈3/πk_yW(1/f_MS-k_yW/𝒥) The α→∞ limit gives f_US→α^4/√(2π) and s/→3/2(2π)^1/8(Δk_y^3W/^2)^1/4 Unfortunately this limit does not directly apply. We are mainly interested in k_yW/𝒥≳0.01, corresponding to α≲3.4, i.e. at most order unity. Our approximate Eq. (<ref>) corresponds to f_US∼α^3, a good approximation for order unity α. §.§ Incompressible Eigenfunctions To visualize RWI modes, Figure <ref> maps perturbed vorticity ζ_1 and velocity vectors v_1 for various growth rates and feature types. These incompressible eigenfunctions are similar to the standard global, compressible RWI <cit.>. Corotation is near the vorticity minima marked with the dotted line. The RWI mechanism is clearest for larger growth rates (bottom row). The pair of CRWs across corotation (analyzed in <ref>) is evident. This wave pair is shifted in azimuthal phase, but radially symmetric for the bumps, and asymmetric for gaps and steps, consistent with their asymmetric potentials (Fig. <ref>). The aximuthal phase shift causes flow though the vorticity minima to primarily enter regions of negative perturbed vorticity. This explanation of the growth mechanism is well known for general shear flows <cit.> and the RWI <cit.>. At lower growth rates (middle row), the phase shift decreases. The feeding of negative vorticity from the background into perturbations is less direct, entering narrow fingers near corotation. Even closer to marginal stability (top rows) the phase shift is nearly gone, and feeding via narrow glitches near corotation is harder to see. For marginal stability, ζ_1 is non-zero and smooth through corototation. However all growing modes have ζ_1=0 at vorticity extrema (as shown by Eq. <ref>). This fact explains the necessity of small glitches near marginal stability, and the width of prominent CRWs, which fit between a vorticity maximum and minimum (cf. Fig. <ref>). § COMPRESSIBLE SHEARING SHEET RESULTS We now analyze the compressible shearing sheet model of sections <ref>, <ref>. Compressible effects are captured by the value of k_yH (see below). 
Our incompressible results roughly correspond to the k_yH→∞ limit. For an effective Mach number, we use the Keplerian shear across a lengthscale of 1/k_y to define ℳ_eff≡/k_y/c=1/k_yH . RWI modes with ℳ_eff≲1 behave incompressibly, which is expected of subsonic flows. In global protoplanetary disks, the RWI is moderately compressible for m=1 modes, and more incompressible for higher m (<ref>, point 3). §.§ Compressible Stability Boundary Figure <ref> shows the effect of compressibility, measured by k_yH, on the RWI boundary. The bump feature is chosen, and is representative, with quantitative shape effects are noted below. For k_yH=1 the stability boundary overlaps the incompressible limit (k_yH→∞). As k_yH decreases, compressibility effects increase. For sufficiently small k_yH≲0.01, the stability boundary breaks into three distinct regions, approximately: Δ/^2 W^2 ≈f_MSW(k_y+1/4H) if W≲H 0.4j_MS if H≲W≲k_y^-1 ∞ if k_yW≳g_MS. The shape-dependent factors f_MS, g_MS (<ref>) and j_MS (below) are order unity. For incompressible parameters, k_yH≳0.3, this stability boundary reverts to the incompressible case, with no intermediate width region. For marginal compressibility, k_yH≃0.1, these regions are not as distinct, with overlapping transitions. For small widths, W<H, the compressible (k_yH≪1) stability boundary follows Δ≈f_MS^2W^3/(4H), independent of k_y. Compared to the incompressible k_yW≪1 boundary, Δ∝W^3 is identical, but compressible enthalpy features must be ≃1/(4k_yH) larger for instability. This stabilizing effect generally arises from the fact that some of the energy is used to compress the flow <cit.>. Wide features and/or short wavelength modes, k_yW≳g_MS∼1, are RWI-stable, like the incompressible case. However, compressible modes are more unstable between 0.3≲k_yW≲g_MS (see Fig. <ref>). This effect arises because Lindblad resonances, absent from the incompressible limit, approach the Rossby zone, as described below. Ultimately, the widest features require a global treatment (<ref>). For intermediate widths, with W≳H but W≲k_y^-1, the stability boundary is approximately given by 𝒥=Δ/(W^2^2)=0.4j_MS with j_MS≃1,2.8,3.0 for bumps, gaps and steps, respectively. The value of min(κ^2)=1-𝒥/𝒥_κ≃0.6,0.51,0.54 for bumps, gaps and steps (respectively) is more similar, emphasizing that κ^2, and being “halfway to Rayleigh" is more fundamental. Figure <ref> plots min(κ^2)/^2 for marginal stability, moderate compressibility, k_yH=0.03, and different shapes. (The global models in this figure are discussed in <ref>.) For W/H≳2, the stability boundary is “halfway to Rayleigh" with min(κ^2)≃0.5-0.6^2. For stronger stronger compressibility (smaller k_yH), min(κ^2) values would be more strictly constant (Figure <ref>). In Figure <ref>, the k_yW∼1 stability boundary is off scale at W/H∼30. For W/H≲1, the stability boundary approaches min(κ^2)/^2=1-(f_MS/𝒥_κ)(W/H)/4, following Eq. (<ref>). While shape effects are minor in the shearing sheet, bumps most readily trigger RWI, at larger min(κ^2) values (Fig. <ref>) and smaller enthalpy amplitudes (smaller f_MS and j_MS values, see Fig. <ref>). We next examine origin of the three limits in Eq. (<ref>). §.§.§ Small width compressible boundary In <ref>, incompressible instability for k_yW≪1 is given as the <cit.> wave shearing time criteria Δζ_0t_sh≳1. The corresponding W≪H compressible instability criterion is that the sound crossing time t_sc≡W/( H)≲Δζ_0/^2. 
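The piecewise boundary summarized above packages neatly for quick estimates. The sketch below assumes the omitted reference frequency is the orbital Ω of the sheet (so the amplitude is 𝒥 = ΔΠ/(ΩW)²) and uses the quoted f_MS, g_MS and j_MS values; it applies to strongly compressible modes, k_yH ≲ 0.01.

```python
import numpy as np

F_MS = {"bump": 1.20, "gap": 2.39, "step": 2.65}
G_MS = {"bump": np.sqrt(2), "gap": 1.05, "step": 2.0}
J_MS = {"bump": 1.0, "gap": 2.8, "step": 3.0}

def J_boundary_compressible(W, ky, H, shape="bump"):
    """Approximate critical J = dPi/(Omega W)^2 for strongly compressible
    modes (ky*H << 1), following the piecewise form above."""
    if ky * W >= G_MS[shape]:
        return np.inf                      # wavelength too short: stable
    if W <= H:
        return F_MS[shape] * W * (ky + 1.0 / (4.0 * H))
    return 0.4 * J_MS[shape]               # "halfway to Rayleigh" regime

# Example: a bump with W = 3 H and ky*H = 0.01 needs J >~ 0.4,
# i.e. min(kappa^2) ~ 0.6 Omega^2, before it goes RWI unstable.
H = 1.0
print(J_boundary_compressible(W=3 * H, ky=0.01 / H, H=H))   # 0.4
```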
Rotation appears explicitly in the compressible (but not the incompressible) instability condition, consistent with the discussion after Eq. (<ref>). We can also adapt the WKB derivation of the 𝒥, k_yW≪1 incompressible stability boundary in <ref> to the compressible k_yH≪1, W≲H case. The main difference in this case is that, from Eq. (<ref>), the decay outside the Rossby zone follows Ψ∝exp(-|x|/H). Thus the slope change across corotation (now for Ψ) becomes ΔΦ∼-W/H. In this limit, the potential depths are similar (to order unity), so the induced ΔΦ∼-𝒥. Matching these two gives 𝒥∼W/H, the desired compressible boundary. §.§.§ Intermediate width stability boundary The “halfway to Rayleigh" instability criterion is given above as the scaled enthalpy condition, 𝒥≳0.4j_MS∼1. In absolute terms, relative to _ b∼ c^2, this condition becomes Δ/c^2≳(W/H)^2. Thus widths larger than H require increasingly strong enthalpy features, a relevant point for the astrophysical origin of these features. To explain this stability boundary, a negative potential at corotation D(x_c)<0 gives a useful necessary condition for instability (similar to <ref>). While simple to state, there are many terms in D to evaluate. These terms are stabilizing (or destabilizing) if they make a positive (or negative) contribution to D(x_ c). We focus on the bump case with x_c=0 for simplicity, and take the k_yW≪1 and Δ≫c^2 (equivalent to W≫H as noted above) limits. These limits avoid the transitions to neighboring stability regimes. The main stabilizing term is _0/(ℱH_0^2), the usual source of the corotation barrier in disks. With _0/ℱ∝κ^2 this term is reduced near Rayleigh instability, and also, via H_0, by disk heating. In our limits ._0/ℱH_0^2|_x=0 →3/W^2(1/𝒥-1) The main destabilizing term is the corotation term C_cor=2k_yB/Δω, though B'/2 also contributes. At corotation B∝dln(q_0)/dx diverges approaching Rayleigh instability. Thus a simple explanation of the “halfway to Rayleigh" result is that the stabilizing corotation barrier vanishes, and the destabilizing corotation resonance diverges, for κ^2→0. Thus instability occurs somewhat before this point. Our limits give C_cor(0)+B'(0)/2 →-3/W^2(3/2(1-𝒥)+1/3+𝒥) with the advertised 𝒥→1 divergence. Combining Eqs. (<ref>, <ref>), the necessary criterion D(0)<0 becomes 𝒥>0.29. This condition is close to, but naturally below, the sufficient condition for bumps, 𝒥>0.4. One insight from this analysis is that equation of state effects should have a modest effect on this stability boundary, via H_0 and _0'. However κ^2 is the dominant effect. We defer a more detailed study of thermodynamic, including baroclinic, effects. §.§.§ Large width stability boundary The k_yW≳1 condition for stability matches the incompressible case which was physically justified in <ref>. We do generalize that argument to include compressibility, for reasons explored below. The enhanced instability of compressible models for 0.3≲k_yW≲1 is due to the proximity of Lindblad resonances, as noted above. While limited to a small region of parameter space, this result does go against the usual trend of compressibility hindering instability. We expect nearby Lindblad resonances to enhance RWI because the outer wave propagation zones approach the corotation amplifier <cit.>. A similar effect is the reduction of the forbidden zone width, i.e. Toomre Q barriers, in self-gravitating disks <cit.>. We defer a detailed study of this effect, but note the basic properties of Lindblad resonances in our models. 
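The quoted threshold follows from combining the two limiting expressions above: D(0) < 0 once the destabilizing corotation terms exceed the corotation barrier. Dropping the common factor 3/W² and reading the destabilizing bracket as 3/[2(1−𝒥)] + 1/(3+𝒥) (a reconstruction of the condensed expression above), a one-line root find reproduces the stated bound:

```python
from scipy.optimize import brentq

# Necessary condition D(0) < 0 for wide bumps: barrier term (1/J - 1)
# versus corotation terms 3/[2(1-J)] + 1/(3+J), common factor 3/W^2 dropped.
f = lambda J: (1.0 / J - 1.0) - (1.5 / (1.0 - J) + 1.0 / (3.0 + J))

J_nec = brentq(f, 0.05, 0.95)
print(round(J_nec, 3))    # ~0.292, i.e. the necessary condition J > 0.29
```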
Their location, where Δω^2=κ^2, is at |x|=±2/(3k_y) in limit of pure Keplerian flow in the shearing sheet (for corotation at x=0). This location clearly approaches |x|≲W for k_yW≳1. Non-Keplerian flow affects the exact location of Lindblad resonances in the Rossby zone. To understand why Lindblad resonances only affect compressible modes, note that density waves only propagate where D<0. For Keplerian flow, this propagation region follows from the first two terms in Eq. (<ref>) as |x|>(2/3)√(1/k_y^2+H^2), i.e. always with |x|>2H/3 <cit.>. This effective Lindblad resonance location is far from the Rossby zone for incompressible modes with k_yH≳1. The simplified analyses offered in other regimes is complicated by the presence of Lindblad singularities (where ℱ→∞) in the Rossby zone (see Fig. <ref>). Lindblad singularities can be removed from the ODE <cit.>. However, they are replaced by “sonic" singularities at |x|≃2H/3, which also lie in the Rossby region in this W>H regime. We thus defer further analytic exploration of this regime. §.§ Compressible growth rates Figure <ref> plots the growth rates for three different levels of compressibility, k_yH=0.03,0.1,0.3, from stronger to weaker, which also corresponds to a range of wavelengths from long to short. The bump case is shown, but other shapes are similar. The width is plotted as W/H, compared to k_yW in Figure <ref> for a different perspective. The characteristic value of k_yW=0.3 (where compressible effects transition from stabilizing to destabilizing, as described above) lies at W/H=10,3,1 (respectively) in these plots. For fixed values of W/H≲1 the stability boundary is similar for the longer wavelength (more compressible) cases k_yH=0.03,0.1 but higher for k_yH=0.3 (pushed by the incompressible limit). However in the unstable region, larger k_yH modes grow faster. For larger fixed values of W/H≳1 models with smaller k_yH have an extended region of instability, out to W/H∼1/(k_yH), but only have faster growth (than larger k_yH modes) close to the stability boundary. More quantitatively, we showed that incompressible growth rates are approximately s_inc∼(k_y^2Δ)^1/3. For k_yH≲0.1, the compressible growth rates in Figure <ref> are better characterized by a reduced rate: s_comp ∼(k_y^3HΔ)^1/3 , for regions not too close to the stability boundary. As with the incompressible case, the growth rate has a weak dependence on feature width, W, and increases with k_y in the unstable region. For quick estimates of RWI growth, Eq. (<ref>) can be used to determine if parameters are unstable, while the growth rate can be estimated as min(s_ inc,s_ comp) for parameters a factor ≳2 from the stability boundary. § COMPARISON TO GLOBAL MODELS To understand the validity and limitations of our shearing sheet models, we compare to global, cylindrical disk models. We demonstrate that shearing box models are valid for narrow features with sufficiently small W/R_c, and investigate the role of global disk parameters m=k_y R_c and h=H/R_c. We describe our global enthalpy features, and compare to other parameterizations in <ref>. We analyze the Rayleigh stability boundary in global models in <ref>. We compare the RWI in shearing sheet and global models in <ref>, and finally address RWI in dust traps in <ref>. §.§ Global disk models To best compare to our shearing sheet models, our global models use an enthalpy feature _0(R) =(R/R_c)^q[_b+ΔS(ΔR/W)] . with ΔR=R-R_c, powerlaw q, and using the same shape functions S as Eq. (<ref>). 
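As a concrete parameterization, this global feature is a one-line function of radius; the Gaussian bump shape, the default q = 0, and the particular parameter values below are placeholders only.

```python
import numpy as np

def Pi0_global(R, Rc=1.0, W=0.05, dPi=0.01, Pib=0.1, q=0.0,
               S=lambda X: np.exp(-X**2 / 2)):
    """Global enthalpy feature Pi0(R) = (R/Rc)^q [Pi_b + dPi S((R - Rc)/W)].
    Defaults (Gaussian bump, q = 0, code units) are placeholders."""
    return (R / Rc)**q * (Pib + dPi * S((R - Rc) / W))

R = np.linspace(0.5, 1.5, 2001)
Pi0 = Pi0_global(R)   # radial derivatives of this profile set the non-Keplerian
                      # rotation and the epicyclic frequency, as in the local model
```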
For the same polytropes, P_0∝_0^Γ, Eq. (<ref>) generalizes to _0(R) =_b(R/R_c)^n[1+Δ/_bS(Δ R/W)]^1Γ-1 with n=q/(Γ-1). We again set Γ=γ=4/3 and set n=q=0 for simplicity in this work. Works not using our enthalpy formulation can still be compared to our results. Surface density features <cit.>, _0,(R) =_b(R/R_c)^n[1+Δ/_bS_(Δ R,W)] . are equivalent to our enthalpy formulation, with _0≃_0,, for Δ/_bS_ ≃1Γ-1Δ_bS if Δ_b≪1, (Δ_bS)^1Γ-1 if Δ_b≫1. For small amplitudes the shapes are the same and the amplitudes are similar. Even the isothermal Γ=1 case follows, from (Γ-1)_b→c^2/γ (Eq. <ref>). For larger amplitudes, neither amplitudes nor shapes are the same, so comparisons require more care. Instead of the above analytic comparison, for general disk features, the corresponding enthalpy profile _0(R) can be computed, and the amplitude and width measured. For barotropic models, the profile is simply _0=∫dP_0/_0. For non-barotropic disks, _0(R)=_0(R_r)+∫_R_r^RdP_0/dR'/_0dR' , for arbitrary values of the reference disk location, R_r, and _0(R_r). The reference enthalpy is irrelevant, as a reference sound speed c (which is not arbitrary) can be used instead. Thus, with some effort, the enthalpy properties of any disk feature can be measured, and applied to the results of this work. §.§ Rayleigh Instability in Global Models The results for our global models are shown in Fig. <ref> for bumps and Fig. <ref> for other shapes. In global models, the shearing sheet symmetry of inner vs. outer gap edges and jumps vs. drops is broken. We show these additional cases. We first discuss the location of the Rayleigh stability boundary. In shearing sheet models, the Rayleigh stability boundary is at fixed 𝒥 (Eq. <ref>, Figs. <ref>, <ref>, <ref>). For global models, the critical 𝒥 value changes for larger widths, W/R_c. We describe Rayleigh instability as being “enhanced" (or “reduced"), when the critical 𝒥 value drops (or increases) for wider features. Of the shapes considered, most show enhanced Rayleigh instability, to quite different degrees. Only the inner gap edge shows obviously reduced Rayleigh instability. The difference with the outer gap edge (which has the strongest enhancement) is striking. Curiously, the drop shape differs from the inner gap edge, which also can be considered a drop. We wish to understand these effects, since the Rayleigh stability boundary is crucial for our RWI analysis. We start with the orbital frequency _0^2(R) = _K^2(R) +1/Rd _0/dR where the Keplerian _K = _Kc R_ c^3/2/R^3/2. Using Eq. (<ref>) with q =0 and κ^2=R^-3 d(_0^2 R^4)/dR gives κ^2/_Kc^2 = (R_c/R)^3+𝒥[ S”(X) +3W/R S'(X) ] for X=Δ R/W, which reduces to the local limit, Eq. (<ref>), for W, Δ R≪R_c. There are two main global effects in equation (<ref>). The first “Keplerian" effect is the (R_c/R)^3 term, which enhances Rayleigh instability if R>R_c at min(κ^2). For example, the outer edge of gaps and jumps have vorticity (and κ^2) minima at R>R_c. Conversely, this effect reduces Rayleigh stability for inner gap edges and drops. The second “non-Keplerian" effect is given by the term 3(W/R)S'(X). Sub-Keplerian speeds (S'(X)< 0) contribute to lower vorticity and enhanced Rayleigh stability. This term is positive (negative) for jumps (drops) and outer (inner) gap edges. Thus this second effect counteracts the first for these shapes (but not bumps, as discussed last). Which effect dominates depends on shape details, especially how far the vorticity (and κ^2) minimum is from R_c. 
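Which term wins is quick to check numerically. The sketch below evaluates the global κ² expression above on a radial grid (finite differences for S' and S''; q = 0 as adopted above, and the normalization frequency taken to be the Keplerian value at R_c) and reports where the minimum lands relative to R_c for each shape:

```python
import numpy as np

shapes = {
    "bump": lambda X: np.exp(-X**2 / 2),
    "gap":  lambda X: 1 - np.exp(-X**2 / 2),
    "jump": lambda X: 0.5 * (1 + np.tanh(X)),
    "drop": lambda X: 0.5 * (1 - np.tanh(X)),
}

def min_kappa2(shape, J, W_over_Rc, Rc=1.0):
    """min of kappa^2/Omega_Kc^2 from the global expression above,
    and the radius where it occurs (finite differences; q = 0)."""
    W = W_over_Rc * Rc
    R = np.linspace(Rc - 6 * W, Rc + 6 * W, 40001)
    R = R[R > 0.05 * Rc]                  # stay away from the origin
    X = (R - Rc) / W
    S = shapes[shape](X)
    dX = X[1] - X[0]
    Sp = np.gradient(S, dX)
    Spp = np.gradient(Sp, dX)
    k2 = (Rc / R)**3 + J * (Spp + 3 * (W / R) * Sp)
    i = np.argmin(k2)
    return k2[i], R[i]

for name in shapes:
    k2min, Rmin = min_kappa2(name, J=1.0, W_over_Rc=0.2)
    print(f"{name:4s}  min kappa^2 = {k2min:+.2f}  at R = {Rmin:.2f} Rc")
```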
For a quantitative criterion, we Taylor expand equation (<ref>) about W=0, and evaluate at R=R_m=R_c+X_m, the location of the local κ^2 minimum. This expansion shows that min(κ^2) is lower, and Rayleigh instability enhanced if f_W ≡ X_m-𝒥 S'(X_m)= X_m+ S'(X_m)/S”(X_m)>0 . The final expression uses 𝒥 =𝒥 _κ=-1/S”(X_m), the small W Rayleigh stability boundary. Since f_W>0 for outer gap edges (the “Keplerian" effect dominates) and for drops (the “non-Keplerian" effect dominates) the enhanced Rayleigh instability of these shapes is explained. Shapes with f_W<0 (inner gap edges and jumps) are expected to show reduced Rayleigh instability. This expectation holds for inner gap edges, but jumps are more complicated. Rayleigh stability is indeed reduced as W→ 0, but this small effect is not visible in Figure <ref>. At larger W/R_c, the first order Taylor expansion is insufficient for jumps. As X_m increases for larger W/R_c, the “Keplerian" effect dominates, explaining the enhanced Rayleigh instability seen for jumps. The stronger competition between the two effects explains why the jump case shows a weaker enhancement, starting at larger W, compared to other shapes. The bump case is special with f_W=0, which is marginal by our simple criterion. However since bumps have S'(X)<0 for X>0, both global effects combine constructively for the bump case, unlike the other cases. Thus bumps have enhanced Rayleigh stability at larger W, with min(κ^2) shifting to X>0. From this analysis, we come to a better understanding of the Rayleigh stability boundary for wide disk features. For even more extreme parameters than Rayleigh instability, outward pressure gradients can exceed stellar gravity, giving _0^2<0 (Eq. <ref>). The min(_0^2)=0 boundary appears in Figures <ref> and Fig. <ref>. This boundary is shown to emphasize how extreme this region of parameter space is. §.§ Global RWI vs. Shearing Sheet We now compare the results of our shearing sheet models to the equivalent compressible global models, described in <ref>. Global models introduce the lengthscale, R_c and thus one additional dimensionless parameter. Our shearing sheet parameters 𝒥,k_yW and k_yH we add the mode number m=k_yR_c. Removing the local wavenumber k_y, an equivalent set—𝒥,m,h,W/R_c—uses the aspect ratio h=H/R_c. §.§.§ Effect of Mode Number m Figure <ref> plots the marginal stability curves for m=1, 2 and 3 RWI modes in a h=0.1 disk versus W/R_c for a bump shape (other shapes are addressed next). The equivalent compressible shearing sheet models have k_yH=mh=0.1, 0.2, 0.3, and their stability curves are shown for comparison. Shearing sheet and global results agree very well for W/R_c≲0.1, as expected. This agreement is excellent even for the most global m=1 modes. For W/R_c≳0.1 global and shearing sheet results differ, moreso for lower m. At larger widths, global bumps are more susceptible to the RWI than shearing sheet bumps. In Fig. <ref>, the Rayleigh stability boundary deviates from constant 𝒥, as described in <ref>. The m=1 mode curves to avoid the Rayleigh stability boundary, for W/R_c≳0.1, another manifestation of the “halfway to Rayleigh" result. The m=2,3 modes don't share this behavior, crossing the Rayleigh boundary. Since these modes are only weakly compressible, with mh=k_yH=0.2,0.3, the “halfway to Rayleigh" behavior is not expected, and is also not seen in the comparison shearing sheet models. The shearing sheet stability boundary at k_yW≃0.7 is at W/R_c≃0.7/m≃0.7,.35,.23 for the modes in Fig. <ref>. 
The global models are more unstable, i.e. to larger widths than this boundary. This destabilizing effect diminishes for smaller W/R_c boundaries, as expected. §.§.§ Effect of Feature Shape Figure <ref> plots marginal stability curves and growth rates for m=1,h=0.1 RWI modes for a range of shapes (but not bumps, shown in Fig. <ref>). Global marginal stability is compared to shearing sheet results for k_yH=mh=0.1. The agreement is again good for W/R_c≲0.1, i.e. W/H≲1 on this plot. The inner and outer gap edges show the largest differences between global and shearing sheet models, starting for W/H≳0.5. The large distance between the gap center and vorticity minima (≃1.7W, see Fig. <ref>) is a natural explanation. For most shapes, global models are more susceptible to the RWI than shearing sheet models of equivalent parameters. Inner gap edges are the only exception (of the shapes considered). Inner gap edges are also the only shape to show reduced Rayleigh instability at larger widths (<ref>). The “halfway to Rayleigh" behavior of the RWI boundary thus also applies to global models as they deviate from the shearing sheet approximation. For narrow widths, W/H≲1, the growth rates in Fig. <ref> are consistent with shearing sheet (cf. Fig. <ref> for k_yH=0.1), similar for all shapes, and given approximately by Eq. (<ref>). For wider features, we do not offer a global correction to this analytic approximation, as the effects seem shape dependent. The basic behavior is that growth rates steadily increase away from the RWI boundary. §.§.§ Halfway to Rayleigh, Globally We refer back to Fig. <ref> for the the minimum value of κ^2/_0^2 along the RWI boundary for global models with m=1, h=0.03 (a slightly thinner disk than above). The results are generally consistent with the equivalent k_yH=mh=0.03 shearing sheet models. Note that global models compare to the orbital frequency _0(R), not the fixed of the shearing sheet. For W≳2H, all models have the RWI boundary occurring “halfway to Rayleigh" with min(κ^2/_0^2)∼0.5-0.6. The inner and outer gap edges again show the largest global corrections at larger widths. Inner gap edges are again the most special case and the most resistant to RWI, especially for wide gaps. Inner gap edges require the lowest values of min(κ^2/_0^2) and the largest enthalpy amplitudes (Fig. <ref>) to trigger RWI. For thinner disks, h≲0.01, global corrections are less significant (for fixed W/H) and min(κ^2/_0^2) more constant along the RWI boundary. This effect is due to stronger compressibility, with mh=k_yH<0.01 (Fig. <ref>). Disks with moderate thickness, 0.03≲h≲0.3, are more realistic but more complicated, due to intermediate compressibility and stronger global curvature effects. §.§ RWI in Dust Traps <cit.> examined which dust trapping rings became unstable to – and would thus be modified by – the RWI. The condition for dust trapping is a maximum in the midplane pressure, P_ mid = _ KP_0/c_0 = _ K/√(γ)√(P_0 _0) assuming a vertically isothermal structure. Figures <ref> and <ref> show the minimum amplitude needed for dust trapping. Note that no dust traps exist for inner gap edges or drops since they reinforce d P_ mid/dR < 0 instead of reversing it. Dust traps that are stable to RWI lie in the parameter space above the yellow dotted dust trapping curves and below the solid RWI boundaries. As in <cit.>, which only considered bumps, stable (to RWI) dust traps exist above a minimum width and for a range of intermediate amplitudes. 
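For a quick check of this trapping condition, note that with the polytropic structure above (Γ = 4/3, n = 0) the midplane pressure scales as P_mid ∝ R^(-3/2) Σ_0^((Γ+1)/2), so a dust trap simply requires a radial interval where this combination increases outward. A sketch, with arbitrary example amplitudes and width:

```python
import numpy as np

def has_dust_trap(shape_S, dPi_over_Pib, W_over_Rc, Gamma=4/3, n=0.0, Rc=1.0):
    """Check for a local maximum of the midplane pressure proxy
    P_mid ~ Omega_K sqrt(P0*Sigma0) ~ R^(-3/2) * Sigma0^((Gamma+1)/2),
    with Sigma0 built from the polytropic enthalpy feature above."""
    W = W_over_Rc * Rc
    R = np.linspace(max(0.05 * Rc, Rc - 6 * W), Rc + 6 * W, 20001)
    X = (R - Rc) / W
    Sigma0 = (R / Rc)**n * (1 + dPi_over_Pib * shape_S(X))**(1 / (Gamma - 1))
    Pmid = R**-1.5 * Sigma0**((Gamma + 1) / 2)
    return bool(np.any(np.diff(Pmid) > 0))   # outward-increasing stretch => pressure maximum

bump = lambda X: np.exp(-X**2 / 2)
print(has_dust_trap(bump, dPi_over_Pib=0.01, W_over_Rc=0.05))  # weak bump: False
print(has_dust_trap(bump, dPi_over_Pib=1.0,  W_over_Rc=0.05))  # strong bump: True
```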
This parameter space is larger for outer gap edges and jumps, vs. bumps, for reasons that can be explained by an analysis of P_ mid similar to that of κ^2 in <ref>. We defer a more detailed study of dust trap stability that further extends <cit.>. We mainly note that such an analysis is facilitated by the insights to the RWI boundary established in this work. § CONCLUSIONS We examine the linear Rossby Wave instability (RWI) with a suite of simplified models, to gain a basic understanding of the conditions for instability, and unstable growth rates. The disk features that trigger the RWI are best characterized by their enthalpy amplitude, Δ and width W. When different combinations of temperature and density produce the same enthalpy profile, the equilibrium velocity and vorticity profiles are the same (Eqs. <ref>, <ref>). We apply enthalpy features with various shapes (<ref>) to a suite of models, in the incompressible shearing sheet (<ref>), the compressible shearing sheet (<ref>) and global models (<ref>). Our main insights, explored in detail in the text, follow. * The RWI in the incompressible shearing sheet (ISS) is simply characterized by two dimensionless parameters: the scaled enthalpy amplitude 𝒥=Δ/( W)^2 and k_yW (wavenumber times width). The ISS RWI can be understood analytically, including the stability boundary (Eqs. <ref>, <ref>) and growth rate (Eqs. <ref>, <ref>). The ISS RWI has a similar mechanism and eigenfunctions to the full disk RWI and to generic shear instabilities (Fig. <ref>). * The RWI in the compressible shearing sheet (CSS) requires the additional parameter, k_yH (wavenumber times scale-height), an inverse Mach number. Modes with k_yH>1 behave incompressibly. Smaller k_yH values show stronger compressibility effects (Fig. <ref>). * The RWI is moderately compressible in typical protoplanetary disks with aspect ratios 0.03≲h≲0.3 <cit.>. Specifically, m=1 azimuthal modes have k_yH=mh→h, and are compressible, while m≳1/h modes are incompressible. * The RWI is usually most readily triggered by the longest wavelength, m=1 modes (; <ref>). However, in very thin disks, modes with different m but mh=k_yH≲ 0.01 will have nearly the same RWI boundary, due to strong compressibility (Eq. <ref>). * Only disk features with widths W≲ 1/k_y can trigger the RWI (Figs. <ref>, <ref>). In global models with a feature at radius R_c, this limit, W/R_c≲1/m, is relevant (i.e. smaller than the disk) for m>1 (Fig. <ref> and <ref>). This limit is roughly derived in <ref>. * The RWI boundary often lies “halfway to Rayleigh instability" in that min(κ^2) drops to ∼0.5-0.6 ^2. This behavior occurs for widths H≲W≲1/k_y=R_c/m (Fig. <ref>), a range that expands for thinner disks (and thus stronger compressibility, Fig. <ref>). This boundary is roughly derived in <ref>. * For narrow features, with W≲ H the RWI boundary follows Δ∝W^3 (Eq. <ref>). This scaling agrees with the low amplitude behavior in <cit.>, see Eq. <ref>. We explain the relevant factors that turn this previously known proportionality into an equality. * The stability boundary for the RWI of localized disk feature (with W≲0.2R_c) can be approximated by equation (<ref>). The enthalpy amplitude and width must be calculated (see <ref>). For wider disk features, the “halfway to Rayleigh" criterion is a good approximation for m=1 modes (Figs. <ref>, <ref>). * Shape effects are generally minor when comparing the same enthalpy amplitude and width. However bumps are the most susceptible to RWI. 
Wide gaps show the largest global corrections, compared to shearing sheet models. The inner edges of wide gaps are the least susceptible to RWI (Fig. <ref>, <ref>). This final point implies that, for wide, symmetric planetary gaps, the outer gap edge should generally support more vigorous RWI and vortex formation. Vortices at the outer edges of gaps could be more prominent in simulations and observations for other reasons as well, including: larger area, longer orbital and viscous timescales, numerical resolution and more dust trapping <cit.>. Alternatively, our results imply that over longer times, the RWI should make wide planetary gaps more asymmetric, with closer and steeper inner edges. The radial powerlaw of the background disk also affects gap asymmetries <cit.>. Our simplified models neglect many physical effects, notably baroclinicity, cooling, 3D motions and self-gravity. Previous works have studied the RWI with these effects and shown their importance (<ref>). Studying these effects with methods similar to this work, e.g. parameter studies of models of increasing complexity, could yield further insights. § ACKNOWLEDGEMENTS We thank Leonardo Krapp, Juan Garrido-Deutelmoser, Gordon Ogilvie, Roman Rafikov, Kaitlin Kratter, Wlad Lyra, Orkan Umurhan and other participants in the sptheory (University of Arizona Star and Planet formation Theory) and pfits+ (Planet Formation in the Southwest) group meetings for inspiring discussions and advice. We acknowledge support from the NASA Theoretical and Computational Astrophysical Networks (TCAN) via grant #80NSSC21K0497.
http://arxiv.org/abs/2407.12192v2
20240716213643
Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts
[ "Sam Yu-Te Lee", "Aryaman Bahukhandi", "Dongyu Liu", "Kwan-Liu Ma" ]
cs.HC
[ "cs.HC" ]
§ INTRODUCTION Prompting is a new way of biasing Large Language Models (LLMs) towards the desired output with natural language instructions <cit.>. Recently, OpenAI's GPT Store opened the door for non-technical people to customize chatbots with prompts. The low technical barriers and high customizability of prompting democratize a wide range of tasks that previously required programming skills <cit.>, e.g., personalized reading and writing assistants <cit.> or visualization creation <cit.>. On the other hand, composing a desired prompt (i.e., prompt engineering) is non-trivial. Design studies in prompt engineering <cit.> have pointed out that evaluation in prompt engineering remains arduous because of five challenges in current evaluation practices, i.e., evaluation is Opportunistic, Manual, Multi-criteria, Dynamic, and Unactionable. These challenges exist even when the evaluation is done on only a few test instances, and are amplified as the evaluation scales up. For example, a prompt designed for text summarization on a news article dataset with one thousand articles requires an evaluation on a larger test set than just a few instances to ensure its robustness. We refer to this kind of prompt evaluation as “dataset scale”, which is under-explored yet challenging. In contrast to machine learning evaluation, where quantitative metrics like the F1 score are typically used, the quality of a summarization prompt output is hard to capture with quantitative metrics. Traditional metrics such as ROUGE <cit.> have been criticized in many ways <cit.>, especially for their inability to differentiate between state-of-the-art models, and are thus not suitable for evaluating summaries generated by LLM prompts. As a consequence, people choose the test set opportunistically, i.e., whatever they see first or the data points that seem easy to evaluate, instead of choosing rigorous representatives of the whole dataset. Moreover, the test set is evaluated by manually scanning through the outputs. This introduces multiple and dynamic criteria in prompt evaluation, where people change evaluation criteria as they see fit after the scanning. Most importantly, this evaluation approach does not necessarily lead to actionable insights, i.e., what kind of instruction is needed, or how the criteria should be expressed to generate the desired output. Designers would still need to go through a highly unpredictable trial-and-error prompt refinement process. In this work, we target text summarization prompt refinement and attempt to explore the tasks that are involved in a systematic evaluation of dataset-scale prompts and the visualization designs that foster actionable insights. We target individuals who seek to tailor chatbots for their specific needs or work requirements using prompts and aspire to efficiently perform systematic evaluations of their prompts. Through the lens of text summarization, we seek to generalize our findings to broader real-world tasks that require dataset-scale evaluation. While text summarization may not be representative of all possible prompting tasks that LLMs could perform, it presents challenges that are not straightforward to solve, and studying them could provide insights towards more generalizable solutions.
First, summarization is extensively studied yet NLP researchers can not agree on a robust quality metric (i.e., metrics like ROUGE <cit.> that seek to quantify the quality of a summary) for evaluating state-of-the-art systems. _1_2 This indicates that quality metrics might have reached their limits in differentiating nuanced quality differences. Second, a significant cognitive load is required to manually evaluate summaries. Designers need to read the lengthy text and the summarized text to decide the prompt performance and refinement directions, making it infeasible at the dataset scale. Through a pilot study, we found that feature metrics, i.e., computational metrics that characterize the summary from different facets, are more desirable than quality metrics for both technical and non-technical prompt designers. _1_3 For example, formality and complexity might be two critical characteristics (features) of a summary if the summarization goal is to produce academic-level writing. The key distinction of feature metrics is that each metric evaluates one characteristic of the generated summary, and the desired summary does not necessarily have a higher score, e.g., a complexity score suitable for kids (lower scores) might be more desirable than for professionals (higher scores). Compared to quality metrics, feature metrics provide a multi-faceted understanding of the generated summaries and a more concrete refinement direction. Based on this finding, we introduce a feature-oriented workflow that involves four tasks: Feature Selection, Example Sourcing, Prompt Refinement, and Evaluation for Refinement. We develop , a VA system that implements the workflow for text summarization, incorporating computational linguistics metrics as features and intelligent agents to support the tasks. It uses cluster visualization to provide an overview of the dataset and support the identification of ideal examples. It provides prompt suggestions based on established prompting methodologies to support prompt refinement. Finally, a scatter plot inspired by BubbleSets and enhanced with dimensional reduction techniques supports users generate actionable insights for prompt refinement. We recruit experts from various domains for practitioner review, from which we confirm the effectiveness of the system. We report the generalizability of the system to a broader range of tasks and discuss insights into human-agent interaction. Our contributions are as follows: * _1_4 We introduce a feature-oriented workflow to address challenges in supporting dataset-scale prompt evaluation, which we summarized from a literature review and a pilot study. * We develop a VA system, , that supports the feature-oriented workflow for text summarization with computational linguistics, intelligent agents, and interactive visualizations. * We evaluate the effectiveness and generalizability of the system with a case study and interviews with practitioners from various domains, and report implications for future directions in prompt evaluation and human-agent interaction. § RELATED WORKS In this section, we first explore the technical challenges and solutions of text summarization evaluation. We then examine current interactive visual interfaces and design studies for iterative prompt engineering, highlighting the research gap in evaluating dataset-scale prompts. 
§.§ Text Summarization Evaluation Automatic summarization is a classic natural language generation (NLG) task that converts a lengthy source text into a condensed text containing the most important information. However, the evaluation of a summarization system has remained a persistent challenge. In the past, people predominantly used n-gram overlap metrics like BLEU <cit.> or ROGUE <cit.> to assess the quality of a generated summary. Later, they are enhanced with embeddings-based similarity measurements like BERTScore <cit.> and MoverScore <cit.>. Still, studies have shown that they cannot reliably quantify improvements if the difference is too small, because they are not sensitive enough to capture the subtle differences in high-quality summaries that humans can perceive <cit.>. This became a serious problem as the capability of NLG models advanced, and finally, these metrics became ineffective as LLMs demonstrated the capability of generating human-level summaries. Another problem with these metrics is that they all require a labeled dataset (i.e., references) to work with, essentially transforming the summary quality evaluation into a similarity evaluation, where a human-written summary is assumed to be optimal. This is not practical in prompting as the practitioners can be non-technical and the outcome of prompting can easily exceed in quality that of any human-labeled dataset. Researchers have been calling for new metrics to assess the NLG quality of most recent models <cit.>. While some reference-free metrics that do not require labels are proposed, such as QA-QG metrics <cit.> or LLM-as-evaluators <cit.>, they do not yet show a consistent correlation with human judgments <cit.>, might be capturing spurious correlations <cit.>, or might exhibit various biases <cit.>. We argue that metrics that attempt to capture the overall “quality” of a prompt output (i.e., quality metrics) are susceptible to misaligning with human judgment, as previous studies have shown that human judgment is multi-criteria and dynamic <cit.>. As an alternative, we introduce feature metrics that characterize the outputs to support sensemaking on the prompt performances. For summarization, we use feature metrics such as formality and naturalness to support dataset-scale prompt evaluation. §.§ Interactive Visualizations for Model Refinement _2_1 Facilitating model refinement with interactive visualizations is an extensive field <cit.>. Many works help model developers debug their models <cit.> by visualizing the data flows or training patterns, but they are designed specifically for the underlying model architecture and are not applicable in prompt engineering. Works that focus on model tuning <cit.> are more similar to the settings in prompt engineering, where visualizations such as line charts or parallel coordinates <cit.> are used to visualize the performance metrics and their relations with hyperparameter settings. Still, new visualization techniques are needed in prompt engineering for three reasons. First, previous systems are designed for model developers. In prompt engineering, the target users are extended to non-technical people whose knowledge on machine learning can not be assumed. Second, previous systems evaluate performances with quantitative metrics such as the F1 score, but these quantitative metrics are not applicable in most prompt evaluation scenarios. Third, previous systems guide users by visualizing the relations between certain hyperparameter settings and the performance metrics. 
In prompt engineering, the search space is all possible text expressions, which cannot be enumerated, and the non-deterministic nature of prompting introduces high uncertainty in the outputs, making it hard to identify relations between prompts and performances. In our work, we explore the possibility of evaluating prompt performance with feature metrics, which are more approachable for non-technical users and can guide them in discovering the relations between prompts and performances. §.§ Interactive Prompt Engineering and Design Studies Given its uniqueness, several design studies have been conducted to explore the challenges in prompt engineering <cit.> and evaluation <cit.>. Zamfirescu et al. <cit.> found that the prompt evaluation practices of non-technical users are opportunistic rather than systematic, due to the lack of experience in controlling automatic systems. Jiang et al. <cit.> found that the mental load of manually skimming large volumes of text introduces a significant evaluation challenge. Moreover, it is hard to transform the evaluation into prompt refinement, i.e., little actionable insight is generated. Kim et al. <cit.> focused on prompt evaluation and reported two additional challenges. First, evaluation is multi-criteria, i.e., the quality of the outputs cannot be evaluated with a single criterion. Second, evaluation is dynamic, i.e., designers expand or change their criteria as they observe unexpected flaws in the outputs. Many works have attempted to support prompt engineering in an interactive, code-free environment. PromptIDE <cit.> and PromptIterator <cit.> support the iterative experimentation process. As researchers and practitioners introduce more prompting techniques and best practices <cit.>, Kim et al. <cit.> and Arawjo et al. <cit.> propose to design chains of reusable blocks to separate prompt design, model selection, and evaluation. Another line of work focuses on providing prompt suggestions, ranging from word-level <cit.> and sentence-level <cit.> to prompt-level <cit.>, where users' prompts are replaced with an expert prompt template. We summarize the prompt evaluation challenges as Opportunistic, Manual, Multi-criteria, Dynamic, and Unactionable. Previous design studies and applications either focus on evaluation at the instance scale, i.e., the prompt is refined and tested on at most a few instances, or support evaluation at the dataset scale only in supervised settings, where performance can be measured with loss or accuracy on labeled datasets. In our work, we explore the tasks and designs to support text summarization evaluation at the dataset scale, which is under-explored and calls for new visualization techniques. § PILOT STUDY From the literature review, we have learned that quality metrics are ineffective in text summarization evaluation <cit.> and hypothesized that feature metrics, such as complexity and naturalness, could be a new way of dataset-scale prompt evaluation. As prompting is a relatively new research area and the typical workflows and challenges are not well-studied, we conducted semi-structured interviews with 6 participants to verify findings from existing works <cit.>, explore new challenges that emerge at the dataset scale, and confirm the feasibility of using feature metrics for dataset-scale prompt evaluation. We recruited users of LLMs (e.g., ChatGPT) from both technical and non-technical backgrounds. Below, we report our study design and findings.
§.§ Study design In our interview, we seek to answer the following research questions (RQs): * RQ1: What are the challenges in dataset-scale summarization prompt evaluation? * RQ2: Can feature metrics address the challenges in RQ1? * RQ3: What is the preferred level of explanation of feature metrics? * RQ4: What is the preferred way of supporting prompt optimization? To provide context for the participants, we used summarization on a news article dataset as a simulated scenario in the interview. For RQ1, participants were first asked to write an initial prompt that can summarize a news article, and then brainstorm ways to evaluate it. Then, we introduced a set of potential features to be used for evaluation and asked participants to brainstorm ways of evaluation again (RQ2). To answer RQ3 and RQ4, we prepared different levels of feature metric explanations and prompt editing support, as shown in <ref>, and asked the participants to choose the levels they prefer and explain why. Participants We recruited participants with varying backgrounds: two data/visualization scientists (P1-2), two NLP researchers (P3-4), and two non-technical researchers with backgrounds in environmental science (P5-6). They also had varying levels of experience with prompting. The most experienced participant was an LLM researcher who had worked on multiple related research projects and a related internship. Middle-level experienced participants (N=2) had designed prompts programmatically. The least experienced participants (N=3) had used ChatGPT for preparing presentations or summarizing long texts. Participants reported frequent usage of prompts for at least half a year. Procedure The 30-minute interview was conducted in a semi-structured way. We started by asking about the participants' background, experience with prompting, and their typical workflow. Then, we introduced a simulated scenario in which they designed a summarization prompt for news articles that takes one news article and outputs one summary, and emphasized that the prompt should be generalizable to 100 articles. Participants were not required to write actual prompts since we were only interested in their thought process. Finally, we asked about their preferences on different levels of feature metrics and prompting support. All participants received 5 USD in compensation. §.§ Findings Regardless of their backgrounds, all participants reported a similar workflow for prompting: first write an initial prompt to get a baseline response, then iteratively refine the prompt by skimming through the responses. The reported challenges align with the five challenges we summarized from the literature review. In addition, we identified several new findings when prompts are evaluated at the dataset scale: F1 Dataset-scale evaluation is infeasible without support. The simulated scenario revealed that some external support is necessary for dataset-scale evaluation (RQ1). In their current practice, participants would randomly pick a few examples to see if the prompt works well. All participants (P1-6) agreed that such a way of evaluation becomes infeasible as the evaluation scales up to a dataset, as they would need to manually skim through hundreds of summaries, far beyond human cognitive limits. Some (P3, P4) suggested automated metrics as a possible solution, but “it is hard to find a metric that can generalize across the diverse tasks enabled by prompting”.
The non-deterministic nature of prompting makes it hard to guarantee that the prompts can generalize well without actually executing them, which would be costly in time and slow down the iterative refinement. F2 Features are critical to dataset-scale evaluation. Decomposing goals into features before the refinement presents opportunities to address the dynamic and unactionable challenges. When asked to evaluate prompts that generate an “academic” summary, participants were clueless about the suitable evaluation criteria. However, after introducing the feature metrics, all participants agreed that these metrics could be used as evaluation criteria to cover various summarization requirements (RQ2), e.g., “academic” could be expressed as complex, very formal, and having a neutral sentiment. Previous works have pointed out that prompt evaluation is dynamic in that designers frequently change the evaluation criteria and redefine “success” after finding unexpected flaws in the output, and that it is hard to gain actionable insights <cit.>. Our interview revealed that not knowing which features constitute the intended goal is a major obstacle in evaluation. By assisting designers to systematically make sense of potential features and select the most appropriate ones, the evaluation criteria are less likely to change, and designers can identify weak aspects in the iterative refinement process. F3 Metrics are guidance, not target. When asked about the preferred explanation of feature metrics (RQ3), participants predominantly (P2-6) chose L2: textual definitions in computational linguistics (<ref>). Most participants excluded L1 for being too generic and L3 for being too specific for the model to follow as an instruction. Moreover, some (P2, 4, 6) had doubts about the reliability of the L3 computational formula, questioning that “it might not be a good representation of complexity”. P2 emphasized that “(designers) should not optimize for the formula, because LLMs can understand complexity more deeply”. This observation echoes Goodhart's Law <cit.> (when a measure becomes a target, it ceases to be a good measure) and reflects the current situation in text summarization, where researchers are discouraged from using metrics like ROUGE to evaluate LLM-generated summaries. As a result, we assign each feature semantically meaningful categorizations and refrain from showing metric values. F4 Prompting methodology is more important. All participants preferred prompt suggestions over prompt replacements (RQ4), despite the different levels of prompt editing support introduced in previous works. P6 commented that “(receiving suggestions) is a more learnable experience”. Other participants also expressed the desire to learn to design good prompts, i.e., the methodology of prompting <cit.>. We also found that even without knowledge of the established methodologies, participants developed similar methodologies through their own prompting experience, such as chain-of-thought (CoT) <cit.> or providing examples. This suggests that the design of prompting support should communicate the underlying prompting methodologies. § REQUIREMENT ANALYSIS Combining the literature review and the pilot study findings, we identified four tasks that users must perform in a systematic evaluation workflow: Feature Selection, Example Sourcing, Prompt Refinement, and Evaluation for Refinement, as shown in <ref>. Since the workflow is guided by features, we refer to it as a feature-oriented workflow.
Next, we introduce each task and discuss its necessity in detail. §.§ Systematic Workflow * Feature Selection: Designers need to make sense of the potential features and their correlations, and then pick a subset that best represents the intended goal. Designers should avoid fixating on minor feature values F3, and instead set target ranges (configurations) of the features. As indicated by F2, making Feature Selection explicit and systematic enables the subsequent evaluation to become less dynamic and more predictable. * Example Sourcing: Once the features are selected, designers should find ideal examples that fit the features, which can be used to validate the feature configurations. Examples are also used in the prompts or to compare the outputs in subsequent tasks. * Prompt Refinement: To guarantee a systematic evaluation, designers also need to refine prompts systematically. First, a representative validation set must be chosen statistically to guarantee its validity. Then, prompting should follow the methodologies proposed by prompt engineers and researchers to write better and more controllable prompts F4. Finally, prompt drafts are executed on the validation set to generate insights for refinement. * Evaluation for Refinement: With the previous tasks, we establish a basis to ensure that a systematic evaluation is possible. After getting feedback from the selected features, we emphasize that designers should use features as guidance, not targets F3. Once the refinement on the validation set is completed, designers can test the prompt on the whole dataset to confirm the performance. Ideally, designers would complete the tasks sequentially (<ref>, yellow arrows). Nevertheless, the workflow supports an iterative process: designers can use the feedback from a later task to refine a previous task (purple arrows). For example, if T3 shows that the examples are not representative, designers can go back to T2 for better examples. Such a systematic workflow presents a structured way of refining and evaluating prompts, allowing easy identification of failure points. §.§ Design Requirements A systematic evaluation with the above tasks is non-trivial for prompt designers to do properly F1, especially for non-technical people. We summarized four design requirements to support T1–4: * Support sensemaking and recommendation of feature metrics. As our target audience does not necessarily come from a technical background, selecting features introduces a steep learning curve. Our system should support an automatic recommendation of feature metrics based on the user's goal and provide explanations. * Support overview and identification of ideal examples. Identifying and validating ideal examples at the dataset scale is cognitively demanding. To facilitate this process, our system should use statistical analysis to guarantee the soundness of the examples and interactive visualizations to reduce the cognitive load. * Provide suggestions based on established methodologies. Prompting is a relatively new area and methodologies are constantly improving. As our users might not be aware of such advancements, our system should be designed to enforce state-of-the-art methodologies and give suggestions when necessary. * Support visual tracking of prompt refinement effects. It is critical yet demanding for designers to track the evaluation results with pure statistics. Our system should support such tracking through visualization and convey the performance of each prompt.
§ AWESUM: SYSTEM DESIGN Based on the design requirements, we developed Awesum[<https://github.com/SamLee-dedeboy/Awesum>], a visual analytics system that supports the feature-oriented workflow for dataset-scale summarization prompt refinement and evaluation (<ref>). Next, we introduce each component in detail. §.§ Feature Computation Awesum characterizes summaries with six features and guides users toward their goals. We use well-established metrics for complexity, formality, and sentiment, but no known metrics suit our needs for faithfulness and naturalness, so we introduce new computations for them. Following R1, we categorize each metric into easy-to-understand levels, encouraging users not to fixate on minor differences in metric values. Below, we briefly summarize their definitions. Computational details and categorizations are presented in the appendix. Complexity The complexity score characterizes the ease with which a reader can understand a written text. We use the Flesch Reading Ease Index <cit.> to measure the complexity of a text based on sentence count, word count, and syllable count. We simplify its original eight-level categorization into five levels: Elementary, Middle School, High School, College, and Professional, which indicate the knowledge level required for a reader to easily understand the text. Formality The formality score characterizes how formal a piece of text is in terms of linguistic structures, conventions, and vocabulary. We use the measure of textual lexical diversity (MTLD) <cit.>, which is calculated from the ratio of unique word stems to the number of words. We categorize it into Informal, Standard, Formal, and Very Formal. Sentiment The sentiment score characterizes the emotional tone expressed in a piece of text. We use VADER <cit.>, a lexicon-based sentiment analysis model, to measure the sentiment of a text, which generates a score between [-1, 1]. Then we categorize the sentiment into Negative, Neutral, and Positive using a threshold of 0.3. Faithfulness Faithfulness characterizes the degree to which a generated text is consistent with the input information in terms of semantic similarity, completeness, and accuracy <cit.>. We incorporate this feature in response to the “hallucination” issue that exists in most LLMs and is of most concern to prompt designers. Although more advanced approaches exist, we calculate the faithfulness score based on Named Entity Recognition overlap (NER-overlap) <cit.> for its transparency, robustness, fast computation speed, and reasonable alignment with human judgment. We categorize it into Bad, Low, Avg, and Good. Naturalness Naturalness characterizes how well a text reads as if human-written. To the best of our knowledge, no metrics have been proposed to evaluate the naturalness of texts. Based on the insights from Pu et al. <cit.> that LLM-generated text exhibits statistical differences in certain linguistic features, such as part-of-speech (POS) tags, we conducted experiments on a dataset <cit.> that contains both human-written and LLM-generated summaries to select differentiable linguistic features and use them to compute the naturalness score with a weighted sum. The score is similarly categorized into Bad, Low, Avg, and Good. Length We use word count as the length of a text. Although a simple feature, prompt designers and our pilot study participants have reported difficulties in controlling the length of the generated text. We thus include it and categorize it into Short, Mid, Long, and Very Long.
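To make the feature computation concrete, the following is a minimal sketch of how the directly available scores might be obtained with off-the-shelf libraries. The use of the textstat, vaderSentiment, and lexical_diversity packages is an assumption about the implementation, the sentiment thresholds follow the description above, and faithfulness and naturalness are computed separately (see the appendix).

```python
# Minimal sketch of per-summary feature scoring; library choices are assumptions.
import textstat
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from lexical_diversity import lex_div as ld

_vader = SentimentIntensityAnalyzer()

def complexity(text: str) -> float:
    # Flesch Reading Ease r is in [0, 100]; complexity is defined as 100 - r.
    return 100.0 - textstat.flesch_reading_ease(text)

def formality(text: str) -> float:
    # Measure of Textual Lexical Diversity (MTLD); higher means more formal.
    return ld.mtld(text.lower().split())

def sentiment(text: str) -> float:
    # VADER compound score in [-1, 1].
    return _vader.polarity_scores(text)["compound"]

def sentiment_level(score: float) -> str:
    # Thresholds of +/-0.3 give Positive / Neutral / Negative, as described above.
    return "Positive" if score > 0.3 else ("Negative" if score < -0.3 else "Neutral")

def feature_scores(summary: str) -> dict:
    return {
        "complexity": complexity(summary),
        "formality": formality(summary),
        "sentiment": sentiment(summary),
        "length": len(summary.split()),  # word count
        # "faithfulness" and "naturalness" are added analogously (see appendix sketches).
    }
```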
Finally, we construct a feature vector F=(f_1, f_2, f_3, …) for each summary, where f_i is the numerical score of a feature calculated on the generated summary. Then, for each feature, we compute its z-score and use it as the value of the corresponding dimension in the feature vector. The feature vector of each summary is the basic computation unit in other parts of the system, namely correlation analysis, clustering, and dimensionality reduction. We chose the above six feature metrics because each feature has a clear semantic meaning. Considering that our target users could include non-technical people, the interpretability of the metrics is essential to a systematic and rigorous evaluation. Moreover, we refrain from using LLM-evaluators to support user-defined features <cit.>, even though it might broaden the applicability of the system, for three reasons. First, LLM evaluators have received criticism for their inconsistent alignment with human judgment <cit.> and potential biases <cit.>. Second, we have conducted experiments to show that the non-deterministic nature of LLMs makes them unreliable in dataset-scale evaluation. Experimentation details are presented in the supplemental materials. Third, the goal of developing Awesum in this paper is to verify the effectiveness of the feature-oriented workflow on the text summarization task. Considering the above limitations of LLM evaluators, using them might introduce unnecessary confounding factors in the evaluation. §.§ Feature Selection Feature Selection View supports T1 with two sub-tasks: feature correlation analysis and feature recommendation R1, which support users in setting a feature configuration for the goals of their summarization prompt. Feature Correlation Matrix The feature correlation matrix shows the Pearson correlation coefficients between all pairs of features (<ref>-a) and is designed to prevent users from selecting conflicting features that are hard to accomplish simultaneously. For example, if the correlation matrix suggests that complexity and naturalness have a negative correlation, users should not pick a configuration that has both high (or low) complexity and naturalness. Visualizing the correlations as a matrix allows users to avoid such conflicts at a glance R1. The L2 explanations of each feature can be inspected by hovering over the feature tags. Based on the result of the baseline prompt, the system calculates all the feature vectors F on the generated summaries and the Pearson correlation coefficient between all pairs of features. If the coefficient exceeds a threshold, we use a square to indicate its significance and encode the correlation strength with its length. Negative and positive correlations are encoded by red and green, respectively. Features omitted by the user in the configuration are shaded in stripes. Feature Recommendation In case the feature correlation matrix does not provide a concrete idea for the user, the Feature Recommendation Panel (<ref>-c and -d) integrates a chatbot to recommend feature configurations. Users can express their goals in natural language (e.g., generate summaries suitable for academic writing), and the chatbot will respond with a recommended configuration with explanations R1. Under the hood, we provide the L2 feature definitions and their categorizations to the chatbot to ensure reasonable responses. If significant correlations exist in the features, the chatbot highlights the correlations and gives suggestions accordingly. The system automatically fills in the recommended configuration (<ref>-b). Dropdown menus are provided to change feature levels manually.
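A small sketch of how the z-scored feature vectors and the pairwise Pearson coefficients behind the Feature Correlation Matrix might be computed is given below; the pandas/numpy usage and the 0.3 significance threshold are assumptions for illustration.

```python
# Sketch: z-scored feature vectors and the Pearson correlations drawn as squares.
import numpy as np
import pandas as pd

def feature_frame(feature_dicts: list[dict]) -> pd.DataFrame:
    # One row per summary, one column per feature (e.g., the dicts from the earlier sketch).
    return pd.DataFrame(feature_dicts)

def zscore(df: pd.DataFrame) -> pd.DataFrame:
    # Per-feature z-scores form the feature vectors F used throughout the system.
    return (df - df.mean()) / df.std(ddof=0)

def significant_correlations(df: pd.DataFrame, threshold: float = 0.3) -> pd.DataFrame:
    corr = df.corr(method="pearson")            # Pearson coefficient for every feature pair
    np.fill_diagonal(corr.values, np.nan)       # drop self-correlations
    return corr.where(corr.abs() >= threshold)  # only these cells are rendered as squares
```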
§.§ Example Sourcing Example Sourcing View supports T2 with Cluster Plot (<ref>-b), which presents an overview of the dataset, and a side panel that can be switched between Cluster Profiles (<ref>), which visualizes cluster feature characteristics, and Feature Distributions (<ref>-c), which provides finer control over feature range selection R2. Users can inspect Cluster Profiles or Feature Distributions and narrow down the target feature ranges (green bubble in <ref>-b and green ranges in <ref>-c) to identify ideal examples. Cluster Analysis The system applies the OPTICS clustering algorithm <cit.> on the initial summaries using their feature vectors. The OPTICS algorithm has several benefits over other clustering algorithms such as KMeans <cit.>. First, OPTICS is more flexible as it automatically detects the cluster densities to decide cluster numbers and thus does not require prior knowledge of the dataset's cluster shapes. Second, it ensures that all generated clusters have low variance in feature vectors, making each cluster distinctive. Third, it removes “noise” points that are not close to any cluster, which is ideal for identifying examples as our users might not have the cognitive power to identify examples from a large amount of data. These benefits make the OPTICS algorithm ideal for our system. In addition, the clustering results are used to generate the validation set by under-sampling <cit.>, i.e., cluster centroids are assigned to the validation set, ensuring the diversity of the validation set for robust evaluation. Cluster Plot In Cluster Plot, each initial summary is encoded as a circle and colored by its corresponding cluster. Awesum applies Kernel PCA <cit.> with cosine distance on the feature vectors of the initial summaries to generate 2D coordinates, which plots clusters with similar feature characteristics closer to each other, forming regions in 2D space that represent certain characteristics. The design considers maintaining the visual continuity between example identification and tracking prompt refinement effects (<ref>). Thus, we look for parametric dimensionality reduction methods, where an explicit mapping function (i.e., projection to low-dimensional space) can be reused, excluding popular non-parametric methods like t-SNE <cit.>, UMAP <cit.>, or MDS <cit.>. Kernel PCA allows us to reuse the same projection in subsequent steps, where a visual tracking of the performances of different prompts is provided, while capturing the non-linear feature relationships in the data. We additionally employ two techniques to improve visual clarity. First, we hide the points deemed noise by the OPTICS algorithm, as they are not representative, and only show them upon user toggling. Second, we apply collision detection and a force-directed layout that attracts each point to its cluster's centroid. Although this would affect the point positions given by Kernel PCA projections, this is acceptable as the exact position of each point would not affect users identifying ideal examples T2. Since our target audience is non-technical prompt designers, we emphasize clarity over accuracy.
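A minimal sketch of these clustering and projection steps, assuming scikit-learn's OPTICS and KernelPCA (with a cosine kernel standing in for cosine distance) and illustrative parameter values; the fitted projection is kept so it can be reused later for visual tracking.

```python
# Sketch: OPTICS on the z-scored feature vectors, Kernel PCA for the 2D Cluster
# Plot, and a validation set drawn from cluster centroids (under-sampling).
import numpy as np
from sklearn.cluster import OPTICS
from sklearn.decomposition import KernelPCA

def cluster_and_project(X: np.ndarray):
    # X: (n_summaries, n_features) z-scored feature vectors.
    labels = OPTICS(min_samples=5).fit_predict(X)       # label -1 marks noise points
    kpca = KernelPCA(n_components=2, kernel="cosine")   # parametric: transform() is reusable
    coords = kpca.fit_transform(X)                      # 2D coordinates for Cluster Plot
    return labels, coords, kpca

def validation_set(X: np.ndarray, labels: np.ndarray) -> list[int]:
    # Pick the member closest to each cluster centroid as a validation case.
    picks = []
    for c in sorted(set(labels) - {-1}):
        idx = np.where(labels == c)[0]
        centroid = X[idx].mean(axis=0)
        picks.append(int(idx[np.argmin(np.linalg.norm(X[idx] - centroid, axis=1))]))
    return picks
```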
Cluster Profiles Cluster Profiles show the feature ranges of each cluster in a scaled vertical bar chart (<ref>). Considering R2, we want to highlight distinguishing feature characteristics among clusters through Cluster Profiles. Inspired by the Difference Overlay design <cit.>, we scale the bars by aligning the global mean of each metric at the center of a profile. Then, we take the maximum of (global_max - global_mean, global_mean - global_min) as the range of half of the width and scale each bar accordingly. This way, users can easily identify the distinctive features of each cluster. Clicking on a cluster profile highlights its position in Cluster Plot (green bubble) and automatically sets the cluster points as ideal examples. Feature Distributions The Feature Distribution Panel (<ref>-c) supports users in identifying ideal examples with finer control over the feature configurations R2. It shows the unscaled ranges of each cluster for each feature, which complement the scaled ranges in Cluster Profiles while maintaining the same visual encoding for colors (cluster label), emphasizing visual continuity and simplicity. The cluster bars are ordered from top to bottom by their mean value on the corresponding feature. For each feature, users can click on a cluster bar to set its range as a target or drag the double-direction slider at the top to adjust the target range, indicated by a light green background. Supporting T2 at a finer scale ensures the system adapts to diverse goals. §.§ Recommendation Recommendation View (<ref>-d) supports validation of the target ranges of the features T2. It shows the content of the examples with their feature categorizations. After inspecting the examples, users can click the star icon and add them to the prompt. The design of Recommendation View addresses two weaknesses in Example Sourcing View. First, it is hard for users to estimate the number of recommended examples in Cluster Plot, since they would need to estimate the region's area to do so. To compensate, we use a fixed height for each example in Recommendation View by collapsing excessive content, ensuring that as many examples as possible are shown in the viewport. This allows users to estimate the number of examples at a glance from the total height of the examples. Second, Cluster Profiles are scaled, so they do not strictly encode the range of each feature. This could mislead users into choosing the wrong feature configuration. To compensate, we encode the feature values in horizontal green bars (<ref>-d1). Users can skim through the examples with awareness of the true ranges of each feature at a glance. §.§ Prompt Editor Prompt Editor View supports users with Prompt Refinement T3. Informed by state-of-the-art prompting methodologies <cit.>, we divide a prompt into five blocks: Persona, Context, Constraints, Examples, and Data. In each block, users have a clear goal for the content of the section R3. For example, the Persona block should specify a role-based identity for the AI to adopt. This explicit division of a prompt encourages users to follow established prompting methodologies with minimal prior knowledge of them. Once a prompt is written, users can click the “Apply” button to test it on the validation set. In addition, users can hover over the block titles to learn the purpose of the block and get suggestions specific to their feature configuration from a chatbot R3, as shown in <ref>-e.
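As an illustration of the five-block structure, the following sketch assembles a prompt and runs it with the "gpt-3.5-turbo" model named in the appendix; the template wording and the OpenAI client usage are assumptions about the implementation, not the system's exact code.

```python
# Sketch: assembling the five prompt blocks and executing them on one article.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def build_prompt(persona: str, context: str, constraints: str,
                 examples: list[str], article: str) -> str:
    example_block = "\n\n".join(f"Example summary:\n{e}" for e in examples)
    return (
        f"{persona}\n\n"                     # Persona: role-based identity for the model
        f"{context}\n\n"                     # Context: task background
        f"Constraints:\n{constraints}\n\n"   # Constraints: e.g., tone or length requirements
        f"{example_block}\n\n"               # Examples: ideal summaries sourced in T2
        f"Article:\n{article}\n\nSummary:"   # Data: the input to summarize
    )

def run_prompt(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```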
§.§ Prompt Comparator Prompt Comparator View supports T4 with two components: Prompt Tracking Panel (<ref>-f) and Bubble Plot (<ref>). Both components incorporate visualizations to reduce the cognitive load of keeping track of multiple iterations of prompts and their performances R4. Since the refinement is conducted on the validation set, a “Test” button is provided to execute the prompt on the whole dataset once users are satisfied with the prompt. Next, we introduce each component in detail. Prompt Tracking Prompt Tracking Panel (<ref>-f) stores all the prompts that the user has written, including the prompt content, the examples, and their evaluation result, using a horizontal dot plot (<ref>-g). Tracking the history of prompts is essential as prompt refinement and evaluation is a highly iterative process. To provide visual tracking R4, each prompt snippet includes a dot plot that visualizes the performance of the prompt in detail, where each row is a feature and each dot in a row is a validation case. Since the validation set is generated from the cluster centroids, we encode the dot size with its corresponding cluster size to indicate its importance, i.e., larger dots are more important because they represent more points. We stroke the feature configuration in light green on the dot plot to clearly present the goal of the prompt: make all the dots fall into the light green bars. Bubble Plot Bubble Plot (<ref>) supports visual comparison between any two prompts and visualizes their “distances” from the ideal examples to support T4. As illustrated in <ref>, we reuse the Kernel PCA projection from Cluster Plot for all points in the plot to ensure visual continuity. Ideal examples are highlighted in a light green bubble as an indication of the goal. Two iterations (old and new) of the validation cases are plotted as circles, where the circle radius encodes the corresponding cluster size to suggest importance, and they are surrounded by light and dark gray bubbles, respectively. This allows the user to estimate the performance of a prompt through the distance between its bubble and the green bubble R4. The expected visual pattern for a “better” prompt is that the dark bubble is “moving towards” the green bubble, and vice versa. To make the comparison more explicit, we connect the same validation case from two versions of prompts with curves generated by linear sampling in high-dimensional space. This effectively visualizes the “trajectory” of the validation case incurred by the new prompt. Specifically, for each pair of feature vectors (F_1, F_2), we linearly sample ϵ=100 points between F_1 and F_2 and project each point to 2D space, thus forming the curve, and we encode the importance with curve width. Since the projection is non-linear, the sampled trajectory appears curved. Also, using a fixed number of sampling points (ϵ) makes long trajectories appear dotted. We use a color-blind-friendly color palette to indicate the direction of the trajectory: yellow if the validation case is moving closer to the ideal examples, and purple otherwise. Validation cases with non-significant changes (below a threshold) are colored in gray. Users can click the “Select Comparison” button to select any two iterations of prompts to compare.
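A sketch of the trajectory computation described above: ϵ=100 points are linearly sampled between the old and new feature vectors of a validation case and projected with the same fitted Kernel PCA used for Cluster Plot (the kpca object from the earlier sketch); the function signature here is illustrative.

```python
# Sketch: Bubble Plot trajectories via interpolation in feature space.
import numpy as np

def trajectory(f_old: np.ndarray, f_new: np.ndarray, kpca, epsilon: int = 100) -> np.ndarray:
    t = np.linspace(0.0, 1.0, epsilon)[:, None]
    samples = (1.0 - t) * f_old + t * f_new   # linear sampling in high-dimensional space
    return kpca.transform(samples)            # projecting each sample bends the curve (non-linear map)
```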
Bubble Plot Design Iteration The design of Bubble Plot went through three iterations, as shown in <ref>. In v1, we encoded the cluster of each validation case with color and connected validation cases from two different prompts with an animated straight dotted line to indicate their moving directions. However, the lines were cluttered, and interactions were needed for further analysis. Barely any insights could be generated at a glance. Inspired by Scheepens et al. <cit.>, in v2, we mitigated the clutter issue by connecting validation cases with curves generated by linearly sampling in high-dimensional space and used green (closer) and red (further) to encode the moving direction of the validation cases. Users could now compare two prompts at a glance, but it was hard to estimate the performance of individual prompts, which is critical in R4. This motivated us to surround the validation cases with BubbleSets to indicate the relative distance from the target bubble, providing clear visual tracking of prompt performance. § CASE STUDY We demonstrate how a non-technical prompt designer with no prior prompting experience can use Awesum to refine and evaluate prompts. Alice is a middle school teacher, and she wants to customize a chatbot that can summarize sports news in a way that appeals to teenagers. She loads the sports-related materials into Awesum and uses a baseline prompt to generate the initial summaries. The system executes the prompt and processes the summaries. Alice starts by selecting the features that make a summary appeal to teenagers T1. She asks in the Feature Recommendation Panel: “I'm a middle school teacher. How to make the summaries appeal to teenagers?”. The chatbot answers with a recommended feature configuration and explanations. She checks the definitions of the features and adjusts “sentiment” to “positive” as she wants the summaries to be more energetic, as shown in <ref>-a. After setting the feature configuration, she proceeds to find ideal examples that fit this configuration T2. She clicks the arrow in the green background at the top right, which triggers the system to automatically find a cluster that best fits the feature configuration. The system highlights the purple cluster with a green background and dotted lines that connect to “recommendations” at the top (<ref>-b). She skims through the content and the feature ranges of the purple cluster in Recommendation View (<ref>-d) to ensure that they fit her expectations. She is satisfied with the cluster and selects the two best examples to use in the prompt (starred in green). After Feature Selection and Example Sourcing, Alice proceeds to write the prompt T3. Even without prior prompting experience, she can easily understand what each block means and what should be written, as the blocks resemble interaction patterns with chatbots. She clicks the “Get Suggestions” button to brainstorm some ideas for the Persona block (<ref>-e), then decides to put “You are a middle school teacher who is trying to get students interested in sports news” in the Persona block. She repeats this for the Context and Constraints blocks and then executes the prompt on the validation set. The system executes the prompt and recalculates the features. The new prompt's content and its evaluation result are presented in Prompt Tracking Panel (<ref>-f), with the new prompt colored in dark gray and the baseline prompt colored in light gray. From Bubble Plot (<ref>-h1), she can quickly tell that the new prompt has a poor overall performance, as the gray bubble is not moving closer to the green bubble. She inspects the dot plot (<ref>-g) and finds that Complexity and Length are not being satisfied. She thus goes back to the prompt and refines the parts where she describes complexity and length T4, seeking the prompt suggestion chatbot for help. After the refinement, the new prompt is much better, as the dark gray bubble has significant overlap with the green bubble, with only a few exceptions.
The trajectories show that most validation cases are moving towards the green bubble, and only a few do not have significant changes. Alice is satisfied with this prompt, and she clicks the “Test” button to test it on the whole dataset, which shows that the prompt works as expected. This case study shows that the system is effective in supporting dataset-scale prompt refinement and evaluation. Using the system, even non-technical people with no prior prompting experience could write a reasonably good prompt, evaluate the prompt, and figure out which parts need to be refined. By following the systematic feature-oriented workflow behind the system, users overcome the first four challenges (opportunistic, manual, multi-criteria, and dynamic) in prompt evaluation. Cluster Plot and Bubble Plot allow users to overcome the unactionable challenge by visualizing the prompt performance. § PRACTITIONER REVIEW AND DISCUSSION Awesum is evaluated through a practitioner review. Even though the system is designed for text summarization, we aim to explore a broader range of tasks in people's everyday work that can also be supported. We recruit practitioners from various backgrounds who are interested in customizing LLMs to automate their professional workflow. We evaluate the effectiveness of the system with them and discuss its generalizability. In addition, we report challenges in prompt evaluation and human-agent interaction that arise in professional usage. §.§ Participants and Procedure Four practitioners from different backgrounds were recruited: C1 is a correspondent for the New York Times specializing in scientific topics; C2, C3, and C4 are researchers in ecology, NLP, and sociology, respectively. C1, C2, and C4 came from non-technical backgrounds and had only prompted in ChatGPT. All participants expressed strong interest in customizing LLMs with prompts to automate their workflow. The review procedure consists of three sessions, 20 minutes each. In the first session, we introduce the background and give a quick walkthrough of the interface. In the second session, participants choose a topic (from Politics, Sport, Technology, and Business), propose a summarization goal, and then use Awesum to write prompts that fulfill that goal. Finally, participants engage in a semi-structured interview. We provide at least 25 US dollars in compensation for all participants. §.§ User Feedback Overall, the system received positive feedback on usability and user-friendliness. Still, a minimum amount of training is needed to use the system. Below, we report the practitioner review on system design, prompting methodology, learning curve, and visualization literacy. System Design The system was highly praised for being easy to follow despite its complexity. Participants commented that “I have no trouble understanding what I should do (C2)” and that “The organization makes a lot of sense (C1)”. All participants successfully wrote a well-performing prompt for their goals based on the feedback from the system. As C3 said, “You can tell (from Bubble Plot) that initially my prompt wasn't good, then I changed the prompt according to the features (L2 descriptions) and it's giving exactly what I wanted.” Bubble Plot and the dot plots helped participants examine and locate unfulfilled features, and the L2 descriptions helped them refine prompts.
Prompting Methodology Providing suggestions supported by the established prompting methodologies inspired less experienced participants to explore more possibilities, going beyond our initial expectation that Awesum simply facilitates prompt writing: “(C4): If I had just a blank text box, I would not know how to write the first sentence …and I would not know that this is something that you can do.” Most inexperienced prompt designers find prompting challenging because they do not know what the model is capable of. Before using the system, C4 did not know that designers could instruct the model to adopt a persona, and that inspired her to explore what could be used as a persona. As opposed to L1–3 prompt editing support (<ref>), providing prompting suggestions (L4) taught participants what to expect from the model. Learning Curve There is still a non-trivial learning curve to overcome, despite Awesum being highly praised for its clarity. For C1, “without you to talk me through (the interface), it would have been hard to figure out, (because) there is a ton of information onto one page.” This shows that for people from non-technical backgrounds, engaging in a systematic workflow is not something they are familiar with and could be challenging at the beginning. Still, the clarity of the workflow and visualization design helped smooth this process, as C1 and C4 both commented that “it was like playing a game”. By visualizing the prompt's performance with Bubble Plot, the system provided a clear goal and abundant visual hints for the participants to grasp their current status and how far away they were from the goal, leading to a more predictable experience. Visualization Literacy For people unfamiliar with high-dimensional visualization, Cluster Plot could be confusing. As C2 mentioned, “I can understand that each region has its own characteristics and that the green bubble is my goal, but I would still want to know how it works under the hood to really make sure I'm not misunderstanding.” C4 also raised a similar concern: “Figuring out what these functionalities do is one thing; figuring out what these functionalities mean, that's another thing.” This is a valid concern since Kernel PCA introduces distortion in the two-dimensional scatter plot, which could mislead users who are not aware of it. Even though we designed Bubble Plot with loose encodings to mitigate this issue and encouraged users to use metrics as guidance, users could still fixate on absolute metric values without a minimum amount of training. §.§ Design Implications In this section, we first discuss how the system can be generalized to a broader range of tasks. Then, we report three implications on human-agent interaction: the issue of trust, experimenting with LLM capabilities, and using features as boundary objects. Supporting Information-Condensing Tasks Beyond summarization, using the system to evaluate prompts for condensing information was highly praised.
When asked how well the system could be generalized to other tasks in their professional work, all participants agreed on its strong potential for condensing information, e.g., generating logical relations across multiple articles (C1), which raises the importance of proper evaluation, as C4 mentioned: “Say I take 2000 tweets, something I might do with AI is to clean these tweets because not all tweets have meaningful things according to my use…but I can't use it unless I have a grasp of what the AI's assumption is about `meaningful'.” Challenges in evaluating information-condensing tasks resemble those in text summarization in many ways: there is a great amount of information (data); it is cognitively demanding to evaluate even a small number of outputs; and it is probably unreliable to evaluate all of them with quality metrics. Participants agreed that the current feature metrics cover most scenarios in their professional work, which is reasonable because the standards for “professional writing” are quite similar. In general, the system is shown to readily apply to information-condensing tasks. Shift in Task Formulation In the practitioner review, we observed a shift in how practical tasks are formulated as researchable tasks. For example, to understand a large amount of customer feedback quickly (practical), researchers in traditional machine learning formulate the task as a sentiment analysis or topic modeling task (researchable). In the pilot study and practitioner review, non-technical participants could design prompts that directly solve the practical task at hand. In prompt engineering, it is less important to rigorously define a researchable task given the high customizability of prompts. Therefore, while the system may not apply to many existing NLP tasks, such as data extraction <cit.>, making it less advantageous for NLP researchers, it still brings much value to non-technical practitioners. Applying to Image Generation Tasks C3 was very positive about applying the system to image generation tasks: “you would need some other metrics (for image generation) like the styles of the images, but the other things are more or less the same.” The system design makes it easy to extend feature metrics for a broader range of tasks. Given the right set of features, the rest of the system is agnostic of the underlying features because OPTICS clustering and Kernel PCA both operate on feature vectors. For image generation prompt evaluation, a potential direction for computing feature metrics is to evaluate the style of an image through image similarity analysis <cit.>. For example, one could collect a dataset of cartoon images and calculate the similarity between a generated image and the dataset to quantify how “cartoony” the image is. Such stylistic metrics could be used as feature metrics to evaluate image generation prompts. In general, the system has the potential to be extended for image generation tasks, but developing corresponding feature metrics is out of scope and we leave it for future work. Beyond generalizability, participants also brought up topics related to human-agent interaction in their professional work. Experimenting with LLM Capabilities By observing the changes in features, designers can experiment with prompts to better understand the capability of LLMs, as shown by two unexpected usages by C3 and C4 during the practitioner review.
C4 tried paraphrasing, e.g., changing the persona from “reporter” to “editor”, and observing changes in relevant features, such as complexity and formality. For inexperienced prompt designers, coping with the ambiguity inherent in natural language has been one of the reasons why prompting is challenging. Experimenting with prompts mitigates this issue as it guides designers in sorting through the ambiguity: “It's actually not putting stuff in unambiguous form, but still less ambiguous enough that you can track where things are going. (C4)” C3 used the system for a more technical experiment, where he tested the positional bias <cit.> of LLMs. C3 put conflicting instructions in different parts of the prompt, e.g., asking the model to adopt the persona of “an elementary school teacher writing for kids” but generate summaries “as academic as you can”, and observed which part of the instruction was followed based on the features. Both usages showed that evaluation through features has the potential for understanding and even diagnosing LLMs. Features as Boundary Objects The practitioner review revealed the potential to use features as boundary objects <cit.>, i.e., a common ground that could be reached between humans and agents in prompting. C2 and C4 both felt that using natural language to communicate with AI agents is different from communication with humans. When asked why, C4 replied that “I know what I want, but how do I say it has to do with context”, suggesting that context needs to be presented before proper communication can continue with each new conversation. C2 pointed out that the ability to sort through vague communication is critical: “If the professor tells me to make something better, I'll figure out myself what is `better' and how to make it better. But with GPT, I have to say exactly what is better.” We believe that this issue could be mitigated by using features as boundary objects and reversing the initiative in feature selection. Instead of a human selecting the feature configuration, the agent could use the features to make confirmations with the human. For example, to generate an “academic level” summary, the agent could confirm with the human what “academic level” means by asking for confirmations on the complexity level, formality level, and so on. Incorporating features as boundary objects presents a promising direction for supporting better human-agent interaction. Issue of Trust As a professional journalist, C1 had serious trust issues with applying intelligent agents in journalism work: “Because I know about GPT hallucinations, and if I am saying false things in this material that I produce, that's really bad for me.” For C1, the level of trust needed for the agent seems unreachable: “(To trust it) I have to check every single thing that it says, and it defeats the purpose of having a tool that increases my efficiency.” In the feature-oriented evaluation workflow, we do not consider the evaluation of the trustworthiness of the prompts, as it cannot be easily measured through any feature that we know of. Supporting the evaluation of trustworthiness remains challenging yet critical for many professional tasks. § LIMITATIONS AND FUTURE WORK Limitations of the system are centered around features. First, the feature metrics have their limitations. For example, the Faithfulness score computation is confined to the token level, which could hinder its accuracy when the entities have different meanings across sentences.
The capability of Awesum to capture user intent accurately is thus limited by the feature metrics. We provide a more thorough discussion of the limitations of feature metrics in the supplemental material. Developing or selecting suitable features that match the task at hand remains a challenge in the feature-oriented workflow, especially for non-technical prompt designers. This includes the challenge of decomposing a complicated task into subtasks that are easier to evaluate, which remains hard even for technical designers with relevant training <cit.>. One possible solution is to maintain a repository of features and provide recommendation support for each task, following the approach for facilitating the selection of layouts for large graphs <cit.>. The practitioner review revealed that the evaluation criteria of most prompting tasks could be decomposed into a combination of features, making it possible to cover an endless long tail of tasks with a finite set of features. This observation aligns with the findings of Kim et al. <cit.> that task-specific considerations are often secondary to broader criteria. Technical contributors can conduct meta-evaluation on the features to guarantee their efficacy and maintain a repository of the results, thereby benefiting non-technical prompt designers. Reliance on the dataset hinders generalizability, as one is not always available in every application scenario. A potential solution is LLM-powered data augmentation <cit.>. Given a small dataset, LLMs have the capability to generate high-quality data points by retrieving and extrapolating from large datasets with sufficient contextual information. The discussion with C3 revealed that data augmentation is a promising direction to deal with the absence of a dataset. Finally, certain social factors might not be applicable to the feature-oriented workflow, such as trustworthiness, privacy, ethics, and social biases, which have a significant impact on the adoption of LLM-based intelligent agents deeper into our everyday lives. As mentioned in the practitioner review, the level of trust needed for an intelligent agent to conduct certain professional tasks might take a long time to be gained, defeating the purpose of adopting them for efficiency in the first place. This is the same for other social factors, in that intelligent agents cannot gain recognition without a transition process that lasts for a significant period. We believe evaluating these social factors is critical before deploying prompts for high-stakes tasks, but it is out of the scope of our work and we call for future work to continue on this issue. § CONCLUSION Evaluating the human-level summaries of LLMs requires capturing nuanced quality differences, which surpasses the capabilities of quality metrics. Via Awesum, we have shown that feature metrics have the potential to provide actionable insights in the summarization prompt evaluation setting. While still limited, the practitioner review reveals promising directions for extending the feature-oriented workflow to broader prompting tasks. We call for more research to follow this direction to advance prompt evaluation via a human-in-the-loop approach. <ref> presents computation details for the feature metrics used in Awesum, <ref> presents the prompts for the intelligent agents that provide feature recommendations and prompt suggestions, and <ref> presents an experiment to assess the consistency of LLMs as evaluators. All LLM-based implementations use OpenAI's “gpt-3.5-turbo” model.
The prompt block definitions and the L2 definitions of each feature, used both in the interface and the prompts, are listed in <ref> and <ref>. § FEATURE METRICS §.§ Complexity We define complexity as the opposite of readability, which is defined as the ease with which a reader can understand a written text. Following previous works <cit.> in computational linguistics showing that readability is closely associated with the lexical diversity and structure of text, we first calculate the readability of summaries using the Flesch Reading Ease Index <cit.>, a metric that measures the readability of a text based on sentence length and syllable count. The resulting score r is a number between 0 and 100 that indicates the approximate educational level a person will need to be able to read a particular text easily, as shown in <ref>. Then we calculate the complexity score as 100 - r. The Flesch Reading Ease Index provides an eight-level breakdown. For simplicity, we merge similar levels and generate a five-level breakdown, as given in <ref>. Since we do not extend the formula, its limitation is also inherited: from <ref>, we observe that r relies on text length, and splitting long sentences into shorter ones can artificially decrease complexity. However, since LLMs don't inherently split sentences unless instructed, this limitation has minimal impact on our complexity calculation. r = 206.835 - 1.015 × (Total Words / Total Sentences) - 84.6 × (Total Syllables / Total Words) §.§ Formality Formality in language is defined as the degree of adherence to formal linguistic structures, conventions, and vocabulary. In natural language processing, more formal language is often associated with a broader range of vocabulary <cit.>. This is further supported by Heylighen et al. <cit.>, who associate formality with the explicitness of vocabulary. We use the measure of textual lexical diversity (MTLD) <cit.> for the formality calculation. MTLD measures the richness and diversity of vocabulary used in a text, considering both the frequency and the number of distinct words. It is calculated by averaging the length of consecutive word sequences in a text while ensuring they maintain a specific type-token ratio. It evaluates the text sequentially in both forward and backward passes to determine the average factor count, which represents the text's lexical variation. A higher MTLD score signifies a more formal text. We classify the final score as Informal, Standard, Formal, and Very Formal based on the quantile it lies in. Since we do not extend the formula, its limitations persist. MTLD's sensitivity to text length can cause inaccuracies in very short summaries due to significant partial factors. Additionally, its sequential processing may not fully capture the non-linear nature of human text comprehension. Computation Process The text is processed sequentially from the beginning to the end. A Type-Token Ratio (TTR) is calculated incrementally as each word is added to the sequence. The TTR at each point is the ratio of the number of unique words (types) to the total number of words (tokens) up to that point. A factor is defined as a segment of the text where the TTR reaches a specified threshold, which is set at 0.72. When the TTR reaches the threshold, one factor is counted, and the TTR calculation resets for the next segment. If the text ends before the TTR reaches the threshold for the last segment, a partial factor is calculated. Calculation of partial factors is given in <ref>.
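To make the factor counting concrete, the following is a minimal single-pass sketch; whitespace tokenization is a simplification, and the full metric averages the factor counts of forward and backward passes.

```python
# Sketch of one (forward) MTLD factor-counting pass with the 0.72 TTR threshold
# and the partial factor for the trailing segment.
def factor_count(tokens: list[str], threshold: float = 0.72) -> float:
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        types.add(tok)
        count += 1
        if len(types) / count <= threshold:   # TTR reached the threshold: one complete factor
            factors += 1.0
            types, count = set(), 0
    if count > 0:                             # leftover segment contributes a partial factor
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - threshold)
    return factors

def mtld_forward(text: str) -> float:
    tokens = text.lower().split()
    fc = factor_count(tokens)
    return len(tokens) / fc if fc > 0 else 0.0
```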
The total factor count is the sum of all complete and partial factors. Then the final MTLD score is given by <ref>. We directly adopted the implementation of the metric provided by the authors [<https://github.com/kristopherkyle/lexical_diversity.git>]. Partial Factor = (1.00 - TTR) / (1.00 - 0.72) MTLD = Total Number of Words / Total Factor Count §.§ Sentiment Sentiment refers to the emotional tone or attitude expressed in the text. We use VADER (Valence Aware Dictionary and sEntiment Reasoner) <cit.>, a lexicon- and rule-based sentiment analysis tool that generates a sentiment score based on the words used, considering both the polarity (positive, negative, neutral) and the strength of emotion. The calculation of VADER adopts a human-centric approach, where a valence-aware sentiment lexicon is constructed by human annotators. It also adopts five generalizable heuristics that reflect grammatical and syntactical patterns that humans use to express sentiment intensity. The final sentiment score is calculated by summing the valence scores of each word in the lexicon, adjusted according to the five heuristics. Since VADER calculates sentiment for each word individually, it may not accurately represent phrases, idioms, or other complex sentences, and its heavy reliance on lexicon quality and coverage may cause it to overlook misspellings, grammatical errors, and domain-specific words in the overall sentiment calculation. However, we use VADER over deep learning (DL) models because DL models are often fine-tuned to a domain-specific dataset and lack transparency and robustness. A rule-based sentiment analyzer allows fast computation with consistent results. VADER also generalizes better, as it uses a number of well-established lexicon word banks and captures human perception of sentiment very well. The sentiment score of a summary is given by <ref>: Sentiment Score = (∑_i=1^N S_i) / N, where S_i is the sentiment score of the i^th word in the text, and N is the total number of words. §.§ Faithfulness Liu et al. <cit.> define faithfulness in natural language generation (NLG) as the degree to which the generated text is consistent with the input information or the reference text in terms of semantic similarity, completeness, and accuracy. Although more advanced models exist, such as question generation and question answering (QG-QA) frameworks <cit.> and textual entailment models <cit.>, we use entity overlap to measure faithfulness. It measures the overlap between the most important entities in the reference and generated texts. A high faithfulness score means the most important entities in the reference text are also present in the generated text. We use entity overlap instead of other state-of-the-art metrics because they tend to have slower computations and might introduce biases and randomness, with insignificant differences in performance. We prefer a robust metric devoid of randomness and offering fast computation speed for real-time analysis. Computation Process The implementation of the whole process is outlined in Algorithm <ref>. We use the pre-trained SpaCy tagger <cit.> to extract entities from the article and the summary, and further incorporate an entity disambiguation step that groups similar entities with fuzzy string matching, as shown in Algorithm <ref>. We first create a pair-wise similarity matrix between all entities. Next, we organize these entities into disjoint sets, i.e., sets that share no common elements.
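A sketch of these two steps (pair-wise fuzzy similarity followed by merging matching entities into disjoint sets) is given below; the spaCy model name and the use of rapidfuzz for the match ratio are assumptions, and the threshold corresponds to the value of ϵ described next.

```python
# Sketch: entity extraction and fuzzy grouping into disjoint sets via union-find.
import spacy
from rapidfuzz import fuzz

nlp = spacy.load("en_core_web_sm")

def extract_entities(text: str) -> list[str]:
    return [ent.text.lower() for ent in nlp(text).ents]

def disjoint_sets(ents: list[str], eps: float = 0.7) -> list[set[str]]:
    parent = list(range(len(ents)))
    def find(i: int) -> int:                  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(ents)):
        for j in range(i + 1, len(ents)):
            if fuzz.ratio(ents[i], ents[j]) / 100.0 >= eps:  # pair-wise similarity entry
                parent[find(i)] = find(j)                    # merge similar entities
    groups: dict[int, set[str]] = {}
    for i, e in enumerate(ents):
        groups.setdefault(find(i), set()).add(e)
    return list(groups.values())
```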
Each disjoint set represents entities that have a fuzzy match ratio of at least ϵ within the set and less than ϵ when compared to entities in other disjoint sets. We set the value of ϵ as 0.7 in all our implementations. After grouping all the similar entities, we disambiguate the entities by only taking one unique entity from each disjoint set. Since not all entities in the article are important, Algorithm <ref> extracts important entities based on their frequency of occurrence using a fuzzy count. Although this is a naive ranking approach, we avoid using other supervised ranking algorithms for providing a fast and robust computation in real time. The most important entities extracted from the source are compared with the disambiguated entities in the summary. We then count the number of matches using the fuzzy match ratio. When there are very few (0–2) from the summary but many (more than 5) entities in the article, we adjust the score by comparing a fixed number of entities (Algorithm <ref>, Line <ref>). Finally, the score is calculated as the ratio of the number of matches to the number of most important entities in the article. We classify the final score as “Bad”, “Low”, “Avg” and “Good” based on the quantile they lie in. One significant limitation of our approach is its reliance on token-level entity overlap, which disregards the semantic context of the entities within the article and summary. While our method accommodates entity and synonym matching, it fails to consider paraphrasing and sentence-level similarity. To address this issue, we could employ deep learning-based sentence embedding models to evaluate the similarity between sentences in the article and the summary. However, in real-time applications, generating embeddings for lengthy texts can be both resource-intensive and time-consuming. §.§ Naturalness The naturalness of text is a measure of how well the text flows, sounds like something a native speaker would produce, is easy to understand, and adheres to the rules of the language. To the best of our knowledge, no known metrics were proposed to measure the naturalness of a text. In this work, we measure naturalness based on linguistic feature differences observed from human and LLM-generated text. All subsequent experiments are conducted on the dataset released by Zhang et al. <cit.>. It is important to note that our approach, tested on general-domain articles, may not generalize to other tasks, domains, or languages. Its applicability in scientific terminology and languages with different grammatical and semantic structures remains unexplored, where other linguistic features might be more relevant. Additionally, this study does not cover NLG tasks like question-answering and logical reasoning. However, similar methods could be applied to these tasks, domains, and languages, as analyzing a set of features has shown better alignment with human judgments in various studies <cit.>. §.§.§ Differentiating Linguistic Features Inspired by the findings of previous works <cit.> that human and LLM-generated summaries exhibit different statistical patterns on certain linguist features, we measure the naturalness of text by those that can differentiate human and LLM-generated texts. As LLMs advance quickly, we sought to verify their findings on state-of-the-art LLMs and selected a set of linguistic features to experiment with, namely average arc lengths, average subtree heights, average dependency-tree heights, average sentence lengths, and average word lengths. 
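To make these features concrete, the sketch below shows one way they could be computed with spaCy's dependency parser; the helper names, the aggregation choices, and the use of the small English model are our illustrative assumptions rather than the exact implementation used in this work.

```python
# Illustrative sketch (assumed helper names): computing candidate linguistic
# features from spaCy dependency parses. Requires the "en_core_web_sm" model.
import spacy

nlp = spacy.load("en_core_web_sm")

def tree_height(token):
    # Height of the dependency subtree rooted at `token` (a leaf has height 1).
    children = list(token.children)
    if not children:
        return 1
    return 1 + max(tree_height(c) for c in children)

def summary_features(text):
    doc = nlp(text)
    dep_heights, left_heights, right_heights, sent_lens, arc_lens = [], [], [], [], []
    for sent in doc.sents:
        root = sent.root
        dep_heights.append(tree_height(root))
        lefts = [tree_height(t) for t in root.lefts]
        rights = [tree_height(t) for t in root.rights]
        left_heights.append(max(lefts) if lefts else 0)
        right_heights.append(max(rights) if rights else 0)
        sent_lens.append(len(sent))
        # Arc length = distance (in tokens) between a word and its head.
        arc_lens.extend(abs(t.i - t.head.i) for t in sent if t.head is not t)
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {
        "avg_dep_tree_height": mean(dep_heights),
        "avg_left_subtree_height": mean(left_heights),
        "avg_right_subtree_height": mean(right_heights),
        "avg_sentence_length": mean(sent_lens),
        "avg_arc_length": mean(arc_lens),
        "avg_word_length": mean([len(t.text) for t in doc if t.is_alpha]),
    }

print(summary_features("The quick brown fox jumps over the lazy dog. It then rests."))
```

Per-summary feature vectors of this kind are what feed the PCA and the classifier described next.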
First, we computed these features on all the summaries in the dataset and conducted PCA on these features, as shown in <ref>. To better understand the PCA results, we generated visualizations to explain the differentiating capability of each feature. We observed that the number of subtrees (right and left), average arc lengths (total, right, and left), and average word lengths do not show any significant difference. Next, we present the observed differences in other features. Subtree Heights An analysis of the subtrees in the dependency parse trees of summaries shown in <ref> revealed that humans tend to produce text with balanced left and right subtrees more frequently as compared to LLM summaries. This is also coherent with the works of Temperley et al. <cit.> which shows that sentences that have a balanced depth on either side of the root in the dependency parse tree are judged to be more natural and well-worded by humans. We also note that human-written summaries tend to have shorter average left and right sub-tree heights more frequently than LLM summaries. These observations indicate that texts with shorter and balanced sub-tree heights are more likely to exhibit human-like naturalness. Dependency Tree Heights We observe that humans tend to produce summaries of shallower dependency tree heights more often than LLMs. In natural language, deeper dependency parse trees have more linguistic and dependency complexity. Groves et al. <cit.> train classifiers on word embeddings and linguistic features to weigh their importance in the automatic evaluation of naturalness. They find dependency tree height to be one of the most important linguistic features. Following this insight, we compare the dependency tree heights of LLM and human summaries in <ref>-a. LLM tends to produce very few summaries with short (<3.5) dependency tree heights, while human summaries have less frequency of long (>3.5) dependency tree heights. This indicates that text with shorter dependencies and lesser linguistic complexity correlates better with humans. Sentence Length We further compare the average sentence lengths of human and LLM summaries in <ref>-b which shows that humans tend to produce lesser summaries with longer average sentence lengths as compared to LLM summaries. Human texts also have shorter sentence lengths more frequently. Conversely, LLM tends to generate longer sentences more frequently, which can be attributed to their tendency to hallucinate in some cases. §.§.§ Naturalness Score Calculation Based on previous works <cit.> and the aforementioned experiments, we identified a set of linguistic and lexical features that have been shown to exhibit statistical differences between human and LLM-generated texts, consisting of average subtree height (right and left), average dependency tree height, and average sentence length. This feature set is used to train a random forest classifier. The classification task is to predict if a summary is written by a human or an LLM based on the linguistic features. The classifier achieves an accuracy of 99.6% on a test size of 300 human or LLM-generated texts <cit.>. The final weights assigned to these metrics by the classifier are used to calculate the weighted average of the feature values in <ref>: X̅=∑_n=1^N W_i*x_i/N W_i is the weight of the i^th feature, x_i is the feature value and N is the number of features. In <ref>, we normalize the score and invert it to better align with human judgment (higher means more natural). 
We classify the final score as “Bad”, “Low”, “Avg” and “Good” based on the quantiles they lie in. naturalness_score=1-X̅-min(X̅)/max(X̅)-min(X̅) § PROMPTS The intelligent agents for supporting prompt suggestions and feature recommendation are implemented with a template prompt using OpenAI's “gpt-3.5-turbo” model. Each template consists of two parts: a system prompt that describes the task and provides necessary contexts, such as feature definitions and prompt block definitions, and a user prompt, which is injected with the user's input. Below, we provide the templates for prompt suggestions and feature recommendations. §.§ Prompt Suggestions The prompt for prompt suggestions takes three inputs: block_name, block_definition, as listed in <ref> and user question (question): §.§ Feature Recommendation The prompt for feature recommendation takes two inputs: feature_descriptions as listed in <ref> and user question (question), and uses the JSON mode functionality to define a structured output: § EXPERIMENT: LLMS AS EVALUATORS Recently, LLM evaluators <cit.>, where LLMs are prompted to output a numerical evaluation score, have become more popular due to their low technical barrier and high customizability in defining an evaluation criterion. However, previous works <cit.> have shown that LLMs are non-deterministic and sensitive to slight changes in prompts in diverse NLP contexts, such as sentiment classification, code generation, and text summarization. Shen et al. <cit.> concluded that LLMs are not robust enough as human level evaluators. Building upon these works, our experiment is designed to explore, from the end-user's perspective, how consistent LLMs are as evaluator in text summarization. We seek to answer the following research questions: * RQ1: How do minor variations in the prompt defining an evaluation metric affect the variability of the metric score? * RQ2: Do LLMs maintain consistency in metric scores when the same prompt defining an evaluation metric is applied repeatedly? Procedure To investigate RQ1, we chose three evaluation metrics: sentiment, readability, and truthfulness, ranging from 1 to 5. For each metric, we created three prompts with varying levels of definition: (1) no definition, (2) a beginner-level, non-technical definition, and (3) an expert-level, detailed definition. A prompt with no definition of evaluation criterion represents prompt designers with no prior knowledge of natural language evaluation. The beginner-level prompt provides a naive way of defining an evaluation metric representative of non-technical prompt designers. The expert-level prompt gives a detailed definition of the metric with breakdown of scores to better align the responses according to the user's intent, representing experienced prompt designers. These prompts were then used to evaluate all summaries in the dataset. We calculated the variance between the three definitions to inspect the sensitivity of LLMs. The same procedure is repeated under four temperature settings: 0.0, 0.3, 0.7, and 1.0. To investigate RQ2, We chose two evaluation metrics, sentiment and readability, each with a single prompt that includes a beginner-level definition of the metric, outputing a score ranging from 1 to 5. For each metric, we repeatedly applied the same prompt to the dataset 10 times, and then examined the variance in the generated scores. The same procedure is repeated under three temperature settings of 0.0, 0.5, and 1.0. The specific prompts employed in these experiments are documented in <ref>. 
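As a hedged illustration of this procedure, the sketch below shows how scores could be collected from the OpenAI chat API for several prompt variants and temperatures, and how per-summary variances could be computed; the prompt wordings and helper names are placeholders, not the exact prompts documented in <ref>.

```python
# Sketch (assumed prompt wording and helper names): score summaries with
# several prompt variants and temperatures, then measure per-summary variance.
from statistics import pvariance
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_VARIANTS = {
    "no_definition": "Rate the readability of the summary from 1 to 5.",
    "beginner": "Readability means how easy the text is to read. Rate it from 1 (hard) to 5 (easy).",
    "expert": "Rate readability 1-5, where 1 = dense, jargon-heavy prose and 5 = short, plain sentences.",
}

def score(summary: str, definition: str, temperature: float) -> int:
    # Assumes the model replies with a bare integer, as instructed.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=temperature,
        messages=[
            {"role": "system", "content": definition + " Reply with a single integer."},
            {"role": "user", "content": summary},
        ],
    )
    return int(resp.choices[0].message.content.strip())

def variance_across_definitions(summary: str, temperature: float) -> float:
    # RQ1: sensitivity of the score to the metric definition.
    scores = [score(summary, d, temperature) for d in PROMPT_VARIANTS.values()]
    return pvariance(scores)

def variance_across_repeats(summary: str, temperature: float, n_repeats: int = 10) -> float:
    # RQ2: consistency of a single fixed prompt applied repeatedly.
    scores = [score(summary, PROMPT_VARIANTS["beginner"], temperature)
              for _ in range(n_repeats)]
    return pvariance(scores)
```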
Our experiment utilizes the dataset released by Zhang et al. <cit.>, comprising 599 articles and corresponding summaries generated by LLMs. For sentiment and readability, we only assess the summaries. For truthfulness, we take each pair of article and summary as input. For the model, we used OpenAI's “gpt-3.5-turbo” to generate all metric scores. We do not use GPT-4 due to its usage cost and rate-limit restrictions. Moreover, GPT-4's non-determinism persists relative to GPT-3.5-turbo <cit.>. Conclusion and discussion We analyze the variance in metric scores across the three levels of definitions to address RQ1. From <ref>, we observe inconsistency in the truthfulness scores across the three prompts defining a single metric. The temperature parameter balances the consistency-creativity trade-off, with 0.0 being the most consistent setting and 1.0 the most creative. However, even at a temperature of 0.0, the variance across the three truthfulness scores assigned by the three definitions is non-zero for a significant number of summaries, highlighting the inconsistencies in scoring. The variance grows as the temperature increases, with the 1.0 setting showing the highest variance. Similar patterns are observed for readability and sentiment. This indicates that variations in the evaluation prompts can lead to variance in scores. Consequently, allowing users to define new evaluation metrics can introduce more uncertainty into the system. The exact variance in scores for the three metrics across the three definitions over the whole dataset is documented in <ref>. To address RQ2, we analyze the variance of scores when the same prompt is applied multiple times to the same dataset. From <ref>, we observe that even at a temperature of 0.0, the variance of sentiment scores is non-zero for a significant number of summaries. As the temperature increases, we observe higher variance in scores and more summaries with non-zero variance, with the 1.0 temperature setting being the most inconsistent. We observe similar patterns for readability. This indicates that applying an identical prompt to the same group of summaries can yield varying scores, raising concerns among users about the reliability of the results. The exact variances observed are given in <ref>. Limitations Given the resource constraints, our experiment was conducted at a rather small scale, using a single dataset, 10 iterations, three different metrics, and four temperature settings. For a more robust conclusion, the experiment could be extended to cover more diverse datasets and hyperparameter settings. At a larger scale, a comprehensive statistical analysis could also be added to examine the robustness of the experiment. For this experiment, we targeted the application scenario of our system to demonstrate the limitations of LLM evaluators more concretely, and we leave a more comprehensive evaluation of LLM evaluators for future research.
http://arxiv.org/abs/2407.11970v1
20240716175956
PECCARY: A novel approach for characterizing orbital complexity, stochasticity, and regularity
[ "Sóley Ó. Hyman", "Kathryne J. Daniel", "David A. Schaffner" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.GA" ]
Sóley Ó. Hyman (ORCID 0000-0002-6036-1858) and Kathryne J. Daniel (ORCID 0000-0003-2594-8052), Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA; David A. Schaffner (ORCID 0000-0002-9180-6565), Department of Physics, Bryn Mawr College, Bryn Mawr, PA 19010, USA. Corresponding authors: soleyhyman@arizona.edu, kjdaniel@arizona.edu

§ ABSTRACT

PECCARY is a computationally inexpensive, statistical method by which any time series can be characterized as predominantly regular, complex, or stochastic. Elements of the method have been used in a variety of physical, biological, economic, and mathematical scenarios, but have not yet gained traction in the astrophysical community. This study introduces the technique with the specific aims of motivating its use in, and optimizing it for, the analysis of astrophysical orbital systems. PECCARY works by decomposing a time-dependent measure, such as the x-coordinate or orbital angular momentum, into ordinal patterns. Due to its unique approach and statistical nature, PECCARY is well-suited for detecting preferred and forbidden patterns (a signature of chaos), even when the chaotic behavior is short-lived or when working with a relatively short duration or small sets of data. A variety of examples are used to demonstrate the capabilities of PECCARY. These include mathematical examples (sine waves, varieties of noise, sums of sine waves, well-known chaotic functions), a double pendulum system, and astrophysical tracer particle simulations with potentials of varying intricacies. Since the adopted sampling interval used to diagnose a given time series can affect the outcome, a method is presented to identify an ideal sampling scheme, constrained by the overall duration and the natural timescale of the system. The accompanying peccary Python package and its usage are discussed.

§ INTRODUCTION

PECCARY is a statistical method used to characterize a time series as regular, stochastic (i.e., random or noisy), or complex, and to identify its relevant timescales <cit.>. The use of Permutation Entropy and Statistical Complexity measures has been gaining traction in a wide variety of physical, biological, and mathematical scenarios, including plasma turbulence <cit.>, Solar wind and space plasma <cit.>, geological processes <cit.>, river flow <cit.>, economic trends <cit.>, biological or medical rhythms <cit.>, and for understanding the spread of the COVID-19 virus <cit.>. One of the drivers of the evolution of a dynamical orbital system is the relative fraction and distribution of regular and complex orbits, as well as the timescales associated with each. Indeed, the formulation of chaos theory itself is firmly rooted in the study of chaotic behavior in astrophysical dynamical systems <cit.> and continues to inform studies of the secular evolution of galaxies <cit.>, planetary systems <cit.>, and black hole dynamics <cit.>, to name a few. The role of chaos in the evolution of dynamical systems is not the same as that of stochastic processes <cit.>, though they can be nearly indistinguishable in practice <cit.> and quite often both are confusingly labeled "stochastic." Differentiating between the two is particularly relevant in understanding the difference between dynamical processes and issues that arise from the limited resolution in discretized computation, like shot noise <cit.>. The PECCARY method is able to discern the nature of fluctuations in a time series through its decomposition into a distribution of the occurrence frequency of patterns (described in Section <ref>).
can be set apart from well-known methods for determining regions of orbital chaos or irregularity, such as exponential divergence <cit.>, Kolmogorov-Arnold-Moser (KAM) theory analysis <cit.>, Frequency Map Analysis <cit.>, and Surface of Section (SoS) analysis <cit.> since it is optimized to identify chaotic behavior on relatively short and is agnostic to underlying physics. expands the toolbox that astrophysical researchers have at their disposal for understanding dynamical systems. It provides an analysis technique that is suitable in situations where long-standing traditional techniques may not be as applicable, such as simulations with time-dependent potentials (e.g., a slowing bar or in systems that are accreting mass). This work introduces the theoretical framework of and explores its applicability and limitations in astrophysical systems. <ref> gives an overview of the theory, how ordinal patterns are determined, how the metrics of and are computed, and the usage of the . <ref> discusses the usage, interpretation, and limitations of the method, as well as an idealized sampling scheme. <ref> demonstrates the capabilities of via a variety of mathematical, physical, and astrophysical examples. <ref> provides an outlook into future work and tests to be done with , and <ref> summarizes the conclusions. § OVERVIEW OF THE METHOD is comprised of two different statistical measures: and . The and measures were developed in the early 2000s <cit.> as a way to distinguish noise from discrete chaotic maps, such as the logistic map or bifurcation diagram. In this context , uses a discretized through a sampling scheme (described in Section <ref>) and calculates the and values in order to determine what type of behavior (regular, stochastic, complex) is exhibited. This is done by extracting and counting the occurrence frequency of the sampled data, which are called “ordinal patterns." Ordinal patterns are groups of points that are ordered from smallest to largest relative amplitude. The resulting order of indices is that ordinal pattern. For example, if a series of points had values [8, 3, -2, 5] the resulting pattern would be “3241" since the third value of the array is the smallest, the second one is the second smallest, etc. Section <ref> gives a more in-depth discussion of how these ordinal patterns are extracted and determined. operates on the principle that ordinal patterns may be found within any that has N discrete, sequential measurements, calculations, or simulated quantities taken at fixed separation. Since the ordinal patterns are determined purely by comparing relative amplitudes, is agnostic to the physics and other parameters of the system (which often factor into other chaos/noise differentiation methods). §.§ Determination of Ordinal Patterns In their pioneering work, <cit.> developed the (H) measure as a means to identify chaotic behavior. Their approach relied on what they called “ordinal patterns." An ordinal pattern is defined as the order in which a subset of n sequential, discrete measurements from a given appears such that their values increase from lowest relative amplitude to highest relative amplitude. In cases where there exist two equal values, the original order of points in the is preserved. The values themselves are irrelevant since the magnitude of change between steps plays no role in this analysis. Figures <ref> and <ref> illustrate this definition. In Figure <ref>(a), a set of N=19 points is shown representing an arbitrary along the horizontal axis. 
The three shaded regions highlight sets of n=5 s, which in this case are both sequential and contiguous. These sequences are again shown in Figures <ref>(b), (c), and (d), with the ordinal pattern written at the bottom of each panel. The ordinal pattern for these sets of five points is found by first determining which ordinal position has the lowest value, then which ordinal position has the next lowest value, and so on through each of the five points. This pattern can be represented by the numerical sequence shown at the bottom of each of the lower panels in Figure <ref>. Ordinal patterns are extracted through a method that uses two parameters: the n and the . The n is the number of sequential points extracted to construct the ordinal patterns. Any can be decomposed into consecutive, overlapping sets of n s, where the number of possible permutation orders is n!. The ℓ is the integer number of timesteps from one sampled point to the next and probes the corresponding physical in <cit.>. The for an extracted ordinal pattern associated with a given and (known as the “") is given by, = δ t (n-1) , where δ t is the resolution and the pattern length spans n-1 s ℓ. The for consecutive points is given by =1. It is not necessary, and often not optimal, for the n extracted s used to construct an ordinal pattern be contiguous (i.e., =1). Figure <ref> illustrates how ordinal patterns using =3 are constructed using the same pattern from Figure <ref>. Here, every third point is grouped as shown by a given color. These three sets of five points are shown in panels (b)-(d) in Figure <ref> and the numerical representation of their ordinal patterns are also indicated, as in Figure <ref>. Note that <cit.> called the the “embedding dimension" and the the “embedding delay." These refer to the same parameters, but this paper adopts different language for a more intuitive framing. In order for to produce meaningful results, the n must be large enough that the set of possible ordinal patterns can be robustly used to describe the time-series <cit.>, but not so large that the number of patterns becomes intractable. To the second point, requires the condition n! ≪ N be met, where N is the total number of sequential points in the time-series <cit.>. <cit.> noted that a practical choice lies between 3≤ n ≤ 7. They did not attempt a thorough proof; rather, they showed that chaotic behavior is well identified (even for noisy data) using values of n within these bounds, with a slightly clearer signal near n=6. A of n=5 is adopted throughout this study, following the practice of several studies that effectively use  <cit.>. A more thorough theoretical treatment is beyond the scope of this work but would yield useful justification for one's choice of n in future studies. §.§ Pattern Probability and Pattern Probability Distributions After extracting the (n-1) ordinal patterns from the , the pattern probability distribution P (also called the “pattern distribution" by <cit.>) can be produced for the n! possible patterns. The probability p(π_i) for each pattern π_i in P is found by normalizing the occurrence frequency of that pattern so that ∑_i^n! p(π_i) = 1 , where the subscript i denotes one of the n! possible patterns. The nature of a as regular, stochastic, or complex, can be discerned by calculating two statistical measures, the H and C of the resulting pattern probability distribution P, for a given n and . Section <ref> introduces and describes these measures in depth. 
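Before turning to those measures, the following minimal sketch (our own illustrative code, not the released package) shows how ordinal patterns and the normalized pattern probability distribution can be computed for a given pattern length n and sampling interval ℓ:

```python
# Sketch (our own function names, not the peccary API): extract ordinal
# patterns of length n with sampling interval ell and build the normalized
# pattern probability distribution P.
from collections import Counter
import numpy as np

def ordinal_patterns(series, n=5, ell=1):
    """Return the ordinal pattern (as a tuple) for every window of n points
    separated by ell timesteps."""
    series = np.asarray(series)
    span = (n - 1) * ell                 # pattern length in timesteps
    patterns = []
    for start in range(len(series) - span):
        window = series[start:start + span + 1:ell]
        # argsort gives the positions ordered from lowest to highest value;
        # ties keep their original order because the sort is stable.
        patterns.append(tuple(np.argsort(window, kind="stable") + 1))
    return patterns

def pattern_distribution(series, n=5, ell=1):
    """Normalized occurrence frequencies p(pi_i); they sum to 1 over observed patterns."""
    counts = Counter(ordinal_patterns(series, n, ell))
    total = sum(counts.values())
    return {pattern: c / total for pattern, c in counts.items()}

# Example: a pure sine wave is dominated by a handful of patterns, whereas
# white noise populates the n! = 120 possible patterns nearly uniformly.
t = np.linspace(0, 10, 2**10)
print(len(pattern_distribution(np.sin(2 * np.pi * t), n=5, ell=8)))
print(len(pattern_distribution(np.random.default_rng(0).normal(size=2**10), n=5, ell=8)))
```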
It is useful to consider the following two extreme cases: (1) a distribution where every pattern is equally represented (e.g., white noise), as in Figure <ref>(a), and (2) a periodic dominated by very regular patterns (e.g., a sine wave), as visualized by a histogram of the occurrence frequency in Figure <ref>(c). Most distributions have a more complex set of occurrence frequencies, as exemplified by the distribution in Figure <ref>(c). Such distributions reveal favored (high occurrence frequency) or forbidden patterns (low or zero occurrence frequency). Time-series data analyzed using must adequately populate the n! patterns in order to ensure the value for each occurrence probability p(π_i) is statistically significant. In practice, either the must be sufficiently long or multiple can be combined. The first case is appropriate for long s where the characteristic behavior of the is not time-dependent, as in the case study in <ref>. The latter is appropriate for shorter characteristic and requires an ensemble of . In any case, is only able to probe the characteristic behavior of a at corresponding to an appropriately sampled time domain. Section <ref> discusses a method for determining the minimum duration as well as a range of appropriate sampling intervals. §.§ Permutation Entropy and Statistical Complexity The core of the method is the calculation of the and , which are used in combination to evaluate whether a given is regular, complex, or stochastic. Table <ref> provides a glossary of the terms used in this section, as well as equation references. §.§.§ Permutation Entropy A common metric for a pattern probability distribution P is the Shannon Entropy (or information entropy) <cit.>, expressed as, S[P] = -∑_i^n! p(π_i) log p(π_i) . The value of S normalized to its maximum possible value, i.e., H[P] = S[P]log n!, is the <cit.>. Using this metric, a that is dominated by a single pattern (e.g., a linear ramp), would have H= 0, while an equally probable distribution of patterns (e.g., white noise, as in Figure <ref>(a)), would have H=1. An intermediate pattern, as in Figure <ref>(b), would have an intermediate value for H. Periodic time-series, such as sine waves or triangle waves, have limited numbers of possible permutations, as in Figure <ref>(c). There are upward and downward ordered ramp patterns and some additional patterns from permutations that include local maxima or minima in the periodic function. The number of possible patterns does not change with embedding delay since the same patterns are possible no matter the sampling resolution. However, the probability for a given pattern does depend on the sampling resolution. For example, the probability for ramp patterns increases as the decreases. The lower limit for the value of H for a periodic function is the limit when only two ramping patterns (upward and downward) are measured. That is, (n)=log 2/log(n!). The limiting minimum possible value for the of a periodic function when n=5 is thus (n=5)=0.14478. The limiting maximum possible value of H for a periodic function is the case when the probability of each possible pattern is equal. The number of possible patterns for any periodic function composed of two ramping patterns is conjectured to be N_ periodic(n)=2[2(n-2)+1]. This can be understood in the case of a triangle function. 
There is one upward ramping pattern (last term in brackets), and there are (n-2) non-ramp patterns around the crest where any evenly spaced sampling will either have points sampled on the right side staggered with higher than values than on the left or vice versa (second term in brackets). There is also a symmetry for downward ramping points and points around the trough. The same ordinal patterns exist for single-frequency periodic functions, like sine and cosine, which are indistinguishable from a triangle function with the same maximum/minimum frequency. This hypothesis has been tested for the range of s between 3<n<8. It follows that, (n)=log N_ periodic(n)/log(n!). Since N_ periodic(n=5)=14, a periodic function sampled with an of n=5 is expected to have H≤=0.55124. §.§.§ Disequilibrium and Complexity The pattern probability distribution P can also be characterized by how poorly it is described by the pattern probability distribution for the uniform case, P_e, where each possible pattern has equal probability p_e=1/n!. The divergence of ensemble P from P_e is called the “disequilibrium" and is defined as, d[P,P_e] = S [ P+P_e/2] - 1/2S[P]-1/2S[P_e] , where S[P+P_e] is the Shannon Entropy for the sum of pattern probability distributions P and P_e. The value of the disequilibrium d(P,P_e) is normalized by its maximum possible value (d/d_ max) is given by <cit.>, D[P,P_e] = 2d[P,P_e]2 log(2n!)-log(n!)-n!+1/n!log(n!+1) , and scales in the opposite direction to the (e.g., D→1 as H→ 0). , also known as the Jensen-Shannon Complexity, is given by the product <cit.>, C[P,P_e] = D[P,P_e] H[P]. Low values for C indicate a system with a distribution of ordinal patterns that are either far from the uniform distribution, as H approaches zero, or very near the uniform distribution, as D approaches zero. Maximum complexity occurs in an intermediate range when both H and D are non-zero. The (C) measure has been compared to several established (e.g., analysis) and emerging <cit.> methods for identifying chaotic or complex behavior and is shown to be robust for a wide range of scenarios <cit.>, including logistic maps, the skew tent map, the map, the Lorenz map of Rossler's oscillator, and Schuster maps <cit.>. Further, C is an intensive measure that can be used to provide insight into the dynamics of a system <cit.>, such as relevant (explored in Sections <ref> & <ref>). It is also reliably able to quantify the degree of chaos in systems that also have some degree of periodicity <cit.> or <cit.>. Lllc Glossary of Statistical Terms Symbol Name Definition Eq. no P [l]Pattern probability distribution [l]All possible n! ordinal pattern permutations <ref> P_e [l]Equilibrium pattern probability distribution [l]Uniform distribution of all possible n! ordinal pattern permutations <ref> π_i i-th ordinal pattern [l]A possible pattern permutation of the pattern probability distribution P <ref> S Shannon Entropy [l]Information entropy <ref>, Eq. <ref> H Permutation Entropy [l]Normalized Shannon Entropy, measure used in analysis <ref>, Eq. <ref> (n) [l]Minimum possible for a periodic function [l]Smallest value of H for a periodic function (e.g., sine wave); dependent on <ref>, Eq. <ref> (n) [l]Maximum possible for a periodic function [l]Largest value of H for a periodic function (e.g., sine wave); dependent on <ref>, Eq. <ref> H-curve [l] as a function of the <ref> d Disequilibrium [l]Measure of how far pattern probability distribution P is from a uniform distribution of patterns <ref>, Eq. 
<ref> D Normalized disequilibrium [l]Normalized measure of disequilibrium used in calculation of C <ref>, Eq. <ref> C [l]Jensen-Shannon Statistical Complexity [l]Measure used in analysis <ref>, Eq. <ref> C-curve [l] as a function of the <ref> §.§ The HC-Plane Any can be qualitatively sorted into its degree of regular, stochastic, and/or complex behavior by combining the metrics for the H (normalized Shannon Entropy) and C (normalized Jensen-Shannon Complexity) for a given and n <cit.>. Specifically, H provides a quantitative scale for how stochastic or noisy the is, while C measures the degree of complexity or chaos by how many statistically preferred and/or forbidden patterns there are. These regimes can be visualized as locations on a coordinate plane where the (H) is on the x-axis and the (C) is on the y-axis. Figure <ref> illustrates the with maximum and minimum limiting values for C(H) indicated with solid curves, where the grey regions outside these curves are forbidden. The bounding curves are computed following a technique from  <cit.> using a Lagrange multiplier technique for each fixed from Equation <ref>. Regular generate coordinates that occupy the left-hand region of the while noisy occupy the lower right. Complex or chaotic occupy the upper middle region <cit.>. Pure periodic functions (see discussion in <ref>) and circular orbits (see <ref>) fall on or within the region bounded by the dashed boundary line. There are three example plotted on the in Figure <ref>. These are shown in Figure <ref>, where sampling parameters of = 1 and n=5 were used to produce each coordinate in Fig. <ref>. The generated from a uniform random number generator shown in Figure <ref>(a) (purple) is labeled as white noise and occupies the most extreme lower-right position in the . The Map is a system described by <cit.>, (x_m,y_m)= x_m+1 = 1-ax_m^2 + y_m y_m+1 = bx_m , which produces the chaotic shown in Figure <ref>(b) (yellow) using the selected parameters, a=1.4 and b=0.3. The coordinate from this lies near the very top of the complexity region. The sine wave shown in Figure <ref> falls on the pure-periodic region boundary line on the left side of the HC plane. § USAGE AND INTERPRETATION §.§ Setting up To use , the code can be installed from the Python Package Index (PyPI) via the command or downloaded from the GitHub repository.[<https://github.com/soleyhyman/chaos-orbits>] Documentation and tutorials for running the code can be found on the website.[<https://peccary.readthedocs.io>] At its most basic, all that is needed is a and a chosen sampling interval . By default, the is set to n = 5 (see Section <ref>). Typically, the measures used are determined by the system and the symmetry in question. For example, when investigating orbital behavior in a barred disk, the appropriate choice may be the Cartesian coordinate along the length of the bar in the rotating frame to discern the behavior of those orbits. §.§ Idealized Sampling Scheme and Limitations Due to the flexibility of the method, it is possible to probe the orbital behavior on a variety of different . This can be done by calculating H and C for a range of different s and producing , or and . Alternatively, a single or can be chosen to probe the of maximum or any generic . However, if the chosen , , is poorly matched with the natural of the system, or the overall duration of the , , is insufficient, the interpreted results may be inaccurate. 
For example, if a continuous chaotic is sampled at small enough intervals, ramping behavior will dominate. Similarly, if the same is sampled with too large of a , it will appear stochastic. Any given has three primary of interest. These are the overall duration of the , the natural of the system , and the of the ordinal pattern sampling scheme. One can find ratios to relate these to one another. Below is a description of the method adopted by this study to guide in the selection of appropriate parameters for a given that is based on these ratios. Table <ref> lists the relevant and sampling parameters, their definitions, and references to their descriptions in this text. The number of natural (e.g., orbital periods) in a given is represented by the ratio . The time-resolution required to capture the nature of the can be represented by the ratio /. Systems with well-known periodic or chaotic behavior are used to determine the minimum necessary constraints for and /. §.§.§ Minimum duration, Several with a range of durations, , were created for a sine wave with given fixed period, where here t_ period=. These were used to determine the minimum duration necessary to diagnose a given system, . Ratios ranged between =0.5 -10. For each of these , values for were calculated for a range of selected such that =0.1-0.7. The resulting values were then plotted on the and as H() curves. Pure periodic/closed functions such as sine waves have a characteristic behavior in their s in that they increase from the lower limit (at small s) until they reach the upper limit of and then oscillate between and lower values of H. On the , this corresponds to the points falling exactly on the periodic boundary line or zig-zagging between that boundary line shown in Figure <ref> and the region to the left of it. Within the range of durations sampled, values diverged to the right of the periodic boundary and did not reach the upper limit when fell below critical thresholds. For the sine wave, H() stopped reaching at ∼1.5, while the behavior significantly deviated from the aforementioned characteristic behavior at ∼ 1. The first two rows of Figure <ref> illustrate this behavior. In cases where the duration of the orbital behavior in question is shorter than this limit, one might consider stacking for multiple orbits. Initial explorations indicate that stacking multiple, shorter duration can return reliable results. This will be further explored in a later paper in this series. §.§.§ Timescale resolution, To identify the largest value of that should be used with , the same H() plots created for identifying the minimum (Section <ref>) were used. To establish a conservative upper limit, the maximum was found by locating the lowest value of at which H() for a sine wave fell significantly below the line, regardless of the use of an appropriate ratio. For the sine wave, this occurred at ∼ 0.5 when ∼ 0.6. The third row of Figure <ref> shows this graphically. For the lower limit of , the x-coordinate from the chaotic Lorenz strange attractor simulation were used. Similar to the processes used to constrain and to establish an upper limit for , s and plots were generated for a range of values, ranging from 0.1 to 0.7 with the and lines overplotted. The minimum was set to be the value at which the crossed the line (i.e., transitioning from appearing regular to appearing complex). For the = 1.5, x-coordinate for the Lorenz strange attractor, this occurred at ∼ 0.25. For a more conservative constraint, this was rounded up to 0.3. 
The fourth row of Figure <ref> illustrates this. §.§.§ Recommended sampling scheme constraints The sampling scheme tests performed in Sections <ref> and <ref> used two systems with known behavior, i.e., a periodic (sinusoid) function and a continuous chaotic system (Lorenz strange attractor). To obtain reliable values, the duration must be at least on order the natural (i.e., / > ∼1) and preferably /≳ 1.5, and the time-resolution should fall in an approximate range of 0.3 ≲/≲ 0.5. In practice, this ratio can be used to select an appropriate value for . Note that all of these limits are derived using a of n=5 and a similar process will need to be followed in order to find the appropriate constraints when using other values for n. Figure <ref> shows example diagnostic plots used to obtain the constraints reported in this paper. lLllc Glossary of Sampling Terms and Parameter Type Symbol Name Definition Eq. no 2*Sampling n [l]Number of data points for each extracted pattern (i.e., sampling window) <ref> [l]Number of points in between each extracted point (i.e., sampling interval) <ref> 4*Timescales δt [l]Time element associated with a single step in the <ref> for an ordinal pattern <ref>, Eq. <ref> duration Total duration for a <ref> Natural [l]Natural or approximate period of oscillation for the system <ref> §.§ Interpreting Values There are two methods by which one can interpret the and values produced by . The most exact way is to plot the for each orbit within the system, though this can be difficult with many particles. This will show the different behaviors of the orbit(s) on different timescale probes, as regular, complex, and stochastic all have different and shapes. Should the system be evolving with time, the of the orbital behavior in question should be used to approximate the duration of the , , when considering the whether or not its nature can be discerned using a single orbit with . As mentioned in Section <ref>, stacking techniques will be developed in a future publication. For stochastic , the and curves are close to constant, with ∼ 1 and ∼ 0. This is due to the fact that generated noise or do not have any characteristic . Chaotic systems, on the other hand, have a characteristic shape to their curves that depend on whether they are discrete or continuous in nature. Discrete chaotic systems have a characteristic timescale that is inherently set to be = 1. As such, the maximum value for the occurs when = 1. By contrast, the maximum for continuous chaotic system depends on the approximate natural timescale. In terms of , the value for increases with increasing , while the value for increases to some maximum value at a particular , and then decreases. Examples of both discrete and chaotic maps are given in Sections <ref> and <ref>. Compared to stochastic and complex signals, regular generally have smaller values for H(=1) that rise with increasing . The s of purely periodic (such as a sine wave) also exhibit a characteristic pattern of H() → at /∼0.6 ratio and regularly returning to that value as the continues to increase. This behavior is reflected in the as well (since C depends on H), which results in a pure periodic function falling on or within the periodic boundary of the for all values. This is further discussed in Section <ref>. However for very large datasets and many particles, generating for each can be impractical. The next-best method is to choose a within the limits for /, as described in Section <ref>. 
The location of where the values fall within the in Figure <ref> result in the classification of the orbit type. In the case of ambiguous cases, it may be necessary to incorporate additional methods, such as Fourier analysis in order to break some of the degeneracy/uncertainty. This will be the subject of a future paper in this series. See Section <ref> for further discussion. § EXAMPLES This section provides a variety of examples demonstrating the performance of the method in systems with known outcomes. In Section <ref>, is applied to several well-characterized, mathematical functions, while Section <ref> demonstrates the usage of on tracer particle simulations of four well-known orbital systems. §.§ Well-Characterized Mathematical Examples §.§.§ Sine wave The sine wave is a classic example of a pure periodic function. This example generates five sine waves of different periods. Each of these have a duration of =10 s, sampled at a resolution of δ t = 2^-8 s, and the duration of each is greater than at least five completed cycles (> 5). Figure <ref> shows how the values for the (top, left panel) and (top, middle panel) depend on the choice of (). The patterns in these plots are clearer when plotting both as a function of the divided by the period (i.e., natural ), / (bottom panels). These panels illustrate that there is a characteristic shape to the and curves that depends on the period of the oscillatory behavior. The curve for the has a maximum value set by H^ max_ periodic (Equation <ref>) with the exception of three spikes that are numerical artifacts. The (top, right panel) shows the distributions for all choices for . §.§.§ Noise varieties is effective for a variety of colors (or power spectra) of noise. Figure <ref> shows the values for the (left panel) and (middle panel) as a function of the () for five varieties of noise. These are white noise (power spectral density equal at all frequencies ν), blue noise (power spectral density ∝ν), violet noise (power spectral density ∝ν^2), Brownian noise (also called red noise, power spectral density ∝ν^-2), and pink noise (power spectral density ∝ν^-1). Using 's class, five sample of 10^4 discrete measures were created for the aforementioned noise colors. Each noise spectrum has values on the that are indicative of . Furthermore, the and curves (i.e. and ) from any type of noise have nearly constant values for all choices for , where the value for C is close to or near 0 and the value for H is close to or near 1 at all scales. This is due to the fact that the value of the at each comes from a random distribution, which means the occurrence frequency of patterns at every will always be at or very near uniform. §.§.§ Chaotic systems Two well-studied examples of chaos are the map (Eq. <ref>) and the Lorenz strange attractor. The map is a discrete chaotic map, while the Lorenz strange attractor is continuous. The Lorenz strange attractor is a system described by <cit.>, dx/dt = σ (y-x) dy/dt = x(ρ -z) -y dz/dt = xy - β z which produces chaotic . Figure <ref> shows a 3D plot of the system using the standard parameters of σ=10, ρ=20, and β = 8/3, which Lorenz used in his paper. Four are generated to diagnose 's effectiveness for well-characterized chaotic systems: one from the map and three for the Cartesian coordinates of the Lorenz strange attractor. While the idealized sampling scheme described in Section <ref> uses the natural oscillatory of a , it can be difficult to identify a baseline oscillatory period for a chaotic . 
This set of chaotic examples demonstrate two ways to approximate a relevant . If the to probe is unknown, a rough oscillatory can be determined by identifying the locations (in time) of the local maxima (or local minima) of the , calculating the elapsed time between each peak, and taking the average of those values. This is called the “approximated " method. Alternately, the at which the maximum (C) occurs can be used for the ideal sampling. In this method, the corresponding to the peak value in the curve is determined. From that value, the can be calculated (via Equation <ref>). Depending on the / ratio used (typically 0.4), that value can be used to find the natural oscillatory . This is called the “ from maximum C()" method. Figure <ref> illustrates the difference in how discrete and continuous chaotic maps behave in and curves when applying to the four different . While both types of chaotic maps increase in H as the increases, in , the discrete map falls from its initial value and stays constant, while the continuous map increases and then eventually drops as the increases. The shows the values calculated from ideal sampling with the “approximated method" as circles and those calculated with the “ from maximum " method as diamonds. All points fall within the chaotic regime of the . This exercise illustrates that a chaotic may appear to be entirely stochastic if it is sampled at s where favored or forbidden patterns cannot be resolved. A chaotic signal may have multiple characteristic for favored and forbidden patterns, but these can only be discerned within the explored by the selected range of . In addition, the optimal , n, can be modified in consideration of sampling favored patterns for more or less complex or lengthy patterns. §.§ Double Pendulum 's was used to generate for a double pendulum system. The model assumes upper and lower pendulum masses of 1 kg each and pendulum lengths of 1 m. The system was allowed to evolve for a range of times (i.e., 2.5 s, 5 s, 10 s, 50 s, and 100 s) at a time resolution (i.e., step size) of δ t = 2^-6 s. Figure <ref> shows that while certain durations (e.g., 5 s and 10 s) remain in the complex region after turning off from the periodic/regular regime, too short of a duration (e.g., 2.5 s) will appear regular. The sampling for the longer durations (50 s and 100 s) show that too large of a sampling interval will cause chaotic behavior to appear as noise on the . With the idealized sampling scheme, all the points fall in the complex regime, with the exception of the too-short 2.5 s simulation. §.§ Astrophysical Examples The tracer particle simulations for the astrophysical examples in the following subsections were created with <cit.>, using the integrator. §.§.§ Keplerian Potential While there are many types of regular orbits in astrophysics, only the point-mass (i.e., Keplerian) potential produces a special case of non-circular orbits that close in a single period, due to the fact that the radial and azimuthal frequencies are equal. As such, the Keplerian potential is an ideal scenario for testing regular orbits that close after 2π. In this case, 's function was used to create a tracer-particle simulation of the orbits of the eight planets of the solar system for a duration of 100 years, at a resolution of 2.85 days. Figure <ref> demonstrates that all the values fall on and within the periodic/regular boundary of the when using the x-coordinate for the orbits. 
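To make the quantities plotted in these examples concrete, the following self-contained sketch (again our own illustrative code rather than the peccary package, with a toy periodic signal standing in for an orbital x-coordinate) computes the Permutation Entropy H and Statistical Complexity C for a sampled time series using the definitions given earlier:

```python
# Sketch (not the peccary package): Permutation Entropy H and Jensen-Shannon
# Statistical Complexity C from the ordinal-pattern distribution.
import math
from collections import Counter
import numpy as np

def pattern_probs(series, n=5, ell=1):
    """Compact re-statement of the ordinal-pattern distribution from the earlier sketch."""
    series = np.asarray(series)
    span = (n - 1) * ell
    pats = [tuple(np.argsort(series[i:i + span + 1:ell], kind="stable"))
            for i in range(len(series) - span)]
    counts = Counter(pats)
    total = sum(counts.values())
    return [c / total for c in counts.values()]

def peccary_measures(series, n=5, ell=1):
    """Return (H, C) for one time series, embedding dimension n, sampling interval ell."""
    N = math.factorial(n)
    probs = pattern_probs(series, n, ell)
    probs = probs + [0.0] * (N - len(probs))           # pad forbidden patterns
    S = lambda ps: -sum(p * math.log(p) for p in ps if p > 0)
    H = S(probs) / math.log(N)                         # normalized Shannon entropy
    p_e = 1.0 / N                                      # uniform distribution P_e
    d = S([(p + p_e) / 2 for p in probs]) - 0.5 * S(probs) - 0.5 * math.log(N)
    D = -2.0 * d / ((N + 1) / N * math.log(N + 1) - 2 * math.log(2 * N) + math.log(N))
    return H, D * H                                    # (H, C)

# Toy stand-in for an orbital x-coordinate: a periodic signal with period 1,
# sampled at dt = 2**-8 for 10 periods; ell = 26 gives a pattern timescale of
# roughly 0.4 of the natural period.
t = np.arange(0.0, 10.0, 2**-8)
print(peccary_measures(np.cos(2 * np.pi * t), n=5, ell=26))
```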
§.§.§ Globular Cluster A spherical potential is a minimally intricate astrophysical example for testing the method. Using a spherical Plummer potential () and self-consistent isotropic and spherical Plummer distribution function (), 10^4 tracer particles (representing stars) were evolved for a duration of 1 Gyr and an orbit integration time resolution of 0.1 Myr. Figure <ref> shows a sampling of 50 stellar orbits plotted on the based on the values calculated from the x-coordinates of orbits with /≥ 1.5 and = 0.4. Many orbits fall on or within the periodic/regular boundary, with others falling within the “regular" regime. The divergence from the periodic boundary line is likely due to the fact that these are not closed orbits. The method will be developed further for investigating regimes such as this in a future paper. §.§.§ Triaxial Halo A triaxial potential will exhibit regular orbits, as well as chaotic orbits. To verify that returns this same conclusion, a tracer-particle simulation of a Navarro-Frenk-White (NFW) potential () was created with an isotropic NFW distribution function that samples a spherical NFW halo (). The default settings were used for the triaxial prescription, with the exception of the y/x and z/x axis ratios. These two ratios are referred to as b and c, respectively, and were set to b=1.66 and c=3 with a normalization of 0.35 based on the reported values for a Milky Way-like halo in <cit.>. 10^4 stars were evolved for a duration of 100 Gyr at a time resolution of 15.625 Myr. Figure <ref> shows values at = 0.4 for the x-coordinates of simulated stars with ≥ 1.5. Several of the orbits fall within the regular regime, where many of these were visually identified as tube orbits. Some orbits, particularly those with shorter orbital periods, have characteristics that are highly complex. Future work will compare the results from to other known diagnostics of chaos. §.§.§ Injected Noise To test the sensitivity of on several idealized examples of “noisy data," different levels of noise were injected to the various orbits presented in Section <ref>-<ref>. The noise injections were determined by calculating the approximate amplitude of each orbital x-coordinate (i.e., maximum absolute value less the average value) and then adding a random value to each . The random “errors" were drawn from a uniform distribution ranging from -1 to 1 and were then scaled by the amplitude of the and the fraction of the noise (i.e., 0.1, 0.5, or 1). The noise injection “percentages" are defined as the amplitude of the noise relative to the amplitude of the signal. Figure <ref> compares values on the s of the three noise levels at s equivalent to = 0.4 and for orbits with ≥ 1.5. Compared to the original datasets, periodic and regular orbits with injected noise appear to increase primarily in H, while complex orbits appear to have similar coordinates. To the level of 10% noise, the values have shifted significantly compared to the pure signals, although they do not appear as stochastic. At 50% and 100% noise, almost all or all of the points fall primarily within the stochastic region. § FUTURE WORK This paper introduces the method for usage in astrophysics. While the measures of and have been used in other fields, including plasma physics <cit.>, to great success, those bodies of work involved systems or models that were inherently discrete and could use a of =1. 
The fundamentally continuous nature of many astrophysical systems, as well as the varied origins of stochasticity (i.e., natural noise or background sources/behaviors), require great care in choosing appropriate sampling schemes. The PECCARY method provides the first clear recommendations for using Permutation Entropy and Statistical Complexity measures to characterize the behaviors of continuous systems. This study investigates how well the method works for several astrophysical systems with known behaviors (Section <ref>), but additional work is needed for widespread use. Future papers will develop a method for estimating the confidence of the periodic/regular/chaotic/stochastic diagnosis and run comparisons with existing methods for chaos identification, such as frequency analysis mapping <cit.> and potentially Lyapunov exponents, to test the robustness of PECCARY in this regime. While Section <ref> demonstrated an example of PECCARY's ability to handle noise injected into a known system, additional work is needed to understand the sensitivities of the method for characterizing orbital behavior in a noisy signal. Future research will center on developing more intricate astrophysical simulations (e.g., evolving, time-dependent potentials and large-scale n-body simulations) to test the method. Other work will explore the efficacy of stacking, in order to improve reliability for shorter-duration simulations, and windowing, for understanding how a system changes dynamically over time.

§ CONCLUSIONS

This paper introduces the PECCARY method to the astrophysics community for the first time. PECCARY is a statistical method that samples ordinal patterns from any sort of time series, creates a probability distribution of all possible permutations of those patterns, and calculates the Permutation Entropy (H) and Statistical Complexity (C) from that distribution. The location where these H and C coordinate values fall on the HC-plane indicates the classification of the orbital behavior. This paper provides an overview of the underlying theory and discusses best practices for initial implementations of the method. This work also demonstrates that for pure periodic functions, the orbital period can be easily extracted by using the shapes and initial peak of the H- and C-curves. The method is effective for time series where the overall duration is at minimum equal to the approximate period of the orbit, though a ratio of ≳ 1.5 is ideal. While the overall shapes of the H- and C-curves provide the best indication for classifying the behavior of the data, in many cases it is necessary or more efficient to sample a single timescale. For cases such as these, the ratio between the pattern timescale and the period should be between 0.3 and 0.5. The corresponding sampling interval can be calculated with the peccary package or using Equation <ref>. Finally, a variety of different examples, both mathematical and astrophysical, are presented as a proof of concept of PECCARY. Additional tests of the method's sensitivity, limitations, and wider applications will be presented in future papers in this series. Extensive documentation and examples for the corresponding Python package are available online.[<https://peccary.readthedocs.io>] The source code can be found on GitHub[<https://github.com/soleyhyman/peccary>] and builds can be found on the PyPI project page.[<https://pypi.org/project/peccary/>] The astrophysical simulations in Section <ref> were created using High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and Research, Innovation, and Impact (RII) and maintained by the UArizona Research Technologies department.
SOH and KJD would like to acknowledge support for the above resources. KJD also acknowledges support provided by the Heising-Simons Foundation grant #2022-3927. Software: galpy <cit.>.
http://arxiv.org/abs/2407.11948v1
20240716174237
Rethinking Transformer-based Multi-document Summarization: An Empirical Investigation
[ "Congbo Ma", "Wei Emma Zhang", "Dileepa Pitawela", "Haojie Zhuang", "Yanfeng Shu" ]
cs.CL
[ "cs.CL", "cs.AI" ]
§ ABSTRACT

The utilization of Transformer-based models has spurred the growth of multi-document summarization (MDS). Given the huge impact and widespread adoption of Transformer-based models in various natural language processing tasks, investigating their performance and behaviors in the context of MDS becomes crucial for advancing the field and enhancing summary quality. To thoroughly examine the behaviors of Transformer-based MDS models, this paper presents five empirical studies on (1) measuring the impact of document boundary separators quantitatively; (2) exploring the effectiveness of different mainstream Transformer structures; (3) examining the sensitivity of the encoder and decoder; (4) discussing different training strategies; and (5) investigating repetition in summary generation. The experimental results on prevalent MDS datasets and eleven evaluation metrics show the influence of document boundary separators, the granularity of different-level features, and different model training strategies. The results also reveal that the decoder exhibits greater sensitivity to noise compared to the encoder. This underscores the important role played by the decoder, suggesting a potential direction for future research in MDS. Furthermore, the experimental results indicate that the repetition problem in the generated summaries correlates with high uncertainty scores.

§ INTRODUCTION

The innovation and contemporary development of the Transformer architecture <cit.> has driven progress in multi-document summarization (MDS) <cit.>. This motivates us to study the behaviors of Transformer-based MDS models. Through these analyses, we aim to provide a thorough understanding of MDS and its intricacies within the MDS model framework. We undertake a comprehensive investigation from five distinct perspectives covering the Transformer-based MDS model design pipeline: (1) Document input perspective: we conduct experiments to quantitatively assess the impact of document boundary separators from the standpoint of document input; (2) Transformer structure perspective: we explore the effectiveness of different mainstream Transformer structures; (3) Significance of the encoder and decoder perspective: we design empirical studies by adding noise on top of the encoder and decoder; (4) Training strategy perspective: we restructure the source documents and include self-supervised learning; (5) Summary generation perspective: we explore the uncertainties that arise when repetition problems occur in the summary generation process. The primary distinction between single-document summarization (SDS) and MDS lies in the number of source documents. One straightforward way to convert MDS to SDS is to concatenate text spans and process them as a flat sequence <cit.>. One way to aid the models in detecting and modeling document-to-document relationships in one flat sequence is to utilize document boundary separators <cit.>. However, there is a notable gap in the current literature regarding a qualitative and quantitative examination of the influence of document boundary separators.
This gap motivates us to investigate whether these separators contribute to enhanced model performance and foster awareness of document boundaries within the feature space of MDS models. Through experiments conducted on three distinct Transformer structures, we find that the impact of document boundary separators varies among models with differing hierarchies. Uncertainty analysis is a pivotal approach for examining and assessing generation systems <cit.>, and it can serve as an important indicator of how the model behaves during summary generation. We then investigate the variation of summary prediction uncertainty by exploring the relationship between separators and the predictive uncertainty of these structures. Measuring uncertainty in the context of summarization can provide insights into how the presence of document boundary separators affects the behavior of Transformer-based models and their summarization outcomes. By quantifying uncertainty through entropy calculations, we gain a deeper understanding of the level of confidence or ambiguity the model has in its generated summaries. Instead of simply concatenating all the input documents into a flat sequence and applying SDS models, the hierarchical Transformer structure <cit.> has been proposed specifically for MDS tasks. This structure has been used for encoding multiple documents in a hierarchical manner, enabling the capture of cross-document relations through an attention mechanism. The hierarchical Transformer structure contains a low-level Transformer that encodes tokens and a high-level Transformer that encodes coarser-grained textual units. This motivates us to further explore the influence of different hierarchies on MDS performance. We explore the effect of different granularities in the high-level Transformer on the performance of MDS models. In this paper, we consider sentence-level and document-level features as different granularities. Based on the empirical studies, our findings indicate that for MDS tasks involving relatively short documents, flat Transformer models are a suitable choice. Moreover, the hierarchical structure favors a higher level of granularity in its high-level Transformer. In addition to exploring the hierarchical structure of Transformer-based MDS models, we explore the Transformer's internal structure. Examining existing Transformer-based MDS methods, we find that many MDS models focus on modifying the components of the encoder <cit.>, while fewer works focus on improving the decoder <cit.> to meet the requirements of MDS tasks. This motivates us to explore the robustness of the encoder and decoder to interference under the same noise conditions. Therefore, we add Gaussian noise to the parameter space of the encoder or decoder for this purpose. The experimental results indicate that the decoder exhibits greater sensitivity than the encoder in MDS scenarios. This finding underscores the need for increased attention to decoder enhancements in future research within the MDS community. Building on the analysis of Transformer-based MDS models, we also explore different training strategies to further enhance the performance of MDS models. Different training strategies offer distinct ways to exploit the available data and optimize model performance.
By investigating diverse training strategies, we aim to identify the most effective methods for training MDS models, leveraging the characteristics of the dataset and the summarization task at hand. These strategies involve using pseudo datasets, fine-tuning on original datasets, or a combination of both. To generate pseudo data, we treat individual documents in a document set as pseudo-summaries and create multiple sets of pseudo-document-summary pairs. We evaluate three training approaches: training exclusively on the pseudo dataset, mixing the pseudo dataset with the original dataset, and a two-step process of training on the pseudo dataset followed by fine-tuning on the original dataset. The experimental results demonstrate that the pretrain-finetune strategy consistently outperforms the other training strategies, leading to improved summarization quality. The analysis of feature distributions further supports this finding, highlighting the alignment between the finetuned model and the baseline model. These results provide valuable insights into the effectiveness of the pretrain-finetune approach in enhancing summarization performance. The findings of this study can guide future research and development in the field of abstractive summarization, emphasizing the importance of training strategies for achieving higher-quality summaries. Moreover, while the different Transformer structures and training strategies exhibit variations in performance, a common observation is the presence of repetitive patterns in the generated summaries, indicating a potential issue that needs to be addressed in abstractive summarization systems. Liu et al. <cit.> give two possible reasons for the repetition problem in abstractive summarization: (1) attending to the same location in the source and (2) attending to similar but different sentences in the source. In this paper, we explore the causes of the repetition problem in abstractive summarization by examining predictive uncertainty. We quantify uncertainty scores at each time slot during the summary generation process. The analysis aims to observe how the uncertainty score changes when repetition phenomena occur, allowing us to identify the positions where uncertainty is localized in repetitive behavior. The analysis reveals that as the model generates repetitive sentences or words, the uncertainty score rises, indicating decreased confidence and increased uncertainty regarding the appropriateness and relevance of repeated elements in the summary. Understanding this relationship allows us to develop strategies to mitigate repetition and improve the quality of generated summaries. § METHODOLOGY We describe how we design the MDS experiments from the following angles: input data, Transformer structures, training strategies and summary generation. Accordingly, we design five experiments to evaluate the behaviors of Transformer-based MDS models: (1) the measurable impact of document boundary separators; (2) the effectiveness of different Transformer structures; (3) the sensitivity of the encoder and decoder; (4) different training strategies; (5) repetition in document generation. §.§ The Measurable Impact of Document Separators We modify the source documents, rather than the summarization models, into the format 𝒟 = {𝐝^1, 𝐬, 𝐝^2, 𝐬, ..., 𝐬, 𝐝^N }, where N is the number of documents in a document set 𝒟, the superscript in 𝐝^n indicates the n-th document in the set, and 𝐬 denotes the special separator token.
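To make this input format concrete, the sketch below assembles such a flat sequence in Python. The separator token name and the 1024-token budget with a 1024/N per-document quota (with short documents repeated to fill their share) follow the data-processing appendix of this paper; the whitespace tokenization and the function name are illustrative assumptions rather than the authors' exact preprocessing code.

# Minimal sketch: concatenate a document set into one flat sequence with
# boundary separators, splitting the 1024-token budget evenly over the N
# documents and repeating short documents to fill their quota.
SEP_TOKEN = "story_separator_special_tag"
MAX_LEN = 1024

def build_flat_input(documents, use_separator=True, max_len=MAX_LEN):
    n = len(documents)
    quota = max_len // n  # tokens reserved for each document
    chunks = []
    for doc in documents:
        tokens = doc.split()  # whitespace tokenization (simplification)
        if not tokens:
            continue
        while len(tokens) < quota:  # repeat short documents to fill the quota
            tokens = tokens + tokens
        chunks.append(tokens[:quota])
    if use_separator:
        flat = []
        for i, chunk in enumerate(chunks):
            flat.extend(chunk)
            if i < len(chunks) - 1:
                flat.append(SEP_TOKEN)
        return flat[:max_len]
    return [tok for chunk in chunks for tok in chunk][:max_len]

if __name__ == "__main__":
    docs = ["first source document ...", "second source document ...", "third one ..."]
    print(build_flat_input(docs)[:12])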
We investigate different Transformer models on two MDS datasets and eleven evaluation metrics to explore the impact of the document boundary separators qualitatively and quantitatively. We analyze and compare the prediction uncertainty obtained from different datasets and different formats of source documents by inspecting entropy values during summary generation. We aim to understand how the decision to add document boundary separators is reflected in the model's uncertainty. In the generation process, each predictive position 𝐗_i has an output probability distribution 𝐱_i1,..., 𝐱_im, where m is the size of the corpus pool (vocabulary). We use entropy as an uncertainty measure, calculated as follows: H(𝐗_i) = -∑_j=1^m P(𝐱_ij) log P(𝐱_ij) Because the size of the corpus pool is large and the prediction distribution is usually long-tailed <cit.>, we sort the prediction distribution 𝐗_i in descending order, take the minimal set of tokens whose cumulative probability exceeds 0.95, and then renormalize the distribution. We calculate the entropy value based on the new distribution P'(𝐱_ij). Using entropy as a measure allows us to gauge the distribution of probabilities across different tokens at the predictive positions of the summaries. Higher entropy values indicate a wider spread of probabilities, suggesting that the model is less certain about the most appropriate token to choose. Conversely, lower entropy values suggest that the model is more confident in its token predictions. Quantifying uncertainty through entropy measurements, together with its qualitative analysis, enables us to assess how the introduction of document boundary separators influences the summaries generated by Transformer-based models. This holistic approach helps us unravel the nuanced impact of document boundary separators on the MDS process and gain valuable insights into the behavior of these models in handling multiple document inputs. §.§ The Effectiveness of Different Transformer Structures Transformer structures have become an essential component of many state-of-the-art natural language processing models. However, the design of the Transformer architecture can vary dramatically, and different structures may impact the performance of the model on different tasks. In this study, we aim to evaluate the effectiveness of different Transformer structures for MDS tasks. Specifically, we focus on two types of structures: the flat Transformer and the hierarchical Transformer. The flat Transformer consists of a single stack of self-attention and feed-forward layers that processes the input tokens as one flat sequence. In contrast, the hierarchical Transformer has a more complex structure, where the input tokens are first grouped into sentences or documents and then processed by local and global Transformer layers. To explore the hierarchical Transformer structure, we investigate two different granularities for the high-level Transformer: sentence-level and document-level. Building on the work of Liu <cit.>, we modify the local Transformer layers to encode individual documents. The global Transformer layers are then able to exchange information at the sentence or document level. Our analysis is motivated by the need to better understand how different Transformer structures can impact the performance of MDS models.
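Before turning to that comparison, the following schematic PyTorch sketch contrasts the two families of structures: a flat encoder over one concatenated token sequence versus a hierarchical encoder with a low-level (local) Transformer per textual unit and a high-level (global) Transformer over pooled unit representations. The layer counts, model dimension and mean-pooling are illustrative assumptions and do not reproduce the exact hierarchical Transformer used in the experiments.

import torch
import torch.nn as nn

D_MODEL, N_HEAD = 256, 8

def make_encoder(num_layers):
    layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=N_HEAD, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=num_layers)

class FlatEncoder(nn.Module):
    """Single stack over the concatenated (flat) token sequence."""
    def __init__(self):
        super().__init__()
        self.encoder = make_encoder(num_layers=4)

    def forward(self, flat_tokens):               # (batch, seq_len, d_model)
        return self.encoder(flat_tokens)

class HierarchicalEncoder(nn.Module):
    """Local Transformer per unit (sentence or document), global Transformer across units."""
    def __init__(self):
        super().__init__()
        self.local_enc = make_encoder(num_layers=2)
        self.global_enc = make_encoder(num_layers=2)

    def forward(self, unit_tokens):                # (batch, n_units, unit_len, d_model)
        b, n, l, d = unit_tokens.shape
        local_out = self.local_enc(unit_tokens.reshape(b * n, l, d))
        unit_repr = local_out.mean(dim=1).reshape(b, n, d)  # pool each unit
        return self.global_enc(unit_repr)          # (batch, n_units, d_model)

if __name__ == "__main__":
    print(FlatEncoder()(torch.randn(2, 64, D_MODEL)).shape)
    print(HierarchicalEncoder()(torch.randn(2, 4, 16, D_MODEL)).shape)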
By comparing the performance of the flat Transformer and hierarchical Transformer structures, we aim to identify which structure is more effective for multi-document summarization data. §.§ The Sensitivity of Encoder and Decoder In summarization tasks, the encoder plays a crucial role in extracting representations from the input text, while the decoder is responsible for generating the output summary, which requires producing coherent and meaningful language. Given the intricate nature of summary generation, the decoder's role demands fine-grained control and precision, making it potentially more sensitive than the encoder. To explore the sensitivity of the encoder and decoder in Transformer-based summarization models, we add Gaussian noise to the parameter space of the encoder or decoder. We devise this experiment based on the intuition that a module (whether the encoder or the decoder) exhibits a degree of sensitivity to noise that signifies its importance for overall performance. Formally, we have: z = f(x; Θ + α n), n ∼ N(μ, δ) where f(·) is a component of the Transformer; Θ denotes the parameters of f(·); n represents Gaussian noise; μ and δ are the mean and variance of the Gaussian noise; and α is a weighting factor. §.§ Different Training Strategies In this study, we aim to investigate the impact of different training strategies on Transformer models for abstractive summarization. While we have previously examined the components of Transformer models, the specific influence of training strategies remains unexplored. Our objective is to identify the most effective training strategies by leveraging the inherent characteristics of MDS datasets, without the need for external data sources. To create pseudo data exploiting this characteristic of MDS, we adopt a straightforward approach. We treat one document from a given document set as a pseudo-summary while considering the remaining documents as input documents. This process is iterated, systematically selecting each document in the set as a pseudo-summary, until all input documents have served as pseudo-summaries. Consequently, we generate multiple sets of pseudo-document-summary pairs, which we refer to as the pseudo-MDS dataset. The original MDS dataset is denoted as the original dataset in the subsequent analysis. To evaluate the effectiveness of different training strategies, we design three distinct approaches. Firstly, we train the MDS model exclusively on the pseudo dataset. Secondly, we mix the pseudo dataset with the original dataset, creating a comprehensive mega dataset on which the MDS model is trained. Lastly, we employ a two-step process, initially training the model on the pseudo dataset and subsequently fine-tuning it on the original dataset. §.§ Repetition in Document Generation For abstractive MDS, a persistent challenge arises from the inclination of models to produce repetitive sentences or words during the summarization process. This tendency creates a loop that is difficult to break, hampering the generation of accurate summaries. To analyze what may cause the repetition problem, we delve into an analysis of prediction uncertainty, examining uncertainty scores throughout the generation process and localizing uncertainty to the positions where repetitive behavior occurs. To quantify uncertainty, we employ Equation <ref>, which calculates the uncertainty score for each time slot during summary generation.
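As a concrete illustration of the two measurement tools used in this section, the sketch below implements (i) the parameter-space perturbation z = f(x; Θ + α n) applied to either the encoder or the decoder and (ii) the truncated-entropy uncertainty score of the previous subsection. Selecting modules by a parameter-name substring and the handling of the 0.95 cumulative-probability cut-off are assumptions of this sketch, not the authors' exact code.

import torch

@torch.no_grad()
def perturb_parameters(model, part="decoder", alpha=1e-2, mu=0.0, std=1.0):
    """Add alpha-scaled Gaussian noise to every parameter whose name contains `part`."""
    for name, param in model.named_parameters():
        if part in name:
            noise = torch.randn_like(param) * std + mu
            param.add_(alpha * noise)

def truncated_entropy(probs, top_p=0.95):
    """Entropy of the renormalized head of a long-tailed next-token distribution."""
    sorted_p, _ = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_p, dim=-1)
    # Minimal set of tokens whose cumulative probability exceeds top_p.
    keep = int(torch.searchsorted(cumulative, torch.tensor(top_p)).item()) + 1
    head = sorted_p[:keep]
    head = head / head.sum()
    return float(-(head * head.log()).sum())

if __name__ == "__main__":
    probs = torch.softmax(torch.randn(10), dim=-1)  # toy next-token distribution
    print("uncertainty score:", truncated_entropy(probs))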
By applying this equation, we obtain a measure of uncertainty that corresponds to the level of doubt or ambiguity associated with the generated output. The analysis focuses on observing how the uncertainty score evolves in response to the occurrence of repetition phenomena. § EMPIRICAL STUDIES AND ANALYSES §.§ Settings for Empirical Studies We evaluate the performance of three Transformer models: the Vanilla Transformer (VT) <cit.>, the Vanilla Transformer with copy mechanism (VTC), and a modified Hierarchical Transformer (HT) <cit.>. These models are assessed on two widely used MDS datasets: Multi-XScience <cit.> and Multi-News <cit.>. To comprehensively analyze their performance, we employ eleven evaluation metrics: ROUGE <cit.> including ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L) and ROUGE-SU (R-SU), ROUGE-WE (R-WE) <cit.>, BLEU <cit.>, S3 <cit.> including pyramid (pyr) and responsiveness (resp) scores, BertScore (BS) <cit.>, Relevance (Rel) <cit.>, and Redundancy (Red) <cit.>. §.§ Impact of Document Separators We investigate the VT, VTC, and HT models on both datasets and report the eleven evaluation metrics to explore the impact of the document boundary separators. From Table <ref>, interestingly, we find that adding separators reduces the models' performance in half of the cases (3 out of 6). For example, the VT model with separators performs relatively worse on Multi-News (8 of the 11 evaluation metrics degrade), and the VTC model performs relatively worse on both Multi-XScience (9 of 11 metrics degrade) and Multi-News (8 of 11 metrics degrade) when separators are used. These results indicate that input documents with separators are not very helpful for flat Transformer models. In contrast, the HT model achieves better performance on both datasets with document boundary separators. Another interesting finding is that the most commonly used metric, ROUGE, in a few cases shows the opposite result from the other evaluation metrics. For instance, on the Multi-XScience dataset, VT (with document boundary separators) shows better ROUGE results than VT (without document boundary separators), but this is contradicted by the results on R-WE, BLEU, S3, BertScore, Redundancy and Relevance. This indicates that the ROUGE-centric evaluation practice needs to be revisited and that summarization quality cannot be measured by ROUGE alone. We also examine the relation between document boundary separators and token uncertainty scores. Figure <ref> shows the uncertainty scores of the tokens generated by the VTC model on both datasets. Surprisingly, the figure shows that separators are associated with high uncertainty scores, which means that the separators increase the predictive uncertainty of the models. This is possibly because the separators have no semantic relation to the source documents and may therefore be regarded as noise that increases the predictive uncertainty. The median uncertainty score on Multi-News is larger than on Multi-XScience, in line with the sizes of the datasets. §.§ Quantitative Performance on Different Transformer Structures We investigate (1) the effectiveness of different Transformer architectures: the flat Transformer (VT, VTC) and the hierarchical Transformer (HT); and (2) the influence of different granularities within the hierarchical Transformer structure. The results are also found in Table <ref>.
In most evaluation metrics, the HT model cannot achieve results as good as those of the two flat Transformer models on either dataset. Two potential reasons are: (1) the pipeline of the HT model is longer than that of the flat Transformer models, which makes the HT model harder to train; (2) Multi-XScience and Multi-News are not long-document summarization datasets. The average document lengths of Multi-XScience and Multi-News are 778.08 and 2103.49, respectively. From the experimental results, we can conclude that the HT model is more suitable for lengthy documents, implying that flat Transformer models are a good choice for tasks with shorter documents. As mentioned in Section <ref>, to evaluate the influence of different granularities within the hierarchical Transformer structure, we modify the local Transformer layers to encode individual documents. Figure <ref> shows the performance of the document-level and sentence-level HT models. All metrics show better performance for the document-level HT than for the sentence-level HT, as the green line exceeds the orange line in every dimension (for redundancy, lower is better). This apparent trend implies that a higher level of granularity is more favorable for the hierarchical Transformer structure. §.§ Quantitative Performance on the Sensitivity of Encoder and Decoder To investigate the hypothesis in Section <ref>, we select the VTC model as the foundation for evaluating the effectiveness of the encoder-decoder structure on the Multi-XScience and Multi-News datasets. By examining Table <ref>, we observe large differences in performance when introducing noise to the encoder and decoder in highly noisy scenarios (with α=1e-1 and α=1e-2). Specifically, in noisy conditions, we find that adding noise to the decoder has a more substantial impact on performance than adding noise to the encoder. However, as the noise levels decrease, the performance gaps between the two approaches narrow. This observation supports our initial hypothesis that the decoder is more sensitive than the encoder. The potential reasons are: (1) errors or inaccuracies in the decoder can have a cascading effect on subsequent tokens generated during decoding. This error propagation phenomenon can make the decoder more sensitive to small perturbations, as any mistakes or noise introduced during decoding can amplify and affect the overall quality of the generated summary; (2) Transformer-based models often employ an attention mechanism that allows the decoder to focus on different parts of the encoded input during the decoding process. The decoder's sensitivity is crucial for effectively attending to relevant information, and even slight perturbations in the encoded input can impact the attention weights and subsequently influence the decoding process. Consequently, these results underscore the crucial role played by the decoder in summarization tasks. These findings shed light on the high importance of the decoder's contribution to the overall summarization process. §.§ Quantitative Performance of Different Training Strategies The experimental results presented in Table <ref> provide an overview of the performance of the VTC model trained using different pretraining strategies on the Multi-XScience and Multi-News datasets. In the table, the VTC is trained on the original document sets and golden summary pairs. The “finetune" strategy refers to first training the model on the pseudo dataset (introduced in Section <ref>) and then fine-tuning it on the original dataset.
The “self-supervised" strategy denotes training the VTC model exclusively on the pseudo dataset. The “mix" strategy denotes training the model on a combination of the pseudo dataset and the original dataset. By comparing the results obtained from these different training strategies, we aim to identify the most effective approach for each dataset. For Multi-XScience, the results show that the VTC (pretrain-finetune) strategy outperforms the VTC trained on the original dataset across most metrics, indicating the effectiveness of the pretrain-finetune strategy in improving summarization quality. In contrast, the VTC (self-supervised) exhibits lower performance than the VTC (pretrain-finetune), suggesting that self-supervised training alone is less effective for this dataset. Similarly, for the Multi-News dataset, the results show that the VTC model achieves good performance across all metrics, with the highest scores obtained by the VTC (pretrain-finetune) strategy, indicating improved summarization quality. Conversely, the VTC (self-supervised) and VTC (mix) strategies yield lower performance than the other strategies. The comparison of these different training strategies reveals that the pretrain-finetune approach consistently leads to better summarization performance than the baseline VTC model and the other training strategies, highlighting its effectiveness in improving summarization quality. To find the potential reason why the finetune strategy works well, we visualize the feature distributions of three training strategies: VTC, VTC (self-supervised) and VTC (finetune), using Principal Component Analysis (PCA), as illustrated in Figure <ref>. For Multi-News, the encoder features of the VTC (self-supervised) and the VTC (finetune) models overlap, while both remain distant from the plain VTC. In contrast, for Multi-XScience, the VTC (finetune) is more similar to the plain VTC but still noticeably distinct from the VTC (self-supervised). This observation is consistent with the performance results presented in Table <ref>. In the case of Multi-XScience, finetuning the model after self-supervised training significantly improves the model's performance compared to the VTC. However, when the model is only pretrained using self-supervised learning, it performs worse than the VTC. This discrepancy can be attributed to the fact that the features of the finetuned model closely align with the VTC model's distribution, since both models possess better representations for the final prediction. Conversely, for Multi-News, the finetuned model exhibits only marginal improvements over the VTC. This observation also explains the overlap between the features of the finetuned model and the self-supervised model, as finetuning adjusts the feature distribution towards the `genuine' distribution, albeit to a limited extent. §.§ The Relation Between Repetition and Uncertainty We examine the correlation between repetition and uncertainty in the process of generating summaries. To assess uncertainty, we compute a score for each generated token. Two summaries are presented: one featuring repetition and the other a standard summary without repetition. The outcomes are depicted in Figure <ref>. The X-axis represents token indexes, while the Y-axis shows the uncertainty score for each token. In summary #1, where no repetitions occur, the uncertainties of the tokens remain within a “normal" range.
This suggests that the model successfully avoids repetitive patterns, resulting in lower uncertainty scores throughout the summary generation process. Conversely, in summary #2, we observe a distinct pattern. As the repetition of tokens or phrases begins, the uncertainty scores escalate rapidly. By comparing uncertainty scores across different time slots, we gain insights into the relationship between repetition and uncertainty in abstractive summarization. When a repetition phenomenon occurs, we observe notable changes in the uncertainty score, indicating a correlation between the two factors. Specifically, as the model generates repetitive sentences or words, the uncertainty score tends to increase. This increase in uncertainty suggests that the model becomes less confident and more uncertain about the appropriateness or relevance of the repeated elements within the summary. By understanding this relationship, we can devise strategies to mitigate repetition and subsequently enhance the quality of generated summaries. By reducing uncertainty through the minimization of repetition, we pave the way for more accurate and reliable abstractive summarization. § CONCLUSION AND DISCUSSION This study empirically examines the influences on Transformer behaviors from five important perspectives: document boundary separators, Transformer structures, the sensitivity of the encoder-decoder architecture, training strategies, and the relationship between repetition and uncertainty in generated summaries. We first explore the impact of separators on two flat Transformer structures and one hierarchical Transformer structure. The experiments indicate that adding separators makes hierarchical Transformers aware of document boundaries, unlike flat Transformers. This suggests that for models handling complex structures, separators can enhance performance. Whether to adopt separators should therefore be decided depending on the Transformer structure applied. The experiments exploring Transformer structures demonstrate that a higher level of granularity is favorable for the hierarchical Transformer structure. The experiments also demonstrate that the simpler flat Transformer achieves better performance on the Multi-XScience and Multi-News datasets than the more complicated hierarchical Transformer structure. Flat Transformer models are sufficient for MDS tasks with relatively short documents. Furthermore, adding noise to the decoder affects performance more than adding noise to the encoder. This sensitivity is likely due to error propagation during decoding and the attention mechanism's dependence on accurate encoding. These results emphasize the decoder's crucial role in producing high-quality summaries and its significant impact on the summarization process. The pretrain-finetune strategy, which first trains the model on the pseudo dataset and then fine-tunes it on the original dataset, consistently leads to improved summarization performance compared to the other training strategies. This finding highlights the effectiveness of the pretrain-finetune strategy in enhancing MDS model performance. Moreover, the analysis of the relation between repetition and uncertainty provides valuable insights into improving the quality of generated summaries. The findings suggest that as repetition occurs in the summaries, there is a noticeable increase in uncertainty scores.
By recognizing this relationship, strategies can be developed to mitigate repetition and reduce uncertainty, ultimately enhancing the overall quality of abstractive summaries. These insights contribute to the advancement of abstractive summarization techniques and open avenues for further research into improving the reliability and effectiveness of summary generation. We also point out possible directions for future MDS work: (1) evaluate the generated summaries with multiple evaluation metrics; (2) incorporate higher-level granularity information into the models; (3) investigate MDS methods for particularly long input documents; (4) pay more attention to the decoder when designing Transformer-based summarization models; (5) try to reduce the sudden sharp increases in uncertainty scores during the summary generation process. § LIMITATIONS The original Hierarchical Transformer (HT) model is trained on four GPUs (NVIDIA TITAN Xp) for 500,000 steps, but with an unspecified batch size. To keep the comparison fair and to account for the limits of our computational resources, all the models reported in this paper are trained on the same single GPU, which in turn constrains the batch size. This may affect the performance of the HT model. § APPENDIX §.§ Implementation Details The training of all models begins with an initial learning rate of 2. An initial warm-up phase spans the first 8,000 steps, followed by a subsequent multi-step learning rate reduction. During the training process, a batch size of 4,096 is utilized, and the optimization is performed for 20,000 steps using the Adam optimizer. A dropout rate of 0.2 is employed to enhance model robustness. All experiments are conducted on a single NVIDIA 3090 GPU with one Intel i9-10900X CPU. The operating environment is Ubuntu 22.04.3 LTS. §.§ Summarization Models Vanilla Transformer (VT) <cit.> is a sequence-to-sequence model originally proposed for machine translation. It has subsequently been generalized to various NLP tasks due to its strong performance <cit.>. Vanilla Transformer with Copy Mechanism (VTC)[We implement the VT and VTC based on https://github.com/Alex-Fabbri/Multi-News/tree/master/code/OpenNMT-py-baselines.]. This variant copies the attention distribution of one randomly chosen attention head from the encoder side into the decoder, so that the generated text becomes less repetitive and less factually inaccurate. Hierarchical Transformer (HT) <cit.> proposes a hierarchical attention structure to attend to long sequences effectively and capture cross-paragraph contextual relationships. The local Transformer layers encode individual paragraphs, and the global Transformer layers exchange paragraph-level information from the local layers across paragraphs. §.§ Datasets The empirical studies are based on two widely used MDS datasets: Multi-XScience <cit.> and Multi-News <cit.>. Multi-XScience contains data from scientific articles. The task of this dataset is to generate the related work section of a target paper based on its abstract and the abstracts of the articles it refers to. Multi-News collects news articles from the site "newser.com." Each set of source documents has a professionally written summary, and the task is to generate that summary based on the sources. Table <ref> describes the statistics of these two datasets, including the sizes of the train, test, and validation sets, the average document length, and the average summary length.
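As a complement to the dataset descriptions above, the sketch below illustrates how the pseudo-MDS pairs used by the training strategies in Section <ref> can be constructed: each document in a cluster is treated in turn as a pseudo-summary, with the remaining documents as the input set. The data format (a list of document clusters) and the function name are assumptions of this sketch.

def build_pseudo_mds(clusters):
    """clusters: list of document sets, each a list of document strings."""
    pseudo_pairs = []
    for docs in clusters:
        if len(docs) < 2:
            continue  # need at least one source document plus a pseudo-summary
        for i, pseudo_summary in enumerate(docs):
            sources = docs[:i] + docs[i + 1:]
            pseudo_pairs.append({"sources": sources, "summary": pseudo_summary})
    return pseudo_pairs

if __name__ == "__main__":
    toy_clusters = [["doc A ...", "doc B ...", "doc C ..."]]
    pairs = build_pseudo_mds(toy_clusters)
    print(len(pairs))  # 3 pseudo-document-summary pairs from one 3-document cluster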
§.§ Data Processing For the Multi-XScience and Multi-News datasets, the source documents are separated by a special token named “story_separator_special_tag”. The length of the input documents is restricted to 1024 tokens. In each document set, the number of tokens allotted to one document is 1024/N, where N is the number of documents in the document set. Shorter documents are repeated to fill their share of the 1024-token quota. In the Multi-XScience dataset, the citations in the sources and targets are replaced by a common token `@cite'. §.§ Evaluation Metrics ROUGE (Recall-Oriented Understudy for Gisting Evaluation)[The parameters of ROUGE are -c 95 -2 -1 -U -r 1000 -n 4 -w 1.2 -a -m.] <cit.> is a set of evaluation metrics for comparing the overlapping textual units between generated summaries and golden summaries, including ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L) and ROUGE-SU (R-SU). R-1 and R-2 measure the overlapping unigrams and bigrams respectively, while R-L identifies the longest common subsequence. R-SU is a statistic measuring the co-occurrence of unigrams and skip-bigrams. ROUGE-WE (R-WE) <cit.> is a variant of the ROUGE metric which replaces the hard lexical matching in ROUGE-N with a soft matching based on the cosine similarity of word embeddings. The soft matching in ROUGE-WE provides a more forgiving evaluation by not strictly requiring exact lexical matches, thus allowing for variations in word order and phrasing. BLEU (BiLingual Evaluation Understudy) <cit.> introduces a brevity penalty term and computes the geometric average of the modified n-gram precisions. S3 <cit.> is a model-based metric that combines features from other evaluation metrics, including R-N, R-L, R-WE and JS-divergence, to produce pyramid (pyr) and responsiveness (resp) scores. BertScore (BS)[The model type of BertScore is bert-base-uncased.] <cit.> measures the soft overlap of the token BERT embeddings of the machine-generated summaries and the golden summaries. Relevance (Rel) <cit.> calculates a cross-entropy over individually constructed probability distributions for a summary S and a source D using their own semantic units ω: Relevance(S,D) = ∑_ω_i P_S(ω_i) · log(P_D(ω_i)), where the probability distributions of the summary and the source document are given by P_S and P_D respectively. Redundancy (Red) <cit.> evaluates the quality of the accumulation of information in the candidate summaries: Redundancy(S) = ∑_ω_i P_S(ω_i) · log(P_S(ω_i)). §.§ Visualization on the Impact of Document Separators We compare and analyze the embedding space of the tokens after they are fed into the encoder, with and without document separators, using t-SNE visualization (Figure <ref>). After the token representations are fed into the hierarchical Transformer encoder, the cluster boundaries of documents with separators are easier to identify in the embedding space. In contrast to the hierarchical Transformer model, the two flat Transformer models have difficulty distinguishing the document cluster boundaries in the embedding space once the token representations have passed through the Transformer encoder. A possible explanation is that the hierarchical Transformer relies on more structural information about the documents to compose the final summaries, while the flat Transformer does not.
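To make the Relevance and Redundancy definitions above concrete, the sketch below evaluates both formulas. How the probability distributions over the semantic units ω are constructed is specified in the cited works; here, as a simplifying assumption, unigram frequency distributions over a shared vocabulary are used, with a small epsilon to avoid log(0).

import math
from collections import Counter

EPS = 1e-12

def unigram_dist(text, vocab):
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return {w: counts.get(w, 0) / total for w in vocab}

def relevance(summary, source):
    vocab = set(summary.lower().split()) | set(source.lower().split())
    p_s, p_d = unigram_dist(summary, vocab), unigram_dist(source, vocab)
    return sum(p_s[w] * math.log(p_d[w] + EPS) for w in vocab)

def redundancy(summary):
    vocab = set(summary.lower().split())
    p_s = unigram_dist(summary, vocab)
    return sum(p_s[w] * math.log(p_s[w] + EPS) for w in vocab if p_s[w] > 0)

if __name__ == "__main__":
    src = "the model reads several source documents and produces one summary"
    summ = "the model produces one summary from several documents"
    print(relevance(summ, src), redundancy(summ))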
http://arxiv.org/abs/2407.12643v1
20240717151333
Numerical simulation of a helium Plasma-Material Interaction experiment in GyM linear device through SOLPS-ITER and ERO2.0 codes
[ "F. Mombelli", "G. Alberti", "E. Tonello", "C. Tuccari", "A. Uccello", "C. Baumann", "X. Bonnin", "J. Romazanov", "M. Passoni" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
Numerical simulation of a helium Plasma-Material Interaction experiment in GyM linear device through SOLPS-ITER and ERO2.0 codes ^1 Politecnico di Milano, Department of Energy, Milan, 20133, Italy ^2 Ecole Polytechnique Fédérale de Lausanne, Swiss Plasma Center, Lausanne, 1015, Switzerland ^3 Istituto per la Scienza e Tecnologia dei Plasmi, Consiglio Nazionale delle Ricerche, Milan, Italy ^4 Forschungszentrum Jülich GmbH, Institut für Energie- und Klimaforschung – Plasmaphysik, Partner of the Trilateral Euregio Cluster (TEC), 52425 Jülich, Germany ^5 ITER Organization, 13067 St Paul Lez Durance Cedex, France ^6 See the author list of A. Uccello et al 2023 Front. Phys. 11:1108175. fabio.mombelli@polimi.it § ABSTRACT Learning how to safely handle Plasma-Material Interaction (PMI) is a key challenge towards the commercialisation of energy from nuclear fusion. In this respect, linear plasma devices are ideal experimental testbeds, and numerical codes play a crucial complementary role. In this paper, a numerical investigation of PMI-relevant helium plasma experimental discharges in the GyM linear device is presented, in which the SOLPS-ITER and ERO2.0 codes are coupled for plasma background generation and material erosion investigation respectively, with the aim of supporting the interpretation of the available experimental dataset and complementing it. On the plasma side, simulated profiles are validated against experimental data to provide a realistic plasma background, and the role of He metastable states is assessed for the first time in SOLPS simulations. On the material side, the erosion and deposition effects due to the introduction of the sample-holder in the simulation volume are investigated, now also considering the real stainless steel composition as the wall material. § INTRODUCTION The path towards the profitable implementation of a magnetic confinement nuclear fusion reactor, which sees ITER as a crucial milestone <cit.>, raises the compelling need to address the topic of Plasma-Material Interaction (PMI) <cit.>. On the one hand, the build-up of plasma particle and heat fluxes towards solid plasma-facing components (PFCs) results in their erosion and surface modification, threatening their properties and integrity <cit.>. On the other hand, the transport of eroded impurities into the plasma may result in excessive fuel dilution and enhanced radiation losses, while their deposition may contribute to tritium retention and promote dust formation <cit.>. Accordingly, the investigation of PMI has gained high priority within the European fusion research programme coordinated by the EUROfusion Consortium <cit.>: the Work Package Plasma-Wall Interaction and Exhaust (WP PWIE) is meant to pursue this objective by exploiting both ad hoc experimental and numerical methods. Although several PMI-relevant experimental campaigns have been carried out in tokamaks <cit.>, particle and heat fluences to PFCs in present-day devices are orders of magnitude lower than those expected in future reactors due to the generally short pulse duration <cit.>. For steady-state ITER operation, ion fluxes ≤ 10^25 ions· m^-2s^-1 and heat fluxes of 10 MW· m^-2 are expected onto the divertor tiles <cit.>, to be integrated over the long duration of the plasma discharges (∼ 1000 s) <cit.>. Linear plasma devices (LPDs), thanks to their simple, cost-effective and flexible design, have thus been widely exploited to fill such experimental gaps, appearing as ideal testbeds for PMI phenomena <cit.>.
Among the several LPDs operating worldwide, the present paper considers the GyM linear device, hosted at Istituto per la Scienza e Tecnologia dei Plasmi - Consiglio Nazionale delle Ricerche (ISTP - CNR), Milan, whose schematic is shown in figure <ref> and whose characteristics are comprehensively detailed in reference <cit.>. The GyM LPD generates both hydrogenic and non-hydrogenic plasmas <cit.>. In particular, the relevance of helium (He) plasma to fusion research arises from the fact that He ash will always be present in the thermonuclear plasma as a reaction product; assessing its role in PMI is therefore crucial. Besides the experimental investigation, numerical codes offer the opportunity to interpret the experimental outcomes, gaining insight into the underlying physical mechanisms. Once validated, codes also acquire the potential to predict relevant scenarios which are not experimentally accessible. Moreover, the flexibility of LPDs makes them ideal testbeds for the development and validation of numerical tools <cit.>. It is within this line of numerical analysis that the present contribution is placed. Two state-of-the-art codes are exploited for the present modelling activity, i.e. SOLPS-ITER <cit.> and ERO2.0 <cit.>. The first couples a 2D multi-fluid edge plasma solver (B2.5) <cit.> with a 3D kinetic Monte-Carlo code simulating the transport of neutral species (EIRENE) <cit.>. ERO2.0, in turn, is a 3D Monte-Carlo code simulating material erosion by a plasma and the transport and deposition of the released impurities, both at a global and at a local microscopic scale <cit.>. Although the SOLPS package is optimised for the simulation of the Scrape-Off Layer of toroidal devices, it has already been applied to several LPDs <cit.>. Concerning the specific case of non-hydrogenic plasmas in GyM, SOLPS has been successfully applied to both argon (Ar) <cit.> and He plasmas <cit.>. ERO2.0 has also been applied to the study of PMI in LPDs <cit.>, while a microscale morphology analysis of samples exposed to a He plasma in GyM was presented in <cit.>. More recently, a novel comprehensive numerical analysis of PMI-relevant He plasma discharges was presented, in which a coupling procedure between the SOLPS-ITER and ERO2.0 codes was implemented <cit.>. In that work, the He plasma was first generated with SOLPS, yielding 2D distributions of electron and ion quantities. These distributions were then exploited as inputs, or background, for the ERO2.0 simulation of the global erosion of the GyM internal walls and of impurity transport and deposition as a function of the wall material and applied bias voltage, establishing a one-way coupling between the two codes <cit.>. This work represented a first step towards the integrated plasma-material modelling of a realistic PWI experiment in GyM, yet it left several points open for further studies. Indeed, neither the plasma profiles nor the erosion outcomes were validated against experimental data. Also, neither the plasma nor the material simulations accounted for the presence of a central sample-holder and its manipulator within the geometry of the device. Finally, the erosion simulations considered the walls of the device to be made of either iron (Fe) or copper (Cu), rather than of realistic stainless steel (SS), since the ERO2.0 database did not include the sputtering yield for a He plasma on SS at that time. The present work intends to extend the analysis initiated in <cit.>, i.e.
to present a coupled SOLPS-ITER and ERO2.0 numerical investigation of the plasma and of global material erosion in GyM, while partially overcoming the limitations of the previous study. To this end, realistic PMI experimental scenarios featuring He plasma discharges have been considered as a reference and exploited for the validation of the SOLPS numerical results on the plasma side. For the first time, the effect of including long-lived metastable states among the neutral population of a pure He plasma in SOLPS runs is assessed. An optimised He plasma background has then been used as input for the ERO2.0 simulations. When defining the GyM geometry for ERO2.0, a sample-holder in which samples may be arranged for exposure to the plasma has been included. Erosion, transport and deposition of impurities originating both from the walls of the device and from the sample-holder have been evaluated as a function of the applied bias voltage and of the wall material, now including the SS composition. This work is organised in four main sections as follows: section <ref> describes the reference experimental framework on which the paper is based; section <ref> discusses the setup and major results of the SOLPS-ITER numerical investigation of the He plasma; section <ref> does the same for the ERO2.0 material erosion simulation; and section <ref> summarises the main outcomes of the work and discusses possible continuations. § REFERENCE EXPERIMENTAL FRAMEWORK A series of six PMI-relevant GyM discharges is considered as a reference experimental framework for the validation of the plasma simulations. They all feature He as the main plasma species, with a He puffing strength of 42 sccm provided through the inlet valve at one of the bases of the SS cylindrical vessel (radius and length of R_0=0.125 m and L_0=2.11 m). Two turbo-molecular pumps guarantee a pumping speed of 500 L/s each, and external power is supplied by one of the two available ECRH gyrotron sources at 2.45 GHz and 3 kW nominal power ("SM" source with reference to <cit.>, connected to port 2U in figure <ref>). The axial magnetic field is generated by the surrounding coils, and the six experiments are grouped and labelled into three pairs according to the intensity of the current flowing in these coils (I_coil): 560 A, 600 A and 640 A respectively. Experimentally, the 600 A coil current configuration is the one usually exploited for conducting PMI experiments in GyM <cit.>. Conversely, 560 A and 640 A are the minimum and maximum coil currents for which the resonant surfaces are located inside the vacuum vessel: experiments at these I_coil were conducted for plasma characterisation through spectroscopic techniques and Langmuir probes rather than for PMI. For each value of I_coil, two experiments were executed, in the presence and absence of a sample-holder (SH) in which samples for PMI studies could be arranged (see figure <ref> below). The electron density n_e and temperature T_e radial profiles were acquired by three Langmuir probes (LP) located at axial positions 3U, 4U and 5U respectively (see figure <ref>). When present, the SH was inserted on the axis of the cylinder by means of a manipulator at axial position 4U, thus replacing the corresponding LP. At present, the only available experimental data concern plasma parameters: no measurements of the global erosion and deposition within the GyM vessel are available at this stage.
Figure <ref> shows the experimental n_e and T_e radial profiles for the three pairs of discharges at the positions of the LPs. In all cases, the electron density is of the order of 10^16 m^-3, although higher in the 600 A case, with relative maxima at R = 0 (centre of the machine) and R ≃ 6 cm. The electron temperature is of the order of 5-10 eV. Comparing homologous discharges in the presence and absence of the SH (dotted and solid lines of the same colour respectively, in figure <ref>), the only relevant difference observed concerns the n_e profiles measured by LP 5U, which display a marked decrease at values of the radial coordinate ≲ 3 cm, i.e. at the centre of the device, when the SH is inserted. Indeed, LP 5U is situated behind the SH, i.e. on the opposite side of the SH with respect to the positions of the gas puff valve and the ECRH resonant surface (see figure <ref>): the observed suppression of the n_e profiles is thus reasonably due to the shadowing effect associated with the presence of the SH itself. For all plasma experiments, the values of neutral pressure measured by the two hot-cathode gauges at sections 3 and 5 of the device lie in the range of 0.09-0.1 Pa: this is essentially independent of the magnetic field configuration but is rather fixed by the volume of the vessel once the gas puff and pumping speed are set. As concerns the setup of the plasma simulations described in section <ref>, the possible presence of the central SH is neglected; more details about this methodological choice are presented in subsection <ref>. On the contrary, the presence of the SH is included in the global erosion simulation, as will be analysed in section <ref>. § SOLPS-ITER PLASMA SIMULATION The present section focuses on the numerical investigation of the GyM plasma. In subsection <ref> the simulation setup and main modelling hypotheses are discussed, while the outcomes of the analysis are summarised in <ref>. The latter is further divided into <ref>, about a parametric sensitivity scan, <ref>, in which the model is validated against experimental data, and <ref>, in which the effect of including He metastable states in SOLPS runs is assessed and compared with a point model for the GyM plasma <cit.>. §.§ Simulation setup The first step towards the SOLPS-ITER plasma simulation is the construction of the computational mesh, which for linear machines starts from the reconstruction of the magnetic equilibria for each discharge, according to the procedure described in detail in reference <cit.>. The resulting B2.5 and EIRENE meshes, structured quadrangular and unstructured triangular respectively, are plotted in figure <ref> for the plasma at I_coil = 600 A. Since the magnetic configuration is almost purely axial in all cases, no significant differences in the appearance of the computational meshes are observed for the three different I_coil. The computational meshes displayed cover only one half of the physical domain, and cylindrical symmetry around the axis of the machine is assumed. Also note that, while the EIRENE neutral mesh covers the entire volume of the device upon symmetrization, the B2.5 field-aligned plasma mesh extends between the two bases (targets) of the cylinder axially, and to the first point of tangency radially, i.e. not to the physical lateral wall of the device. All plasma simulations include He+ and He++ as ion species and ground-state neutral He; the standard SOLPS reaction database for a pure He case is implemented.
However, for the simulations discussed in subsection <ref> only, long-lived singlet and triplet He metastable states are also included among the neutral species in addition to the ground state. For such runs, a correspondingly wider reaction database is implemented, as described in more detail in <ref>. A gas puff strength of 1.88 × 10^19 neutral He atoms/s is set to match the experimental puffing conditions. All the wall surfaces except those corresponding to the turbomolecular pumps are considered saturated with He atoms; their particle absorption probability is thus set to p_a = 0. Conversely, a particle absorption probability of p_a = 0.013 is set at the EIRENE pumping surfaces to mimic the experimental pumping speed[The particle absorption probability, or albedo, at the pumping surfaces p_a is set to model the effect of the turbomolecular pumps. Its value is related to the turbo-pump speed S (Ls^-1) by: S = A × p_a × 3.638 ×√(T/m), where T(K) is the temperature and m(amu) is the mass of the neutral species. In the formula, A(cm^2) is the effective area of the pumping surface: since axial symmetry is assumed in EIRENE, the area of each pumping surface to be considered for the computation of p_a is A = 2 π r_p L_p, where r_p and L_p are the distance from the axis and the width of the pumping surfaces <cit.>.]. Drift effects are neglected entirely. The cross-field transport is modelled as diffusive, which requires the specification of the ion diffusivity D_n and the ion/electron heat diffusivities χ_i,e, treated as free input parameters. Note that, depending on I_coil, the axial position of the resonant surface - i.e. the locus where the electron cyclotron frequency matches the gyrotron frequency of 2.45 GHz and energy is favourably absorbed by electrons, which happens for a magnetic field intensity of 87.5 mT - is displaced (see figure <ref>). Therefore, the external power delivered to the plasma is modelled as an energy source for electrons distributed around the axial position z_res of the ECRH resonance. In all simulations, the power density axial profile is modelled as a Gaussian centered at z_res with fixed width σ=0.1 m. Although the nominal power of the ECRH source is fixed, the efficiency with which energy is absorbed by the electron population is unknown. Therefore, the actual amount of power P_ext absorbed by the plasma (i.e. the 3D integral of the power density distribution) is treated as a free input parameter. In the absence of experimental data or first-principle simulations of the plasma-microwave interaction, the radial distribution of the external power density source is also treated as a further degree of freedom. The modelling approach devised in this work includes either constant or Gaussian radial distributions, or a linear combination of the two (figure <ref>). Again, any radial distribution of the transport coefficients or power source is specified only on one half of the computational domain (i.e. for R>0) and cylindrical symmetry is assumed. As concerns the boundary conditions, sheath boundary conditions are enforced at the bases of the cylinder. On the symmetry axis of the cylinder, zero particle and energy fluxes, as well as a zero parallel velocity gradient for the parallel momentum equation and zero current for the potential equation, are imposed.
On the lateral boundary, a zero parallel velocity gradient and zero current are fixed, together with a leakage condition prescribing particle and energy outflows suppressed by a factor α_leak=1 × 10^-3 with respect to the corresponding flows at the plasma sheath entrance. Although some of the experimental setups described in section <ref> include the presence of a sample-holder along the plasma stream for the exposure of selected samples in PMI experiments, the plasma simulations performed, described and validated against experimental data in the following sections do not account for its presence. This choice is dictated by the limitations of the adopted B2.5 mesh generator and by technical restrictions in SOLPS-ITER when modelling linear geometries, which require the specification of no more than two targets where the contact between the plasma and the solid boundary occurs. This constraint prevents the straightforward construction of a regular plasma computational mesh covering the entire volume of the device and including the sample-holder. This is possible in principle, as demonstrated by the work of Rapp et al. <cit.>, where modifications to the SOLPS5.0 code were made to accommodate such a geometry, but it is beyond the scope of the present paper. A straightforward alternative to modifying the code would be to restrict the B2.5 computational mesh to the portion of the device volume between one of the two bases and one of the two faces of the SH, namely the portion containing the gas puff valve and the ECRH resonant surface. However, this approach would exclude the region behind the SH, where - according to the experimental data discussed in <ref> - the only significant differences with respect to the plain scenario are expected, and it is therefore disregarded. In summary, as far as the overall plasma analysis is concerned, the presence of the SH is neglected. However, it will be properly taken into account in the following investigation of material erosion (section <ref>). §.§ Results §.§.§ Parametric scan As a first outcome of the numerical investigation of the He plasma in GyM, a sensitivity scan of the code's free parameters (enumerated in <ref>, i.e. the anomalous cross-field particle and heat transport coefficients and the external power absorbed by the plasma) is presented. In each column of the left portion of figure <ref>, the impact of tuning one input parameter at a time is assessed for the I_coil = 600 A scenario on the electron density and temperature radial profiles evaluated at the axial position of LP 4U (similar results hold for the 560 A and 640 A scenarios as well). The values assumed by the parameter being scanned are reported in the legend, while the values kept fixed for the other input parameters are listed at the bottom of each column. * Particle diffusivity D_n. An increase in the anomalous cross-field particle diffusion coefficient (figure <ref>a) is reflected in a smoothing of the electron density gradient from the periphery of the device inward, and in a corresponding peaking of the electron temperature profile for D_n ≲ 5 m^2/s. For D_n ≳ 5 m^2/s, an increase in D_n shifts the profiles to lower densities and higher temperatures, respectively, at constant shape. The figure also shows the outcome of a simulation with D_n ∝ r^2 ranging between 1 m^2/s and 10 m^2/s - which will be used to achieve better agreement with the experimental data in section <ref>. * Heat diffusivities χ.
A change in the ion and electron heat diffusivities (figure <ref>b) has a negligible impact on both the electron density and temperature profiles, which is consistent with the fact that the GyM plasma conditions - characterised by low densities and small parallel temperature gradients - fall into the sheath-limited regime, dominated by convection rather than conduction <cit.>. * External power source P_ext. An increase in the strength of the external power source (figure <ref>c) results in an overall increase of the electron density profile at constant electron temperature. This is consistent with the GyM plasma being a low-density one: the extra power supplied is spent on increasing the plasma ionization fraction rather than on heating up the electrons. Moreover, at a fixed total amount of external power supplied, modulating the radial distribution of the power density source - according to a linear combination of constant and Gaussian functions - results in a local increase of electron density and temperature at the peaks of the distribution. According to figure <ref>, for the 640 A coil current scenario only, two resonant surfaces along the cylinder axis - and two corresponding values of z_res - may be identified, in sectors 2 and 6 of the device (with reference to figure <ref>): in principle, the absorption of external energy occurs at both axial positions. The right portion of figure <ref>d compares the electron density and temperature profiles yielded by three simulations in which the same total input power (i.e. P_ext = 200 W) is split in different ratios between the two resonant surfaces (0-200 W, 100-100 W, 200-0 W, where the first number is the amount of external power deposited at sector 2 and the second at sector 6). The temperature profiles are coincident, and only limited differences in the magnitude of the electron density profiles, at constant shape, are observed: n_e evaluated at the axial position of LP 4U increases as the amount of external power deposited at the second resonance in sector 6 increases from 0 to 200 W. This is true not only at the axial position of LP 4U, but at all axial positions. For simplicity and for consistency with the other coil current scenarios, in the continuation of the analysis the total amount of power - treated as a free parameter - will be modelled as absorbed only at the resonance in sector 2, i.e. on the gas puff side of the device. §.§.§ Validation In the present subsection, a validation of the numerical results against experimental data is proposed. This is meant (i) to provide a realistic plasma background for the subsequent ERO2.0 erosion analysis, (ii) to gain physical insight into the possible mechanisms responsible for the observed experimental profiles, mostly in relation to the different magnetic field configurations taken into account, and (iii) to complement the dataset and enrich the characterisation of the GyM He plasma. The parametric scan presented in subsection <ref> pointed out that, among the plasma parameters for which experimental data are available, the electron density profiles are the most sensitive to a change in the input parameters. By contrast, the values of the electron temperature - especially at the centre of the device - depend only weakly on the free parameters and are rather fixed by the throughput strength, i.e. the overall balance of gas puffing and pumping.
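As a back-of-the-envelope illustration of this throughput balance, the sketch below converts the 42 sccm He puff quoted in section <ref> into an atom rate and estimates the equilibrium neutral pressure as the ratio of the gas throughput to the total pumping speed of the two 500 L/s pumps. Treating standard conditions as 0 °C and 101325 Pa and neglecting conductances and wall effects are assumptions of this estimate, which is why it falls slightly below the measured 0.09-0.1 Pa while reproducing the 1.88 × 10^19 atoms/s puff rate used in the simulations.

# Zero-dimensional puff/pump consistency check (illustrative, not part of SOLPS).
N_LOSCHMIDT = 2.687e25   # molecules per m^3 at 0 degC, 101325 Pa
P_STANDARD = 101325.0    # Pa

def sccm_to_atoms_per_s(sccm):
    std_m3_per_s = sccm * 1e-6 / 60.0          # standard cm^3/min -> m^3/s
    return std_m3_per_s * N_LOSCHMIDT

def sccm_to_throughput(sccm):
    """Gas throughput Q in Pa*m^3/s."""
    return sccm * 1e-6 / 60.0 * P_STANDARD

def equilibrium_pressure(sccm, pump_speed_l_s, n_pumps=2):
    """Zero-D estimate p = Q / S_eff, with S_eff the total pumping speed."""
    s_eff = n_pumps * pump_speed_l_s * 1e-3    # L/s -> m^3/s
    return sccm_to_throughput(sccm) / s_eff

if __name__ == "__main__":
    print(f"puff rate: {sccm_to_atoms_per_s(42):.2e} atoms/s")                  # ~1.9e19
    print(f"estimated neutral pressure: {equilibrium_pressure(42, 500):.3f} Pa")  # ~0.07 Pa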
Operatively, an optimum set of input parameters (cross-field transport coefficients, external power absorbed by the plasma and radial distribution of the external power density) is identified for each I_coil scenario to provide a satisfactory qualitative and quantitative reproduction of the experimental density profiles, and the temperature ones are obtained accordingly. In all three cases, a parabolic profile is imposed for particle diffusivity (i.e. D_n ∝ r^2), such that D_n(R=0)∼ 1 m^2/s and D_n(R=R_0)∼ 10 m^2/s, at fixed and radially uniform heat diffusivity χ_i,e=1 m^2/ s. The magnitude of the external power input required to match the experimental profiles for the three coil current intensity cases needs to be different (P_ext = 200 W, 300 W and 250 W respectively). Also, seeking for a satisfactory agreement with the density profiles - which all show relative maxima at R = 0 and R ≃ 6 cm (see <ref>) - a radial distribution for the external power density source is introduced (namely a linear combination of a constant and two Gaussian functions in the form: ρ(r)= N ×{ c_0 + ∑_i=1^2 c_i exp [- (r-r_i )^2/σ_i^2 ] }, with c_0, c_i, r_i, σ_i free parameters and N normalization factor to ensure that the volume integral of the power density distribution matches the desired external power P_ext). Physically, this suggests that the efficiency of ECRH-plasma coupling and energy transfer is largely dependent on the specific magnetic field configuration, which in turn can promote the absorption of energy at selected radial spots, where higher electron density is found. In the absence of specific experimental data, a first-principle numerical simulation of wave propagation in the GyM magnetized plasma by means of ad-hoc multiphysics codes could provide further physical insight, but this is beyond the scope of this paper. The results and a summary of the adopted parameters are visible in figure <ref>. The numerical temperature profiles are coincident at the three LP axial positions; however, while they are in good agreement with the corresponding experimental ones for the 640 A scenario, they tend to underestimate them in the 600 A and even more in the 560 A scenario. For the 600 A case, the dotted line in figure <ref> shows the reference plasma background used for the erosion analysis presented in section <ref>. This simulation uses the same anomalous transport coefficients enforced in the solid-line simulations (D_n ∝ r^2 in the range of 1-10 m^2/s, χ_i,e = 1.0 m^2/s), with a radially uniform external electron energy source of 400 W. The numerical results well reproduce the observed radial n_e and T_e overall trends, although the density peak at R ≃ 6 cm is not reproduced. However, note that the radial portion of plasma relevant to the ERO2.0 erosion study is the one striking the SH, which extends well below the radial position of the n_e peak. Therefore, these simplified numerical profiles will constitute the plasma background for the ERO2.0 erosion simulation discussed in section <ref>. By considering the latter simplified 600 A model with some detail from now on (similar considerations hold for the other scenarios as well), the 2D distributions of neutral He, He+, He++ densities and of neutral He pressure are plotted on a longitudinal section of the device. The density of neutral He is about three orders of magnitude greater than the density of He+, which is in turn five orders of magnitude greater than the one of He++, which determine an ionization fraction α≃ 0.002. 
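For the reader's convenience, the quoted ionization fraction can be checked with a one-line estimate; the representative neutral-to-ion density ratio of about 5 × 10^2 used below is an assumption consistent with the orders of magnitude quoted above: α = n_He+/(n_He + n_He+) ≃ 1/(1 + n_He/n_He+) ≈ 1/(1 + 5 × 10^2) ≈ 2 × 10^-3, with the He++ contribution negligible since its density is a further five orders of magnitude lower.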
The presence of He++ may be thus reasonably neglected in the low-density and low-temperature GyM plasma, contrary to the situation typical of tokamaks, where the higher ionization state is generally dominant <cit.>. The neutral pressure distribution is approximately uniform throughout the central volume of the device and is peaked at the bases of the cylinder, where the ions striking the solid surfaces are recycled as neutrals and puffing is performed. Also note that the He pressure provided by the model in correspondence of the pressure gauges is about 0.09 Pa: this value is totally compatible with the experimental one (see <ref>). §.§.§ Role of metastable states The He atom possesses two long-lived metastable (MS) states: both feature the 1s^12s^1 electronic configuration, and are distinguished by the spin orientation of the two electrons into singlet and triplet states, which will be labeled as He^*(1) (with lifetime τ≃ 20 ms) and He^*(3) (τ≃ 7800 s) respectively <cit.>. In low-temperature, low-density plasma conditions typical of LPDs, MS states and their transport within the chamber could affect the plasma properties. In this work, the possible relevance of including such He MS states among the plasma neutral populations is preliminarily investigated for the first time in a purely He plasma by means of ad-hoc SOLPS-ITER simulations. Operationally, this goal is achieved by splitting the neutral population treated by EIRENE into three separate species (the ground state He and the two MS states) and enlarging the reaction database to include the MS-resolved ones extracted from the AMJUEL database and reported in table <ref> <cit.>. Figure <ref> compares the electron density and temperature radial profiles at a fixed axial position yielded by two simulations, MS-unresolved and resolved respectively. The input parameters set in the two cases are those of the simplified model for the I_coil= 600 A case presented in <ref> (dotted red lines in figure <ref>: D_n ∝ r^2 in the range of 1-10 m^2/s, χ_i,e = 1.0 m^2/s, P_ext = 400 W uniformly distributed radially). The inclusion of MS neutral states yields a slightly higher electron density profile, acting (check the power scan in <ref>) as if a larger amount of power was supplied to the plasma: the presence of He MS species thus appears to promote ionization processes and increase the overall ionization fraction. Conversely, the two electron temperature profiles are practically indistinguishable. Also, the distribution of both MS species provided by Eirene is approximately uniform throughout the chamber. Nevertheless, in the absence of a proper spectroscopic diagnostics, the actual relevance of He MS states in GyM plasma goes undetected under the experimental point of view, and the resolved model may not be validated. As a partial compensation for such limitation, a comparison between the outcomes of the SOLPS-ITER runs of figure <ref> and the results of a simplified MS-resolved 0D plasma model is proposed. Such 0D model was developed in <cit.> extending the MS-unresolved counterpart firstly presented in <cit.>, and its main features are retraced in Appendix A. In the 0D model, the experimental values of gas puff and pumping speed (42 sccm and 500 Ls^-1 respectively) are set, a decay length for density of λ_n = 5 cm is set coherently with the choice made in reference <cit.>. Concerning the cross-field diffusion coefficient, the SOLPS simulations validated in section <ref> all assumed a radially variable profile. 
In a 0D treatment, any spatial dependence is suppressed, and the specification of a single value of the diffusivity D_⊥ is required. Therefore, a scan over the value of the diffusion coefficient is performed to investigate the effect of its choice on the outcomes. Clearly, the strongly different hypotheses on which SOLPS and the 0D model are based make it impossible to perform a direct quantitative benchmark between the two. Most importantly, the 0D model includes one MS species only and is based on the ADAS database, while SOLPS treats both MS states and is based on the AMJUEL database. However, the global 0D model is a useful simplified tool to compare relative trends between MS-resolved and unresolved cases. The results of the comparison are summarised in figure <ref>. The bars report the values of the mean metastable and electron density and temperature for the two SOLPS simulations - MS-unresolved and resolved respectively - whose outcomes are plotted in figure <ref> according to the legend. Conversely, the line plot in the centre shows the same quantities as predicted by the 0D point model as a function of the input diffusion coefficient D_⊥. As a first observation, both the SOLPS simulation and the 0D model - independently of the value of D_⊥ set - predict a negligible increase in the electron temperature T_e when turning from the unresolved to the resolved case. Also, the ratio between the mean MS density and the mean electron density foreseen by the 0D model increases from ∼ 20 % to ∼ 50 % as D_⊥ increases from 0.5 to 10 m^2/s: the ratio at D_⊥ = 5 m^2/s is ∼ 30 % and is consistent with the one predicted by the SOLPS simulation. Turning from the unresolved to the resolved model in SOLPS yields an increase in the mean n_e of ∼ 20 %. As far as the point model is concerned, the magnitude and the sign of the variation in n_e following the introduction of the MS population appear to depend on D_⊥. For a choice of D_⊥ = 10 m^2/s, an increase of ∼ 20 % in n_e is observed, which is compatible with the SOLPS results. However, lower values of D_⊥ are associated with a decrease of n_e upon the introduction of MS species. Overall, although no rigorous validation may be provided at this stage, one may argue that the current implementation of He MS populations in SOLPS-ITER is at least partially supported by the compatibility of its results with those of a much simpler 0D model. Although assessing the role of MS species in a pure He plasma is crucial for the characterisation of the plasma itself, and a validation of the MS-resolved model is urgent, in the following erosion analysis we will continue to refer to the unresolved simulations presented in subsection <ref>, which were validated against experimental data. In the first instance, it appears reasonable to assume that the actual role of MS states in GyM plasma is scarcely relevant as far as the numerical investigation of PMI is concerned. In fact, since erosion results from the impact of energetic ions on solid surfaces, and since the external power source is also treated as a free parameter, once the numerical electron density profiles match the experimental ones it makes little difference whether this is achieved without MS states among the neutral population and slightly higher input power, or with MS states and slightly lower input power. § ERO2.0 GLOBAL EROSION SIMULATION The present section focuses on the material side, i.e.
it presents the ERO2.0 simulations performed to assess the erosion of plasma-facing structures in GyM, of PMI-relevant tungsten (W)-based samples exposed to the plasma stream and the deriving deposition. The main goals, in this respect, are (i) the comparison between the outcome of simulations based on simplified assumptions for the wall material with respect to the full steel composition, (ii) the assessment of the effect related to SH presence on wall erosion and, conversely, (iii) the role of wall impurities on sample erosion. Note that, although no experimental data concerning material erosion are available for model validation at this stage, the results of this work will guide the design of an experiment devoted to the collection of erosion-deposition data, to be performed by placing collectors at selected spots of the vessel. §.§ Simulation setup As far as the simulation setup is concerned, the 3D GyM wall geometry is reproduced (figure <ref>) and the 2D distribution of relevant plasma parameters (electron and ion density and temperature, plasma flow) yielded by the simplified SOLPS-ITER simulation at I_coil = 600 A - taken as a reference and discussed in detail in the second part of <ref> - is imported as a plasma background, assuming cylindrical symmetry. Note that, according to the observations made in <ref>, He+ may be safely considered as the only relevant ion species, thus excluding the He++ contribution from the input plasma background. Moreover, the aforementioned sample-holder is included in the definition of the simulation domain at fixed plasma background, figure <ref> showing its geometry and materials. In all simulations, sputtering yields are retrieved from the SDTrimSP database available in ERO2.0. The energy distribution of sputtered species is modelled as a Thompson one, while their angular distribution is assumed to be a function of the incidence angle of the impinging ion, with the forward direction with respect to it being favoured <cit.>. A first parameter which has been varied in the simulations is the wall material. Although the GyM vessel is made of AISI 304L SS, the previous study treated it as made of either pure Fe or pure Cu <cit.>. Such a choice was dictated by a lack of sputtering yield data for He ion species on chromium (Cr) and nickel (Ni), i.e. the main alloying elements of SS, and their similarity - in terms of physical properties - to Cu, for which a much wider database was available and thus taken as proxy <cit.>. Since then, the dataset has been enlarged by means of SDTrimSP simulations, and this work considers and compares pure Fe, pure Cu and realistic AISI 304L SS (73% Fe, 18% Cr, 9% Ni) as possible wall materials: the results of this investigation are presented in <ref>. Secondly, the possibility of setting a bias voltage (V_bias) to the SH is foreseen. Applying a tunable bias voltage to the SH enables to modulate the kinetic energy of the plasma ions impinging onto it. The application of a bias voltage in experiments influences the conditions in the sheath. However, since SOLPS-ITER background plasma only extends up to the sheath entrance, no influence of the bias voltage on the plasma parameters imported in ERO has been considered. For V_bias = 0, the energy of impinging ions depends solely on the plasma potential, which is about 20 V. Although the bias voltage applied is always negative (thus accelerating impinging ions), we will always refer to its absolute value for simplicity. 
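As an aside, the Thompson energy distribution mentioned above, f(E) ∝ E/(E+E_b)^3 with E_b the surface binding energy, can be sampled by inverting its cumulative distribution F(E) = (E/(E+E_b))^2. The sketch below is only illustrative: the binding-energy value is a placeholder and the high-energy cutoff that a code like ERO2.0 may apply is omitted.

import numpy as np

def sample_thompson(n, E_b, rng=None):
    # Inverse-CDF sampling of an untruncated Thompson spectrum
    # f(E) ~ E/(E+E_b)^3, whose CDF is F(E) = (E/(E+E_b))^2, so that
    # E = E_b*sqrt(u)/(1-sqrt(u)) for u uniform in (0, 1).
    rng = np.random.default_rng() if rng is None else rng
    s = np.sqrt(rng.uniform(0.0, 1.0, n))
    return E_b * s / (1.0 - s)

E_b = 8.7                            # eV, placeholder binding energy (W-like)
energies = sample_thompson(100_000, E_b)
print(np.median(energies))           # ~2.4*E_b; the most probable energy is E_b/2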
The V_bias values applied to the SH in these simulations reflect the ones usually set experimentally, in the range 0 - 320 V. As in the experiments, the bias voltage is applied only to the samples and to the molybdenum (Mo) mask. Also note that the tantalum (Ta) mask, being unbiased during the experiments, is expected not to be eroded, and it has been modelled as made of W due to the lack of sputtering yield data for all material combinations with Ta and in order to reduce the number of different species in the simulations. The choice of W is justified by the high sputtering threshold for He on Ta (∼ 90 eV), which is comparable with the one on W (105-110 eV) <cit.>. §.§ ERO2.0 results §.§.§ Wall material analysis In this section, the effect of the wall material choice on GyM wall erosion is investigated. Three different wall material compositions have been considered: (i) pure Cu, (ii) pure Fe and (iii) the real AISI 304L SS. For each one of these, five different values of V_bias have been set, namely 0, 50, 120, 200 and 320 V, in agreement with the ones commonly adopted in experiments. The dependence of the SDTrimSP sputtering yields used in this work on projectile energy and incidence angle is reported in figure <ref>. Due to the numerous combinations of projectile-target materials (36), only the most relevant ones are shown, namely He, as the main plasma species, and Mo, as the main eroded species, projectiles. For the SS case, according to the local incoming flux of He+ ions and eroded particles, the code evaluates the sputtering yield for each element separately, and the eroded flux is then weighted by the material composition. No composition evolution is considered in this first attempt. The estimated gross erosion of the GyM walls, i.e. the number of sputtered atoms per unit area and time without considering their possible deposition, as a function of the wall material with no bias voltage applied to the SH and expressed in [atoms · m^-2s^-1], is: (i) 7.92× 10^17 for pure Cu, (ii) 5.28× 10^16 for pure Fe, and (iii) 3.86× 10^16 for SS. For the case of the SS wall, the overall gross erosion arises from the sum of the individual gross erosion contributions associated with each element in the SS composition. The pure Cu wall presents a much higher gross erosion with respect to both pure Fe and SS, whose behaviours are comparable. This hierarchy is in accordance with the relative magnitude of the sputtering yields reported in figure <ref>, where Cu presents the highest erosion for both He and Mo projectiles. Note that the observed trend is unchanged also at larger V_bias, for which a few percent increase in the global gross erosion is obtained for all wall materials investigated: this is consistent with the higher kinetic energy acquired by plasma ions impinging on the SH, which results in a higher amount of energy retained by the Mo and W atoms eroded from the SH itself, eventually impinging on the surrounding walls and fostering their erosion. The overall 30% increase in gross erosion for Fe with respect to SS walls may be ascribed to the presence of Cr in SS, which is characterized by the lowest sputtering yield. Indeed, about 80% of the SS gross erosion is due to Fe, with a 10% contribution due to Cr and Ni each: the Fe surface fraction will thus decrease as the fluence increases and erosion proceeds, in favour of a Cr-enriched surface. This could further reduce the overall SS gross erosion.
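The composition weighting described above for the SS wall amounts to a short bookkeeping step, sketched below; the per-element yield functions and the numerical values are placeholders standing in for the SDTrimSP look-up tables, and no surface-composition evolution is included, consistently with the simulations.

SS_COMPOSITION = {"Fe": 0.73, "Cr": 0.18, "Ni": 0.09}   # AISI 304L SS

def gross_erosion_flux(incident_flux, energy, angle, composition, yield_table):
    # Total sputtered flux [atoms m^-2 s^-1] and per-element breakdown:
    # each elemental yield is weighted by the surface concentration.
    per_element = {}
    for element, fraction in composition.items():
        Y = yield_table[element](energy, angle)   # sputtering yield [atoms/ion]
        per_element[element] = fraction * Y * incident_flux
    return sum(per_element.values()), per_element

# Illustrative constant yields for He projectiles (placeholder numbers only).
fake_yields = {"Fe": lambda E, a: 1e-2, "Cr": lambda E, a: 5e-3, "Ni": lambda E, a: 8e-3}
total, parts = gross_erosion_flux(3.5e20, energy=140.0, angle=0.0,
                                  composition=SS_COMPOSITION, yield_table=fake_yields)
print(total, parts)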
Considering the GyM vessel as entirely made of Fe is thus expected to yield a reasonable upper limit for the wall erosion, while limiting the complication in the interpretation of the simulation results deriving from the large number of projectile-target combinations for SS. Therefore, this modelling approach will be adopted in the continuation of this work. §.§.§ Effect of sample-holder on GyM wall erosion In the previous work <cit.>, a proof of concept of the coupling between SOLPS-ITER and ERO2.0 in a linear device (GyM) was presented. In this work, the objective is to move forward in the direction of modelling a real PMI experiment, thus including also the SH in the ERO2.0 simulation domain. Due to its position in the middle of the device, impurities eroded from the SH are likely to impinge onto the lateral wall, which in turn may undergo erosion. Note that the magnetic field lines are parallel to the lateral wall itself: according to the leakage boundary conditions enforced in the setup of the plasma simulations, the ion radial flux at the lateral boundary of the plasma computational mesh is 1/1000 of the corresponding one at the entrance of the sheath which builds up in front of a solid surface orthogonal to the magnetic field lines. Therefore, the lateral wall erosion is mainly due to impurities eroded elsewhere, with the SH possibly playing a major role as it is the only biased component during GyM experiments. All the other parts of the GyM wall are not substantially influenced by the presence of the SH and thus will not be considered in this section: as a reference, at maximum V_bias, impurities eroded from the samples and SH are responsible for just ∼ 1 % of the gross erosion of the cylinder bases, which may be practically fully ascribed to the striking of plasma ions. Figure <ref> shows the distribution of the gross erosion of the GyM lateral wall in different conditions, namely without the SH and with the SH inserted, at the lowest (0 V) and highest (320 V) V_bias. Complementarily, figure <ref> presents the relative contributions of the different species (i.e. eroded impurities) to the erosion of the lateral wall as a function of the bias voltage applied to the SH. Again, He plasma ions are not included since they do not strike the lateral wall and cause its direct erosion. For V_bias = 0, the influence of the SH on the lateral wall erosion is limited and concentrated in the proximity of the SH itself. In this condition, only Fe from the SH support and cylinder bases is eroded and may in turn erode the lateral wall, while Mo and W are not: the contribution from the Fe SH support (see figure <ref>) is then what differentiates the cases with and without the SH in figure <ref>. As V_bias is increased to 50 V, the kinetic energy of plasma ions overcomes the sputtering threshold for Mo. Eroded Mo impurities in turn retain sufficient energy to largely dominate the erosion of the lateral wall and increase its absolute value by one order of magnitude (figure <ref>). At progressively higher V_bias, the magnitude of the lateral wall erosion keeps increasing and remains firmly dominated by Mo. W eroded from the samples plays a marginal role only for V_bias > 120 V, when the sputtering threshold for He on W is overcome. According to the colorbars in figure <ref> and the table in figure <ref>, the difference in both local and global gross erosion between the lowest and highest V_bias reaches a few orders of magnitude.
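The bias dependence just described boils down to a simple threshold bookkeeping, sketched below under the rough assumption that the impact energy of a singly charged He+ ion is e(|V_bias| + V_p), with a plasma potential V_p ≈ 20 V and no sheath or thermal corrections; the W and Ta thresholds are the values quoted earlier, while the Mo value is a placeholder chosen to be consistent with Mo erosion switching on between 0 and 50 V of applied bias.

THRESHOLDS_EV = {"Mo": 45.0, "Ta": 90.0, "W": 107.0}   # Mo value is a placeholder

def sputtered_by_he(v_bias, v_plasma=20.0):
    # Materials whose sputtering threshold is exceeded by He+ ions
    # accelerated through the sheath at a given applied bias (absolute value).
    e_impact = abs(v_bias) + v_plasma                  # eV, for He+ (charge 1)
    return [m for m, thr in THRESHOLDS_EV.items() if e_impact > thr]

for v in (0, 50, 120, 200, 320):
    print(v, sputtered_by_he(v))
# Expected pattern: nothing at 0 V, Mo from 50 V, W (and Ta, if it were biased)
# from about 120 V onwards.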
§.§.§ Effect of eroded materials on sample erosion Once investigated the effect of SH inclusion on the erosion of GyM walls, the influence of wall impurities on the sample erosion is assessed. This is particularly relevant for the possible interpretation of experimental data, since both the increased erosion caused by wall impurities and their deposition on samples could influence experimental outcomes of mass loss measurements. Figure <ref> shows the contribution of the various impinging particles to the gross erosion of samples as a function of the bias voltage applied to SH. In this case, He plasma is included, since samples are directly eroded by it. As for lateral wall, only Fe components are eroded by He plasma with no bias, thus only Fe neutrals and ions possess enough energy to slightly contribute to sample erosion. For V_bias = 50 V, Mo mask starts to be eroded by the plasma, and it immediately becomes the major contributor to sample erosion. Indeed, both Mo eroded from the mask and W eroded from samples may be ionized in plasma, be captured by axial magnetic field and come back to sample-holder due to plasma flow. As soon as He plasma ions, whose flux onto the samples is Γ≃ 3.5 × 10^20 m^-2s^-1, are able to directly erode W samples (for V_bias≥ 120 V), they dominate sample erosion: this is consistent with the fact that the eroded impurities account for just ∼ 0.3 % of the total particle flux striking the samples, which is thus largely dominated by the plasma itself. However, it should be noted that around the sputtering threshold for He on W, i.e. for V_bias = 120 V, a 15-20% of W sample erosion is still caused by impurities eroded from Mo mask. The contribution due to eroded impurities then decreases to few percent at higher bias in favour of He. Also note that, at every step in the V_bias, the sample gross erosion increases by orders of magnitude. Finally, the fractional deposition of impurities on samples is evaluated in figure <ref>. For each V_bias investigated, the origin (i.e. the erosion location within the vessel) of the deposited species is displayed. For V_bias = 0, the main source of deposition is Fe eroded from the SH support, with a non-negligible contribution due to Fe eroded from bushings <cit.> around the SH (see figure <ref>). As it is evident from the first row of figure <ref>, the bases of the GyM wall are too far to significantly influence sample erosion/deposition. As V_bias increases, Mo coming from the mask plays a dominant role even at the highest V_bias, when its contribution remains higher than redeposition from samples themselves. Looking at table in figure <ref>, it should be noted that deposition on samples is well below the gross erosion for V_bias≥ 120 V, thus only marginally influencing net erosion measurements when bias is high enough for He ions to cause W sputtering. On the contrary, for V_bias < 120 V, samples are in net deposition conditions. § CONCLUSIONS AND FUTURE PERSPECTIVES The study discussed in this paper presents a comprehensive numerical investigation of Plasma-Material Interaction (PMI) in GyM linear plasma device, performed by coupling plasma solver SOLPS-ITER and erosion-deposition code ERO2.0, according to the procedure firstly described in reference <cit.>. To this purpose, few PMI-relevant pure He discharges are taken as a reference experimental framework for plasma background production. 
Based on a preliminary sensitivity scan, which points out that the tuning of SOLPS-ITER free parameters mostly affects the electron density rather than temperature radial profiles, a validation of the numerical plasma profiles against available experimental data is presented for three magnetic field configurations. Optimal input parameters allowing to reproduce experimental Langmuir probe profiles are identified, allowing (i) to recognise the strength and radial shape of the external power source as the major responsible for the difference in the profiles at different coil current intensity, (ii) to generate a realistic plasma background (at I_coil = 600 A) taken as a reference for eventual erosion investigation. Secondly, a novel investigation of the role of neutral long-lived He metastable states is carried out through proper SOLPS simulations. Primarily, the inclusion of MS populations in GyM conditions results in an overall increase of the electron density at constant external power source, which is coherent with the results yielded by the analytical global point model presented in Appendix A of this work for a suitable choice of diffusion coefficient. The outcomes of the reference MS-unresolved SOLPS run are taken as input for ERO2.0 studies of impurity erosion and deposition concerning both the walls of the device and the samples exposed to the plasma stream and arranged in a central sample-holder (SH), as a function of the wall material and of the bias voltage applied to the SH itself. With respect to the preliminary work presented in <cit.>, the present investigation is extended to account for the realistic SS composition of GyM vessel on one hand, and for the presence of the SH in the geometry on the other. The estimation of the gross erosion as a function of the wall material points out that SS behaviour is well approximated by that of Fe only, which is then taken as proxy for the sake of simplicity. Moreover, the SH - mainly through the impurities eroded from the Mo mask - turns out to dominate the erosion of the lateral wall of the device. As far as W-based samples exposed to the plasma stream are concerned, their erosion by He+ species is the dominant mechanism for sufficient bias voltage, although the Mo impurities eroded from the SH also contribute to the erosion of the samples themselves. Finally, the analysis of the impurities deposited on the samples suggests that they mainly originate from the Mo mask as the bias voltage exceeds the threshold for Mo sputtering by impinging plasma ions. Several continuations and future perspectives are envisaged for the present work. On the plasma side, this includes the application of the new Wide Grid SOLPS-ITER release to GyM geometry, possibly enabling to build a computational mesh accounting for the presence of the SH. Also, the investigation of the role of He metastable states is only at the beginning: for instance, further numerical campaigns and more refined theoretical models are required to assess their contributions in tokamaks, possibly supported by experimental data for validation and comparison purposes. Still on the plasma side, the first-principle modelling of wave transport in plasma through proper multiphysics code may shed light on the absorption of external power by electron cyclotron resonance at specific spots of the resonant surface, which in this work is taken as a purely free parameter and adapted to match the experimental profiles. 
On the material side, an experimental campaign is foreseen to fill the current lack of data concerning the erosion and deposition of both the GyM vessel and the samples. The experiment, whose design is supported by the ERO2.0 numerical simulations, will be carried out by placing deposition monitors at selected spots of the device - where higher erosion and deposition rates are expected - and the collected data will in turn allow validation of the outcomes of the numerical simulations presented in this paper. A further possible continuation of the ERO2.0 numerical investigation concerns assessing the effect of residual carbon or oxygen present in the GyM vacuum chamber on the erosion and deposition processes. Finally, the implementation of a back-coupling scheme between the two codes is of interest, such that the ERO2.0 impurity sources are provided as input to second-order SOLPS-ITER simulations. § MS-RESOLVED 0D MODEL FOR GYM PLASMA In this appendix, some details about the MS-resolved 0D model used in section <ref> are reported. Such an analytical model is developed in <cit.> extending the MS-unresolved one presented in reference <cit.>, which is based on the volume averaging of the SOLPS-ITER equations. The GyM plasma is described by a set of space-independent variables, i.e. the density n_p and temperature T_p of each one of the considered populations p. The MS-unresolved model of reference <cit.> included ground state (GS) He as the only neutral species and He+ as the only ion species, thus neglecting the presence of He++ ions and allowing the identification of the ion and electron densities, n_He+=n_e, to ensure neutrality. Moreover, the unresolved reaction rates were obtained by solving the ADAS radiative collisional model neglecting the lifetime of MS states. Conversely, a resolved treatment explicitly introduces a density balance equation for every MS species considered. For simplicity, the 0D model developed in <cit.> and retraced here still considers He+ as the only ion species and a single MS state He^* accounting for both the singlet and triplet states, which thus requires the introduction of a single additional density balance equation. Ion and neutral temperatures are assumed to be constant and equal to the room temperature of 0.025 eV. The reaction rates are extracted from the corresponding ADAS MS-resolved database. The equations of the resolved model are reported below, where the unknowns n_He+ = n_e are the densities of He+ ions and electrons respectively, n_He and n^* are the densities of the ground state neutral He and of the single MS species respectively, and T_e is the electron temperature.

dn_He+/dt = R_iz n_e n_He + R^*_iz n_e n^* - R_rc n_e n_He+ - R^*_rc n_e n_He+ - Γ_i,wall n_He+

dn_He/dt = -R_iz n_e n_He + R_rc n_e n_He+ + X_(MS)→(GS) n_e n^* - X_(GS)→(MS) n_e n_He + Γ_n,recyc n_He+ - Γ_n,pump n_He + Γ_n,puff/V

dn^*/dt = -R^*_iz n_e n^* + R^*_rc n_e n_He+ + X_(GS)→(MS) n_e n_He - X_(MS)→(GS) n_e n^* - Γ_n,pump n^*

3/2 n_e dT_e/dt = P_ext/V_e - E_iz R_iz n_e n_He - (E_iz - E^*_iz) R^*_iz n_e n^* - E_n,rad R_n,rad n_e n_He - E^*_n,rad R^*_n,rad n_e n^* - E^*_iz X_(GS)→(MS) n_e n_He - E_i,rad R_i,rad n_e n_He+ - Γ_e,wall T_e n_e - 3/2 (2m_e/m_i) R_i,el n_He+ n_e (T_e - T_i) - 3/2 (2m_e/m_n) R_n,el (n_He + n^*) n_e (T_e - T_n) - 3/2 (dn_He+/dt) T_e

The meaning of the coefficients which appear in the balance equations above is summarized in table <ref>. Note that it is assumed that puffing and plasma recycling only produce GS He atoms, while the turbo-pumps act as sinks also for the MS population.
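To give an operational feel for the particle-balance part of the model above, a heavily simplified numerical sketch follows: the electron-energy equation is dropped (T_e is frozen), full recycling of ions into ground-state neutrals is assumed, and every rate coefficient and sink/source frequency is a placeholder rather than an ADAS value, so the numbers it produces are purely illustrative.

from scipy.integrate import solve_ivp

# Placeholder coefficients (NOT the ADAS MS-resolved rates of this appendix).
R_iz, R_iz_ms = 1e-15, 5e-15      # ionization from GS / MS state [m^3/s]
R_rc, R_rc_ms = 1e-19, 1e-19      # recombination into GS / MS state [m^3/s]
X_gs_ms, X_ms_gs = 1e-16, 1e-16   # GS <-> MS excitation / de-excitation [m^3/s]
G_wall, G_pump = 2e4, 5.0         # ion-to-wall and pumping frequencies [1/s]
S_puff = 1e20                     # gas puff source [m^-3 s^-1]

def rhs(t, y):
    n_i, n_gs, n_ms = y           # He+, ground-state He, metastable He [m^-3]
    n_e = n_i                     # quasi-neutrality with He+ as the only ion
    dn_i = (R_iz*n_e*n_gs + R_iz_ms*n_e*n_ms
            - (R_rc + R_rc_ms)*n_e*n_i - G_wall*n_i)
    dn_gs = (-R_iz*n_e*n_gs + R_rc*n_e*n_i + X_ms_gs*n_e*n_ms
             - X_gs_ms*n_e*n_gs + G_wall*n_i - G_pump*n_gs + S_puff)
    dn_ms = (-R_iz_ms*n_e*n_ms + R_rc_ms*n_e*n_i + X_gs_ms*n_e*n_gs
             - X_ms_gs*n_e*n_ms - G_pump*n_ms)
    return [dn_i, dn_gs, dn_ms]

sol = solve_ivp(rhs, (0.0, 1.0), [1e16, 1e19, 1e15], method="LSODA")
print(sol.y[:, -1])               # densities at t = 1 s (illustrative only)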
The input parameters of the 0D model are the external power P_ext and the gas puff strength and pumping speed, which determine Γ_n,puff and Γ_n,pump according to the expressions valid for the unresolved point model contained in <cit.>. Moreover, since the expression for the ion-to-wall sink is retrieved from the integration of the SOLPS-ITER equations, it is necessary to specify a radial decay length for the density λ_n - mimicking the decay boundary condition scheme - and a cross-field diffusion coefficient D_⊥ for its computation, according to the formula derived in reference <cit.>. § ACKNOWLEDGEMENTS The authors would like to acknowledge Detlev Reiter, who provided an initial H+He MS-resolved Eirene model. G. Alberti and M. Passoni acknowledge funding from Eni SpA for the PhD program. This work has been carried out within the framework of the EUROfusion Consortium (WP-PWIE), partially funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 — EUROfusion). The Swiss contribution to this work has been funded by the Swiss State Secretariat for Education, Research and Innovation (SERI). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union, the European Commission, SERI or the ITER Organization. Neither the European Union nor the European Commission nor SERI can be held responsible for them.
http://arxiv.org/abs/2407.12518v2
20240717121304
Inertial Methods with Viscous and Hessian driven Damping for Non-Convex Optimization
[ "Rodrigo Maulen-Soto", "Jalal Fadili", "Peter Ochs" ]
math.OC
[ "math.OC", "37N40, 46N10, 49M15, 65B99, 65K05, 65K10, 90B50, 90C26, 90C53" ]
Inertial Methods with Viscous and Hessian driven Damping for Non-Convex Optimization Rodrigo Maulen-Soto, Jalal Fadili, Peter Ochs July 22, 2024[The insight and motivation for the study of inertial methods with viscous and Hessian driven damping came from the inspiring collaboration with our beloved friend and colleague Hedy Attouch before his unfortunate recent departure. We hope this paper is a valuable step in honoring his legacy.] § ABSTRACT In this paper, we aim to study non-convex minimization problems via second-order (in-time) dynamics, including a non-vanishing viscous damping and a geometric Hessian-driven damping. Second-order systems that only rely on a viscous damping may suffer from oscillation problems towards the minima, while the inclusion of a Hessian-driven damping term is known to reduce this effect without explicit construction of the Hessian in practice. There are essentially two ways to introduce the Hessian-driven damping term: explicitly or implicitly. For each setting, we provide conditions on the damping coefficients to ensure convergence of the gradient towards zero. Moreover, if the objective function is definable, we show global convergence of the trajectory towards a critical point as well as convergence rates. Besides, in the autonomous case, if the objective function is Morse, we conclude that the trajectory converges to a local minimum of the objective for almost all initializations. We also study algorithmic schemes for both dynamics and prove all the previous properties in the discrete setting under proper choice of the step-size. Optimization; Inertial gradient systems, Time-dependent viscosity, Hessian Damping, Convergence of the trajectory, Łojasiewicz inequality, KŁ inequality, Convergence rate, Asymptotic behavior. 37N40, 46N10, 49M15, 65B99, 65K05, 65K10, 90B50, 90C26, 90C53 § INTRODUCTION §.§ Problem statement Let us consider the minimization problem Pmin_x∈^d f(x), where the objective function f:^d→ satisfies the following standing assumptions: H_0 f∈ C^2(^d); inf f> -∞. Since the objective function is potentially non-convex, the problem (<ref>) is NP-hard: no algorithm is known that guarantees convergence to a global minimizer of (<ref>) in polynomial time in general. However, there are tractable methods that ensure theoretical convergence to a critical point, or even to a local minimizer; in practice, there are also techniques that (heuristically) reach a global minimizer in some cases. In this regard, a fundamental dynamic to consider is the gradient flow system: GF ẋ(t)+∇ f(x(t)) = 0, t>0; x(0) =x_0. For any bounded solution of (<ref>), using LaSalle's invariance principle, we can check that lim_t→ +∞∇ f(x(t))=0. If d=1, any bounded solution of (<ref>) tends to a critical point. For d≥ 2 this becomes false in general, as shown in the counterexample by <cit.>. In order to avoid such behaviors, it is necessary to work with functions that present a certain structure.
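As a minimal numerical illustration of the gradient flow system above, its explicit Euler discretization is plain gradient descent, and along a bounded trajectory the gradient norm decays, in line with the LaSalle argument. The test function and step size below are illustrative choices only and are not taken from the paper.

import numpy as np

# Explicit Euler discretization of the gradient flow x'(t) = -grad f(x(t)),
# i.e. gradient descent. Test function and step size are illustrative.
def grad_f(x):
    # f(x1, x2) = (x1^2 - 1)^2 + x2^2, a smooth non-convex function
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

x = np.array([0.3, 1.5])
h = 1e-2                              # step size, playing the role of the time step
for _ in range(2000):
    x = x - h * grad_f(x)
print(x, np.linalg.norm(grad_f(x)))   # gradient norm tends to zero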
An assumption that will be central in our paper for the study of our dynamics and algorithms is that the function f satisfies the Kurdyka-Łojasiewicz (KŁ) inequality <cit.>, which means, roughly speaking, that f is sharp up to a reparametrization. The KŁ inequality, including in its nonsmooth version, has been successfully used to analyze the asymptotic behavior of various types of dynamical systems <cit.> and algorithms <cit.>[This list is by no means exhaustive.]. The importance of the KŁ inequality comes from the fact that many problems encountered in optimization involve functions satisfying such an inequality, and it is often elementary to check that the latter is satisfied; real semialgebraic/analytic functions <cit.>, functions definable in an o-minimal structure and more generally tame functions <cit.>. The dynamic (<ref>) is known to yield a convergence rate of 𝒪(t^-1) of the values in the convex setting (in fact even o(t^-1)). Second-order inertial dynamical systems have been introduced to provably accelerate the convergence behavior in the convex case. They typically take the form IGS_γẍ(t)+ γ (t)ẋ(t) +∇ f(x(t))=0, t>t_0, where t_0>0, γ: [t_0,+∞[ →_+ is a time-dependent viscosity coefficient. An abundant literature has been devoted to the study of the inertial dynamics (<ref>). The importance of working with a time-dependent viscosity coefficient to obtain acceleration was stressed by several authors; see <cit.>. In particular, the case γ(t)=α/t was considered by Su, Boyd, and Candès <cit.>, who were the first to show the rate of convergence 𝒪(t^-2) of the values in the convex setting for α≥ 3, thus making the link with the accelerated gradient method of Nesterov <cit.>. For α > 3, an even better rate of convergence with little-o instead of big-𝒪 can be obtained together with global convergence of the trajectory; see <cit.> and <cit.> for the discrete algorithmic case. Another remarkable instance of (<ref>) corresponds to the well-known Heavy Ball with Friction (HBF) method, where γ(t) is a constant, first introduced (in its discrete and continuous form) by Polyak in <cit.>. When f is strongly convex, it was shown that the trajectory converges exponentially with an optimal convergence rate if γ is properly chosen as a function of the strong convexity modulus. The convex case was later studied in <cit.> with a convergence rate on the values of only 𝒪(t^-1). When f is non-convex, HBF was investigated both for the continuous dynamics <cit.> and discrete algorithms <cit.>. However, because of the inertial aspects, (<ref>) may exhibit many small oscillations which are not desirable from an optimization point of view. To remedy this, a powerful tool consists in introducing into the dynamic a geometric damping driven by the Hessian of f. This gives the Inertial System with Explicit Hessian Damping which reads ISEHD ẍ(t)+γ(t)ẋ(t)+β(t)∇^2 f(x(t))ẋ(t)+∇ f(x(t))= 0, t>t_0; x(t_0)=x_0, ẋ(t_0)=v_0 , where γ,β:[t_0,+∞[→_+. (<ref>) was proposed in <cit.> (see also <cit.>). The second system we consider, inspired by <cit.> (see also <cit.> for a related autonomous system) is ISIHD ẍ(t)+γ(t)ẋ(t)+∇ f(x(t)+β(t)ẋ(t)) = 0, t>t_0; x(t_0)=x_0, ẋ(t_0)=v_0 . (<ref>) stands for Inertial System with Implicit Hessian Damping. The rationale behind the use of the term “implicit” comes from a Taylor expansion of the gradient term (as t → +∞ we expect ẋ(t) → 0) around x(t), which makes the Hessian damping appear indirectly in (<ref>). 
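To fix ideas, a naive explicit discretization of (<ref>) and (<ref>) with constant damping coefficients is sketched below: for the explicit Hessian damping, the product ∇^2 f(x)ẋ is replaced by a first-order difference of gradients, so that no Hessian is ever formed, while for the implicit variant the gradient is simply evaluated at the extrapolated point x+βẋ. This is only an illustrative scheme with placeholder parameters and test function; it is not the algorithm analyzed later in the paper.

import numpy as np

def grad_f(x):                    # illustrative smooth non-convex test function
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

def isehd_step(x, x_prev, h, gamma, beta):
    # Explicit Hessian damping: beta * Hess f(x) * x' is approximated by
    # beta * (grad f(x_k) - grad f(x_{k-1})) / h, so no Hessian is formed.
    v = (x - x_prev) / h
    a = -gamma * v - beta * (grad_f(x) - grad_f(x_prev)) / h - grad_f(x)
    return x + h * v + h ** 2 * a

def isihd_step(x, x_prev, h, gamma, beta):
    # Implicit Hessian damping: the gradient is taken at the extrapolated point.
    v = (x - x_prev) / h
    a = -gamma * v - grad_f(x + beta * v)
    return x + h * v + h ** 2 * a

h, gamma, beta = 1e-2, 3.0, 0.5   # placeholder parameters
x_prev = np.array([0.3, 1.5]); x = x_prev.copy()
for _ in range(5000):
    x, x_prev = isehd_step(x, x_prev, h, gamma, beta), x
print(x, np.linalg.norm(grad_f(x)))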
Following the physical interpretation of these two ODEs, we call the non-negative parameters γ and β the viscous and geometric damping coefficients, respectively. The two ODEs (<ref>) and (<ref>) were found to have a smoothing effect on the energy error and oscillations <cit.>. Moreover, in <cit.> they obtain fast convergence rates for the values for the two ODEs when γ(t)=α/t (α>3) and β(t)=β>0. However, the previous results are exclusive to the convex case. To the best of our knowledge, there is no analysis of (<ref>) and (<ref>) when the objective is non-convex. In this work, our goal is to fill this gap. §.§ Contributions In this paper, we analyze the convergence properties of (<ref>) and (<ref>) when f is non-convex, and when the time-dependent viscosity coefficient γ is non-vanishing and the geometric damping is constant, β(t)≡β≥ 0. We will also propose appropriate discretizations of these dynamics and establish the corresponding convergence guarantees for the resulting discrete algorithms. More precisely, our main contributions can be summarized as follows: * We provide a Lyapunov analysis of (<ref>) and (<ref>) and show convergence of the gradient to zero and convergence of the values. Moreover, assuming that the objective function is definable, we prove the convergence of the trajectory to a critical point (see Theorem <ref> and <ref>). Furthermore, when f is also Morse, using the center stable manifold theorem, we establish a generic convergence of the trajectory to a local minimum of f (see Theorem <ref> and <ref>). * We provide convergence rates of (<ref>) and (<ref>) and show that they depend on the desingularizing function of the Lyapunov energy (see Theorems <ref> and <ref>). * By appropriately discretizing (<ref>) and (<ref>), we propose algorithmic schemes and prove the corresponding convergence properties, which are the counterparts of the ones shown for the continuous-time dynamics. In particular, assuming that the objective function is definable, we show global convergence of the iterates of both schemes to a critical point of f. Furthermore, a generic strict saddle points (see Definition <ref>) avoidance result will also be proved (see Theorem <ref> and <ref>). * Convergence rates for the discrete schemes will also be established under the Łojasiewicz property where the convergence rate will be shown to depend on the exponent of the desingularizing function (see Theorem <ref> and <ref>). We will report a few numerical experiments to support the above findings. §.§ Relation to prior work Continuous-time dynamics. There is abundant literature regarding the dynamics (<ref>) and (<ref>), either in the exact case or with deterministic errors, but solely in the convex case; see <cit.>). Nevertheless, to the best of our knowledge, the non-convex setting was still open until now. Heavy ball-like methods. As mentioned before, the HBF method, was first introduced by Polyak in <cit.> where linear convergence was shown for f strongly convex. Convergence rates of HBF taking into account the geometry of the objective can be found in <cit.> for the convex case and <cit.> for the non-convex case. In <cit.>, the authors study the system ẍ(t)+G(ẋ(t))+∇ f(x(t))=0, where G:^d→^d is such that ⟨ G(v),v⟩≥ c‖ v‖^2 and ‖ G(v)‖≤ C‖ v‖, ∀ v∈^d, for some 0<c≤ C. They show that when f is real analytic (hence verifies the Łojasiewicz inequality), the trajectory converges to a critical point. 
To put it in our terms, the analysis is equivalent to study (<ref>) in the case where there exists c,C>0 such that 0<c≤γ(t)≤ C for every t ≥ 0 (see assumption (<ref>)), letting us to conclude as in <cit.>. We will extend this result to the systems (<ref>) and (<ref>) which will necessitate new arguments. HBF was also studied in the non-convex case by <cit.> where it was shown that it can be viewed as a quasi-gradient system. They also proved that a desingularizing function of the objective desingularizes the total energy and its deformed versions. They used this to establish the convergence of the trajectory for instance in the definable case. Global convergence of the HBF trajectory was also shown in <cit.> when f is a Morse function [It turns out that a Morse function satisfies the Łojasiewicz inequality with exponent 1/2; see <cit.>.]. Inertial algorithms. In the literature regarding algorithmic schemes that include inertia in the non-convex setting, we can mention: <cit.> which proposes a multi-step inertial algorithm using Forward-Backward for non-convex optimization. In <cit.> they study quasiconvex functions and use an implicit discretization of HBF to derive a proximal algorithm. In <cit.>, they introduce an inertial Forward–Backward splitting method, called iPiano-a generalization of HBF, they show convergence of the values and a convergence rate of the algorithm. Besides, in <cit.>, the author presents local convergence results for iPiano. Then, in <cit.> several abstract convergence theorems are presented to apply them to different versions of iPiano. In <cit.> they propose inertial algorithms of different nature (Tseng's type and Forward-Backward, respectively) for non-convex and non-smooth optimization. In the previous work, they assume KŁ inequality to conclude with some type of convergence (weak or strong) towards a critical point of the objective. Trap avoidance. In the non-convex setting, generic strict saddle point avoidance of descent-like algorithms has been studied by several authors building on the (center) stable manifold theorem <cit.> which finds its roots in the work of Poincaré. Genericity is in general either with respect to initialization or with respect to random perturbation. Note that genericity results for quite general systems, even in infinite dimensional spaces, is an important topic in the dynamical system theory; see <cit.> and references therein. First-order descent methods can circumvent strict saddle points provided that they are augmented with unbiased noise whose variance is sufficiently large in each direction. Here, the seminal works of <cit.> and <cit.> allow to establish that the stochastic gradient descent (and more generally the Robbins-Monro stochastic approximation algorithm) avoids strict saddle points almost surely. Those results were extended to the discrete version of HBF by <cit.> who showed that perturbation allows to escape strict saddle points, and it does so faster than gradient descent. In <cit.> and <cit.>, the authors analyze a stochastic version of HBF (in continuous-time), showing convergence towards a local minimum under different conditions on the noise. In this paper, we only study genericity of trap avoidance with respect to initialization. Recently, there has been active research on how gradient-type descent algorithms escape strict saddle points generically on initialization; see <cit.> and references therein. 
In <cit.>, the authors were concerned with HBF and showed that if the objective function is C^2, coercive and Morse, then generically on initialization, the solution trajectory converges to a local minimum of f. A similar result is also stated in <cit.>. The algorithmic counterpart of this result was established in <cit.> who proved that the discrete version of HBF escapes strict saddles points for almost all initializations. Our goal in this paper is to establish the same results for (<ref>) and (<ref>) and the corresponding algorithms. We would like to point out that the proof for the continuous-time case will necessitate a more stringent assumption on the class of functions, for instance that f is Morse, while this is not necessary for the discrete algorithms. §.§ Organization of the paper Section <ref> introduces notations and recalls some preliminaries that are essential to our exposition. Sections <ref> and <ref> include the main contributions of our paper, establishing convergence of the trajectory and of the iterates under KŁ inequality, convergence rates and trap avoidance results. Section <ref> is devoted to the numerical experiments. § NOTATION AND PRELIMINARIES We denote by _+ the set [0,+∞[. Moreover, we denote _+^* and ^* to refer to _+∖{0} and ∖{0}, respectively. The finite-dimensional space ^d (d∈^*) is endowed with the canonical scalar product ⟨·,·⟩ whose norm is denoted by ‖·‖. We denote ^n× d (n,d∈^*) the space of real matrices of dimension n× d. We denote by C^n(^d) the class of n-times continuously differentiable functions on ^d. For a function g:^d→ and a,b∈ such that a<b, we denote [a<f<b] to the sublevel set {x∈^d:a<f(x)<b}.For a differentiable function g:^d→, we will denote its gradient as ∇ g, the set of its critical points as: crit(g){u:∇ g(u)=0}, and when g is twice differentiable, we will denote its Hessian as ∇^2 g. For a differentiable function G:^d→^n we will denote its Jacobian matrix as J_G∈^n× d. Let A∈^d× d, then we denote λ_i(A)∈ℂ the i-th eigenvalue of A, when A is symmetric then the eigenvalues are real and we denote λ_min(A), λ_max(A) to be the minimum and maximum eigenvalue of A, respectively. If every eigenvalue of A is positive (resp. negative), we will say that A is positive (resp. negative) definite. We denote by I_d the identity matrix of dimensions d× d and 0_n× d the null matrix of dimensions n× d, respectively. The following lemma is essential to some arguments presented in this work: <cit.> Consider A,B,C,D ∈^d× d and such that AC=CA. Then ([ A B; C D ])=(AD-CB). And the following definitions will be important throughout the paper: Consider a function f∈ C^2(^d). We will say that x̂ is a local minimum (resp. maximum) of f if x̂∈crit(f), ∇^2 f(x̂) is positive (resp. negative) definite. If x̂ is a critical point that is neither a local minimum nor a local maximum, we will say that x̂ is a saddle point of f. Consider a function f∈ C^2(^d), we will say that x̂ is a strict saddle point of f if x̂∈crit(f) and λ_min(∇^2 f(x̂))<0. According to this definition, local maximum points are strict saddles. A function f∈ C^2(^d) will satisfy the strict saddle property if every critical point is either a local minimum or a strict saddle. This property is a reasonable assumption for smooth minimization. In practice, it holds for specific problems of interest, such as low-rank matrix recovery and phase retrieval. 
Moreover, as a consequence of Sard's theorem, for a full measure set of linear perturbations of a function f, the linearly perturbed function satisfies the strict saddle property. Consequently, in this sense, the strict saddle property holds generically in smooth optimization. We also refer to the discussion in <cit.>. A function f∈ C^2(^d) will be Morse if it satisfies the following conditions: * For each critical point x̂, ∇^2 f(x̂) is nonsingular. * There exists a nonempty set I⊆ and (x̂_k)_k∈ I such that crit(f)=⋃_k∈ I{x̂_k}. By definition, a Morse function satisfies the strict saddle property. Morse functions can be shown to be generic in the Baire sense in the space of C^2 functions; see <cit.>. For η>0, we consider κ(0,η){ψ: C^0([0,η[)∩ C^1(]0,η[)→_+, ψ'>0 on ]0,η[, ψ(0)=0, and ψ concave}. The concavity property of the functions in κ(0,η) is only required in the discrete setting. If f:^d→ is differentiable and satisfies the KŁ inequality at x̅∈^d, then there exists r,η>0 and ψ∈κ(0,η), such that ψ'(f(x)-f(x̅))‖∇ f(x)-∇ f(x̅)‖≥ 1, ∀ x∈ B(x̅,r)∩ [f(x̅)<f<f(x̅)+η]. A function f:^d→ will satisfy the Łojasiewicz inequality with exponent q ∈ ]0,1] if f satisfies the KŁ inequality with ψ(s)=c_0 s^1-q for some c_0>0. It remains now to identify a broad class of functions f that verifies the KŁ inequality. A rich family is provided by semi-algebraic functions, , functions whose graph is defined by some Boolean combination of real polynomial equations and inequalities <cit.>. Such functions satisfy the Łojasiewicz property with q ∈ [0, 1[ ∩ℚ; see <cit.>. An even more general family is that of definable functions on an o-minimal structure over , which corresponds in some sense to an axiomatization of some of the prominent geometrical and stability properties of semi-algebraic geometry <cit.>. An important result by Kurdyka in <cit.> showed that definable functions satisfy the KŁ inequality at every x̅∈^d. Morse functions verify the Łojasiewicz inequality with exponent q=1/2; see <cit.>. § INERTIAL SYSTEM WITH EXPLICIT HESSIAN DAMPING Throughout the paper we will consider (<ref>) and (<ref>) with t_0=0. Also we will assume that the viscous damping γ:_+→_+ is such that ∃ c, C > 0, c≤ C, and H_γ c≤γ(t)≤ C , ∀ t ≥ 0. Moreover, throughout this work, we consider a constant geometric damping, β(t) ≡β > 0. §.§ Continuous-time dynamics Let us consider (<ref>), as in <cit.>, we will say that x:_+→^d is a solution trajectory of (<ref>) with initial conditions x(0)=x_0, ẋ(0)=v_0, if and only if, x∈ C^2(_+;^d) and there exists y∈ C^1(_+;^d) such that (x,y) satisfies: ẋ(t)+β∇ f(x(t))-(1/β-γ(t))x(t)+1/βy(t) =0, ẏ(t)-(1/β-γ(t)-βγ'(t))x(t)+1/βy(t) =0, with initial conditions x(0)=x_0, y(0)=y_0 -β (v_0+β∇ f(x_0))+(1-βγ(0))x_0. §.§.§ Global convergence of the trajectory Our first main result is the following theorem. Assume that 0<β<2c/C^2, f:^d→ satisfies (<ref>), and γ∈ C^1(_+;_+) obeys (<ref>). Consider (<ref>) in this setting, then the following holds: * There exists a global solution trajectory x:_+→^d of (<ref>). * We have that ∇ f∘ x∈^2(_+;^d), and ẋ∈^2(_+;^d). * If we suppose that the solution trajectory x is bounded over _+, then lim_t→ +∞‖∇ f(x(t))‖=lim_t→ +∞‖ẋ(t)‖=0, and lim_t→ +∞ f(x(t)) exists. * In addition to <ref>, if we also assume that f is definable, then ẋ∈^1(_+;^d) and x(t) converges (as t→ +∞) to a critical point of f. The boundedness assumption in assertion <ref> can be dropped if ∇ f is supposed to be globally Lipschitz continuous. * We will start by showing the existence of a solution. 
Setting Z=(x,y), (<ref>) can be equivalently written as Ż(t)+∇𝒢(Z(t))+𝒟(t,Z(t))=0, Z(0)=(x_0,y_0), where 𝒢(Z):^d×^d→ is the function defined by 𝒢(Z)=β f(x) and the time-dependent operator 𝒟:_+×^d×^d→^d×^d is given by: 𝒟(t,Z) (-(1/β-γ(t))x+1/βy,-(1/β-γ(t)-βγ'(t))x+1/βy). Since the map (t,Z)↦∇𝒢(Z)+𝒟(t,Z) is continuous in the first variable and locally Lipschitz in the second (by hypothesis (<ref>) and the assumptions on γ), by the classical Cauchy-Lipschitz theorem, we have that there exists T_max > 0 and a unique maximal solution of (<ref>) denoted Z∈ C^1([0,T_max[;^d×^d). Consequently, there exists a unique maximal solution of (<ref>) x∈ C^2([0,T_max[;^d). Let us consider the energy function V:[0,T_max[→ defined by V(t)=f(x(t))+1/2‖ẋ(t)+β∇ f(x(t))‖^2. We will prove that it is indeed a Lyapunov function for (<ref>). We see that V'(t) =⟨∇ f(x(t)),ẋ(t)⟩+⟨ẍ(t)+βd/dt∇ f(x(t)),ẋ(t)+β∇ f(x(t))⟩ =⟨∇ f(x(t)),ẋ(t)⟩+⟨ -γ(t)ẋ(t)-∇ f(x(t)),ẋ(t)+β∇ f(x(t))⟩ =-⟨γ(t)ẋ(t),ẋ(t)⟩-β⟨γ(t)ẋ(t) ,∇ f(x(t))⟩-β‖∇ f(x(t))‖^2 ≤ -c‖ẋ(t)‖^2+β^2‖γ(t)ẋ(t)‖^2/2ε+ε‖∇ f(x(t))‖^2/2 ≤ -c‖ẋ(t)‖^2+β^2C^2‖ẋ(t)‖^2/2ε+ε‖∇ f(x(t))‖^2/2 , where the last bound is due to Young's inequality with ε>0. Now let ε=β^2C^2/c, then V'(t)≤ -c/2‖ẋ(t)‖^2 - β(1-β C^2/2c)‖∇ f(x(t))‖^2-δ_1(‖ẋ(t)‖^2+‖∇ f(x(t))‖^2). For δ_1minc/2,β(1-β C^2/2c)>0. We will now show that the maximal solution Z of (<ref>) is actually global. For this, we use a standard argument and argue by contradiction assuming that T_max < +∞. It is sufficient to prove that x and y have a limit as t → T_max, and local existence will contradict the maximality of T_max. Integrating inequality (<ref>), we obtain ẋ∈^2([0,T_max[;^d) and ∇ f∘ x∈^2([0,T_max[;^d), hence implying that ẋ∈^1([0, T_max[;^d) and ∇ f∘ x∈^1([0,T_max[;^d), which in turn entails that (x(t))_t∈ [0,T_max[ satisfies the Cauchy property whence we get that lim_t→ T_max x(t) exists. Besides, by the first equation of (<ref>), we have that lim_t→ T_max y(t) exists if lim_t→ T_max x(t), lim_t→ T_max∇ f(x(t)) and lim_t→ T_maxẋ(t) exist. We have already that the first two limits exist by continuity of ∇ f, and thus we just have to check that lim_t→ T_maxẋ(t) exists. A sufficient condition would be to prove that ẍ∈^1([0,T_max[;^d). By (<ref>) this will hold if ẋ,∇ f∘ x,(∇^2 f ∘ x)ẋ are in ^1([0,T_max[;^d). We have already checked that the first two terms are in ^1([0,T_max[;^d). To conclude, it remains to check that ∇^2 f∘ x∈^∞([0,T_max[;^d) and this is true since ∇^2 f is continuous, x is continuous on [0,T_max], and the latter is compact. Consequently, the solution Z of (<ref>) is global, thus the solution x of (<ref>) is also global. * Integrating (<ref>) and using that V is well-defined for every t > 0 and is bounded from below, we deduce that ẋ∈^2(_+;^d), and ∇ f∘ x∈^2(_+;^d). * We recall that we are assuming that sup_t ≥ 0‖ x(t)‖<+∞ and f ∈ C^2(^d), hence sup_t ≥ 0‖∇^2 f(x(t))‖<+∞. In turn, ∇ f is Lipschitz continuous on bounded sets. Moreover, as ẋ∈^2(_+;^d) and is continuous, then ẋ∈^∞(_+;^d). The last two facts imply that t↦∇ f(x(t)) is uniformly continuous. In fact, for every t,s ≥ 0, we have ∇ f(x(t)) - ∇ f(x(s))≤sup_τ≥ 0∇^2 f(x(τ))ẋ(τ)|t-s| . This combined with ∇ f∘ x∈^2(_+;^d) yields lim_t→ +∞∇ f(x(t))=0 . We also have that d/dt∇ f(x(t))=∇^2 f(x(t))ẋ(t), and thus (∇^2 f∘ x)ẋ∈^∞(_+;^d). We also have ∇ f∘ x ∈^∞(_+;^d) by continuity of ∇ f and boundedness of x. It then follows from (<ref>) that ẍ∈^∞(_+;^d). This implies that ẋ(t) - ẋ(s)≤sup_τ≥ 0ẍ(τ)|t-s| , meaning that t↦ẋ(t) is uniformly continuous. 
Recalling that ẋ∈^2(_+;^d) gives that lim_t→ +∞ẋ(t)=0. We have from (<ref>) that V is non-increasing. Since it is bounded from below, it has a limit, lim_t→ +∞V(t) exists and we will denote this limit by L̃. Recall from the definition of V that f(x(t))=V(t)-1/2‖ẋ(t)+β∇ f(x(t))‖^2. Using the above three limits we get lim_t→ +∞f(x(t))=lim_t→ +∞V(t) = L̃. * From boundedness of (x(t))_t ≥ 0, by a Lyapunov argument (see <cit.>, <cit.>), the set of its cluster points ℭ(x(·)) satisfies: ℭ(x(·)) ⊆crit(f); ℭ(x(·)) is non-empty, compact and connected; f is constant on ℭ(x(·)). We consider the function E: (x,v,w) ∈^3d↦ f(x)+1/2‖ v+w‖^2 . Since f is definable, so is E as the sum of a definable function and an algebraic one. Therefore, E satisfies the KŁ inequality <cit.>. Let ℭ_1=ℭ(x(·)) ×{0_d}×{0_d}. Observe that E takes the constant value L̃ on ℭ_1 and ℭ_1 ⊂crit(E). It then follows from the uniformized KŁ property <cit.> that ∃ r,η>0 and ∃ψ∈κ(0,η) such that for all (x,v,w)∈^3d verifying x ∈ℭ(x(·))+B_r,v∈ B_r,w∈ B_r (where B_r is the ^d-ball centered at 0_d with radius r) and 0<E(x,v,w)-L̃ <η, one has ψ'(E(x,v,w)-L̃)‖∇ E(x,v,w)‖≥ 1 . It is clear that V(t)=E(x(t),ẋ(t),β∇ f(x(t))), and that x^⋆∈crit(f) if and only if (x^⋆,0,0)∈crit(E). Let us define the translated Lyapunov function Ṽ(t)=V(t)-L̃. By the properties of V proved above, we have lim_t→ +∞Ṽ(t)=0 and Ṽ is non-increasing, and we can conclude that Ṽ(t)≥ 0 for every t > 0. Without loss of generality, we may assume that Ṽ(t)> 0 for every t > 0 (since otherwise Ṽ(t) is eventually zero and thus ẋ(t) is eventually zero in view of (<ref>), meaning that x(·) has finite length). This in turn implies that lim_t→ +∞ψ( Ṽ(t))=0. Define the constants δ_2=max4,1+4β^2 and δ_3=δ_1/√(δ_2). We have from (<ref>) that lim_t→ +∞(x(t),ℭ(x(·)))=0. This together with the convergence claims on ẋ, ∇ f(x) and Ṽ imply that there exists T > 0 large enough such that for all t ≥ T x(t)∈ℭ(x(·))+B_r, ‖ẋ(t)‖<r, β‖∇ f(x(t))‖<r, 0<Ṽ(t)<η, 1/δ_3ψ( Ṽ(t))<r/2√(2). We are now in position to apply (<ref>) to obtain ψ'( Ṽ(t))‖∇ E(x(t),ẋ(t),β∇ f(x(t)))‖≥ 1, ∀ t ≥ T. On the other hand, for every t ≥ T: -d/dtψ(Ṽ(t)) =ψ'( Ṽ(t))(-Ṽ'(t)) ≥ -Ṽ'(t)/‖∇ E(x(t),ẋ(t),β∇ f(x(t)))‖. Additionally, for every t > 0 we have the bounds -Ṽ'(t) ≥δ_1(‖ẋ(t)‖^2+‖∇ f(x(t))‖^2), ‖∇ E(x(t),ẋ(t),β∇ f(x(t)))‖^2 ≤δ_2(‖ẋ(t)‖^2+‖∇ f(x(t))‖^2). Combining the two previous bounds, then for every t > 0: ‖∇ E(x(t),ẋ(t),β∇ f(x(t)))‖≤√(δ_2/δ_1)√(-Ṽ'(t)). By (<ref>), for every t∈ [T,+∞[ -d/dtψ(Ṽ(t))≥√(δ_1/δ_2)-Ṽ'(t)/√(-Ṽ'(t))=√(δ_1/δ_2)√(-Ṽ'(t))≥δ_3√(‖ẋ(t)‖^2+‖∇ f(x(t))‖^2). Integrating from T to +∞, we obtain ∫_T^+∞√(‖ẋ(t)‖^2+‖∇ f(x(t))‖^2)dt≤1/δ_3ψ(V(T))<r/2√(2). And for every t∈ [t_k',t̅[, ‖ x(t)‖ ≤‖ x(t)-x(t_k')‖+‖ x(t_k')‖ < ‖∫_t_k'^t ẋ(s)ds‖+r/2√(2) <∫_t_k'^t ‖ẋ(s)‖ ds+r/2√(2) < ∫_t_k'^t̅√(‖ẋ(t)‖^2+‖∇ f(x(t))‖^2)dt+r/2√(2) < r/2√(2)+r/2√(2)=r/√(2). Taking limit when t→t̅, we have that ‖ x(t̅)‖<r/√(2), which contradicts the supposition of t̅<+∞. Then t̅=+∞, and ∫_t_k'^+∞‖ẋ(t)‖ dt≤∫_t_k'^+∞√(‖ẋ(t)‖^2+‖∇ f(x(t))‖^2)dt<r/2√(2), Thus ∫_T^+∞‖ẋ(t)‖ dt≤∫_T^+∞√(‖ẋ(t)‖^2+‖∇ f(x(t))‖^2)dt<r/2√(2), this implies that ẋ∈^1(_+;^d). Therefore x(t) has the Cauchy property and this in turn implies that lim_t→ +∞ x(t) exists, and is a critical point of f since lim_t→ +∞‖∇ f(x(t))‖=0. §.§.§ Trap avoidance In the previous section, we have seen the convergence of the trajectory to a critical point of the objective, which includes strict saddle points. We will call trap avoidance the effect of avoiding such points at the limit. 
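As a simple illustration of these notions (the example is ours and only meant as an illustration), consider the Morse function
f(x,y)=1/2x^2-1/2y^2, ∇ f(x,y)=(x,-y), ∇^2 f(x,y)=[ 1 0; 0 -1 ].
The origin is its unique critical point, ∇^2 f(0,0) is nonsingular and λ_min(∇^2 f(0,0))=-1<0, so (0,0) is a strict saddle in the sense of Definition <ref>.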
If the objective function satisfies the strict saddle property (recall Definition <ref>) as is the case for a Morse function, this would imply convergence to a local minimum of the objective. The following theorem gives conditions to obtain such an effect. Let c>0, assume that 0<β<2/c and take γ≡ c. Suppose that f:^d→ satisfies (<ref>) and is a Morse function. Consider (<ref>) in this setting. If the solution trajectory x is bounded over _+, then the conclusions of Theorem <ref> hold. If, moreover, β≠1/c, then for Lebesgue almost all initial conditions x_0,v_0∈^d, x(t) converges (as t→ +∞) to a local minimum of f. Since Morse functions are C^2 and satisfy the KŁ inequality (see Remark <ref>), then all the claims of Theorem <ref>, and in particular <ref>[In fact, the proof is even more straightforward since the set of cluster points ℭ(x(·)) satisfies (<ref>) and the critical points are isolated; see the proof of <cit.>.], hold. As in <cit.>, we will use the global stable manifold theorem <cit.> to get the last claim. We recall that (<ref>) is equivalent to (<ref>), and that we are in the case γ(t)=c for all t, , ẋ(t)+β∇ f(x(t))-(1/β-c)x(t)+1/βy(t) =0, ẏ(t)-(1/β-c)x(t)+1/βy(t) =0, with initial conditions x(0)=x_0, y(0)=y_0 -β (v_0+β∇ f(x_0))+(1-β c)x_0. Let us consider F:^d×^d→^d×^d defined by F(x,y)=(-β∇ f(x)+(1/β-c)x-1/βy, (1/β-c)x-1/βy). Defining z(t)=(x(t),y(t)) and z_0=(x_0,v_0)∈^2d , then (<ref>) is equivalent to the Cauchy problem ż(t) =F(z(t)), z(0) =z_0 . We stated that when 0<β<2/c and f is definable (see the first claim above), then the solution trajectory z(t) converges (as t→ +∞) to an equilibrium point of F. Let us denote Φ(z_0,t), the value at t of the solution (<ref>) with initial condition z_0. Assume that ẑ is a hyperbolic equilibrium point of F (to be shown below), meaning that F(ẑ)=0 and that no eigenvalue of J_F(ẑ) has zero real part. Consider the invariant set W^s(ẑ)={z_0∈^2d:lim_t→ +∞Φ(z_0,t)=ẑ}. The global stable manifold theorem <cit.> asserts that W^s(ẑ) is an immersed submanifold of ^2d, whose dimension equals the number of eigenvalues of J_F(ẑ) with negative real part. First, we will prove that each equilibrium point of F is hyperbolic. We notice that the set of equilibrium points of F is {(x̂,(1-β c) x̂): x̂∈crit(f)}. On the other hand, we compute J_F(x,y)=[ -β∇^2 f(x)+(1/β-c)I_d -1/βI_d; (1/β-c)I_d -1/βI_d ]. Let ẑ=(x̂,(1-β c) x̂), where x̂∈crit(f). Then the eigenvalues of J_F(ẑ) are characterized by the roots in λ∈ℂ of ([ -β∇^2 f(x̂)+(1/β-c-λ)I_d -1/βI_d; (1/β-c)I_d -(λ+1/β)I_d ])=0. By Lemma <ref>, we have that (<ref>) is equivalent to ((1+λβ)∇^2 f(x̂)+(λ^2+λ c)I_d)=0. If λ=-1/β, then by (<ref>), β c=1, which is excluded by hypothesis. Therefore, -1/β cannot be an eigenvalue, λ≠ -1/β. We then obtain that (<ref>) is equivalent to (∇^2 f(x̂)+λ^2+λ c/(1+λβ)I_d)=0. It follows that λ satisfies (<ref>) if and only if λ^2+λ c/(1+λβ)=-η where η∈ is an eigenvalue of ∇^2 f(x̂). Equivalently, λ^2+(c+ηβ)λ+η=0. Let Δ_λ(c+ηβ)^2-4η. We distinguish two cases. * Δ_λ≥ 0: then the roots of (<ref>) are real and we rewrite (<ref>) as λ(λ+(c+ηβ))=-η, since η≠ 0 (because f is a Morse function), then λ≠ 0. * Δ_λ< 0: then (<ref>) has a pair of complex conjugate roots whose real part is -c+ηβ/2. Besides, Δ_λ=c^2+2βη c+η^2β^2-4η can be seen as a quadratic on c whose discriminant is given by Δ_c=4η. The fact that Δ_λ≤ 0 implies Δ_c ≥ 0, and thus η>0 (as η≠ 0 since f is Morse), therefore -c+ηβ/2<0. Overall, this shows that every equilibrium point of F is hyperbolic. 
Let us recall that crit(f)=⋃_k∈ I{x̂_k}. Thus, the set of equilibria of F is also finite and each one takes the form ẑ_k=(x̂_k,(1-β c)x̂_k). Since we have already shown that each solution trajectory x of (<ref>) converges towards some x̂_k, the following partition then holds ^d×^d=⋃_k∈ I W^s(ẑ_k). Let I^-={k∈ I: each eigenvalue of J_F(ẑ_k) has negative real part.}, and J I∖ I^-. Now, the global stable manifold theorem <cit.> allows to claim that W^s(ẑ_k) is an immersed submanifold of ^2d whose dimension is 2d when k∈ I^- and at most 2d-1 when k ∈ J. Let k∈ I^-, we claim that ∇^2 f(x̂_k) has only positive eigenvalues. By contradiction, let us assume that η_0<0 is an eigenvalue of ∇^2 f(x̂_k) (η_0=0 is not possible due to the Morse hypothesis). Each solution λ of (<ref>) is an eigenvalue of J_F(ẑ_k) and one of these solutions is -(c+ηβ)+√((c+ηβ)^2-4η_0)/2 which is positive since η_0<0. We then have -(c+η_0β)+√((c+η_0β)^2-4η_0)/2>-(c+η_0β)+|c+η_0β|/2≥ 0 , hence contradicting the assumption that k∈ I^-. In conclusion, the set of initial conditions z_0 such that Φ(z_0,t) converges to (x_b,(1-β c)x_b) (as t→ +∞), where x_b is not a local minimum of f is ⋃_k∈ J W^s(ẑ_k) which has Lebesgue measure zero. Therefore, due to the equivalence between (<ref>) and (<ref>) and that Morse functions satisfy the strict saddle property (see Remark <ref>), we indeed have that for almost all initial conditions x_0,v_0∈^d, the solution trajectory of (<ref>) will converge to a local minimum of f. §.§.§ Convergence rate When the objective function is definable, we now provide the convergence rate on the Lyapunov function E in (<ref>), hence f, and on the solution trajectory x. Consider the setting of Theorem <ref> with f being also definable and the solution trajectory x is bounded. Recall the function E from (<ref>), which is also definable, and denote ψ its desingularizing function and Ψ any primitive of -ψ'^2. Then, x(t) converges (as t→ +∞) to x_∞∈crit(f). Denote Ṽ(t) E(x(t),ẋ(t),β∇ f(x(t)))-f(x_∞). The following rates of convergence hold: * If lim_t→ 0Ψ (t)∈, we have E(x(t),ẋ(t),β∇ f(x(t))) converges to f(x_∞) in finite time. * If lim_t→ 0Ψ (t)=+∞, there exists some t_1 ≥ 0 such that Ṽ(t)=𝒪(Ψ^-1(t-t_1)) . Moreover, ‖ x(t)-x_∞‖=𝒪(ψ∘Ψ^-1(t-t_1)) . This proof is a generalization of <cit.> to the dynamics (<ref>). Let δ_0δ_2/δ_1, δ_3>0 and T > 0 for δ_1,δ_2,δ_3,T defined in the proof of Theorem <ref>. Using (<ref>) then (<ref>), we have for t>T d/dtΨ(Ṽ(t)) =Ψ'(Ṽ(t))Ṽ'(t) =-ψ'^2(Ṽ(t))Ṽ'(t) ≥δ_0 ψ'^2(Ṽ(t))‖∇ E(x(t),ẋ(t),β∇ f(x(t))‖^2 ≥δ_0. Integrating on both sides from T to t we obtain that for every t>T Ψ(Ṽ(t))≥δ_0(t-T)+Ψ(Ṽ(T)). Following the arguments shown in <cit.>, if lim_t→ 0Ψ (t)∈, then Ṽ(t) converges to 0 in finite time. Otherwise, we take the inverse of Ψ, which is non-increasing, on both sides of (<ref>) to obtain the desired bound. Finally, using (<ref>) we also have for every t>T ‖ x(t)-x_∞‖≤∫_t^+∞‖ẋ(s)‖ ds≤1/δ_3ψ(Ṽ(t)) ≤1/δ_3ψ∘Ψ^-1(δ_0(t-T)+Ψ(Ṽ(T))) . Observe that the convergence rate (<ref>) holds also on f(x(t))-f(x_∞) and ẋ(t)+∇ f(x(t))^2. We now specialize this to the Łojasiewicz case. Consider the setting of Theorem <ref> where now f satisfies the Łojasiewicz inequality with desingularizing function ψ_f(s)=c_f s^1-q, q ∈ [0,1[, c_f > 0. Then there exists some t_1 > 0 such that the the following convergence rates hold: * If q ∈ [0,1/2], then Ṽ(t)=𝒪exp(-(t-t_1)) and ‖ x(t)-x_∞‖=𝒪exp(t-t_1/2) . * If q ∈ ]1/2,1[, then Ṽ(t)=𝒪(t-t_1)^-1/2q-1 and ‖ x(t)-x_∞‖=𝒪(t-t_1)^-1-q/2q-1 . 
E is a separable quadratic perturbation of f. But a quadratic function is Łojasiewicz with exponent 1/2. It then follows from the Łojasiewicz exponent calculus rule in <cit.> that the desingularizing function of E is ψ_E(s)=c_E s^1-q_E for some c_E>0 and q_E=max{q,1/2}. Then, * If q ∈ [0,1/2] then q_E=1/2 and Ψ(s)=c_1^2/4ln(1/s). This implies that Ψ^-1(s)=4/c_1^2exp(-s). * If q ∈ ]1/2,1[ then q_E=q and Ψ(s)=c_1^2/4(2q-1)s^1-2q. This implies that Ψ^-1(s)=4(2q-1)/c_1^2s^-1/2q-1. We conclude in both cases by using Theorem <ref>. §.§ Algorithmic scheme Now we will consider the following finite differences explicit discretization of (<ref>) with step-size h>0 and for k≥ 1: x_k+1-2x_k+x_k-1/h^2+γ(kh)x_k+1-x_k/h+β∇ f(x_k)-∇ f(x_k-1)/h+∇ f(x_k)=0. Rearranging, this equivalently reads ISEHD-Disc y_k =x_k+α_k(x_k-x_k-1)-β_k(∇ f(x_k)-∇ f(x_k-1)), x_k+1 =y_k-s_k∇ f(x_k), with initial conditions x_0, x_1 ∈^d, where α_k1/1+γ_k h, γ_kγ(kh), β_kβ hα_k, s_k h^2α_k §.§.§ Global convergence and trap avoidance The following theorem summarizes our main results on the behaviour of (<ref>). Observe that as the discretization is explicit, we will need ∇ f to be globally Lipschitz continuous. Let f:^d→ be satisfying (<ref>) with ∇ f being globally L-Lipschitz-continuous. Consider the scheme (<ref>) with h>0, β≥ 0 and c≤γ_k≤ C for some c,C > 0 and all k ∈. Then the following holds: * If β+h/2<c/L, then (‖∇ f(x_k)‖)_k∈∈ℓ^2(), and (‖ x_k+1-x_k‖)_k∈∈ℓ^2(), in particular lim_k→ +∞‖∇ f(x_k)‖=0. * Moreover, if x_k is bounded and f is definable, then (‖ x_k+1-x_k‖)_k∈∈ℓ^1() and x_k converges (as k→ +∞) to a critical point of f. * Furthermore, if γ_k≡ c>0, 0<β<c/L, β≠1/c, and h<min(2(c/L-β),1/Lβ), then for almost all x_0, x_1∈^d, x_k converges (as k→ +∞) to a critical point of f that is not a strict saddle. Consequently, if f satisfies the strict saddle property then for almost all x_0, x_1∈^d, x_k converges (as k→ +∞) to a local minimum of f. When β=0, we recover the HBF method and the condition h<min(2(c/L-β),1/Lβ) becomes h<2c/L. * By definition of x_k+1 in (<ref>), for k∈^* x_k+1=_x∈^d1/2‖ x-(y_k-s_k∇ f(x_k))‖^2 . 1-strong convexity of x↦1/2‖ x-(y_k-s_k∇ f(x_k))‖^2 then yields 1/2‖ x_k+1-(y_k-s_k∇ f(x_k))‖^2≤1/2‖ x_k-(y_k-s_k∇ f(x_k))‖^2-1/2‖ x_k+1-x_k‖^2. Let α=1/1+Ch,α̅=1/1+ch, s=h^2α,s̅=h^2α̅, and thus for every k∈, α≤α_k≤α̅ and s≤ s_k≤s̅. Let also v_k x_k-x_k-1, z_kα_k v_k-β_k(∇ f(x_k)-∇ f(x_k-1)), then y_k=x_k+z_k. After expanding the terms of (<ref>) we have that ⟨∇ f(x_k),v_k+1⟩ ≤ -‖ v_k+1‖^2/s_k+1/s_k⟨ v_k+1,z_k⟩ ≤ -‖ v_k+1‖^2/s̅+1/h^2⟨ v_k+1,v_k⟩-β/h⟨ v_k+1,∇ f(x_k)-∇ f(x_k-1)⟩. By the descent lemma for L-smooth functions, we obtain f(x_k+1)≤ f(x_k)+⟨∇ f(x_k), v_k+1⟩+L/2‖ v_k+1‖^2. Using the bound in (<ref>), we get f(x_k+1)≤ f(x_k)+1/h^2⟨ v_k+1, v_k⟩-β/h⟨ v_k+1,∇ f(x_k)-∇ f(x_k-1)⟩-(1/s̅-L/2)‖ v_k+1‖^2. According to our hypothesis h<2c/L, so h<c+√(c^2+2L)/L and this implies that s̅<2/L. Using Young's inequality twice, for ε,ε'>0, the fact that ∇ f is L-Lipschitz, and adding ε+ε'/2‖ v_k+1‖^2 at both sides, then f(x_k+1)+ε+ε'/2‖ v_k+1‖^2 ≤ f(x_k)+ε+ε'/2‖ v_k‖^2 +(1/2[1/h^4ε+ε+β^2L^2/h^2ε'+ε']-(1/s̅-L/2))‖ v_k+1‖^2. In order to make the last term negative, we want to impose 1/2[1/h^4ε+ε+β^2L^2/h^2ε'+ε']<(1/s̅-L/2). Minimizing the left-hand side with respect to ε,ε'>0 we get ε=1/h^2, ε'=β L/h, and one can check that in this case, the condition (<ref>) becomes equivalent to β+h/2<c/L which is assumed in the hypothesis. 
Setting C_11/h^2+β L/h, δ1/s̅-L/2-1/h^2-β L/h>0, and defining V_k f(x_k)+C_1/2‖ v_k‖^2, we have for any k∈^* V_k+1≤ V_k -δ‖ v_k+1‖^2 . Clearly, V_k is non-increasing and bounded from below, hence lim_k→ +∞ V_k exists (say L̃). Summing this inequality over k, we have that (‖ v_k+1‖)_k∈∈ℓ^2() entailing that lim_k→ +∞‖ v_k‖=0. In turn,we have that lim_k→ +∞ f(x_k)=L̃. Embarking again from the update in (<ref>), we have s_k‖∇ f(x_k)‖ = ‖ x_k+1-y_k‖ ≤‖ x_k+1-x_k‖+‖ x_k-y_k‖ ≤‖ v_k+1‖+(α_k+β_k L)‖ v_k‖ ≤‖ v_k+1‖+‖ v_k‖, since α̅(1+β h L)<1 by hypothesis. Therefore ‖∇ f(x_k)‖^2≤δ_2(‖ v_k+1‖^2+‖ v_k‖^2), where δ_2=2/s^2. Consequently (‖∇ f(x_k)‖)_k∈∈ℓ^2(), which implies that lim_k→ +∞‖∇ f(x_k)‖=0. * If, moreover, (x_k)_k∈ is bounded, then the set of its cluster points ℭ(x_k) satisfies (see <cit.>): ℭ(x_k) ⊆crit(f); ℭ(x_k) is non-empty, compact and connected; f is constant on ℭ(x_k). Define E: (x,v) ∈^2d↦ f(x)+C_1/2‖ v‖^2 . Since f is definable, so is E as the sum of a definable function and an algebraic one, whence E satisfies the KŁ inequality. Let ℭ_1=ℭ((x_k)_k∈)×{0_d}. Since E|_ℭ_1=L̃, ∇ E|_ℭ_1=0, ∃ r,η>0, ∃ψ∈κ(0,η) such that for every (x,v) such that x∈ℭ((x_k)_k∈)+B_r,v∈ B_r, (where B_r is the ^d-ball centred at 0_d with radius r) and 0<E(x,v)-L̃<η, one has ψ'(E(x,v)-L̃)‖∇ E(x,v)‖≥ 1 . Let us define Ṽ_k=V_k-L̃, or equivalently Ṽ_k=E(x_k,v_k)-L̃. From (<ref>), Ṽ_k is a non-increasing sequence and its limit is 0 by definition of L̃. This implies that that Ṽ_k≥ 0 for all k ∈^*. We may assume without loss of generality that Ṽ_k > 0. Indeed, suppose there exists K ∈ such that Ṽ_K = 0, then the decreasing property (<ref>) implies that Ṽ_k = 0 holds for all k ≥ K. Thus v_k+1=0, or equivalently x_k=x_K, for all k ≥ K, hence x_k has finite length. Since lim_k→ +∞(x_k,ℭ((x_k)_k∈))=0, lim_k→ +∞‖ v_k‖=0, and lim_k→ +∞Ṽ_k=0, there exists K̃∈ such that for all k ≥K̃, x_k∈ℭ((x_k)_k∈)+B_r ; ‖ v_k‖<r ; 0<Ṽ_k<η. Then, by (<ref>), we have ψ'(Ṽ_k)‖∇ E(x_k,v_k)‖≥ 1, ∀ k≥K̃. By concavity of ψ and (<ref>), we have ψ(Ṽ_k)-ψ(Ṽ_k+1) ≥ -ψ'(Ṽ_k)(Ṽ_k+1-Ṽ_k) ≥δψ'(Ṽ_k)‖ v_k+1‖^2 ≥δ‖ v_k+1‖^2/‖∇ E(x_k,v_k)‖. On the other hand, ‖∇ E(x_k,v_k)‖≤δ_3(‖ v_k+1‖+‖ v_k‖) where δ_3=√(C_1^2+δ_2). Let us define for k∈^*, (Δψ)_kψ(Ṽ_k)-ψ(Ṽ_k+1) and δ_4=δ_3/δ. We then have for all k ≥K̃, ‖ v_k+1‖^2≤δ_4(Δψ)_k(‖ v_k‖+‖ v_k+1‖). Using Young's inequality and concavity of √(·), this implies that for every ε>0 ‖ v_k+1‖≤δ_4(Δψ)_k/√(2)ε+ε‖ v_k+1‖+‖ v_k‖/√(2). Rearranging the terms and imposing 0<ε<√(2) gives (1-ε/√(2)) ‖ v_k+1‖≤δ_4(Δψ)_k/√(2)ε+ε‖ v_k‖/√(2). Dividing by (1-ε/√(2)) on both sides, we get ‖ v_k+1‖≤δ_4(Δψ)_k/ε(√(2)-ε)+ε‖ v_k‖/√(2)-ε. Choosing now ε such that 0<ε<√(2)/2, we get that 0<ε/√(2)-ε<1. Since (Δψ)_k∈ℓ^1(^*) as a telescopic sum, we conclude that (‖ v_k‖)_k∈∈ℓ^1(^*). This means that x_k has finite length, hence is a Cauchy sequence, entailing that x_k has a limit (as k→ +∞) denoted x_∞ which is a critical point of f since lim_k→ +∞‖∇ f(x_k)‖=0. * If γ_k≡ c, we denote α_k≡α=1/1+ch, β_k≡β̃=β hα, s_k≡ s=h^2α. Let z_k=(x_k,x_k-1) for k≥ 1, and g:^d×^d→^d×^d defined by g:(x_+,x_-)↦ [(1+α)x_+-α x_- -(β̃+s)∇ f(x_+)+β̃∇ f(x_-),x_+] . (<ref>) is then equivalent to z_k+1=g(z_k). To complete the proof, we will capitalize on <cit.> which builds on the center stable manifold theorem <cit.>. For this, one needs to check two conditions: * (J_g(x_+,x_-))≠ 0 for every x_+,x_-∈^d. * Let 𝒜_g^⋆{(x,x)∈^2d: x∈crit(f), max_i |λ_i(J_g(x,x))|>1 }, 𝒳^⋆ be the set of strict saddle points of f, and 𝒳̂{(x,x)∈^2d:x∈𝒳^⋆}. One needs to check that 𝒳̂⊂𝒜_g^⋆. 
𝒜_g^⋆ is the set of unstable fixed points. Indeed, the fixed points of g are of the form (x^⋆,x^⋆) where x^⋆∈crit(f). We first compute J_g(x_+,x_-), given by [ (1+α)I_d-(β̃+s)∇^2 f(x_+) -α I_d+β̃∇^2 f(x_-); I_d 0_d× d ] This is a block matrix that comes in a form amenable to applying Lemma <ref>. We then have (J_g(x_+,x_-))= (α I_d-β̃∇^2 f(x_-)). Since the eigenvalues of ∇^2 f(x_-) are contained in [-L,L], if α>Lβ̃, then α-β̃η≠ 0 for every eigenvalue η∈ of ∇^2 f(x_-). This implies that the first condition is satisfied, (J_g(x_+,x_-))≠ 0 for every x_+,x_-∈^d. The condition α>Lβ̃ in terms of h reads h<1/Lβ, since we already needed h<2(c/L-β), we just ask h to be less than the minimum of the two quantities. To check the second condition, let us take x a strict saddle point of f, x∈crit(f) and λ_min(∇^2 f(x))=-η < 0. To compute the eigenvalues of J_g(x,x) we consider ([ (1+α-λ)I_d-(β̃+s)∇^2 f(x) -α I_d+β̃∇^2 f(x); I_d -λ I_d ])=0. Again by Lemma <ref>, we get that ([ (1+α-λ)I_d-(β̃+s)∇^2 f(x) -α I_d+β̃∇^2 f(x); I_d -λ I_d ])= [(-λ(1+α)+λ^2)I_d+λ(β̃+s)∇^2 f(x)+α I_d-β̃∇^2 f(x)]= [(λ(β̃+s)-β̃)∇^2 f(x)+(λ^2-λ(1+α)+α)I_d] . We then need to solve for λ [(λ(β̃+s)-β̃)∇^2 f(x)+(λ^2-λ(1+α)+α)I_d]=0. If 0<β̃=s, then (<ref>) is equivalent to (∇^2 f(x)+λ^2-λ(1+α)+α/β̃I_d)=0. Therefore, for every eigenvalue η' of ∇^2 f(x), we have that λ^2-λ(1+α)+α/β̃=-η'. In particular, we can take η'=-η (the minimum eigenvalue, which is negative), solving the quadratic equation we get the biggest solution (the one with the plus sign) is bigger than one, in fact, (1+α)+√((1-α)^2+4ηβ̃)/2>1+α+|1-α|/2≥ 1, since 4ηβ̃>0, then we have ensured the second condition in this case. For the rest of the proof, we can consider β≠ s. If λ=β̃/β̃+s, then (<ref>) becomes (β̃/β̃+s)^2-(β̃/β̃+s)(1+α)+α=0. This implies that α=β̃/β̃+s, which in terms of β and c is equivalent to β c=1. But this case is excluded by hypothesis. Now we can focus on the case where λ≠β̃/β̃+s and we can rewrite (<ref>) as (∇^2 f(x)-λ^2-λ(1+α)+α/β̃-λ(β̃+s)I_d)=0. Therefore, as argued for the time-continuous dynamic, for every eigenvalue η'∈ of ∇^2 f(x), λ satisfies (<ref>) if and only if λ^2-λ(1+α)+α/β̃-λ(β̃+s)=η'. where η'∈ is an eigenvalue of ∇^2 f(x̂). Thus if η'=-η is negative, we have λ^2-λ((1+α)+η(β̃+s))+α+ηβ̃=0. We analyze its discriminant Δ_λ=((1+α)+η(β̃+s))^2-4(α+ηβ̃). After developing the terms we get that Δ_λ=α^2+2α(η(β̃+s)-1)+(η(β̃+s)+1)^2-4ηβ̃, which can be seen as a quadratic equation on α. We get that its discriminant Δ_α is -16η s, which is negative (since η, s are positive), thus the quadratic equation on α does not have real roots, implying that Δ_λ>0. We can write the solutions of (<ref>), λ=((1+α)+η(β̃+s))±√(Δ_λ)/2. Let us consider the biggest solution (the one with the plus sign) and let us see that λ>1, this is equivalent to ((1+α)+η(β̃+s)) + √(Δ_λ)/2>1, which in turn is equivalent to √(Δ_λ)>2-(1+α)-η(β̃+s). Squaring both sides of this inequality, we have [(1-α)-η(β̃+s)]^2 <Δ_λ =[(1+α)+η(β̃+s)]^2-4(α+ηβ̃). After expanding the terms, we see that the inequality is equivalent to 0<4η s, which is always true as η > 0. Consequently, λ>1 and in turn 𝒳̂⊂𝒜_g^⋆. We have then checked the two conditions <ref>-<ref> above. This entails that the invariant set {z_1∈^2d: lim_k→ +∞ g^k(z_1)∈𝒳̂} has Lebesgue measure zero. Equivalently, the set of initializations x_0,x_1∈^d for which x_k converges to a strict saddle point of f has Lebesgue measure zero. 
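Before turning to convergence rates, we note that the scheme (<ref>) is straightforward to implement. The sketch below is ours (Python/NumPy; the function and variable names are not from the original text): it runs the recursion y_k = x_k + α_k(x_k-x_k-1) - β_k(∇ f(x_k)-∇ f(x_k-1)), x_k+1 = y_k - s_k∇ f(x_k) with α_k = 1/(1+γ_k h), β_k = β h α_k, s_k = h^2 α_k, and is only meant to illustrate the update under the step-size condition β+h/2<c/L of Theorem <ref>.

import numpy as np

def isehd_disc(grad_f, x0, x1, h, beta, gamma, n_iter):
    """Explicit scheme (ISEHD-Disc); one gradient evaluation per iteration.

    gamma is a callable: gamma(k) returns the viscous damping gamma_k (e.g. a constant c).
    """
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    g_prev = grad_f(x_prev)
    xs = [x_prev, x]
    for k in range(1, n_iter + 1):
        g = grad_f(x)
        alpha_k = 1.0 / (1.0 + gamma(k) * h)   # alpha_k = 1/(1 + gamma_k h)
        beta_k = beta * h * alpha_k            # geometric (Hessian-driven) damping coefficient
        s_k = h**2 * alpha_k                   # step size
        y = x + alpha_k * (x - x_prev) - beta_k * (g - g_prev)
        x_prev, x, g_prev = x, y - s_k * g, g
        xs.append(x)
    return np.array(xs)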
§.§.§ Convergence rate The following result provides the convergence rates for algorithm (<ref>) in the case where f has the Łojasiewicz property. The original idea of proof for descent-like algorithms can be found in <cit.>. Consider the setting of Theorem <ref>, where f also satisfies the Łojasiewicz property with exponent q∈ [0,1[. Then x_k → x_∞∈crit(f) as k→ +∞ at the rates: * If q ∈ [0,1/2] then there exists ρ∈ ]0,1[ such that ‖ x_k-x_∞‖=𝒪(ρ^k). * If q ∈ ]1/2,1[ then ‖ x_k-x_∞‖=𝒪k^-1-q/2q-1. Recall the function E from (<ref>). Since f satisfies the Łojasiewicz property with exponent q∈ [0,1[, and E is a separable quadratic perturbation of f, it follows from <cit.> that E has the Łojasiewicz property with exponent q_E = max{q,1/2}∈ [1/2,1[, there exists c_E>0 such that the desingularizing function of E is ψ_E(s)=c_Es^1-q_E. Let v_k=x_k-x_k-1 and Δ_k=∑_p=k^+∞‖ v_p+1‖. The triangle inequality yields Δ_k≥‖ x_k-x_∞‖ so it suffices to analyze the behavior of Δ_k to obtain convergence rates for the trajectory. Recall the constants δ,δ_3,δ_4>0 and the sequences (Δψ)_k and Ṽ_k defined in the proof of Theorem <ref>. Denote λ=ε/√(2)-ε∈ (0,1) and M=δ_4/ε(√(2)-ε) for 0<ε<√(2)/2. Using (<ref>), we have that there exists K̃∈ large enough such that for all k≥K̃ ‖ v_k+1‖≤λ‖ v_k‖+M(Δψ)_k. Recall that q_E ∈ [1/2,1[ (so 1-q_E/q_E≤ 1) and that lim_k→ +∞Ṽ_k=0. We obtain by induction that for all k≥K̃ ∑_p=k^+∞‖ v_p+1‖≤λ/1-λ‖ v_k‖+Mc_E/1-λṼ_k^1-q_E. Or equivalently, Δ_k≤λ/1-λ(Δ_k-1-Δ_k)+Mc_E/1-λṼ_k^1-q_E . Denoting c_2=(c_E(1-q_E))^1-q_E/q_E, then by (<ref>) and (<ref>) Ṽ_k^1-q_E ≤ c_2∇ E(x_k,v_k)^1-q_E/q_E ≤ c_2δ_3^1-q_E/q_E(‖ v_k‖+‖ v_k+1‖)^1-q_E/q_E ≤ c_2δ_3^1-q_E/q_E(Δ_k-1-Δ_k+Δ_k-Δ_k+1)^1-q_E/q_E = c_2δ_3^1-q_E/q_E(Δ_k-1-Δ_k+1)^1-q_E/q_E. Plugging this into (<ref>), and using that Δ_k → 0 and 1-q_E/q_E≤ 1, then there exists and integer K̃_1 ≥K̃ such that for all k ≥K̃_1 Δ_k≤λ/1-λ(Δ_k-1-Δ_k)^1-q_E/q_E+M_1/1-λ(Δ_k-1-Δ_k+1)^1-q_E/q_E , where M_1=c_Ec_2δ_3^1-q_E/q_E M. Taking the power q_E/1-q_E≥ 1 on both sides and using the fact that Δ_k+1≤Δ_k, we have for all k≥K̃_1 Δ_k^q_E/1-q_E≤ M_2(Δ_k-1-Δ_k+1) , where we set M_2=(1-λ)^-q_E/1-q_Emax(λ,M_1)^q_E/1-q_E, We now distinguish two cases: * q ∈ [0,1/2], hence q_E=1/2: (<ref>) then becomes Δ_k≤ M_2(Δ_k-1-Δ_k+1), with M_2=(1-λ)^-1max(λ,M_1) and M_1=c_E^2/2δ_3^2/δ. Using again that Δ_k+1≤Δ_k, we obtain that for k≥K̃_1 Δ_k≤M_2/1+M_2Δ_k-2, which implies Δ_k≤(M_2/1+M_2)^k-K̃_1/2Δ_K̃_1=𝒪(ρ^k), for ρ(M_2/1+M_2)^1/2∈ ]0,1[. * q ∈ ]1/2,1[, hence q_E=q: we define the function h:_+^*→ by h(s)=s^-q/1-q. Let R>1. Assume first that h(Δ_k)≤ Rh(Δ_k-1). Then from (<ref>), we get 1 ≤ M_2 (Δ_k-1-Δ_k+1)h(Δ_k) ≤ RM_2 (Δ_k-1-Δ_k+1)h(Δ_k-1) ≤ RM_2 ∫_Δ_k+1^Δ_k-1 h(s)ds ≤ RM_21-q/1-2qΔ_k-1^1-2q/1-q-Δ_k+1^1-2q/1-q . Setting ν=2q-1/1-q>0 and M_3=ν/RM_2>0, one obtains 0<M_3≤Δ_k+1^-ν-Δ_k-1^-ν. Now assume that h(Δ_k)>Rh(Δ_k-1). Since h is decreasing and Δ_k+1≤Δ_k, then h(Δ_k+1) > Rh(Δ_k-1). Set q=R^2q-1/q> 1, we directly have that Δ_k+1^-ν>qΔ_k-1^-ν. Since q-1>0 and Δ_k^-ν→+∞ as k→ +∞, there exists M_4>0 and a large enough integer K̃_2 ≥K̃_1 such that for every k≥K̃_2 that satisfies our assumption (h(Δ_k)>Rh(Δ_k-1)), we have 0<M_4≤Δ_k+1^-ν-Δ_k-1^-ν. Taking M_5=min(M_3,M_4), (<ref>) and (<ref>) show that for all k≥K̃_2 0<M_5≤Δ_k+1^-ν-Δ_k-1^-ν . Summing both sides from K̃_2 up to K-1 ≥K̃_2, we obtain M_5(K-K̃_2)≤Δ_K^-ν-Δ_K̃_2^-ν+Δ_K-1^-ν-Δ_K̃_2-1^-ν≤ 2(Δ_K^-ν-Δ_K̃_2-1^-ν). Therefore Δ_K^-ν≥Δ_K̃_2-1^-ν+M_5/2(K-K̃_2). Inverting, we get Δ_K≤[Δ_K̃_2-1^-ν+M_5/2(K-(K̃_2+1))]^-1/ν=𝒪(K^-1/ν). 
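As a quick illustration of the q ∈ [0,1/2] case (linear rate), one can run the sketch above on a strongly convex quadratic, which satisfies the Łojasiewicz inequality with exponent 1/2: the residual ‖∇ f(x_k)‖ then decays geometrically. The snippet below is ours and only illustrative; it reuses the hypothetical isehd_disc helper from the previous sketch and parameters chosen so that beta + h/2 < c/L.

import numpy as np

Q = np.diag([1.0, 10.0])                 # f(x) = 0.5 x^T Q x, so L = 10
grad_f = lambda x: Q @ x
c, beta, h = 3.0, 0.1, 0.2               # beta + h/2 = 0.2 < c/L = 0.3
xs = isehd_disc(grad_f, x0=[1.0, 1.0], x1=[1.0, 1.0], h=h,
                beta=beta, gamma=lambda k: c, n_iter=200)
res = [np.linalg.norm(grad_f(x)) for x in xs]
# geometric decay: successive residual ratios stay roughly constant
print(res[50] / res[40], res[150] / res[140])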
§.§.§ General coefficients The discrete scheme (<ref>) opens the question of whether we can consider α_k, β_k, s_k to be independent. Though this would omit the fact they arise from a discretization of the continuous-time dynamic (<ref>), hence ignoring its physical interpretation, it will gives us a more flexible choice of these parameters while preserving the desired convergence behaviour. Let f:^d→ be satisfying (<ref>) with ∇ f being globally L-Lipschitz-continuous. Consider (α_k)_k∈,(β_k)_k∈,(s_k)_k∈ to be three positive sequences, and the following algorithm with x_0,x_1∈^d: y_k =x_k+α_k(x_k-x_k-1)-β_k(∇ f(x_k)-∇ f(x_k-1)), x_k+1 =y_k-s_k∇ f(x_k). If there exists s̅>0 such that: * 0 < inf_k∈ s_k ≤sup_k∈ s_k ≤s̅ < 2/L; * sup_k∈(α_k+β_kL/s_k) < 1/s̅-L/2. Then the following holds: * (‖∇ f(x_k)‖)_k∈∈ℓ^2(), and (‖ x_k+1-x_k‖)_k∈∈ℓ^2(), and thus lim_k→ +∞‖∇ f(x_k)‖=0. * Moreover, if x_k is bounded and f is definable, then x_k+1-x_k∈ℓ^1() and x_k converges (as k→ +∞) to a critical point of f. * Furthermore, if α_k≡α,β_k≡β,s_k≡ s, then the previous conditions reduce to α+β L+sL/2<1. If, in addition, α≠β/β+s, and α> β L, then for almost all x_0, x_1∈^d, x_k converges (as k→ +∞) to a critical point of f that is not a strict saddle. Consequently, if f satisfies the strict saddle property, for almost all x_0, x_1∈^d, x_k converges (as k→ +∞) to a local minimum of f. If α_k,β_k,s_k are given as in (<ref>), α_k=1/1+γ_k h,β_k=β hα_k, s_k=h^2 α_k, then the requirements of Theorem <ref> reduce to β+h/2<c/L (recall that c is such that c≤γ_k). Adjusting equation (<ref>) to this setting, not using the dependent explicit forms of α_k,β_k, s_k, we get an analogous proof to the one of Theorem <ref>. We omit the details for the sake of brevity. § INERTIAL SYSTEM WITH IMPLICIT HESSIAN DAMPING §.§ Continuous-time dynamics We now turn to the second-order system with implicit Hessian damping as stated in (<ref>), where we consider a constant geometric damping, β(t) ≡β>0. We will use the following equivalent reformulation of (<ref>) proposed in <cit.>. We will say that x is a solution trajectory of (<ref>) with initial conditions x(0)=x_0, ẋ(0)=v_0, if and only if, x∈ C^2(_+;^d) and there exists y∈ C^1(_+;^d) such that (x,y) satisfies: ẋ(t)+x(t)-y(t)/β =0, ẏ(t)+β∇ f(y(t))+(1/β-γ(t))(x(t)-y(t)) =0, with initial conditions x(0)=x_0, y(0)=y_0 x_0+β v_0. §.§.§ Global convergence of the trajectory Our next main result is the following theorem, which is the implicit counterpart of Theorem <ref>. Let 0<β<2c/C^2, f:^d→ satisfying (<ref>), γ is continuous and satisfies (<ref>). Consider (<ref>) in this setting, then the following holds: * There exists a global solution trajectory x:_+→^d of (<ref>). * We have that ẋ∈^2(_+;^d), and ∇ f∘ (x+βẋ)∈^2(_+;^d). * If we suppose that the solution trajectory x is bounded over _+, then ∇ f∘ x∈^2(_+;^d), lim_t→ +∞‖∇ f(x(t))‖=lim_t→ +∞‖ẋ(t)‖=0, and lim_t→ +∞ f(x(t)) exists. * In addition to <ref>, if we also assume that f is definable, then ẋ∈^1(_+;^d) and x(t) converges (as t→ +∞) to a critical point of f. * We will start by showing the existence of a solution. Setting Z=(x,y), (<ref>) can be equivalently written as: Ż(t)+∇𝒢(Z(t))+𝒟(t,Z(t))=0, Z(0)=(x_0,y_0), where 𝒢(Z):^d×^d→ is the function defined by 𝒢(Z)=β f(y) and the time-dependent operator 𝒟:_+×^d×^d→^d×^d is given by: 𝒟(t,Z)=(x-y/β,(1/β-γ(t))(x-y)). 
Since the map (t,Z)↦∇𝒢(Z)+𝒟(t,Z) is continuous in the first variable and locally Lipschitz in the second (by (<ref>) and the assumptions on γ),we get from Cauchy-Lipschitz theorem that there exists T_max > 0 and a unique maximal solution of (<ref>) denoted Z∈ C^1([0,T_max[;^d×^d). Consequently, there exists a unique maximal solution of (<ref>) x∈ C^2([0,T_max[;^d). Let us consider the energy function V:[0,T_max[→ defined by V(t)=f(x(t)+βẋ(t))+1/2‖ẋ(t)‖^2. Proceeding as in the proof of Theorem <ref>, we prove it is indeed a Lyapunov function for (<ref>). Denoting δ_1minc/2,β(1-β C^2/2c)>0, we have V'(t)≤ -δ_1(‖ẋ(t)‖^2+‖∇ f(x(t)+βẋ(t))‖^2). We will now show that the maximal solution Z of (<ref>) is actually global. For this, we argue by contradiction and assume that T_max < +∞. It is sufficient to prove that x and y have a limit as t → T_max, and local existence will contradict the maximality of T_max. Integrating (<ref>), we obtain ẋ∈^2([0,T_max[;^d) and ∇ f∘ (x+βẋ)∈^2([0,T_max[;^d), which entails that ẋ∈^1([0, T_max[;^d) and ∇ f∘ (x+βẋ)∈^1([0,T_max[;^d), and in turn x(t)_t∈ [0,T_max[ satisfies the Cauchy property and lim_t→ T_max x(t) exists. Besides, by the first equation of (<ref>), we will have that lim_t→ T_max y(t) will exist if both lim_t→ T_max x(t) and lim_t→ T_maxẋ(t) exist. So we just have to check the existence of the second limit. A sufficient condition would be to prove that ẍ∈^1([0,T_max[;^d). By (<ref>) this will hold if ẋ,∇ f∘ (x+βẋ) are in ^1([0,T_max[;^d). But we have already shown these claims. Consequently, the solution Z of (<ref>) is global, and thus the solution x of (<ref>) is also global. * Integrating (<ref>), using that V is well-defined and bounded from below, we get that ẋ∈^2(_+;^d), and ∇ f(x(t)+βẋ(t))∈^2(_+;^d). * By assumption, sup_t > 0‖ x(t)‖<+∞. Moreover, since ẋ∈^2(_+;^d) and continuous, ẋ∈^∞(_+;^d) and then using that ∇ f is locally Lipschitz, we have ∫_t_0^+∞‖∇ f(x(t))‖^2 dt ≤ 2 ∫_t_0^+∞‖∇ f(x(t)+βẋ(t))-∇ f(x(t))‖^2 dt+2∫_t_0^+∞‖∇ f(x(t)+βẋ(t))‖^2 dt ≤ 2β^2L_0^2∫_t_0^+∞‖ẋ(t)‖^2 dt+2∫_t_0^+∞‖∇ f(x(t)+βẋ(t))‖^2 dt<+∞ , where L_0 is the Lipschitz constant of ∇ f on the centered ball of radius sup_t > 0x(t) + βsup_t > 0ẋ(t) < +∞. Moreover, for every t,s ≥ 0, ∇ f(x(t)) - ∇ f(x(s))≤ L_0 sup_τ≥ 0ẋ(τ)|t-s| . This combined with ∇ f∘ x∈^2(_+;^d) yields lim_t→ +∞∇ f(x(t))=0 . We also have that sup_t > 0‖∇ f(x(t)+βẋ(t))‖ ≤sup_t > 0(‖∇ f(x(t)+βẋ(t))-∇ f(0) ‖)+‖∇ f(0)‖ ≤ L_0sup_t > 0‖ x(t)‖ + L_0 βsup_t > 0‖ẋ(t)‖+‖∇ f(0)‖<+∞. Therefore, in view of (<ref>), we get that ẍ∈^∞(_+;^d). This implies that ẋ(t) - ẋ(s)≤sup_τ≥ 0ẍ(τ)|t-s| . Combining this with ẋ∈^2(_+;^d) gives that lim_t→ +∞ẋ(t)=0. From (<ref>), V is non-increasing, and since it is bounded from below, V(t) has a limit, say L̃. Passing to the limit in the definition of V(t), using that the velocity vanishes, gives lim_t→ +∞ f(x(t)+βẋ(t))=L̃. On the other hand, we have |f(x(t)+βẋ(t))-f(x(t))| = β∫_0^1 ∇ f(x(t)+sβẋ(t)ẋ(t) ds ≤β∫_0^1 ∇ f(x(t)+sβẋ(t) dsẋ(t) . Passing to the limit as t → +∞, the right hand side goes to 0 from the above limits on ∇ f(x(t)) and ẋ(t). We deduce that lim_t→ +∞ f(x(t))=L̃. * As in the proof for (<ref>), since (x(t))_t ≥ 0 is bounded, then (<ref>) holds. Besides, consider the function E: (x,v,w) ∈^3d↦ f(x+v)+1/2w^2 . Since f is definable, so is E. In turn, E satisfies has the KŁ property. Let ℭ_1=ℭ(x(·))×{0_d}×{0_d}. 
Since E|_ℭ_1=L̃, ∇ E|_ℭ_1=0, ∃ r,η>0, ∃ψ∈κ(0,η) such that for every (x,v,w)∈^3d such that x∈ℭ(x(·))+B_r,v∈ B_r,w∈ B_r and 0<E(x,v,w)-L̃<η, we have ψ'(E(x,v,w)-L̃)‖∇ E(x,v,w)‖≥ 1 By definition, we have V(t)=E(x(t),βẋ(t),ẋ(t). We also define Ṽ(t)=V(t)-L̃. By the properties of V above, we have lim_t→ +∞Ṽ(t)=0 and Ṽ is a non-increasing function. Thus Ṽ(t)≥ 0 for every t > 0. Without loss of generality, we may assume that Ṽ(t)> 0 for every t > 0 (since otherwise Ṽ(t) is eventually zero entailing that ẋ(t) is eventually zero in view of (<ref>), meaning that x(·) has finite length). Define the constants δ_2=2, δ_3=δ_1/√(2). In view of the convergence claims on ẋ and Ṽ above, there exists T > 0, such that for any t>T x(t)∈ℭ(x(·))+B_r, 0<Ṽ(t)<η, maxβ,1‖ẋ(t)‖<r, 1/δ_3ψ( Ṽ(t))<r/2√(2). The rest of the proof is analogous to the one of Theorem <ref>. Since ‖∇ E(x(t),βẋ(t),ẋ)‖^2≤δ_2(‖ẋ(t)‖^2+‖∇ f(x(t)+βẋ(t))‖^2), and ψ'(Ṽ(t))‖∇ E(x(t),βẋ(t),ẋ)‖≥ 1, ∀ t≥ T. We can lower bound the term -d/dtψ(Ṽ(t)) for t≥ T (as in (<ref>)) and conclude that ẋ∈^1(_+;^d), and that this implies that x(t) has finite length and thus has a limit as t→ +∞. This limit is necessarily a critical point of f since lim_t→ +∞‖∇ f(x(t))‖=0. §.§.§ Trap avoidance We now show that (<ref>) provably avoids strict saddle points, hence implying convergence to a local minimum if the objective function is Morse. Let c>0, 0<β<2/c and γ≡ c. Assume that f:^d→ satisfies (<ref>) and is a Morse function. Consider (<ref>) in this setting. If the solution trajectory x is bounded over _+, then the conclusions of Theorem <ref> hold. If, moreover, β≠1/c, then for almost all x_0,v_0∈^d initial conditions, x(t) converges (as t→ +∞) to a local minimum of f. Since Morse functions are C^2 and satisfy the KŁ inequality, and x is assumed bounded, then all the claims of Theorem <ref> hold. As in the proof of Theorem <ref>, we will use again the global stable manifold theorem to prove the last point. Since, γ(t)=c for all t, introducing the velocity variable v=ẋ, we have the equivalent phase-space formulation of (<ref>) ẋ(t) =v(t), v̇(t) =-cv(t)-∇ f(x(t)+β v(t)), with initial conditions x(0)=x_0, v(0)=v_0. Let us consider F:^d×^d→^d×^d defined by F(x,y)=(v, -cv -∇ f(x+β v)). Defining z(t)=(x(t),v(t)) and z_0=(x_0,v_0)∈^2d , then (<ref>) is equivalent to ż(t) =F(z(t)), z(0) =z_0 . We know from above that under our conditions, the solution trajectory z(t) converges (as t→ +∞) to an equilibrium point of F, and the set of equilibria is {(x̂,0): x̂∈crit(f)}. Following the same ideas as in the proof of Theorem <ref>, first, we will prove that each equilibrium point of F is hyperbolic. We first compute the Jacobian J_F(x,y)=[ 0_d × d I_d; -∇^2 f(x+β v) -c I_d-β∇^2 f(x+β v) ]. Let ẑ=(x̂,0), where x̂∈crit(f). Then the eigenvalues of J_F(ẑ) are characterized by the solutions on λ∈ℂ of ([ -λ I_d I_d; -∇^2 f(x̂) -(λ+c)I_d-β∇^2 f(x̂) ])=0. By Lemma <ref>, (<ref>) is equivalent to ((1+λβ)∇^2 f(x̂)+(λ^2+λ c)I_d)=0. This is the exact same equation as (<ref>). Thus the rest of the analysis goes as in the proof of Theorem <ref>. §.§.§ Convergence rate We now give asymptotic convergence rates on the objective and trajectory. Consider the setting of Theorem <ref> with f being also definable. Recall the function E from (<ref>), which is also definable, and denote ψ its desingularizing function and Ψ any primitive of -ψ'^2. Then, x(t) converges (as t→ +∞) to x_∞∈crit(f). Denote Ṽ(t) E(x(t),βẋ(t),ẋ(t))-f(x_∞). 
Then, the following rates of convergence hold: * If lim_t→ 0Ψ (t)∈, we have E(x(t),ẋ(t),β∇ f(x(t))) converges to f(x_∞) in finite time. * If lim_t→ 0Ψ (t)=+∞, there exists some t_1 ≥ 0 such that Ṽ(t)=𝒪(Ψ^-1(t-t_1)) Moreover, ‖ x(t)-x_∞‖=𝒪(ψ∘Ψ^-1(t-t_1)) Analogous to Theorem <ref>. When f has the Łojasiewicz property, we get the following corollary of Theorem <ref>. Consider the setting of Theorem <ref> where now f satisfies the Łojasiewicz inequality with desingularizing function ψ_f(s)=c_f s^1-q, q ∈ [0,1[, c_f > 0. Then there exists some t_1 > 0 such that: * If q ∈ [0,1/2], then Ṽ(t)=𝒪(exp(-(t-t_1))) and ‖ x(t)-x_∞‖=𝒪(exp(t-t_1/2)) * If q ∈ ]1/2,1[, then Ṽ(t)=𝒪((t-t_1)^-1/2q-1) and ‖ x(t)-x_∞‖=𝒪((t-t_1)^-1-q/2q-1) Analogous to Corollary <ref>. §.§ Algorithmic scheme In this section, we will study the properties of an algorithmic scheme derived from the following explicit discretization discretization of (<ref>) with step-size h>0 and for k≥ 1: x_k+1-2x_k+x_k-1/h^2+γ(kh)x_k+1-x_k/h+∇ f(x_k+βx_k-x_k-1/h)=0. This is equivalently written as ISIHD-Disc y_k =x_k+α_k(x_k-x_k-1), x_k+1 =y_k-s_k∇ f(x_k+β'(x_k-x_k-1)), with initial conditions x_0,x_1 ∈^d, where α_k1/1+γ_k h, s_k h^2α_k and β'β/h. §.§.§ Global convergence and trap avoidance We have the following result which characterizes the asymptotic behaviour of algorithm (<ref>), which shows that the latter enjoys the same guarantees as (<ref>) given in Theorem <ref>. We will again require that ∇ f is globally Lipschitz-continuous. Let f:^d→ satisfying (<ref>) with ∇ f being globally L-Lipschitz-continuous. Consider algorithm (<ref>) with h>0, β≥ 0 and c≤γ_k≤ C for some c,C > 0 and for every k∈. Then the following holds: * If β+h/2<c/L, then (‖∇ f(x_k)‖)_k∈∈ℓ^2(), and (‖ x_k+1-x_k‖)_k∈∈ℓ^2(), in particular lim_k→ +∞‖∇ f(x_k)‖=0. * Moreover, if x_k is bounded and f is definable, then (‖ x_k+1-x_k‖)_k∈∈ℓ^1() and x_k converges (as k→ +∞) to a critical point of f. * Furthermore, if γ_k≡ c>0, 0<β<c/L, β≠1/c, and h<min2(c/L-β),1/Lβ, then for almost all x_0, x_1 ∈^d, x_k converges (as k→ +∞) to a critical point of f that is not a strict saddle. Consequently, if f satisfies the strict saddle property, for almost all x_0, x_1∈^d, x_k converges (as k→ +∞) to a local minimum of f. * Let v_k x_k-x_k-1, α̅1/1+ch, α1/1+Ch, s̅=h^2α̅, s=h^2α, so α≤α_k≤α̅ and s≤ s_k≤s̅ for every k∈. Proceeding as in the proof of Theorem <ref>, we have by definition that for k∈^* x_k+1=_x∈^d1/2‖ x-(y_k-s_k∇ f(x_k+β'v_k))‖^2, and 1-strong convexity of x↦1/2‖ x-(y_k-s_k∇ f(x_k+β'v_k))‖^2 then gives 1/2‖ x_k+1-(y_k-s_k∇ f(x_k+β'v_k))‖^2≤1/2‖ x_k-(y_k-s_k∇ f(x_k+β'v_k))‖^2-1/2‖ x_k+1-x_k‖^2. Expanding and rearranging, we obtain ⟨∇ f(x_k+β'v_k),v_k+1⟩≤ -‖ v_k+1‖^2/s_k+1/h^2⟨ v_k,v_k+1⟩. Combining this with the descent lemma of L-smooth functions applied to f, we arrive at f(x_k+1) ≤ f(x_k)+⟨∇ f(x_k),v_k+1⟩+L/2‖ v_k+1‖^2 =f(x_k)+⟨∇ f(x_k)-∇ f(x_k+β'v_k),v_k+1⟩+⟨∇ f(x_k+β'v_k),v_k+1⟩+L/2‖ v_k+1‖^2 ≤ f(x_k)+(β'L+1/h^2)‖ v_k‖‖ v_k+1‖-(1/s̅-L/2)‖ v_k+1‖^2. Where we have used that the gradient of f is L-Lipschitz and Cauchy-Schwarz inequality in the last bound. Denote α̃=β'L+1/h^2. We can check that since h<2c/L, then 0<s̅<2/L and the last term of the inequality is negative. Using Young's inequality we have that for ε>0: f(x_k+1) ≤ f(x_k)+α̃^2/2ε‖ v_k‖^2+ε‖ v_k+1‖^2/2-(1/s̅-L/2)‖ v_k+1‖^2. Or equivalently, f(x_k+1)+α̃^2/2ε‖ v_k+1‖^2≤ f(x_k)+α̃^2/2ε‖ v_k‖^2+[ε/2+α̃^2/2ε-(1/s̅-L/2)]‖ v_k+1‖^2. In order to make the last term negative, we impose ε/2+α̃^2/2ε<1/s̅-L/2 . 
Minimizing for ε at the left-hand side we obtain ε=α̃ and the condition to satisfy is s̅<2/2α̃+L. Recalling the definitions of s̅,α̃,β', this is equivalent to h^2/1+ch<2/2(Lβ/h+1/h^2)+L 2Lβ h+2+Lh^2<2+2ch . Simplifying, this reads β+h/2<c/L, which is precisely what we have assumed. Let δ=(1/s̅-L/2)-α̃>0, then f(x_k+1)+α̃/2‖ v_k+1‖^2≤ f(x_k)+α̃/2‖ v_k‖^2-δ‖ v_k+1‖^2. Toward our Lyapunov analysis, define now V_k=f(x_k)+α̃/2‖ v_k‖^2 for k∈^*. In view of (<ref>), V_k obeys V_k+1≤ V_k - δ‖ v_k+1‖^2. and thus V_k is non-increasing. Since it is also bounded from below, V_k converges to a limit, say L̃. Summing (<ref>) over k∈^*, we get that (‖ v_k+1‖)_k∈∈ℓ^2(), hence lim_k→ +∞‖ v_k‖=0. Besides, since α̅ < 1 ‖∇ f(x_k+β'v_k)‖=1/s_k‖ x_k+1-y_k‖ ≤1/s(‖ x_k+1-x_k‖+‖ x_k-y_k‖) ≤1/s(‖ v_k+1‖+α̅‖ v_k‖) ≤1/s(‖ v_k+1‖+‖ v_k‖) , which implies ‖∇ f(x_k+β'v_k)‖^2≤δ_2(‖ v_k+1‖^2+‖ v_k‖^2), where δ_2=2/s^2. Consequently (‖∇ f(x_k+β'v_k)‖)_k∈∈ℓ^2(), and ‖∇ f(x_k)‖^2 = 2(‖∇ f(x_k)-∇ f(x_k+β'v_k)‖^2+‖∇ f(x_k+β'v_k)‖^2) ≤ 2(L^2β'^2‖ v_k‖^2+‖∇ f(x_k+β'v_k)‖^2). Thus, (‖∇ f(x_k)‖)_k∈∈ℓ^2(), hence lim_k→ +∞‖∇ f(x_k)‖=0. * When x_k is bounded and f is definable, we proceed analogously as in the proof of Theorem <ref> to conclude that (‖ v_k‖)_k∈∈ℓ^1(^*), so x_k is a Cauchy sequence which implies that it has a limit (as k→ +∞) denoted x_∞, which is a critical point of f since lim_k→ +∞‖∇ f(x_k)‖=0. * When γ_k≡ c, we let α_k≡α=1/1+ch,s_k≡ s=h^2α. Let z_k=(x_k,x_k-1), and g:^d×^d→^d×^d defined by g:(x_+,x_-)↦ [(1+α)x_+-α x_- -s∇ f(x_+ +β'(x_+-x_-)),x_+] . (<ref>) is then equivalent to z_k+1=g(z_k). To conclude, we will again use <cit.>, similarly to what we did in the proof of of Theorem <ref>, by checking that: * (J_g(x_+,x_-))≠ 0 for every x_+,x_-∈^d. * 𝒳̂⊂𝒜_g^⋆, where 𝒜_g^⋆{(x,x)∈^2d: x∈crit(f), max_i |λ_i(J_g(x,x))|>1 } and 𝒳̂{(x,x)∈^2d:x∈𝒳^⋆}, with 𝒳^⋆ the set of strict saddle points of f. The Jacobian J_g(x_+,x_-) reads [ (1+α)I_d-s(1+β')∇^2 f(x_+ +β'(x_+-x_-)) -α I_d+sβ'∇^2 f(x_+ +β'(x_+-x_-)); I_d 0_d× d ] . This is a block matrix, where the bottom-left matrix commutes with the upper-left matrix (since is the identity matrix), then by Lemma <ref> (J_g(x_+,x_-))= (α I_d-β's∇^2 f(x_+ +β'(x_+-x_-))). Since the eigenvalues of ∇^2 f(x_+ +β'(x_+-x_-)) are contained in [-L,L]. It is then sufficient that α>β'Ls to have that η-α/β's≠ 0 for every eigenvalue η≠ 0 of ∇^2 f(x_+ +β'(x_+-x_-)). This means that under α>β'Ls, condition <ref> is in force. Requiring α>β'Ls is equivalent to h<1/Lβ, and since we already need h<2(c/L-β), we just ask h to be less than the minimum of the two quantities. Let us check condition <ref>. Let x be a strict saddle point of f, x ∈crit(f) and λ_min(∇^2 f(x))=-η < 0. To characterize the eigenvalues of J_g(x,x) we could use Lemma <ref> as before, however, we will present an equivalent argument. Let η_i ∈, i=1,…,d, be the eigenvalues of ∇^2 f(x). By symmetry of the Hessian, it is easy to see that the 2d eigenvalues of J_g(x,x) coincide with the eigenvalues of the 2 × 2 matrices [ (1+α)-s(1+β')η_i -α+sβ'η_i; 1 0 ] . These eigenvalues are therefore the (complex) roots of λ^2 - λ(1+α)-s(1+β')η_i + α - sβ'η_i = 0 . If λ=β'/β'+1, then (<ref>) becomes (β'/β'+1)^2-(β'/β'+1)(1+α)+α=0. This implies that α=β'/β'+1, or equivalently 1/1+ch=β/β+h. But this contradicts our assumption that β c≠ 1, and thus this case cannot occur. Let us now solve (<ref>) for η_i=-η. Its discriminant is Δ_λ=α^2+2α(η s(1+β')-1)+(η s(1+β')+1)^2-4η sβ', which can be seen as a quadratic equation in α whose discriminant Δ_α=-16η s. 
Since Δ_α < 0 (recall that η, s > 0). Therefore the quadratic equation on α does not have real roots implying that Δ_λ>0. We can then write the solutions of (<ref>), λ=(1+α)+η s(1+β')±√(Δ_λ)/2. Let us examine the largest solution (the one with the plus sign) and show that actually λ>1. Simple algebra shows that this is equivalent to verifying that Δ_λ>(2-(1+α)-η s(1+β'))^2. or, equivalently, (1-α)^2+η^2s^2(1+β')^2-2(1-α)η s(1+β') <Δ_λ =(1+α)^2+η(s-β')^2-4(α-ηβ'). Simple algebra again shows that this inequality is equivalent to 0<4η s, which is always true as η > 0. We have thus shown that 𝒳̂⊂𝒜_g^⋆. Overall, we have checked the two conditions <ref>-<ref> above. Therefore the invariant set {z_1∈^2d: lim_k→ +∞ g^k(z_1)∈𝒳̂} has Lebesgue measure zero. This means that the set of initializations x_0,x_1∈^d for which x_k converges to a strict saddle point of f has Lebesgue measure zero. §.§.§ Convergence rate The asymptotic convergence rate of algorithm (<ref>) for Łojasiewicz functions is given in the following theorem. This shows that (<ref>) enjoys the same asymptotic convergence rates as (<ref>). Consider the setting of Theorem <ref>, where f also satisfies the Łojasiewicz property with exponent q∈ [0,1[. Then x_k → x_∞∈crit(f) as k→ +∞ at the rates: * If q ∈ [0,1/2] then there exists ρ∈ ]0,1[ such that ‖ x_k-x_∞‖=𝒪(ρ^k). * If q ∈ ]1/2,1[ then ‖ x_k-x_∞‖=𝒪k^-1-q/2q-1. Since the Lyapunov analysis of (<ref>) is analogous to that of (<ref>) (though the Lyapunov functions are different), the proof of this theorem is similar to the one of Theorem <ref>. §.§.§ General coefficients As discussed for the explicit case, the discrete scheme (<ref>) rises from a discretization of the ODE (<ref>). However, the parameters α_k, s_k are linked to each other. We now consider (<ref>) where α_k, s_k are independent. Though this would hide somehow the physical interpretation of these parameters, it allows for some flexibility in their choice while preserving the convergence behaviour. Let f:^d→ be satisfying (<ref>) with ∇ f being globally L-Lipschitz-continuous. Consider (α_k)_k∈,(β_k)_k∈,(s_k)_k∈ to be three positive sequences, and the following algorithm with x_0,x_1∈^d: y_k =x_k+α_k(x_k-x_k-1), x_k+1 =y_k-s_k∇ f(x_k+β_k(x_k-x_k-1)). If there exists s̅>0 such that: * 0 < inf_k∈ s_k ≤sup_k∈ s_k≤s̅<2/L; * sup_k∈(β_kL+α_k/s_k) < 1/s̅-L/2. Then the following holds: * (‖∇ f(x_k)‖)_k∈∈ℓ^2(), and (‖ x_k+1-x_k‖)_k∈∈ℓ^2(), hence lim_k→ +∞‖∇ f(x_k)‖=0. * Moreover, if x_k is bounded and f is definable, then (‖ x_k+1-x_k‖)_k∈∈ℓ^1() and x_k converges (as k→ +∞) to a critical point of f. * Furthermore, if α_k≡α,β_k≡β,s_k≡ s, then the previous conditions reduce to α+sL(β+1/2)<1. If, in addition, α≠β/β+1, α> β L s, then for almost all x_0,x_1∈^d, x_k converges (as k→ +∞) to a critical point of f that is not a strict saddle. Consequently, if f satisfies the strict saddle property, for almost all x_0, x_1∈^d, x_k converges (as k→ +∞) to a local minimum of f. Adjusting equation (<ref>) to our setting (not using the dependency of α_k, s_k) we get an analogous proof to the one of Theorem <ref>. We omit the details. § NUMERICAL EXPERIMENTS Before describing the numerical experiments, let us start with a few observations on the computational complexity and memory storage requirement of (<ref>) and (<ref>). The number of gradient access per iteration is the same for Gradient Descent (GD), discrete HBF, (<ref>) and (<ref>) is the same (one per iteration). 
However, the faster convergence (in practice) of inertial methods comes at the cost of storing previous information. For the memory storage requirement per iteration, GD stores only the previous iterate, the discrete HBF and (<ref>) store the two previous iterates, while (<ref>) additionally stores the previous gradient iterate as well. This has to be kept in mind when comparing these algorithms especially for in very high dimensional settings. We will illustrate our findings with two numerical experiments. The first one is the optimization of the Rosenbrock function in ^2, while the second one is on image deblurring. We will apply the proposed discrete schemes (<ref>) and (<ref>) and compare them with gradient descent and the (discrete) HBF. We will call ‖∇ f(x_k)‖ the residual. §.§ Rosenbrock function We will minimize the classical Rosenbrock function, , f: (x,y) ∈^2 ↦ (1-x)^2+100(y-x^2)^2, with global minimum at (x^⋆,y^⋆)=(1,1). We notice that its global minimum is the only critical point. Therefore this function is Morse and thus satisfies the Łojasiewicz inequality with exponent 1/2 (see Remark <ref>). Consider (<ref>) and (<ref>) with the Rosenbrock function as the objective, γ:_+→_+ satisfying (<ref>) (0<c≤γ(t)≤ C<+∞), and 0<β<2c/C^2. By Theorems <ref>-<ref> for (<ref>), and Theorems <ref>-<ref> for (<ref>), we get that the solution trajectories of these dynamics will converge to the global minimum eventually at a linear rate. Due to the low dimensionality of this problem, we could use an ODE solver to show numerically these results. However, we will just the iterates generated by our proposed algorithmic schemes (<ref>) and (<ref>). Although the gradient of the objective is not globally Lipschitz continuous, our proposed algorithmic schemes worked very well for h small enough. This suggests that we may relax this hypothesis in future work, as proposed in <cit.> for GD. We applied (<ref>) and (<ref>) with β∈{0.02, 0.04}, γ(t)≡γ_0=3, h=10^-3 and initial conditions x_0=(-1.5,0), x_1=x_0. We compared our algorithms with GD and HBF (with the same initial conditions) after 2*10^4 iterations: GD x_k+1=x_k-h^2/1+γ_0 h∇ f(x_k), and HBF y_k =x_k+1/1+γ_0 h(x_k-x_k-1), x_k+1 =y_k-h^2/1+γ_0 h∇ f(x_k). The behaviour of all algorithms is depicted in Figures <ref> and <ref>. We can notice that the iterates generated by (<ref>) and (<ref>) oscillate much less towards the minimum than (<ref>), and this damping effect is more notorious as β gets larger. In the case β=0.02, we observe there are still some oscillations, which benefit the dynamic generated by (<ref>) more than the one generated by (<ref>). However, we have the opposite effect in the case β=0.04, where the oscillations are more damped. These three methods ((<ref>), (<ref>), (<ref>)) share a similar asymptotic convergence rate, which is linear as predicted (recall f is Łojasiewicz with exponent 1/2), and they are significantly faster than GD. §.§ Image Deblurring In the task of image deblurring, we are given a blurry and noisy (gray-scale) image b∈^n_x× n_y of size n_x× n_y. The blur corresponds to a convolution with a known low-pass kernel. Let A:^n_x× n_y→^n_x× n_y be the blur linear operator. We aim to solve the (linear) inverse problem of reconstructing u^⋆∈^n_x× n_y from the relation b=Au̅+ξ, where ξ is the noise, that is additive pixel-wise, has 0-mean and is Gaussian. Through this experiment, we used n_x=n_y=256. 
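To make the data model b = Au̅+ξ concrete, the observation can be simulated as follows. The sketch below is ours and makes specific choices that are not specified above (a Gaussian blur implemented with scipy.ndimage.gaussian_filter, reflective boundary handling, a noise level noise_sigma, and a toy piecewise-constant image); it only illustrates the setup and does not reproduce the exact experiment.

import numpy as np
from scipy import ndimage

def make_data(u_true, blur_sigma=2.0, noise_sigma=0.01, seed=0):
    """Simulate b = A(u) + xi with a linear Gaussian blur A and i.i.d. Gaussian noise xi."""
    rng = np.random.default_rng(seed)
    A = lambda u: ndimage.gaussian_filter(u, sigma=blur_sigma, mode="reflect")
    b = A(u_true) + noise_sigma * rng.standard_normal(u_true.shape)
    return A, b

u_true = np.zeros((256, 256)); u_true[64:192, 64:192] = 1.0   # toy 256 x 256 image
A, b = make_data(u_true)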
In order to reduce noise amplification when inverting the operator A, we solve a regularized optimization problem to recover u^⋆ as accurately as possible. As natural images can be assumed to be smooth except for a (small) edge-set between objects in the image, we use a non-convex logarithmic regularization term that penalizes finite forward differences in the horizontal and vertical directions of the image, implemented as linear operators K_x,K_y:^n_x× n_y→^n_x× n_y with Neumann boundary conditions. In summary, we aim to solve the following problem:
min_u∈^n_x× n_y f(u), f(u) = 1/2‖ Au-b‖^2+μ/2∑_i=1^n_x∑_j=1^n_ylog (ρ+(K_x u)_i,j^2+(K_y u)_i,j^2),
where μ and ρ are positive constants for regularization and numerical stability, set to 5 · 10^-5 and 10^-3, respectively. The function f is definable as the sum of compositions of definable mappings, and ∇ f is Lipschitz continuous.
To solve the above optimization problem, we used (<ref>) and (<ref>) with parameters β=1.3, γ_k ≡ 0.25, h=0.5, and initial conditions x_0=x_1=0_n_x× n_y. We compared both algorithms with the baseline algorithms (<ref>) and (<ref>) (with the same initial conditions). All algorithms were run for 250 iterations. The results are shown in Figure <ref> and Figure <ref>. In Figure <ref>, the original image u̅ is shown on the left. In the middle, we display the blurry and noisy image b. Finally, the image recovered by (<ref>) is shown on the right. In Figure <ref>, we see that the residual plots of (<ref>) and (<ref>) overlap. Again, as expected, the trajectories of (<ref>) and (<ref>) oscillate much less than that of (<ref>), which is a very desirable feature in practice. At the same time, (<ref>) and (<ref>) seem to converge faster, though (<ref>) eventually shows a similar convergence rate. Again, GD is the slowest. Overall, (<ref>) and (<ref>) seem to take the best of both worlds: small oscillations and a faster asymptotic convergence rate. In terms of the resolution of the final images returned by (<ref>) and (<ref>), there are no significant differences to the eye. However, since the iterates generated by (<ref>) (and also by (<ref>) and (<ref>)) reach a lower residual (a “better” solution) than (<ref>) after practically any number of iterations, we prefer the former methods.
§ CONCLUSION AND PERSPECTIVES
We conclude that:
* Under definability conditions on the objective and suitable conditions on γ, we obtain convergence of the trajectory of (<ref>) and (<ref>). Besides, in the autonomous setting and under a Morse condition, the trajectory almost surely converges to a local minimum of the objective.
* We obtain analogous properties for the respective proposed algorithmic schemes (<ref>) and (<ref>).
* The inclusion of the term β helps to reduce oscillations towards critical points and, when chosen appropriately, does so without substantially reducing the speed of convergence of the case β=0, which corresponds to the Heavy Ball with Friction method.
* The selection of β is important. If it is chosen too close to zero, it may not significantly reduce oscillations. Conversely, if it is chosen too large (even within the theoretical bounds), the trade-off for reduced oscillations might be a worse convergence rate.
Several open problems are worth investigating in the future:
* Extending our results to the (non-Euclidean) Bregman geometry: we could adapt (<ref>) and its properties to the Bregman case. For a differentiable function g:dom(g)→, we define D_g:dom(g)×int.dom(g)→ as:
D_g(x,y) = g(x)-g(y)-⟨∇ g(y),x-y⟩.
We consider a function ϕ:dom(ϕ)→ such that:
* ϕ is differentiable;
* ϕ is μ-strongly convex with respect to some norm;
* there exists L_f>0 such that L_fϕ-f is a convex function.
We then consider the algorithm:
ISEHD-Disc-Breg
y_k = x_k+α_k(x_k-x_k-1)-β_k(∇ f(x_k)-∇ f(x_k-1)),
x_k+1 = argmin_x∈^d ⟨∇ f(x_k),x-y_k⟩+1/s_kD_ϕ(x,y_k).
We notice that we recover (<ref>) when ϕ(x)=‖ x‖_2^2/2. By the first-order optimality condition, the second step equivalently reads ∇ϕ(x_k+1)=∇ϕ(y_k)-s_k∇ f(x_k), and a descent inequality analogous to (<ref>) is expected to follow from the three-points identity (see <cit.>) together with the extended descent lemma, which states that the last assumption on f and ϕ is equivalent to D_f(·,·)≤ L_f D_ϕ(·,·) (see <cit.>); completing the corresponding Lyapunov analysis is left for future work. A sketch of this Bregman variant is given after this list.
* Replacing the global Lipschitz continuity assumption on the gradient in Theorems <ref> and <ref> with a local Lipschitz continuity assumption, as proposed in <cit.>.
* Proposing discrete schemes of (<ref>) and (<ref>) with a variable step-size, which can be computed, for instance, by backtracking.
* Optimizing the linear convergence rate of Theorem <ref> in the case q=1/2 with respect to the step-size h; this amounts to minimizing an explicit constant M_1(h) over the admissible interval ]0,2(c/L-β)[, which leads to a quartic equation in h whose study we leave for future work.
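As announced in the first item above, here is a minimal sketch (ours, Python/NumPy) of (ISEHD-Disc-Breg) with constant coefficients and a Legendre-type ϕ whose gradient map has an explicit inverse; the helper names grad_phi and grad_phi_inv are ours. It only uses the closed-form optimality condition ∇ϕ(x_k+1)=∇ϕ(y_k)-s∇ f(x_k) mentioned above and reduces to (<ref>) when ϕ=‖·‖^2/2.

import numpy as np

def isehd_disc_breg(grad_f, grad_phi, grad_phi_inv, x0, x1,
                    alpha, beta_tilde, s, n_iter):
    """Bregman variant: x_{k+1} solves grad_phi(x) = grad_phi(y_k) - s * grad_f(x_k)."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    g_prev = grad_f(x_prev)
    for _ in range(n_iter):
        g = grad_f(x)
        y = x + alpha * (x - x_prev) - beta_tilde * (g - g_prev)
        x_prev, g_prev = x, g
        x = grad_phi_inv(grad_phi(y) - s * g)
    return x

# Euclidean case: phi(x) = ||x||^2 / 2, so grad_phi = grad_phi_inv = identity and the
# scheme coincides with (ISEHD-Disc) with constant coefficients; here f(x) = ||x - 1||^2 / 2.
identity = lambda z: z
x_star = isehd_disc_breg(lambda x: x - 1.0, identity, identity,
                         np.zeros(3), np.zeros(3),
                         alpha=0.6, beta_tilde=0.01, s=0.05, n_iter=500)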
§ APPENDIX
The following determinant identity for block matrices is used repeatedly in the proofs of Theorems <ref> and <ref>.
<cit.> Consider A,B,C,D ∈^d× d such that AC=CA. Then
det([ A B; C D ])=det(AD-CB).
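The identity in the lemma is easy to check numerically; the snippet below (ours, Python/NumPy) draws a random A and builds C as a polynomial in A so that AC=CA holds by construction.

import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.standard_normal((d, d))
C = 2.0 * A + A @ A                      # commutes with A by construction
B = rng.standard_normal((d, d))
D = rng.standard_normal((d, d))
M = np.block([[A, B], [C, D]])
print(np.linalg.det(M), np.linalg.det(A @ D - C @ B))   # the two values agree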
Emergent modified gravity: Polarized Gowdy model on a torus Martin Bojowald[e-mail address: bojowald@psu.edu] and Erick I. Duque [e-mail address: eqd5272@psu.edu] Institute for Gravitation and the Cosmos, The Pennsylvania State University, 104 Davey Lab, University Park, PA 16802, USA § ABSTRACT New covariant theories of emergent modified gravity exist not only in spherically symmetric models, as previously found, but also in polarized Gowdy systems that have a local propagating degree of freedom. Several explicit versions are derived here, depending on various modification functions. These models do not have instabilities from higher time derivatives, and a large subset is compatible with gravitational waves and minimally coupled massless matter fields travelling at the same speed. Interpreted as models of loop quantum gravity, covariant Hamiltonian constraints derived from the covariance conditions found in polarized Gowdy systems are more restricted than those in spherical symmetry, requiring new forms of holonomy modifications with an anisotropy dependence that has not been considered before. Assuming homogeneous space, the models provide access to the full anisotropy parameters of modified Bianchi I dynamics, in which case different fates of the classical singularity are realized depending on the specific class of modifications. § INTRODUCTION The canonical formulation of spherically symmetric general relativity has recently been shown <cit.> to allow a larger class of modifications than is suggested by the more common setting of covariant action principles. In this framework of emergent modified gravity, it is possible to couple perfect fluids <cit.> as well as scalar matter <cit.> to the new space-time geometries, including local degrees of freedom in the latter case. Here, we show that it is also possible to extend spherical symmetry to a polarized Gowdy symmetry that includes local gravitational degrees of freedom. This extension makes it possible to study properties of gravitational waves in this new set of covariant space-time theories. Building on previous canonical developments, starting with the classic <cit.> and using more recent contributions <cit.>, emergent modified gravity constructs consistent gravitational dynamics and corresponding space-time geometries by modifying the Hamiltonian constraint of general relativity and implementing all covariance conditions. A candidate for the spatial metric of a space-time geometry is provided by the structure function in the Poisson bracket of two Hamiltonian constraints, which is required to be proportional to the diffeomorphism constraint as one of the consistency conditions. Canonical gauge transformations of the candidate spatial metric must then agree with coordinate transformations in a compatible space-time geometry, forming the second consistency condition that had been formulated for the general case and analyzed for the first time in <cit.>. These constructions allow for the possibility that the spatial metric (or a triad) is not one of the fundamental fields of a phase-space formulation. It is derived from Hamilton's equations generated by the constraints and not presupposed, giving it the status of an emergent geometrical object. This feature is the main difference with standard action principles in metric or other formulations and makes this approach to modified gravity more general than previous constructions. 
Examples of new physical implications include the possibility of non-singular black-hole solutions <cit.>, covariant MOND-like effects <cit.>, and new types of signature change <cit.>. The constructions of the present paper lead to the first model of emergent modified gravity that does not obey spherical symmetry. Nevertheless, spherical symmetry can be realized as a special case, allowing us to draw conclusions about how generic specific features seen in the more symmetric context are within a broader setting. An example of interest is the form of holonomy-type modifications that are often used in order to model potential effects from loop quantum gravity. The specific form of these modifications within a consistent and covariant set of equations is restricted compared with what had been assumed previously in loop constructions. The extension to polarized Gowdy models performed here shows that compatible modifications require significant deviations from what might be suggested by loop quantum gravity. In particular, the holonomy length for strictly periodic modifications of the extrinsic-curvature dependence does not directly depend on the volume or area of a symmetry orbit (all of space in a homogeneous cosmological model or a sphere at constant radius in black-hole models), but rather on its anisotropy parameters. General covariance therefore rules out the possibility that the holonomy length decreases as space or a spherical orbit expands, which would be a prerequisite for a nearly constant discreteness scale that does not increase to macroscopic sizes as the universe expands. Nevertheless, additional modification functions can be used in order to implement a dynamical suppression of holonomy modifications on classical scales, as discussed in detail in <cit.> for spherically symmetric models. The traditional picture of models of loop quantum gravity therefore has to be corrected in order to be compatible with a consistent space-time geometry. Emergent modified gravity guides the way to a new understanding by a systematic classification of possible space-time modifications in canonical form. In addition, the new gravitational models found here are important in their own right because they have covariant equations with modifications that do not require higher-derivative terms and corresponding instabilities <cit.>. They are therefore potential alternatives to general relativity that could be used in comparisons with observations, provided the symmetry assumptions can be relaxed further. Polarized Gowdy symmetries constitute a first step in this direction, giving access to some properties of gravitational waves. In particular, we show that there is a class of modifications that implies the same propagation speed for gravitational waves and massless scalar matter travelling on the same background. Unlike spherically symmetric models, which have a spatially homogeneous subset of Kantowski-Sachs models with a single anisotropy parameter, polarized Gowdy models give full access to the Bianchi I model with two anisotropy parameters. It is therefore possible to perform a more complete analysis of the big-bang singularity, which may be avoided depending on the type of modifications used. As a characteristic property, the classical Kasner exponents are preserved at large volume, and a non-singular transition from collapse to expansion happens at the same time for all three spatial directions. 
We will present a detailed analysis of these questions in Section <ref>, after a brief review of canonical and emergent modified gravity in Section <ref> and their application to polarized Gowdy models in Sections <ref> and <ref> with a summary of different classes of modifications in Section <ref>. Implications for covariant holonomy modifications in models of loop quantum gravity can be found in Section <ref>. § CLASSICAL THEORY The classical polarized Gowdy system <cit.> is defined by space-time line elements of the form d s^2 = - N^2 d t^2 + q_θθ ( dθ + N^θ d t )^2 + q_x x d x^2 + q_y y d y^2 with functions N, N^θ and q_ab depending only on t and θ. All three spatial coordinates x, y and θ take values in the range [ 0 , 2 π) for the torus model with spatial slices Σ≅ T^3 = S^1 × S^1 × S^1. Equivalently, the spatial metric components q_ab can be parameterized by q_θθ = E^x E^y/ε , q_x x = E^y/E^xε , q_y y = E^x/E^yε using the components E^x, E^y and ε of a densitized triad E_i^a σ_i∂/∂ x^a= εσ_3∂/∂θ+ E^xσ_1∂/∂ x+ E^yσ_2 ∂/∂ y with Pauli matrices σ_i. For some purposes, it is conventional to write the metric in the diagonal case (N^θ=0) in the form d s^2 = e^2 a ( - d T^2 + dθ^2 ) + T ( e^2 W d x^2 + e^-2 W d y^2 ) with a new time coordinate T. This conventional metric is associated to the canonical metric in a gauge defined by ε = T and N=√(q_θθ), identifying N^2=q_θθ =: e^2 a and W = ln√(E^y / E^x). If E^x=E^y or W=0, the geometry has an additional rotational symmetry. §.§ Canonical formulation The densitized-triad components are canonically conjugate to components of extrinsic curvature, implying canonical pairs (K_x , E^x), (K_y , E^y), and (𝒜 , ε) and the symplectic structure Ω = 1/κ̃∫ dθ( d K_x ∧ d E^x + d K_y ∧ d E^y + dε∧ d𝒜) with κ̃ = κ / (4 π^2) = 2 G / π in terms of Newton's constant G. We will work in units such that κ̃=1. The classical Hamiltonian and diffeomorphism constraints with a cosmological constant are <cit.> H = - 1/√(E^x E^y ε)[ - ε E^x E^y Λ + K_x E^x K_y E^y + ( K_x E^x + K_y E^y ) ε𝒜 + 1/2ε^2/E^x E^y (E^x)' (E^y)' + 1/2ε/E^x (E^x)' ε' + 1/2ε/E^y (E^y)' ε' - 1/4ε^2/(E^y)^2 ((E^y)')^2 - 1/4ε^2/(E^x)^2((E^x)')^2 - 1/4 (ε')^2 - εε”] = - 1/√(E^x E^y ε)( - ε E^x E^y Λ + E^x K_x E^y K_y + (E^x K_x + E^y K_y) ε𝒜) - 1/41/√(E^x E^y ε)( (ε')^2 - 4 (ε (ln√(E^y/E^x) )')^2 ) + (√(ε)ε'/√(E^x E^y))' , and H_θ = E^x K_x' + E^y K_y' - 𝒜ε' where the primes are θ derivatives. The smeared constraints have Poisson brackets { H_θ [N^θ] , H_θ [M^θ] } = - H_θ [ M^θ (N^θ)' - N^θ (M^θ)'] , { H [N] , H_θ [M^θ] } = - H[M^θ N'] , { H [N] , H[M] } = - H_θ[ q^θθ( M N' - N M' )] of hypersurface-deformation form, with structure function q^θθ = ε/(E^x E^y) directly given by the inverse metric component in the inhomogeneous direction. From general properties of canonical gauge systems <cit.> it then follows that the gauge transformations for the lapse function N and shift vector N^θ are given by δ_ϵ N = ϵ̇^0 + ϵ^θ N' - N^θ (ϵ^0)' and δ_ϵ N^θ = ϵ̇^θ + ϵ^θ (N^θ)' - N^θ (ϵ^θ)' + q^θθ(ϵ^0 N' - N (ϵ^0)' ) . In the classical theory it is clear that the inverse of the structure function, q_θθ=1/q^θθ, obeys a covariance condition as a component of the space-time metric. More generally <cit.>, covariance conditions can be directly formulated for phase-space functions such as a structure function in a modified theory. 
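As a quick illustrative check of the triad parameterization (a minimal sketch, assuming Python with sympy; the symbol names are ours and not part of the derivation), the following code verifies that q_θθ, q_xx and q_yy have determinant E^x E^y ε and that, with ε = T and W = ln√(E^y/E^x), the homogeneous components take the conventional form T e^{±2W}.

```python
import sympy as sp

Ex, Ey, eps = sp.symbols('E_x E_y varepsilon', positive=True)

# Spatial metric components in terms of the densitized-triad components
q_thth = Ex * Ey / eps
q_xx = (Ey / Ex) * eps
q_yy = (Ex / Ey) * eps

# The determinant of the spatial metric equals E^x E^y varepsilon
assert sp.simplify(q_thth * q_xx * q_yy - Ex * Ey * eps) == 0

# With W = ln sqrt(E^y/E^x) and eps = T, the homogeneous components take
# the conventional form T e^{2W} and T e^{-2W}
W = sp.log(sp.sqrt(Ey / Ex))
assert sp.simplify(q_xx - eps * sp.exp(2 * W)) == 0
assert sp.simplify(q_yy - eps * sp.exp(-2 * W)) == 0
print("triad parameterization checks passed")
```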
They implement the general condition that gauge transformations of any candidate space-time metric component, generated by the canonical constraints, must be of the form of a Lie derivative by a space-time vector field. Using explicit expressions for gauge transformations generated by the constraints on a given phase space, this general condition can be written as a set of partial differential equations that the constraints have to obey. These covariance conditions, derived in <cit.> for spherical symmetry, are more complicated for polarized Gowdy models because the phase space is larger and the line element is a more complicated function of the phase-space variables. For the homogeneous component q_x x we obtain the conditions 1/E^y( ∂ H/∂ K_y' - 2 (∂ H/∂ K_y”)' ) - 1/E^x( ∂ H/∂ K_x' - 2 (∂ H/∂ K_x”)' ) + 1/ε( ∂ H/∂𝒜' - 2 (∂ H/∂𝒜”)' ) |_ O.S. = 0 and 1/E^y∂ H/∂ K_y” - 1/E^x∂ H/∂ K_x” + 1/ε∂ H/∂𝒜”|_ O.S. = 0 , where “O.S.” indicates that the equations are required to hold on-shell, when constraints and equations of motion are satisfied. For our modified constraints, we assume spatial derivatives up to second order; otherwise there would be additional terms in these equations. The x ↔ y exchange symmetry of the constraint allows us to simplify these on-shell conditions to ∂ H/∂𝒜' = ∂ H/∂𝒜” = 1/E^y∂ H/∂ K_y' - 1/E^x∂ H/∂ K_x' = 1/E^y∂ H/∂ K_y” - 1/E^x∂ H/∂ K_x” = 0 , which is clearly satisfied by the classical constraint even off-shell. The same condition is obtained from the other homogeneous component, q_y y. For the inhomogeneous component, the covariance condition reads ∂({ q^θθ , H[ϵ^0] })/∂ (ϵ^0)'|_O.S. = ∂({ q^θθ , H[ϵ^0] })/∂ (ϵ^0)”|_O.S. = … = 0 , which is also satisfied by the classical constraint because it does not contain any derivatives of K_x, K_y, or 𝒜 that would introduce a dependence of the Poisson brackets on spatial derivatives of ϵ^0 upon integrating by parts. For this result, it is important to use the classical property that the structure function q^θθ is independent of the canonical variables conjugate to the triad components. This property is no longer required in emergent modified gravity. The gauge transformations of lapse and shift, (<ref>) and (<ref>), and the realization of the covariance conditions, (<ref>) and (<ref>), ensure that the space-time line element (<ref>) is invariant, or the space-time metric g_μν is covariant in the sense that the canonical gauge transformations of the metric reproduce space-time diffeomorphisms on-shell: We have δ_ϵ g_μν|_O.S. = ℒ_ξ g_μν|_O.S. , where the gauge functions, (ϵ^0,ϵ^θ), on the left-hand side are related to the 2-component vector generator, ξ^μ = (ξ^t,ξ^θ), of the diffeomorphism on the right-hand side by ξ^μ = ϵ^0 n^μ + ϵ^θ s^μ = ξ^t t^μ + ξ^θ s^μ with components ξ^t = ϵ^0/N , ξ^θ = ϵ^θ - ϵ^0/N N^θ . §.§ New variables It is convenient to perform the canonical transformation P_W = K_x E^x - K_y E^y , W = ln√(E^y/E^x) a = √(E^x E^y) , K = K_x E^x + K_y E^y/√(E^x E^y) with W and K as the configuration variables, and P_W̅ and a̅ their respective conjugate momenta. The canonical pair (𝒜,ε) is left unchanged by this transformation. 
The diffeomorphism constraint in these variables is form-invariant, H_θ = E^x K_x' + E^y K_y' - 𝒜ε' = 1/2 E^x (P_W + K √(E^x E^y)/E^x)' + 1/2 E^y (- P_W + K √(E^x E^y)/E^y)' - 𝒜ε' = a K' + P_WW' - 𝒜ε' , while the Hamiltonian constraint (<ref>) reads H = - 1/√(E^x E^y ε)( - ε E^x E^y Λ + E^x K_x E^y K_y + (E^x K_x + E^y K_y) ε𝒜) - 1/41/√(E^x E^y ε)( (ε')^2 - 4 (ε (ln√(E^y/E^x) )')^2 ) + (√(ε)ε'/√(E^x E^y))' = - √(ε)[ a( - Λ + K^2/4 ε - 1/4 εP_W^2/a^2 + K 𝒜/a) - ε(W')^2/a - 1/4 ε(ε')^2/a + a' ε'/a^2 - ε”/a] = - √(ε)[ a( - Λ + K^2/4 ε + K 𝒜/a) - 1/4 ε(ε')^2/a + a' ε'/a^2 - ε”/a] + √(q^θθ)/2( P_W^2/2 ε + 2 ε (W')^2 ) in these new variables. The first parenthesis resembles the Hamiltonian constraint of a spherically symmetric model, while the last term expresses W in the form of a scalar field. This relationship will be discussed in more detail in Section <ref>. The space-time metric d s^2 = - N^2 d t^2 + q_θθ ( dθ + N^θ d t )^2 + q_x x d x^2 + q_y y d y^2 now has the spatial components q_θθ = a^2/ε , q_x x = e^2 Wε , q_y y = e^- 2 Wε . The new variables therefore closely resemble the conventional choice used in (<ref>). §.§ Symmetries and observables Given a potentially large class of modifications, it is useful to impose guiding principles such as the preservation of important symmetries of the classical system. For the models considered here, there are discrete as well as continuous symmetries. §.§.§ Discrete symmetry The constraints (<ref>) are symmetric under the exchange E^x ↔ E^y , K_x ↔ K_y, while the full line element (<ref>) has the same symmetry provided the coordinates are exchanged too, x ↔ y. The complete discrete transformation is then given by E^x ↔ E^y , K_x ↔ K_y , x ↔ y . This important symmetry implies the existence of an x-y plane of wave fronts, in which the two independent directions are interchangeable (while we do not have isotropy in this plane unless E^x=E^y). The modified theory should therefore retain this symmetry as an important characterization of the polarized Gowdy system. In the new variables, the discrete transformation takes the form P_W→ - P_W ,W̅→ - W̅ , x ↔ y , which is a symmetry of the system (<ref>)–(<ref>). §.§.§ Continuous symmetries and related observables Field observable: Another advantage of the new variables is that the constraint (<ref>) is manifestly invariant under the transformation W̅→W̅ + ω where ω is a constant. Therefore, the phase-space functional G [ω] = ∫ dθ ω P_W̅ , is a symmetry generator: {G[ω] , H[N]} = {G[ω] , H_θ[N^θ]} = 0 where we neglect boundary terms. This property in turn implies that G[ω] is a conserved global charge because Ġ[ω] = {G[ω] , H[N] + H_θ[N^θ]} = 0. Furthermore, as discussed in <cit.>, the boundary terms that survive under the transformation of the local charge take the form Ġ = - ∂_a J^a, which takes the form ∂_μ J^μ = ∇_μ J^μ = 0 of a covariant conservation law for a space-time densitized 4-current with components J^t = G = P_W̅ , J^a = - ( N ∂ H/∂W̅')' = - 2 ε^3/2W'/a . Mass observable: In the limit of P_W̅=W̅= 0, the expression ℳ = √(ε)/2( K^2 - (ε'/2 a̅)^2 + Λε/3) is a Dirac observable. §.§ Analogy with spherical symmetry In the new variables, the constraint (<ref>) is close to the spherically symmetric constraint coupled to a scalar field. In this subsection we will point out in detail how the two models are related. 
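That this change of variables is canonical can be verified directly. The following minimal symbolic sketch (assuming Python with sympy; the bracket routine and symbol names are illustrative additions) confirms that (W, P_W) and (K, a) form canonical pairs with respect to the original pairs (K_x, E^x) and (K_y, E^y), with all cross-brackets vanishing.

```python
import sympy as sp

Kx, Ky = sp.symbols('K_x K_y', real=True)
Ex, Ey = sp.symbols('E_x E_y', positive=True)

# Poisson bracket on the phase space with pairs (K_x, E^x), (K_y, E^y)
def pb(f, g):
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in [(Kx, Ex), (Ky, Ey)])

# New variables of the polarized Gowdy model
W  = sp.log(sp.sqrt(Ey / Ex))
PW = Kx * Ex - Ky * Ey
a  = sp.sqrt(Ex * Ey)
K  = (Kx * Ex + Ky * Ey) / sp.sqrt(Ex * Ey)

# Canonical pairs: {W, P_W} = 1, {K, a} = 1, and all cross-brackets vanish
assert sp.simplify(pb(W, PW) - 1) == 0
assert sp.simplify(pb(K, a) - 1) == 0
for f, g in [(W, K), (W, a), (PW, K), (PW, a)]:
    assert sp.simplify(pb(f, g)) == 0
print("the transformation is canonical")
```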
In a spherically symmetric model, the space-time line element can always be written as d s^2 = - N^2 d t^2 + q_x x^ sph ( d x + N^x d t )^2 + q_ϑϑ^ sph dΩ^2 where dΩ^2= dϑ^2+sin^2ϑ dφ^2 in spherical coordinates. As initially developed for models of loop quantum gravity <cit.>, it is convenient to parameterize the metric components q_xx^ sph and q_ϑϑ^ sph as q_x x^ sph = (E^φ)^2/E^x , q_ϑϑ^ sph = E^x , where E^x and E^φ are the radial and angular densitized-triad components, respectively. We assume E^x>0, fixing the orientation of space. The canonical pairs for spherically symmetric classical gravity are given by (K_φ , E^φ) and (K_x , E^x) where 2K_x and K_φ are components of extrinsic curvature. We have a further canonical pair (ϕ,P_ϕ) if scalar matter is coupled to the gravitational system. The basic Poisson brackets are given by { K_x (x) , E^x (y)} = { K_φ(x) , E^φ (y) } = {ϕ(x) , P_ϕ (y) } = δ (x-y) . (Compared with other conventions, our scalar phase-space variables are divided by √(4π), absorbing the remnant of a spherical integration. We use units in which Newton's constant, G, equals one. This convention is formally different from what we are using in Gowdy models, where 2G/π equals one. The discrepancy is necessary in order to take into account the difference in coordinate areas for the symmetry orbits, given by 4π^2 in the toroidal Gowdy model and 4π in spherical symmetry, as well as the varying multiplicity of independent degrees of freedom in the homogeneous directions.) The Hamiltonian constraint is given by H^ sph = - √(E^x)/2[ E^φ( 1/E^x + K_φ^2/E^x + 4 K_φK_x/E^φ) - 1/4 E^x((E^x)')^2/E^φ + (E^x)' (E^φ)'/(E^φ)^2 - (E^x)”/E^φ] + 1/2( √(q^xx_ sph)/E^x P_ϕ^2 + E^x √(q^xx_ sph) (ϕ')^2 + √(q_xx^ sph) E^x V (ϕ) ) , with a scalar potential V(ϕ) (or 1/2V(ϕ), depending on conventions), and H_x^ sph = E^φ K_φ' - K_x (E^x)' + P_ϕϕ' , is the diffeomorphism constraint. The primes denote derivatives with respect to the radial coordinate x, which is unrelated to the coordinates of the Gowdy model. These constraints are first class and have Poisson brackets of hypersurface-deformation form, { H_x^ sph [N^x] , H_x^ sph[M^x] } = H_x^ sph [N^x M^x' - N^x' M^x] , { H^ sph [N] , H_x^ sph [M^x] } = - H^ sph [M^r N'] , { H^ sph [N] , H^ sph[M] } = H_x^ sph[ q^x x_ sph( N M' - N' M )] with the structure function q^x x_ sph = E^x/(E^φ)^2 equal to the inverse radial component of the space-time metric. The off-shell gauge transformations for lapse and shift δ_ϵ N = ϵ̇^0 + ϵ^r N' - N^r (ϵ^0)' , δ_ϵ N^r = ϵ̇^r + ϵ^r (N^r)' - N^r (ϵ^r)' + q^x x_ sph(ϵ^0 N' - N (ϵ^0)' ) together with the realization of covariance conditions for space-time, ∂ H^ sph/∂ K_x'|_O.S. = ∂ H^ sph/∂ K_x”|_O.S. = … = 0 and ∂({ q^x x_ sph , H^ sph[ϵ^0] })/∂ (ϵ^0)'|_O.S. = ∂({ q^x x_ sph , H^ sph[ϵ^0] })/∂ (ϵ^0)”|_O.S. = ⋯ = 0 , which have been derived in <cit.> and are clearly satisfied, ensures that the line element (<ref>) is invariant. Its coefficients then form a covariant metric tensor in the sense that its canonical gauge transformations reproduce space-time diffeomorphisms on-shell: δ_ϵ g_μν|_O.S. = ℒ_ξ g_μν . The gauge functions (ϵ^0,ϵ^r) on the left-hand side are related to the 2-component vector generator ξ^μ = (ξ^t,ξ^r) of the diffeomorphism on the right-hand side by ξ^μ = ϵ^0 n^μ + ϵ^x s^μ = ξ^t t^μ + ξ^x s^μ with ξ^t = ϵ^0/N , ξ^x = ϵ^x - ϵ^0/N N^x . 
In addition, the realization of the covariance conditions for matter <cit.>, ∂ H^ sph/∂ P_ϕ' = ∂ H^ sph/∂ P_ϕ” = ⋯ = 0 , ensures that the matter field transforms as a space-time scalar in the sense that its canonical gauge transformations reproduce space-time diffeomorphisms on-shell: δ_ϵϕ|_O.S. = ℒ_ξϕ . Finally, we note that the spherically symmetric system in the absence of a scalar potential permits the global symmetry generator G^ sph [α] = ∫ d x α P_ϕ , with constant α. The gravitational mass observable is ℳ^ sph = √(E^x)/2( 1 + K_φ^2 - ((E^x)'/2 E^φ)^2 - Λ/3 E^x ) , which is a Dirac observable in the vacuum limit, ϕ = P_ϕ = 0. We are now ready to identify the analog relationship between the Gowdy and the spherically symmetric models. By inspection, we find that relabeling the canonical pairs according to (𝒜 , ε) → (K_x , E^x) , (K, a) → (K_φ , E^φ) , (W, P_W) → (ϕ , P_ϕ) turns the Gowdy constraints (<ref>) and (<ref>) into H = - √(E^x)[ E^φ( K_φ^2/4 E^x + K_φK_x/E^φ) - 1/4 E^x((E^x)')^2/E^φ + (E^φ)' (E^x)'/(E^φ)^2 - (E^x)”/E^φ] + √(q^θθ)/4 E^x P_ϕ^2 + E^x √(q^θθ) (ϕ')^2 , and H_θ = E^φ K_φ' - K_x (E^x)' + P_ϕϕ' , respectively, and the Gowdy metric components (<ref>) become q_θθ = (E^φ)^2/E^x , q_x x = e^2 ϕ E^x , q_y y = e^- 2 ϕ E^x . Up to a few numerical factors, all the terms in the Gowdy constraint (<ref>) match those of the spherically symmetric constraint (<ref>) except for the first and last terms of the latter: The inverse triad 1/E^x and the scalar potential V do not appear in the former. In the general modified constraints of the spherically symmetric system <cit.> these two terms are just the classical limits of modification functions that are in principle allowed to be different from what the classical dynamics requires. (The scalar potential may always be set equal to zero in order to define a specific model, while the 1/E^x-term is a special case of the dilaton potential that would be a free function of E^x if the spherically symmetric model were generalized to 2-dimensional dilaton gravity.) We thus conclude that the modified Gowdy constraint is equivalent to the spherically symmetric one up to the choice of modification functions. In arriving at this conclusion, we have implicitly assumed that all of the conditions imposed in <cit.> to obtain the general constraints apply to the Gowdy system as well. We now show that this is indeed the case. The conditions for the modified theory considered in <cit.> are the following ones. 1) Anomaly-freedom, 2) covariance conditions, 3) existence of a conserved matter-current, and 4) existence of a vacuum mass observable. Anomaly-freedom of the Gowdy model takes exactly the same form as in spherical symmetry because the structure function of the former, (<ref>), is equivalent to that of the latter, (<ref>). The covariance conditions of the Gowdy system, (<ref>)–(<ref>), are also equivalent to the spherically symmetric ones, (<ref>) and (<ref>), upon using the analog identification (<ref>). Finally, the Gowdy symmetry generator (<ref>) is identical to the spherically symmetric one (<ref>) under the same identification, while the Dirac observables (<ref>) and (<ref>) are identical up to one term that in the modified theory is given by the classical limit of a modification function. Therefore, all the classes of general modified constraints obtained in <cit.> are also the results of applying these conditions to the Gowdy system, if we only invert the correspondence. 
In fact, there is one additional condition of the Gowdy system that the spherically symmetric one does not have: The discrete symmetry discussed in Section <ref>. In this sense the Gowdy system is more restricted than the spherically symmetric one. Therefore, we can simply take the final results of <cit.> and impose the discrete symmetry on them. § LINEAR COMBINATION OF THE CONSTRAINTS Before discussing general modifications, an interesting restricted case is given by linear combinations of the classical constraints with suitable phase-space dependent coefficients. By construction, this class of theories preserves the classical constraint surface but modifies gauge transformations and the dynamics, implying a non-classical emergent space-time metric if the covariance conditions are fulfilled. §.§ Anomaly-free linear combination We define a new candidate for the Hamiltonian constraint as H^(new) = B H^(old) + A H_θ with suitable phase-space functions A and B, using the original constraints H^(old) and H_θ of the classical theory and keeping the latter unchanged. We consider the phase-space dependence B= B(K , ε , W). (For more details about the individual steps, see <cit.>.) The Leibniz rule allows us to reduce the new bracket {H^( new)[ϵ_1],H^( new)[ϵ_2]} to Poisson brackets of the old constraints with the functions A and B. Using the derivative terms of the classical constraints, Poisson brackets relevant for the anomaly-freedom and covariance conditions can be expanded by finitely many terms with different orders of θ-derivatives of the gauge functions. For instance, we can write { B , H^(old) [ϵ̅^0] }|_O.S. = B^0 ϵ̅^0 + B^θ∂_θϵ̅^0 |_O.S. with B^θ = √(ε)ε'/a^2∂ B/∂ K . Anomaly freedom of the new constraints, using hypersurface-deformation brackets for the old constraints, then requires A = - B^θ = - √(ε)ε'/a^2∂ B/∂ K because any term in {H^( new)[ϵ_1],H^( new)[ϵ_2]} that is not proportional to the diffeomorphism constraint must cancel out. Similarly, we can write { A , H^(old) [ϵ̅^0] } = A^0 ϵ̅^0 + A^θ∂_θϵ̅^0 in which anomaly-freedom together with (<ref>) implies A^θ = - ε/a^2( K ∂ B/∂ K + (ε')^2/a^2∂^2 B/∂ K^2) . Using this new function, the bracket { A^θ , H^(old) [ϵ̅^0] } = Λ^0 ϵ̅^0 + Λ^θ∂_θϵ̅^0 requires Λ^θ = - ε^3/2ε'/a^4( ∂ B/∂ K + 3 K ∂^2 B/∂ K^2 + (ε')^2/a^2∂^3 B/∂ K^3) . The new structure function, q^θθ_ (new) = B^2 q^θθ + B A^θ , follows from collecting all terms in the Poisson bracket of two Hamiltonian constraints that can contribute to the diffeomorphism constraint. §.§ Covariant modified theory Using the new structure function as an inverse spatial metric, the covariance condition is given by 𝒞 ≡ Λ^θ - B^-1 B^θ A^θ|_O.S. = - ε^3/2ε'/a^4( ∂ B/∂ K + 3 K ∂^2 B/∂ K^2 + (ε')^2/a^2∂^3 B/∂ K^3) + ε^3/2ε'/a^4 B^-1∂ B/∂ K( K ∂ B/∂ K + (ε')^2/a^2∂^2 B/∂ K^2) = 0 . We separate this condition into derivative terms, 𝒞 = 𝒞_εε' + 𝒞_εεε (ε')^3 , which must vanish individually. The equation 𝒞_ε = 0 implies K ( ∂ B/∂ K)^2 + B (K ∂^2 B/∂ K^2 - ∂ B/∂ K) = 0 and is solved by B = c_1 √(c_2 ± K^2) . The equation 𝒞_εεε = 0 implies B ∂^3 B/∂ K^3 + 3 ∂ B/∂ K∂^2 B/∂ K^2 = 0 and is solved by B= c̃_1 √(c̃_2 ± K^2 + c̃_3 K) . In these solutions, c_i and c̃_i are free functions of W and ε. Their mutual consistency requires B_s (K , W , ε) = λ_0 √( 1 - s λ^2 K^2) , which then implies A_s = λ_0 √(ε)ε'/a^2s λ^2 K/√( 1 - s λ^2 K^2) such that q^θθ_(new) = λ_0^2 ( 1 + s λ^2/1- s λ^2 K^2(ε')^2/a^2) ε/a^2 . 
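The two ordinary differential equations in K and their common solution can be checked symbolically. The following is a minimal sketch, assuming Python with sympy; it verifies that B_s = λ_0 √(1 - s λ^2 K^2) satisfies both 𝒞_ε = 0 and 𝒞_εεε = 0, with s, λ_0 and λ treated as K-independent symbols, as required.

```python
import sympy as sp

K, lam0, lam, s = sp.symbols('K lambda_0 lambda s', real=True)

# Candidate solution of the two covariance conditions
B = lam0 * sp.sqrt(1 - s * lam**2 * K**2)

BK = sp.diff(B, K)
BKK = sp.diff(B, K, 2)
BKKK = sp.diff(B, K, 3)

# C_eps = 0:  K (dB/dK)^2 + B (K d^2B/dK^2 - dB/dK) = 0
cond1 = K * BK**2 + B * (K * BKK - BK)
# C_eps-eps-eps = 0:  B d^3B/dK^3 + 3 dB/dK d^2B/dK^2 = 0
cond2 = B * BKKK + 3 * BK * BKK

assert sp.simplify(cond1) == 0
assert sp.simplify(cond2) == 0
print("both covariance conditions are satisfied")
```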
There are two remaining free functions, λ_0 = λ_0 (W , ε) and λ = λ(W , ε), and we have separated the sign s=±1 from the original solution, (<ref>). Reality requires that 1 - s λ^2 K^2 ≥ 0, which may place an upper bound on K depending on s and λ. Finally, the discrete symmetry requires that both modification functions are even in W: λ_0 (W , ε) = λ_0 (-W , ε) and λ (W , ε) = λ (-W , ε). Since we now have complete solutions for A and B, we can derive the modified Hamiltonian constraint from (<ref>): H^ (new) = - λ_0 √(ε)√( 1 - s λ^2 K^2)[ a( K^2/4 ε - 1/4 εP_W^2/a^2 + 𝒜/a K ) - ε(W')^2/a - 1/4 ε(ε')^2/a + a' ε'/a^2 - ε”/a - ε'/a^2s λ^2 K/1 - s λ^2 K^2( a K' + P_WW' - 𝒜ε' ) ] . The case s=1, together with a reality condition for the constraint, implies a curvature bound K<1 / λ. The case s=-1, implies a possibility of signature change where q^xx_( new) changes sign. (The inverse spatial metric is then determined by the absolute value of (<ref>).) §.§.§ Canonical transformations For the case s = 1, a natural canonical transformation is K →sin (λ K)/λ , a→a/cos (λ K) W→W , P_W→ P_W - a/cos (λ K)∂/∂W(sin (λ K)/λ) ε→ε , 𝒜→𝒜 + a/cos (λ K)∂/∂ε(sin (λ K)/λ) under which the modified Hamiltonian constraint becomes H^ (c) = - λ_0 √(ε)[ a( 1/4 εsin^2 (λ K)/λ^2 - 1/4 εcos^2 (λ K) ( P_W/a - ∂lnλ/∂W K + tan (λ K)/λ∂lnλ/∂W)^2 + sin (2 λ K)/2 λ( 𝒜/a + ∂lnλ/∂ε K - tan (λ K)/λ∂lnλ/∂ε) ) - ε(W')^2/acos^2 (λ K) + a' ε'/a^2cos^2 (λ K) - λ^2 sin (2 λ K)/2 λ( P_W/a - ∂lnλ/∂W K ) W' ε'/a - ( cos^2 (λ K)/4 ε - λ^2 sin (2 λ K)/2 λ( 𝒜/a + ∂lnλ/∂ε K ) ) (ε')^2/a - ε”/acos^2 (λ K) ] . A second canonical transformation, K →λ/λ K , a→λ/λa W→W , P_W→ P_W + λ/λa∂lnλ/∂W K ε→ε , 𝒜→𝒜 - λ/λa∂lnλ/∂ε K with constant λ, renders the modified Hamiltonian constraint periodic in K: H^ (cc) = - λ_0λ/λ√(ε)[ a/4 εsin^2 (λ K)/λ^2 - a/4 εcos^2 (λ K) ( P_W/a + tan (λ K)/λ∂lnλ/∂W)^2 + sin (2 λ K)/2 λ( 𝒜 - atan (λ K)/λ∂lnλ/∂ε) - εcos^2 (λ K) (W')^2/a +( cos^2 (λ K)/λ∂λ/∂W - λ^2 sin (2 λ K)/2 λP_W/a) W' ε'/a - ( cos^2 (λ K)/4 ε(1 - 4 ε∂lnλ/∂ε) - λ^2 sin (2 λ K)/2 λ𝒜/a) (ε')^2/a + cos^2 (λ K) ( a' ε'/a^2 - ε”/a) ] . (The term (∂lnλ/∂ϵ)K in (<ref>) then disappears.) Unlike the phase-space coordinates in (<ref>), the holonomy-like coordinates of (<ref>) imply a finite constraint at the curvature bound, implying a dynamics that can cross such a hypersurface of maximum curvature. The holonomy-like object sin (λ̅ K) = sin(λ̅ (E^x K_x + E^y K_y) / √(E^x E^y)) in the periodic version of the constraint always requires a non-trivial dependence on the densitized triads, in contrast to what appears in holonomy-like terms of spherically symmetric models, or of a restricted Gowdy system in which E^x=E^y and K_x=K_y. In the full polarized Gowdy model, some densitized-triad dependence always remains even if the initial function λ, which may depend on ε as well as W̅, has been replaced by a constant λ̅ using a canonical transformation. The specific phase-space function in (<ref>) can be related to the (x,y)-contribution to the trace of the momentum tensor K_a^i canonically conjugate to E^a_i, given by K_a^iE^a_i/√(| E|)=e^a_iK_a^i. In general, however, the expression in emergent modified Gowdy models is not equal to the trace of extrinsic curvature in the resulting emergent space-time for two reasons. First, the phase-space expressions E^a_i and K_a^i have modified geometrical meanings compared with the classical densitized triad and extrinsic curvature of spatial slices because the geometry is determined by the emergent metric. 
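For the special case of a constant λ, the momentum shifts in the first canonical transformation above vanish, and its canonical nature reduces to a statement about the pair (K, a̅). The following minimal sketch (assuming Python with sympy, and restricted to constant λ so that the P_W̅ and 𝒜 shifts are not needed) checks that the elementary bracket {K, a̅} = 1 is preserved by K → sin(λ K)/λ, a̅ → a̅/cos(λ K).

```python
import sympy as sp

Knew, anew = sp.symbols('K_new abar_new', real=True)
lam = sp.symbols('lambda', positive=True)  # held constant in this check

# Old variables expressed through the new canonical pair (constant-lambda case)
K_old = sp.sin(lam * Knew) / lam
a_old = anew / sp.cos(lam * Knew)

# The elementary bracket {K, abar} = 1 is preserved
bracket = (sp.diff(K_old, Knew) * sp.diff(a_old, anew)
           - sp.diff(K_old, anew) * sp.diff(a_old, Knew))
assert sp.simplify(bracket - 1) == 0
print("{K, abar} = 1 is preserved by the holonomy-type transformation")
```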
Secondly, the momentum tensor K_a^i with components that appear in (<ref>) has been altered by several canonical transformations applied in our derivations. Relating the K-terms in (<ref>) or (<ref>) to traditional holonomy modifications in models of loop quantum gravity therefore requires some care. §.§.§ Interpretation as holonomy modifications Any appearance of triad components in holonomy-like terms in models of loop quantum gravity is usually motivated as a volume or area dependence of the coordinate length of a holonomy used to construct the Hamiltonian constraint. In particular, dynamical solutions lead to large symmetry orbits, such as all of space in homogeneous models of an expanding universe or spherical orbits in non-rotating black-hole models. As a consequence, extrinsic-curvature components, given by linear combinations of time derivatives of the metric or triad components, can be large even in classical regimes. Their appearance in holonomies is then in danger of violating the classical limit on large length scales. This problem can be solved in an ad-hoc manner by using a length parameter for holonomies that decreases with the size of increasing symmetry orbits, such that holonomy modifications are negligible even when some extrinsic-curvature or connection components become large. Heuristically, such a dependence can be motivated by lattice refinement <cit.>, relating the holonomy length to a lattice structure in space that is being subdivided as the symmetry orbit expands, maintaining sufficiently short geometrical lengths of its edges. Comparing with this motivation, the specific form of holonomy-like terms of the form (<ref>) found here, required for covariance, is crucially different: The coefficient functions of K_x and K_y can both be expressed in terms of E^x/E^y=√(q_yy/q_xx)=e^-2W̅, which describes the geometrical anisotropy in the x-y plane but is independent of its area √(q_xxq_yy)=ε. Analyzing the general form of potential physical implications of this difference requires us to perform a detailed analysis of canonical transformations used here to arrive at the expression (<ref>). In this context, it is useful to consider possible forms and interpretations of holonomy modifications for models of loop quantum gravity in the strictly isotropic context <cit.>, in which spatial homogeneity eliminates the non-trivial covariance conditions. (See also <cit.> for a related discussion in spherical symmetry.) Extrinsic curvature (or a connection with its associated holonomies) reduced to isotropy has a single independent component, k, canonically conjugate to the independent densitized-triad component p. (We assume p>0, fixing the orientation of space.) Classically, using the scale factor a, we have k∝ȧ and p∝ a^2. Holonomies for U(1), or suitable components of holonomies for SU(2), are then of the form exp(iℓ k) with the coordinate length ℓ of a spatial curve, derived from the general Pexp(∫ A_a^iσ_i dx^a) for an isotropic A_a^i∝δ_a^i, with generators σ_i of the gauge group. The geometrical length of this curve in an expanding universe increases like ℓ a and may therefore reach macroscopic values after a suitable amount of time. Similarly, k∝ȧ=aH with the Hubble parameter H is an approximately linear function of a in a universe dominated by dark energy or during inflation. The exponent ℓ k is then large in a macroscopic universe, such that modifications would be noticeable on low curvature scales and contradict cosmological observations. 
This problem can be solved by using a coordinate length or holonomy parameter ℓ∝ a^-1∝ p^-1/2, such that the geometrical length is constant in an expanding universe. The relevant phase-space function exp(iℓ̅ k/√(p)), with a constant ℓ̅, then depends on extrinsic-curvature and densitized-triad components. It is easier to quantize this expression if one first applies a canonical transformation that turns k/√(p) into a basic canonical variable. Classically, this ratio is proportional to the Hubble parameter H, and the map from (k,p) to H can be completed to a canonical transformation by using the volume V∝ a^3 of some region in space, whose precise form does not matter thanks to homogeneity and isotropy. It is then possible to quantize exp(iℓ̅H) to a simple translation operator in V. Different versions of holonomy modifications are obtained by introducing periodic functions depending on different variables, such as k or H. The classical contribution to the isotropic Hamiltonian constraint can be written as √(p)k^2=p^3/2(k/√(p))^2∝ VH^2. Holonomy modifications may then be introduced for k or H (or any function of the form p^qk with some exponent q), leading to dynamically inequivalent modifications of the form H_1=√(p)sin^2(ℓ̅k)/ℓ̅^2 and H_2=Vsin^2(ℓ̅H)/ℓ̅^2, respectively. The latter can be transformed back to k-variables, implying a term proportional to p^3/2sin^2(ℓ̅k/√(p))/ℓ̅^2 in which the decreasing length scale ℓ=ℓ̅/√(p) appears. Independently of canonical transformations, the different types of holonomy modifications can also be identified by analyzing equations of motion for small ℓ̅. From H_1, we obtain ṗ∝√(p)k or k∝ṗ/√(p)∝ȧ, while H_2 implies V̇∝ VH or H∝V̇/V∝ȧ/a. Therefore, we do not have to know which canonical transformations may have been applied in order to determine how a given classical or modified constraint implies small or large values of holonomy modifications in classical regimes. In isotropic models, the appearance of a scale-dependent holonomy length can be seen in two alternative ways: A dependence on the scale factor may directly appear in periodic functions, as in sin(ℓ̅k/√(p)), or it may be implied by equations of motion that tell us whether an expression such as H in exp(iℓ̅H) equals the classical basic phase-space variable k in the limit of small ℓ̅, or a different function such as H in which the potential growth of k as a function of a in some dynamical solutions is reduced. More generally, the different status of a modification with scale-factor dependent ℓ can be seen in coefficients of the Hamiltonian constraint. In isotropic models, a holonomy modification can be implemented by directly replacing the classical k in the Hamiltonian constraint with ℓ^-1sin(ℓ k). For constant ℓ=ℓ̅, the k-independent coefficient of this term retains its classical dependence on p. If ℓ depends on p, or if H is used instead of k, the p-dependence of the coefficient is modified along with the k-dependence. From the point of view of canonical structures, there is no difference between (k,p) and (H,V) if the relationship between k or H and classical extrinsic curvature is ignored (or unknown if one considers a generic modified theory). A modification of the form sin(ℓ k_1)/ℓ=sin(ℓ̅k_2)/ℓ=ℓ̅/ℓsin(ℓ̅k_2)/ℓ̅ with triad-dependent ℓ/ℓ̅, such that the map from k_1 to k_2 is part of a canonical transformation with ℓ k_1=ℓ̅k_2, can therefore be interpreted in two different ways, depending on whether k_1 or k_2 is closely related to classically reduced extrinsic curvature. 
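The relation between the two versions can be made explicit in a short symbolic sketch (assuming Python with sympy, with all proportionality constants in V ∝ p^{3/2} and H = k/√p dropped; an illustration rather than part of the original discussion): H_2 expressed in (k,p)-variables carries the decreasing holonomy length ℓ̄/√p, while H_1 and H_2 share the classical limit √p k² as ℓ̄ → 0.

```python
import sympy as sp

k, ellbar = sp.symbols('k ellbar', real=True)
p = sp.symbols('p', positive=True)

# Two inequivalent holonomy modifications of sqrt(p) k^2
H1 = sp.sqrt(p) * sp.sin(ellbar * k)**2 / ellbar**2

V = p**sp.Rational(3, 2)        # V ~ a^3, proportionality constant dropped
Hub = k / sp.sqrt(p)            # Hubble-type variable, H ~ adot/a
H2 = V * sp.sin(ellbar * Hub)**2 / ellbar**2

# H2 in (k, p) variables carries the decreasing holonomy length ellbar/sqrt(p)
assert sp.simplify(H2 - p**sp.Rational(3, 2)
                   * sp.sin(ellbar * k / sp.sqrt(p))**2 / ellbar**2) == 0

# Both versions share the classical limit sqrt(p) k^2 as ellbar -> 0
assert sp.simplify(sp.limit(H1, ellbar, 0) - sp.sqrt(p) * k**2) == 0
assert sp.simplify(sp.limit(H2, ellbar, 0) - sp.sqrt(p) * k**2) == 0
print("classical limits agree; the ellbar-dependence differs")
```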
If k_1 is extrinsic curvature, we have a triad-dependent holonomy length ℓ, and the small-k_1 limit reproduces the classical dependence of the coefficients because ℓ^-1sin(ℓ k_1)= k_1(1+O(ℓ^2 k_1^2)). If k_2 is extrinsic curvature, we have constant holonomy length ℓ̅, and the classical triad-dependent coefficients of k_2 are modified because sin(ℓ̅k_2)/ℓ=ℓ̅/ℓ k_2(1+O(ℓ̅^2k_2^2))=k_1(1+O(ℓ̅^2k_2^2)) . Instead of reducing the growing holonomy length in an expanding universe, the model is made compatible with the classical limit, producing the same k_1 to leading order, by modifying the triad-dependent coefficients of k_2-holonomy terms in the Hamiltonian constraint by factors of ℓ̅/ℓ. However, this classical limit, assuming small ℓ̅k_2, is in general only formal because it may not be guaranteed that this product is indeed small in expected classical regimes, such as a large isotropic universe. The limit is suitable as a classical one if k_2=H, but not if k_2=k. In isotropic models, the H-variable is therefore preferred. Therefore, k_1 rather than k_2 can be identified with extrinsic curvature in the classical limit, necessitating the application of a non-constant holonomy function λ. Whether a canonical variable behaves like k or like H (or possibly a different version) follows from equations of motion generated by the modified Hamiltonian constraint. The possibility of applying canonical transformations in isotropic models is comparable to some of the steps in our derivation of covariant modifications of polarized Gowdy models. We have constant holonomy parameters λ̅ or ℓ̅ in one form, and triad-dependent functions λ or ℓ in another one. In each case, both versions, if they are related by (<ref>) or the canonical transformation that includes the mapping between H and k, are dynamically equivalent. But the two versions are not equivalent if the holonomy function, constant or non-constant, multiplies the same phase-space function without applying a canonical transformation. Since the general form of a modified theory is defined only by its Hamiltonian constraint and does not contain an independent specification of what canonical transformations may have been used compared with the standard classical phase space, we should consider equations of motion in order to determine whether holonomy modifications depending on K in a polarized Gowdy model can include an area-dependent holonomy length. For non-constant λ/λ̅, the ε-dependence of the coefficients in (<ref>) differs from the classical one in the limit of λ̅→0. As in (<ref>), these terms signal deviations of K from the original component of extrinsic curvature: The equation of motion for ε (which is canonically conjugate to A) is given by ε̇={ε,H^(cc)}=-λ_0λ̅/λ√(ϵ) sin(2λ̅K)/2λ̅ (1+λ̅^2(ε')^2/a̅^2) and implies K∼-λ/λ̅λ_0ε̇/√(ε) for small λ̅ if we assume λ=λ̅h(ε) with a λ̅-independent holonomy function h(ε). (In some classes of modified theories, h may also depend on the anisotropy parameter W̅.) The formal classical limit requires small λ̅K, but a specific regime in which classical behavior is expected may well imply large λ̅K if the area ε of symmetry orbits is large. If one chooses a λ that decreases with ε sufficiently quickly, (<ref>) implies that the corresponding K of the modified theory increases less strongly than in a model with constant λ=λ̅. 
It is easier to study the classical limit if the inverse of the canonical transformation (<ref>) is applied, such that all holonomy terms now depend on λ K with an explicit decreasing coefficient λ as a function of ε. As shown in the transition from a Hamiltonian constraint of the form (<ref>) to an expression (<ref>), ε-dependent modifications λ/λ̅ of coefficients in the Hamiltonian constraint are then replaced with holonomy-like terms with an ε-dependent function λ. In isotropic models, these two versions, given by triad-dependent coefficients and triad-dependent holonomy length, respectively, are equivalent. In polarized Gowdy models, in which modifications are strongly restricted by covariance conditions, only the first viewpoint is available if a strict definition of holonomy modifications as periodic functions is used: Only (<ref>), in which the coefficient functions are modified in their ε-dependence, is periodic in K, while (<ref>), in which the function λ(ε) appears in some of the holonomy terms, also contains non-periodic contributions linear in K such as K ∂lnλ/∂ε. Effects of an ε-dependent holonomy length can therefore be inferred only indirectly when equations of motion are used, turning it into an on-shell property. Assigning an ε-dependent holonomy length directly to off-shell properties of the Hamiltonian constraint, as in isotropic models, is not possible unless one weakens the strict periodicity condition on holonomy modifications. Another difference between isotropic and polarized Gowdy models appears in the specific form (<ref>) interpreted in terms of components K_x and K_y of the momentum that appear in the phase-space function K. The only triad dependence allowed in this combination refers to anisotropy in the (x,y)-plane rather than its area. This specific dependence, just as properties of how an ε-dependent λ may appear in holonomy modifications, is implied by general covariance. The anisotropy dependence of holonomy modifications is therefore unavoidable, and unlike λ(ε) it cannot be moved to coefficient functions. Moreover, holonomy modifications can only be implemented for the specific combination of K_x and K_y given by (<ref>), but not separately for the two components K_x and K_y because all modified constraints allowed by covariance depend polynomially on the second phase-space variable, P_W̅, that together with K represents K_x and K_y after our first canonical transformation. Therefore, unlike in spherically symmetric models, the Hamiltonian constraint is not built out of basic holonomy operators that depend only on momentum components canonically conjugate to the densitized triad. There is always a necessary triad dependence given by the specific form of K that may appear in periodic terms as a linear combination of K_x and K_y with triad-dependent coefficients, derived from the covariance conditions. In loop quantum gravity, curvature (or connection) components and the triad are instead separated into basic holonomy and flux operators, which were used as building blocks of the first proposed operators for the Hamiltonian constraint <cit.>. More recent versions <cit.> use triad-dependent shift vectors in order to construct detailed properties of hypersurface deformations from operators, which is somewhat reminiscent of but conceptually unrelated to the triad dependence of holonomy-type expressions found here. 
§ GENERAL MODIFIED THEORY Linear combinations of the classical constraints with phase-space dependent coefficients have revealed interesting properties of possible modifications of polarized Gowdy models. More generally, one may expect that individual terms in the Hamiltonian constraint can receive independent modifications. We now analyze this possibility within a setting of effective field theory in which we expand a generic Hamiltonian constraint in derivatives up to second order. The resulting expressions then determine gravitational theories of polarized Gowdy models compatible with the symmetry of general covariance, taking into account the possibility that the space-time metric is not fundamental but rather emergent. New modifications are then possible even at the classical order of derivatives. §.§ Constraint ansatz and the emergent space-time metric We consider modifications to the Gowdy system with phase-space variables (W̅ , P_W), (K , a), and (𝒜 , ε). If we modify the Hamiltonian constraint, then the constraint brackets (<ref>)–(<ref>) determine the inhomogeneous component of the spatial metric via q̃_θθ = 1 / q̃^θθ, while the homogeneous components of the metric cannot be obtained in this way because they do not appear in the structure functions. The emergent space-time line element is then given by d s^2 = - N^2 d t^2 + q̃_θθ ( dθ + N^θ d t )^2 + α_ε (ε) e^2 f_W (ε, W) d x^2 + α_ε (ε) e^- 2 f_W (ε, W) d y^2 , with q̃_θθ to be determined by anomaly-freedom of the hypersurface deformation brackets, while we have partially chosen the form of the homogeneous components q̃_x x and q̃_yy based on their classical forms. We will discuss the free functions α_ε and f_W in due course. (If the structure function is negative in some regions, the inhomogeneous metric component is determined by the inverse of its absolute value, while -N^2 dt^2 is replaced by -σ N^2 dt^2 where σ is the sign of the structure function relative to the classical function, making the 4-dimensional line element Euclidean in regions where σ=-1. For more details, see <cit.>.) We consider the following ansatz for the Hamiltonian constraint: H̃ = a_0 + e_WW (W')^2 + e_aa (a')^2 + e_εε (ε')^2 + e_WaW' a' + e_WεW' ε' + e_aεa' ε' + p_aε K' ε' + r_aaa' K' + e_2 aa” + p_2 a K” + e_2 εε” where a_0, e_i j, e_2 i, p_i j, and p_2 i are all functions of the phase-space variables, but not of their derivatives. For the sake of tractability, we have omitted some terms that would be possible at second order in derivatives, such as terms containing W̅”. Spatial derivatives of 𝒜 and P_W̅ have been omitted in anticipation of covariance condition (<ref>), to be discussed shortly, which requires the constraint to be independent of them. Because of the discrete symmetry H̃ (W,P_W) = H̃ (-W,-P_W), we see that all the functions obey this even symmetry, except for e_Wε which should be odd. Starting from this constraint ansatz we will obtain the conditions for it to satisfy the hypersurface-deformation brackets, (<ref>)–(<ref>), with a possibly modified structure function, q̃^θθ. We will then apply the covariance conditions in order to make sure that the new structure function can play the role of an inverse metric component in space-time. §.§.§ Canonical transformations I In order to obtain distinct classes of Hamiltonian constraints for possible modified theories, it is crucial that we factor out canonical transformations that preserve the diffeomorphism constraint. 
If we do not take this extra care we risk obtaining equivalent versions of the same theory that differ only in a choice of the phase-space coordinates. Two constraints differing only by a canonical transformation will look different and even the space-time metric will do so too kinematically, but they in fact describe the same physical system. We will therefore consider the following set of canonical transformations that preserves the diffeomorphism constraint: W̅ = f_c^W̅ (ε , W̃̅̃) , P_W̅ = P̃_W̅( ∂ f_c^W̅/∂W̃̅̃)^-1 - ã̅̃∂ f_c^K/∂W̃̅̃( ∂ f_c^K/∂K̃)^-1 , K = f_c^K (ε , W̃̅̃, K̃) , E^φ = Ẽ^φ( ∂ f_c^K/∂K̃)^-1 , A = ∂ (α_c^2 ε)/∂εà + Ẽ^φ∂ f_c^K/∂ε( ∂ f_c^K/∂K̃)^-1 + P̃_W̅∂ f_c^W̅/∂ε( ∂ f_c^W̅/∂W̃̅̃)^-1 , ε̃ = α_c^2 (ε) ε , where the new phase-space variables are written with a tilde. A transformation with f_c^K=K̃, f_c^W̅ (ε,W̃̅̃), and α_c (ε) can always be used to transform the homogeneous components of the metric in (<ref>) from potentially modified expressions to their classical ones q̃_xx=ε e^2W̅ and q̃_yy=ε e^-2W̅. If we fix the classical form for these components, the residual canonical transformations are given by W̅ = W̃̅̃ , P_W̅ = P̃_W̅ - ã̅̃∂ f_c^K/∂W̃̅̃( ∂ f_c^K/∂K̃)^-1 , K = f_c^K (ε , W̃̅̃, K̃) , E^φ = Ẽ^φ( ∂ f_c^K/∂K̃)^-1 , A = à + Ẽ^φ∂ f_c^K/∂ε( ∂ f_c^K/∂K̃)^-1 , ε̃ = ε where the new phase-space variables are again written with a tilde. Under the previous canonical transformation, the emergent space-time metric simplifies to d s^2 = - N^2 d t^2 + q̃_θθ ( dθ + N^θ d t )^2 + ε e^2 W d x^2 + ε e^- 2 W d y^2 , while the constraint ansatz (<ref>) does not acquire any new derivative terms. §.§.§ Anomaly-freedom and covariance conditions Starting with a Hamiltonian constraint of the form (<ref>) we impose anomaly-freedom by requiring that, together with the unmodified diffeomorphism constraint, it reproduces the hypersurface-deformation brackets (<ref>)-(<ref>) up to a potentially modified structure function: { H_θ [N^θ] , H_θ [M^θ] } = - H_θ [ M^θ (N^θ)' - N^θ (M^θ)'] , {H̃ [N] , H_θ [M^θ] } = - H̃[M^θ N'] , {H̃ [N] , H̃ [M] } = - H_θ[ q̃^θθ( M N' - N M' )] . Doing so restricts the functions in (<ref>) by a set of partial differential equations for the modification functions. The same procedure reveals the dependence of the structure function q̃^θθ on the phase-space variables. Furthermore, the modified brackets (<ref>)-(<ref>) imply gauge transformation of the shift vector (<ref>) according to δ_ϵ N^θ = ϵ̇^θ + ϵ^θ (N^θ)' - N^θ (ϵ^θ)' + q̃^θθ(ϵ^0 N' - N (ϵ^0)' ) , which now involves the modified structure function as it should if it were to play the role of a component of the emergent space-time metric. Once the structure function has been obtained from the imposition of anomaly-freedom, we have the full expression of a candidate space-time metric as in (<ref>). Because the homogeneous metric components retain their classical forms, the covariance conditions (<ref>) remain unchanged. In the new phase-space variables they are given by ∂H̃/∂𝒜' = ∂H̃/∂𝒜” = ∂H̃/∂ P_W̅' = ∂H̃/∂ P_W̅” = 0 . The inhomogeneous component (<ref>) turns into highly non-trivial conditions, ∂({q̃^θθ , H̃[ϵ^0] })/∂ (ϵ^0)'|_O.S. = ∂({q̃^θθ , H̃[ϵ^0] })/∂ (ϵ^0)”|_O.S. = … = 0 . In addition to general covariance, we will require that the modified system retains the types of conserved quantities of the classical theory. We therefore impose the preservation of the symmetry generator G[ω] in (<ref>), such that it commutes with the modified constraint, {G[ω] , H̃[N]} = 0 . 
We will also demand that for P_W̅,W̅→ 0 a gravitational Dirac observable exists with (<ref>) as its classical limit. §.§.§ Canonical transformations II and additional guiding conditions The imposition of anomaly-freedom, covariance, and the gravitational symmetries all restrict the generic Hamiltonian constraint (<ref>) by providing a large set of partial differential equations. These equations can be considerably simplified by a choice of phase-space coordinates, fixing the residual canonical transformations (<ref>). So far, all of the conditions we impose in the new variables are identical to those we chose when coupling scalar matter in spherical symmetry <cit.>. While these conditions are quite restrictive, they still do not allow a complete exact solution of the partial differential equations for modification functions. We therefore refer to additional conditions, most of which have also been used in the scalar case. Classical W̅ limit. One such condition applied in <cit.> was the compatibility of the constraint with a limit in which the corresponding class of modifications contains models with the classical equations of motion for the scalar matter, corresponding to the Klein-Gordon equation on a curved emergent space-time. We can apply the same condition to the Gowdy model, thanks to its correspondence with the spherically symmetric system, by imposing compatibility of the constraint with a limit in which the class of modifications contains models with the classical equations of motion for W̅ on an emergent background q̃_θθ. Classical constraint surface as a limit. Another additional condition considered in <cit.> was the compatibility of the constraint with a non-trivial limit in which the constraint surface took its classical form. This is the case if there is a limit in which the constraint contains the modifications from linear combinations of the classical constraints, as derived in the previous section. Classes of constraints. With these conditions, we are in the position to obtain explicit expressions for the modified Hamiltonian constraint. The conditions of anomaly-freedom, covariance, implementation of symmetries, and the factoring out of canonical transformations imply a set of differential equations that can be solved exactly if the additional conditions just described are considered. However, if we implement the essential conditions we are left with some ambiguities. If one is not interested in the classical W̅ limit, several of these ambiguities are removed, but one modification function remains unresolved. The vanishing of this function then leads to the compatibility of the classical constraint surface as a limit, thus describing our first class of constraints. On the other hand, using a non-trivial choice for this modification function leads us to another class of constraints that is no longer compatible with the classical constraint surface as a limit, nor with the classical W̅ limit. Lacking a positive characterization of these models, we simply call this set of modified theories the class of the second kind. Finally, a third class of constraints can be obtained by imposing compatibility with the classical W̅ limit. These are precisely the three classes of constraints obtained in <cit.> for the spherically symmetric system coupled to scalar matter. We can then simply import the results and reinterpret the Hamiltonian constraint by its correspondence with the Gowdy system. 
The modified theory allows for modification functions that can be redefined and adapted to the Gowdy system. Discrete symmetry. The Gowdy system has one further symmetry that is not obvious in the spherically symmetric system coupled to scalar matter. This is the discrete symmetry P_W̅→ - P_W̅ , W̅→ - W̅. We will implement this symmetry in the classes of constraints imported from <cit.> and, therefore, obtain slightly simpler expressions. §.§ Emergent modified gravity as a basis for quantization Our generic Hamiltonian constraint contains only up to second-order spatial derivatives and uses the classical phase space, which implies that the equations of motion do not have higher time derivatives. Theories of this form, even though they are modified compared with general relativity, may therefore be considered classical gravitational systems that can be used as a basis for canonical quantization. Several additional conditions are then useful for different procedures of finding suitable constraint operators. §.§.§ Partial Abelianization Gravitational theories with a description as space-time geometry require constraints that generate hypersurface deformations. The presence of structure functions is then a well-known obstacle toward quantization because an operator-valued structure function implies severe ordering problems in commutators of the constraints. This problem can be simplified if the original constraints can be replaced by linear combinations that replace the structure function by a constant, and perhaps setting it equal to zero in a partial Abelianization. In spherical symmetry, such a procedure has been proposed in <cit.>, and then generalized in <cit.> by making it fully local. For a systematic derivation of partial Abelianizations we make use of the procedure described in Section <ref>, introducing a new phase-space function as a linear combination of the (now already modified) Hamiltonian constraint and the classical diffeomorphism constraint that replaces the Hamiltonian constraint of hypersurface-deformation brackets. The new constraints therefore have brackets that differ from the classical gravitational ones, and their gauge transformations do not correspond to hypersurface deformations. However, they have the same constraint surface as the original system, which can therefore be turned into a quantum description by this procedure. We will make use of definition (<ref>) for the constraint function H̃^ (A) = B H̃ + A H_θ . Poisson brackets of B and A with the old Hamiltonian constraint are given by (<ref>) and (<ref>), and the latter is related to the former by A = - B^θ(B) as in (<ref>). The only difference with the procedure in Section <ref> is that we are not seeking a new covariant modified theory, but rather a partial Abelianization of the brackets of H^ (A) and H_θ. Therefore, we impose the condition that the new structure function (<ref>) vanishes: q̃^θθ_ (A) = B^2 q̃^θθ + B A^θ = 0 . We will apply this condition to the three classes of constraints derived below. §.§.§ Point holonomies In <cit.>, it was possible to include point holonomies of the scalar field ϕ, given by periodic modification functions depending on this variable. Like partial Abelianizations, this property may be useful for quantizations because some of the basic fields can be represented by bounded operators, akin to a loop quantization. Invoking the correspondence between spherical symmetry and the polarized Gowdy model, this result can be translated to point holonomies of W̅ in the latter case. 
Recall that the relation between this variable and the original phase-space degrees of freedom is given by W̅ = ln√(E^y / E^x). This function depends on densitized-triad components rather than their momenta, classically related to extrinsic curvature or a connection, and the dependence is logarithmic. Given the logarithmic dependence on triad components instead of linear combinations of extrinsic curvature, periodic modification functions of W̅, or polymerizations of this variable, are rather different from what is usually assumed in models of loop quantum gravity, even compared with a polymerization of K in (<ref>) which already showed several deviating features. But while a polymerization of W̅ may not be directly motivated by traditional loop quantum gravity, we include this possibility here for completeness of the correspondence with spherical symmetry. New canonical quantizations could still be constructed in this way by exploiting the boundedness of operators quantizing a periodic function of W. § CLASSES OF CONSTRAINTS As derived in detail in <cit.>, we consider different classes of constraints and the modified structure functions they imply, depending on which conditions are chosen in order to make the consistency equations explicitly solvable. §.§ Constraints compatible with the classical constraint surface Modified constraints that are compatible with the classical constraint surface in a suitable limit are direct generalizations of the models constructed in Section <ref> from linear combinations of the classical constraints. §.§.§ General constraint As always, the expression for Hamiltonian constraints compatible with certain symmetry conditions may depend on modification functions that distinguish different cases of consistent constraints, but also on free functions that represent the freedom to apply canonical transformations. Here, we fix the latter choice by working with partially periodic modification functions in the phase-space variable K. In this class, the general expression of the Hamiltonian constraint is given by H̃ = - λ̅/λλ_0 cos^2 (ν̅W̅) √(ε)[ a̅( - λ^2/λ̅^2Λ_0 + ( α_2/4 ε c_f + 1/2∂ c_f/∂ε) sin^2 (λ̅ K)/λ̅^̅2̅ + ( 𝒜/a̅ - P_W̅/a̅tan (ν̅W̅)/ν̅∂lnν/∂ε - tan (λ̅ K)/λ̅∂lnλ/∂ε) c_f sin (2λ̅ K)/2λ̅ - ( P_W̅/a̅cos (ν̅W̅) + tan (λ̅ K)/λ̅(ν̅/ν c_h3 + ∂lnλ/∂W̅) )^2 α_3/4 εν^2/ν̅^2 c_f cos^2 (λ̅ K) ) + (ε')^2/a̅( λ̅^2 𝒜/a̅sin (2 λ̅ K)/2 λ̅ + cos^2 (λ̅ K) ( ∂lnλ/∂ε - α_2/4 ε - sin (ν̅W̅)/ν̅∂lnν/∂ε( ν̅/ν c_h3 + ∂lnλ/∂W̅ + sin (ν̅W̅)/ν̅∂lnν/∂εε/α_3) ) ) + ( ε' a̅'/a̅^2 - ε”/a̅) cos^2 (λ̅ K) + cos^2 (λ̅ K) ( - 1/a̅((sin (ν̅W̅)/ν̅)')^2 ε/α_3 + ε'/a̅(sin (ν̅W̅)/ν̅)' ( 2 ε/α_3sin (ν̅W̅)/ν̅∂lnν/∂ε + ν̅/ν c_h3 + ∂lnλ/∂W̅ - P_W̅/a̅cos (ν̅W̅)λ̅^2 tan (λ̅ K)/λ̅)) ] with the structure function q̃^θθ = ( c_f + (λ̅ε'/a̅)^2 ) cos^2 (λ̅ K) λ̅^2/λ^2λ_0^2 cos^4 (ν̅W̅) ε/a̅^2 . All the non-classical parameters are undetermined functions of ε only, except for λ̅ and ν̅, which are constants, and λ_0 and λ, which can depend on both ε and W̅. (This is the only class of constraints that allows λ to depend on W̅.) The constraint (<ref>) and its structure function (<ref>) are symmetric under the discrete transformation W̅→ - W̅, P_W̅→ - P_W̅ only if the λ_0 and λ dependence on W̅ is restricted by the discrete symmetry (<ref>) to be of the form λ_0 (ε , W̅)=λ_0 (ε , -W̅), and only if c_h3 = 0 because it is independent of W̅. 
(Alternatively, the discrete transformation could be redefined as W̅→ - W̅, P_W̅→ - P_W̅, and c_h3→ - c_h3, in which case the constraint and structure function are symmetric even for non-zero c_h3.) The classical limit can be taken in different ways, the simplest one given by λ→λ̅ and ν→ν̅ followed by λ_0 , c_f , α_2,α_3 → 1, λ̅ , ν̅→ 0 and Λ_0 →Λ. The inhomogeneous-field observable in this class is given by G [ω] = ∫ dθ ων/ν̅( P_W̅/cos(ν̅W̅) + a̅tan (λ̅ K)/λ̅∂lnλ/∂W̅) , where ω is a constant. The associated conserved current J^μ has the components J^t = ν/ν̅( P_W̅/cos(ν̅W̅) + a̅tan (λ̅ K_φ)/λ̅∂lnλ/∂W̅) , J^θ = - ν/ν̅λ̅/λλ_0 √(ε)cos^2 (ν̅W̅) cos^2 (λ̅ K) ( - 2/a̅(sin (ν̅W̅)/ν̅)' ε/α_3 + ε'/a̅( 2 ε/α_3sin (ν̅W̅)/ν̅∂lnν/∂ε + ν̅/ν c_h3 + ∂lnλ/∂W̅ - P_W̅/a̅cos (ν̅W̅)λ̅^2 tan (λ̅ K)/λ̅)) . When P_W̅,W̅→0, the homogeneous mass observable associated to (<ref>) is given by ℳ = d_0 + d_2/8(exp∫ dε (α_2/2 ε - ∂lnλ^2/∂ε)) ( c_f sin^2(λ̅ K)/λ̅^2 - cos^2 (λ̅ K) (ε'/a̅)^2 ) + d_2/4∫ dε (λ^2/λ̅^2Λ_0 exp∫ dε (α_2/2 ε - ∂lnλ^2/∂ε)) , where d_0 and d_2 are constants with classical limits given by d_0→0 and d_2 → 1. §.§.§ Partial Abelianization Following Section <ref> and using the constraint (<ref>) with the structure function (<ref>) we obtain q̃^ (A) = Q_0 + Q_ε (ε')^2 = 0 for the Abelianization condition (<ref>), where Q_0 and Q_ε are functions of B, as it appears in H̃^ (A) = B H̃ + A H_θ, and of the phase-space variables, but not of their derivatives. Therefore, these two coefficients must vanish independently. The condition Q_0=0 implies the equation B - sin(2 λ̅ K)/2 λ̅∂ B/∂ K = 0 , with the general solution B = B_0 tan (λ̅ K)/λ̅ for an undetermined function B_0 (ε , W̅). The condition Q_ε = 0 implies the equation 2 λ̅^2 B + λ̅sin (2λ̅ K) ∂ B/∂ K - 2 cos^2 (λ̅ K) ∂^2 B/∂ K^2 = 0 . By direct substitution we find that (<ref>) solves this equation too. The Abelianized constraint is then given by H̃^ (A)/B_0 = - tan (λ̅ K)/λ̅λ̅/λλ_0 cos^2 (ν̅W̅) √(ε)[ a̅( - λ^2/λ̅^2Λ_0 + ( α_2/4 ε c_f + 1/2∂ c_f/∂ε) sin^2 (λ̅ K)/λ̅^̅2̅ + ( 𝒜/a̅ - P_W̅/a̅tan (ν̅W̅)/ν̅∂lnν/∂ε - tan (λ̅ K)/λ̅∂lnλ/∂ε) c_f sin (2λ̅ K)/2λ̅ - ( P_W̅/a̅cos (ν̅W̅) + tan (λ̅ K)/λ̅(ν̅/ν c_h3 + ∂lnλ/∂W̅) )^2 α_3/4 εν^2/ν̅^2 c_f cos^2 (λ̅ K) ) + (ε')^2/a̅( λ̅^2 𝒜/a̅sin (2 λ̅ K)/2 λ̅ + cos^2 (λ̅ K) ( ∂lnλ/∂ε - α_2/4 ε - sin (ν̅W̅)/ν̅∂lnν/∂ε( ν̅/ν c_h3 + ∂lnλ/∂W̅ + sin (ν̅W̅)/ν̅∂lnν/∂εε/α_3) ) ) + ( ε' a̅'/a̅^2 - ε”/a̅) cos^2 (λ̅ K) + cos^2 (λ̅ K) ( - 1/a̅((sin (ν̅W̅)/ν̅)')^2 ε/α_3 + ε'/a̅(sin (ν̅W̅)/ν̅)' ( 2 ε/α_3sin (ν̅W̅)/ν̅∂lnν/∂ε + ν̅/ν c_h3 + ∂lnλ/∂W̅ - P_W̅/a̅cos (ν̅W̅)λ̅^2 tan (λ̅ K)/λ̅)) ] - λ̅/λλ_0 cos^2 (ν̅W̅) √(ε)ε' ( a̅ K' + P_W̅W̅' - Aε' ) . The first line in (<ref>) has a kinematical divergence at K = π / (2 λ̅) due to the overall tangent factor. This divergence can be removed if α_2/4 ε c_f + 1/2∂ c_f/∂ε - λ^2 Λ_0 = 0 is satisfied because the relevant terms then combine to produce a cos^2-factor and hence cancel the divergence of the tangent. If we interpret this condition as an equation for c_f, we must restrict λ to be a function of ε only. However, the solution to this equation is not compatible with the classical limit c_f → 1. (For instance, for Λ_0=0 we have c_f∝ε^-α_2/2.) Therefore, we have to weaken the condition by neglecting the first term, and hence leave it as a divergent term of the Abelian constraint. The resulting equation for the partial resolution of the divergence, 1/2∂ c_f/∂ε - λ^2 Λ_0 = 0 , can now be directly integrated, yielding the modification function c_f = 2 ∫λ^2 Λ_0 dε . 
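Independently of how c_f is specialized in what follows, the B-factor underlying this partial Abelianization can be checked with a few lines of computer algebra. A minimal sympy sketch (with B_0 set to 1, which suffices because both conditions are linear in B) verifies that the tangent profile solves the first-order and the second-order condition simultaneously:

import sympy as sp

K, lam = sp.symbols('K lambda_bar', positive=True)
B = sp.tan(lam*K)/lam    # candidate B-factor, with B_0 = 1

cond1 = B - sp.sin(2*lam*K)/(2*lam)*sp.diff(B, K)
cond2 = (2*lam**2*B + lam*sp.sin(2*lam*K)*sp.diff(B, K)
         - 2*sp.cos(lam*K)**2*sp.diff(B, K, 2))

print(sp.simplify(sp.expand_trig(cond1)))   # 0
print(sp.simplify(sp.expand_trig(cond2)))   # 0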
If we choose the classical value of the cosmological constant Λ_0 = Λ, and λ^2 = Δ / ε with a constant Δ (sometimes used in models of loop quantum gravity), we obtain c_f = 1 + 2 ΛΔln(ε/c_0) , where c_0 is an integration constant. The correct classical limit is obtained for Δ→0. The logarithmic dependence on ϵ is relevant on intermediate scales far from black-hole or cosmological horizons. It may then be related to MOND-like effects as shown in <cit.>. §.§ Constraints of the second kind A second class of explicit modified constraints is obtained from a specific choice for one of the modification functions, so far without a detailed physical motivation. Nevertheless, this case is interesting because it can be used to show the variety of possible covariant theories. §.§.§ General constraint Again referring to more detailed derivations for a scalar field in spherical symmetry <cit.>, we now have the modified Hamiltonian constraint H̃ = - λ̅/λλ_0 cos^2 (ν̅W̅) √(ε)[ a̅( - λ^2/λ̅^2Λ_0 + sin^2 (λ̅ K)/λ̅^2( (α_2/4 ε - ∂lnλ/∂ε) c_f + 1/2∂ c_f/∂ε)) + a̅sin (2 λ̅ K)/2 λ̅( (α_2/2 ε - ∂lnλ/∂ε) λ/λ̅ q + λ/λ̅∂ q/∂ε) + ( 𝒜 + P_W̅/cos (ν̅W̅)( ν/ν̅ c_h3 - sin (ν̅W̅)/ν̅∂lnν/∂ε) ) (c_f sin (2 λ̅ K)/2 λ̅ + λ/λ̅ q cos(2 λ̅ K)) - ν^2/ν̅^2P_W̅^2/a̅cos^2 (ν̅W̅)α_3/4 ε( c_f cos^2 (λ̅ K) - 2 λ/λ̅ q λ̅^2 sin (2 λ̅ K)/2 λ̅) + ( ε' a̅'/a̅^2 - ε”/a̅) cos^2 (λ̅ K) - (ε')^2/a̅( (α_2/4 ε - ∂lnλ/∂ε) cos^2 (λ̅ K) - ( 𝒜/a̅ + P_W̅/a̅cos (ν̅W̅)( ν/ν̅ c_h3 - sin (ν̅W̅)/ν̅∂lnν/∂ε) ) λ̅^2 sin (2 λ̅ K)/2 λ̅ + ν^2/ν̅^2P_W̅^2/a̅^2 cos^2 (ν̅W̅)λ̅^2 α_3/4 εcos^2 (λ̅ K) ) - 1/a̅ν̅^2/ν^2( (sin (ν̅W̅)/ν̅)' + ε' ( ν/ν̅ c_h3 - sin (ν̅W̅)/ν̅∂lnν/∂ε) )^2 ε/α_3] , with structure function q̃^θθ = λ̅^2/λ^2λ_0^2 ( ( c_f + (λ̅ε'/a̅)^2 ) cos^2(λ̅ K) - 2 λ̅^2 λ/λ̅ q sin(2 λ̅ K)/2 λ̅) cos^4 (ν̅W̅) ε/a̅^2 All the non-classical parameters are undetermined functions of ε only, except for the parameters λ̅ and ν̅ which are constants, and λ_0 which can depend on both ε and W̅. The constraint (<ref>) and its structure function (<ref>) are symmetric under the discrete transformation W̅→ - W̅, P_W̅→ - P_W̅ only if the λ_0 dependence on W̅ is restricted by the discrete symmetry (<ref>) to the form λ_0 (ε , W̅)=λ_0 (ε , -W̅), and only if c_h3 = 0 because it is independent of W̅. (Alternatively, the discrete transformation can be redefined as W̅→ - W̅, P_W̅→ - P_W̅, and c_h3→ - c_h3, in which case the constraint and structure function are symmetric even for non-zero c_h3.) The classical limit can be taken in different ways, the simplest one given by λ→λ̅ and ν→ν̅, followed by λ_0 , c_f , α_2,α_3 → 1, q, λ̅ , ν̅→ 0, and Λ_0 →Λ. The inhomogeneous-field observable is G [ω] = ∫ dθ ων/ν̅P_W̅/cos(ν̅W̅) , where ω is a constant. The associated conserved current J^μ has the components J^t = ν/ν̅P_W̅/cos(ν̅W̅) , J^θ = ν̅/νλ̅/λλ_0 cos^2 (ν̅W̅) 2 ε^3/2/α_3 a̅( (sin (ν̅W̅)/ν̅)' + ε' ( ν/ν̅ c_h3 - sin (ν̅W̅)/ν̅∂lnν/∂ε) ) . When P_W̅,W̅→0, the homogeneous mass observable associated to (<ref>) is given by ℳ = d_0 + d_2/8(exp∫ dε(α_2/2 ε - ∂lnλ^2/∂ε)) ×( c_f sin^2(λ̅ K)/λ̅^2 + 2 λ/λ̅ q sin(2 λ̅ K)/λ̅ - cos^2 (λ̅ K) (ε'/a̅)^2 ) + d_2/4∫ dε (λ^2/λ̅^2Λ_0 exp∫ dε (α_2/2 ε - ∂lnλ^2/∂ε)) , where d_0, and d_2 are constants with classical limits given by d_0→0 and d_2 → 1. Most of these properties are similar to those in the first class, but explicit solutions in solvable cases, such as spatially homogeneous ones, can reveal crucial differences, as we will see in Section <ref>. 
A key difference between the first and second class can be seen easily in the structure functions (<ref>) and (<ref>), respectively. The first one is even in the curvature component K, while the second one contains an odd term, multiplied by the modification function q. The same behavior was possible in spherical symmetry <cit.> where it may have far-reaching implications for various particle effects <cit.>. In the Hamiltonian constraint, the q-terms show that there may be modifications linear in K (if a Taylor expansion is used for the trigonometric functions). Such terms can be more relevant than the classical quadratic terms as the curvature scale is increased. The second class of modified constraints for polarized Gowdy models shows that these interesting features are not restricted to spherical symmetry. §.§.§ Partial Abelianization The partial Abelianization of this constraint follows the same procedure as the last one. It requires exactly the same B-factor (<ref>) and the associated A. However, a partial Abelianization is subject to the additional condition that the modification function q vanishes. §.§ Constraints compatible with the classical-W̅ limit The third class has modified constraints that have a limit in which the field W̅ behaves like a classical scalar field on the emergent (and non-classical) space-time. This case is useful because it allows us to make comparisons between the propagation speed of W̅ as a polarized gravitational wave and the speed of a massless scalar field that may be coupled minimally. §.§.§ General constraint The modified Hamiltonian constraint is given by H̃ = - λ̅/λλ_0 cos^2 (ν̅W̅) √(ε)[ a̅( λ^2/λ̅^2Λ_0 + ( c_f (α_2/4 ε - ∂lnλ/∂ε) + 1/2∂ c_f/∂ε) sin^2 (λ̅ K)/λ̅^2) + a̅( q/2(α_2/ε - 2 ∂lnλ/∂ε) + λ/λ̅∂ q/∂ε) sin(2 λ̅ K)/2 λ̅ + ( 𝒜 - P_W̅tan (ν̅W̅)/ν̅∂lnν/∂ε) ( c_f sin (2 λ̅ K)/2 λ̅ + λ/λ̅ q cos(2 λ̅ K)) + (ε')^2/a̅( (∂lnλ/∂ε-α_2/4 ε) cos^2 (λ̅ K). . + λ̅^2 (𝒜/a̅ - P_W̅/a̅tan (ν̅W̅)/ν̅∂lnν/∂ε) sin (2 λ̅ K)/2 λ̅) + ( ε' a̅'/a̅^2 - ε”/a̅) cos^2 (λ̅ K) ] + ν̅^2/ν^2√(q^θθ)/2[ P_W̅^2/cos^2 (ν̅W̅)α_3/2 ε + 2 ε/α_3( (sin (ν̅W̅)/ν̅)' - sin (ν̅W̅)/ν̅∂lnν/∂εε' )^2 ] , with the structure function q̃^θθ = ( ( c_f + (λ̅ε'/a̅)^2 ) cos^2 (λ̅ K) - 2 q λ/λ̅λ̅^2 sin(2 λ̅ K)/2 λ̅) λ̅^2/λ^2λ_0^2 cos^4 (ν̅W̅) ε/a̅^2 appearing explicitly in the last line. All the non-classical parameters are undetermined functions of ε only, except for the parameters λ̅ and ν̅ which are constants, and λ_0 which can depend on both ε and W̅. The constraint (<ref>) and its structure function (<ref>) are symmetric under the discrete transformation W̅→ - W̅, P_W̅→ - P_W̅ only if the λ_0 dependence on W̅ is restricted by the discrete symmetry (<ref>) to the form λ_0 (ε , W̅)=λ_0 (ε , -W̅). The classical limit can be taken in different ways, the simplest one given by λ→λ̅ and ν→ν̅, followed by λ_0 , c_f , α_2,α_3 → 1, q, λ̅ , ν̅→ 0, and Λ_0 →Λ. The classical-W̅ limit is obtained for ν=ν̅→0 and α_3→1. The last parenthesis in the Hamiltonian constraint then approaches the form of a classical scalar field propagating on the emergent space-time with inhomogeneous spatial component q̃_θθ. The W̅-field observable is given by G [ω] = ∫ dθ ων/ν̅P_W̅/cos(ν̅W̅) where ω is a constant. The associated conserved current J^μ has the components J^t = ν/ν̅P_W̅/cos(ν̅W̅) , J^θ = = ν̅/ν√(q^θθ)2 ε/α_3( (sin (ν̅W̅)/ν̅)' - ε' sin (ν̅W̅)/ν̅∂lnν/∂ε) . 
When P_W̅,W̅→0, the homogeneous mass observable associated to (<ref>) is given by ℳ = d_0 + d_2/8(exp∫ dε(α_2/2 ε - ∂lnλ^2/∂ε)) ×( c_f sin^2(λ̅ K)/λ̅^2 + 2 λ/λ̅ q sin(2 λ̅ K)/2 λ̅ - cos^2 (λ̅ K) (ε'/a̅)^2 ) + d_2/4∫ dε (λ^2/λ̅^2Λ_0 exp∫ dε (α_2/2 ε - ∂lnλ^2/∂ε)) , where d_0, and d_2 are constants with classical limits given by d_0→0 and d_2 → 1. §.§.§ Partial Abelianization The partial Abelianization of this constraint follows the same procedure as outlined in the first class of modified constraints. It implies the B-factor (<ref>) and the associated A-factor, but in addition requires that the modification function q vanishes, as in the second class. § DYNAMICAL SOLUTIONS WITH HOMOGENEOUS SPATIAL SLICES First indications of possible physical effects of our modifications can be obtained by looking at properties of spatially homogeneous solutions. In this case, partial differential equations are replaced by ordinary ones that can often be solved more easily. §.§ Classical constraint For the sake of comparison, we first present useful gauge conditions and solutions in the classical case with vanishing cosmological constant. Strict homogeneity then implies Kasner solutions, while an inhomogeneous solution for W̅ can also be allowed. §.§.§ Conformal gauge The coordinates of the conventional Gowdy metric (<ref>) are associated to the gauge choice N^θ = 0 , ε = T with the time coordinate T. We impose this gauge and for now work with the classical constraint (<ref>). The remaining metric components can be expressed in terms of the lapse function and the two fields W=ln√(E^y/E^x)=W , a = ln√(E^x E^y / ε) = lna - 1/2lnε . The on-shell conditions H_θ = 0 and H=0 in this gauge become H_θ = P_WW' + a K' = 0 , H = P_W^2 - 4 T a K 𝒜 - a^2 K^2 + 4 T^2 (W')^2/4 √(T)a = 0 . Using the latter expression, we obtain an equation of motion ∂_T (a K) = {a K , H [N]} = - NH which vanishes on-shell, such that a K = μ where μ is a constant. The consistency equation ε̇=∂ε/∂ T = 1 can be solved for the lapse function N = 1/K √(T) = μ^-1a̅/√(ε) = μ^-1√(q_θθ) . The equations of motion for a and W, respectively, imply 𝒜 = μ( ȧ/a - 1/2 T) , P_W = 2 μ T Ẇ . Using these results, the on-shell conditions H_θ=0 and H=0 can be rewritten as a' = 2 T W W' , a = - 1/4 T + T (W^2 + (W')^2/μ^2) , where we have used the identification (<ref>). The equation of motion W = {{W , H[N]} , H[N]} requires some care because the lapse function (<ref>) is phase-space dependent. In a first-order equation of motion, using a single Poisson bracket with H[N], any term resulting from a non-zero Poisson bracket with N would be multiplied by H and therefore vanish on-shell. However, this argument does not apply to iterated Poisson brackets of some phase-space function with H[N], where non-zero on-shell terms may contribute. Duely taking into account the phase-space dependence of N, such that the second {·,H[N]} acts on this function contained in the first H[N], we obtain the second-order equation of motion 0 = W + W/T - W”/μ^2 . It can be checked that this equation is equivalent to what would be obtained in standard general relativity. (If N were treated as phase-space independent, we would instead obtain the bracket {{W , H[N]} , H[N]}= - W/T + W”/μ^2 + Ẇ(1/(4 T) - T Ẇ - T W”/μ^2 ), using the lapse function (<ref>) only after computing the brackets. This expression has extra terms compared with the correct equation (<ref>).) 
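This wave equation is also easy to test numerically. The following sketch (Python/scipy; the mode number n and the phase γ are arbitrary illustrative choices, and μ = 1 as fixed below) evaluates the finite-difference residual of 0 = ∂_T^2 W + T^{-1}∂_T W - ∂_θ^2 W for a single mode W = J_0(nT) sin(nθ + γ) of the type appearing in the general solution given next:

import numpy as np
from scipy.special import j0

n_mode, gamma = 3, 0.4                               # illustrative mode parameters
W = lambda T, th: j0(n_mode*T)*np.sin(n_mode*th + gamma)

T0, th0, h = 2.0, 1.0, 1e-4
d2T  = (W(T0 + h, th0) - 2*W(T0, th0) + W(T0 - h, th0))/h**2
dT   = (W(T0 + h, th0) - W(T0 - h, th0))/(2*h)
d2th = (W(T0, th0 + h) - 2*W(T0, th0) + W(T0, th0 - h))/h**2

print(d2T + dT/T0 - d2th)    # ~ 0, up to finite-difference error of order 1e-7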
These are the equations of motion for the polarized Gowdy system in conventional variables, which have the general solution <cit.> W = α + βln T + ∑_n=1^∞[ a_n J_0 (n T) sin (n μ^-2θ + γ_n) + b_n N_0 (n T) sin (n μ^-2θ + δ_n) ] , where α, β, a_n, b_n, γ_n, and δ_n are real constants, and J_0 and N_0 are Bessel and Neumann functions of the zeroth order, respectively. Given the solution for W, direct integration of (<ref>) gives the expression for a. The space-time line element in this gauge becomes d s^2 = μ^-2 e^2 a( - d T^2 + μ^2 dθ^2 ) + T ( e^-2 W d x^2 + e^2 W d y^2 ) . The square of the lapse function, N^2 = a^2 / (μ^2 T), exactly equals q_θθ = a^2 / T only if μ=± 1. Thus, setting μ=1, imposing positivity of the lapse function in the region T>0, we obtain a conformally flat metric for the T-θ components. With this choice, the equation of motion (<ref>) implies that W-excitations travel at the speed of light. Finally, note that using (<ref>) and (<ref>) and inserting this solution in the symmetry generator (<ref>) with ω=1 we find that it corresponds to G [1] = 4 πμβ , where we used the periodicity conditions in θ. This is clearly a conserved quantity. §.§.§ Homogeneous solution For a spatially homogenous background, we consider the special case W'=0. From (<ref>), this implies that a_n=b_n = 0. The equations of motion (<ref>) now have the solution a = 4β^2-1/4ln( T/T_0) , a̅ = T_0^1/4-β^2 T^β^2+1/4 , with a constant T_0, and hence K = μ T_0^β^2-1/4 T^-β^2-1/4 . The space-time metric (<ref>) becomes d s^2 = μ^-2(T/T_0)^2β^2-1/2( - d T^2 + μ^2 dθ^2 ) + e^2 α T^1+2β d x^2 + e^-2 α T^1-2β d y^2 . The constants μ, T_0 and α can be absorbed in the definition of coordinates. In proper time, defined by τ(T)∝ T^β^2+3/4, we then have the line element ds^2 = - dτ^2+ τ^2p_1 dθ^2+ τ^2p_2 dx^2+ τ^2p_3 dy^2 with exponents p_1=β^2-1/4/β^2+3/4 , p_2=β+1/2/β^2+3/4 , p_3=β-1/2/β^2+3/4 that satisfy the Kasner relations p_1+p_2+p_3=1=p_1^2+p_2^2+p_3^2. The background Kasner behavior is therefore determined by the observable G[1]. If the periodic terms from a non-homogeneous W in (<ref>) are included, they describe a polarized gravitational wave travelling on a Kasner background. §.§.§ The flat Kasner solution As a special case, a flat solution within the Kasner class is defined by further taking α=0, β = 1/2, and μ=1. We obtain a = 0 , a̅ = √(T) , K = μ / √(T) , W̅ = 1/2ln T , P_W̅ = 1 , A = 0 and the space-time metric (<ref>) becomes d s^2 = - d T^2 + dθ^2 + T^2 d x^2 + d y^2 . The Ricci scalar vanishes, which means that this expression may be considered a vacuum solution. (The T-x part is a 2-dimensional Milne model.) Upon applying the coordinate transformation t_ M = T cosh x , x_ M = T sinh x with inverse T^2 = t_ M^2 - x_ M^2 , x = arctanhx_ M/t such that - d t_ M^2 + d x_ M^2 = - d T^2 + T^2 d x^2 , the Kasner-like line element (<ref>) becomes Minkowskian, d s^2 = - d t_ M^2 + dθ^2 + d x_ M^2 + d y^2 . While this result shows that the specific Kasner solution (<ref>) is a locally flat space-time, hypersurfaces of constant time T define a 3-dimensional space with non-vanishing extrinsic curvature K_a b d x^a d x^b = 1/2{q_a b , H[1]} d x^a d x^b = T d x^2 . §.§.§ Homogeneous solution: Internal-time gauge In order to compare the results of the different classes of constraints, we will evaluate them in the homogeneous case, P_W̅'=W̅'=a̅'=ε'=K'=𝒜'=N'=0, with vanishing cosmological constant, Λ=0. 
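Before changing to the internal-time gauge, the Kasner relations used above can be confirmed with a short symbolic computation. The sketch below reads off the exponents from the powers of T in the spatial metric components and from τ ∝ T^{β^2+3/4}, and then checks both relations:

import sympy as sp

beta = sp.symbols('beta', real=True)
tau_power = beta**2 + sp.Rational(3, 4)                  # tau ~ T**(beta**2 + 3/4)
T_powers = [2*beta**2 - sp.Rational(1, 2),               # theta-theta component
            1 + 2*beta,                                  # T**(1 + 2*beta) component
            1 - 2*beta]                                  # T**(1 - 2*beta) component

p = [sp.simplify(w/(2*tau_power)) for w in T_powers]     # q_ii ~ tau**(2*p_i)
print(p)
print(sp.simplify(sum(p) - 1))                           # 0
print(sp.simplify(sum(pi**2 for pi in p) - 1))           # 0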
It will be convenient to work with coordinates adapted to the full range of the curvature variable K, used as an internal time coordinate. We define this internal-time gauge by N^θ = 0 , K = T_K , with a new time coordinate T_K. In the homogeneous case, as before, we obtain ∂_T_K(a̅ K) = {a̅ K , H[N] } = - H N , which vanishes on-shell and implies a̅ = μ/T_K = μ/K , for some constant μ. Because of homogeneity, the local version of the observable (<ref>) is conserved, Ġ=2πṖ_W=0. Therefore, any P_W in the constraints and equations of motion is time-independent and can be set equal to P_W=2 μβ, defining the constant β in the internal-time gauge. Using this expression and the chain rule, we obtain the equations dε/ d K = ε̇/K̇ = - 4 ε/(1 + 4 β^2) K , dW̅/ d K = Ẇ̅̇/K̇ = - 4 β/(1 + 4 β^2) K , with solutions ε = c_ε T_K^- 4 / (1 + 4 β^2) , W̅ = c_w - 4 β/1 + 4 β^2ln T_K = ln( e^c_w T_K^- 4 β/(1 + 4 β^2)) . The integration constants may be redefined as c_ε = (μ T_0^(4β^2-1)/4)^4/(4 β^2+1) , e^2 c_w = e^2α(μ T_0^(4β^2-1)/4)^8 β/(4 β^2+1) . The on-shell condition H_θ=0 is trivial in the homogeneous case, while H=0 greatly simplifies and can be solved for 𝒜 = μ4 β^2 - 1/4 ε = 4 β^2 - 1/4 c_εμ T_K^4 μ^2 / (μ^2 + 4 β^2) . Finally, the lapse function is obtained by solving the consistency equation K̇=∂ K/∂ T_K=1, N = - 4/1 + 4 β^2√(c_ε) T_K^-2 /(1 + 4 β^2) - 2 . The negative value of the lapse function means that evolution runs from higher to lower values of T_K, similar to what happens in Schwarzschild coordinates in a black hole's interior. The space-time metric (<ref>) is then given by d s^2 = c_ε^-1 T_K^2 (1 - 4 β^2) / (4 β^2+1)( - (4/1 + 4 β^2)^2 c_ε^2 T_K^- 2 (5 + 4 β^2)/(4 β^2+1) d T_K^2 + μ^2 dθ^2 ) + c_ε( e^2 c_w T_K^- 4 ( 2 β + 1 )/(4 β^2+1) d x^2 + e^-2 c_w T_K^4 (2 β - 1)/(4 β^2+1) d y^2 ) . The conventional time coordinate T and our curvature time T_K are related by T_K = μ T_0^β^2-1/4 T^-β^2-1/4 . This coordinate transformation turns the metric (<ref>) into (<ref>). The flat Kasner solution defined by α=0, β = 1/2, and μ=1, in the present gauge implies ε = T_K^- 2 , W̅ = - ln T_K . The coordinate transformation relating the two gauges is then simplified to T_K = 1/√(T) and the space-time metric is given by d s^2 = - 4 T_K^-6 d T_K^2 + dθ^2 + T_K^-4 d x^2 + d y^2 with extrinsic curvature K_a b d x^a d x^b = T_K^-2 d x^2 of constant-T_K-slices. The coordinate singularity of (<ref>) at T→0_+ is here given by T_K →∞, while the coordinate singularity of (<ref>) at T_K → 0_+ corresponds to T→∞. §.§ Singularity-free solutions We now consider the first two classes of modified theories, given by constraints compatible with the limit of reaching the classical constraint surface and constraints of the second kind. In both cases, the classical singularity of homogeneous solutions is removed. §.§.§ New variables We first use the modified constraint (<ref>) with constant λ=λ̅ and λ_0, choosing the gauge N^θ = 0 , ε = T . The on-shell conditions do not change under a linear combination of the constraints, and thus we have the classical constraint surface given by (<ref>). The consistency equation ε̇ = 1 can be solved for the lapse function N = λ_0^-1/√(ε)1/K √(1 - λ̅^2 K^2) . Using this, we obtain ∂_T ( a K ) = {a K , H̃ [N] }∝H̃ , which vanishes on-shell. Thus, a K = μ with a constant μ. 
The T-θ part of the line element is no longer conformally flat because we have the emergent metric component q̃_θθ = λ_0^-2a̅^2/T = λ_0^-2 e^2 a while N^2 = (μλ_0)^-2/1 - λ̅^2 K^2a^2/ε = μ^-2/1 - λ̅^2 K^2q̃_θθ = μ^-2 B^-2 q_θθ^ (cl) , where q_θθ^ (cl) = a^2 / ε is the classical expression and B = λ_0 √(1 - λ̅^2 K^2) is the factor in the linear combination (<ref>). Because ε'=0 in this gauge, the coefficient A in the linear combination vanishes. Thus, in this gauge the full Hamiltonian generator is identical to the classical one: H̃[N]+H_θ[N^θ]=H̃[B^-1√(μ^-1 q_θθ^ (cl))]= H[√(μ^-1 q_θθ^ (cl))]= H[N^ (cl)] . This means that all of the equations of motion are identical to the classical ones, (<ref>) and (<ref>), obtaining the same classical solutions (<ref>). However, even though all the phase-space solutions retain their classical forms, the resulting space-time geometry is non-classical because the structure function differs by a constant factor of λ_0^2 from the classical one and the lapse function differs from its classical expression more significantly. The resulting emergent space-time line element is given by d s^2 = e^2 a/μ^2λ_0^2( - d T^2/1-λ̅^2 μ^2 e^- 2 a / T + μ^2 dθ^2 ) + T ( e^2 W d x^2 + e^- 2 W d y^2 ) , where a and W are related to the phase-space variables by (<ref>). In the limit λ̅→0, λ_0→1 we recover the classical solution whose curvature invariant R_αβμν R^αβμν diverges as T → 0_+. Seen from positive T, this singularity lies in the past where the x-y plane had collapsed to zero area ε=T. In the modified case, the T-θ part of the metric is no longer conformally flat. There is a new singularity at T =λ̅^2 e^-2 a =: T_λ̅, a time later than the classical singularity at T=0_+. This new singularity is therefore the relevant one in the positive-T branch, but it is not a physical singularity. To see this in detail, we will use a new gauge in the next subsection. For now, the special case of a (modified) flat Kasner solution can be analyzed more easily. It is given by μ=1, β = 1/2, α=a_n = b_n = 0, which implies W = 1/2ln T and a = 0. The line element is then equal to d s^2 = - d T^2/1-λ̅^2 / T + dθ^2 + T^2 d x^2 + d y^2 , where we have chosen λ_0=1 so as to recover the classical Kasner metric for large T≫λ̅^2. In the classical case λ̅→ 0 this is the flat Kasner solution whose curvature invariants are finite. The solutions can be analytically extended across T=0, but they are causally ill-behaved as they form closed time-like curves. However, for λ̅≠ 0, the singularity T=0_+ is hidden inside a region bounded by the new singularity at T= λ̅^2 > 0. Curvature invariants can be used to support the expectation that this is only a coordinate singularity. We first note that the Kasner solution (<ref>) of the modified theory is not flat: It has the Ricci scalar R = λ̅^2/T^3 , and the Kretschmann invariant K≡ R_μναβ R^μναβ = - 1/2λ̅^4/T^10(1 + T^4 ( 1 - λ̅^2/T)^2) . At T= λ̅^2, both are finite. Moving across this value, a new gauge must be chosen, in which, as we will see in what follows, the physical singularity at T=0 no longer appears. (This construction is similar to the non-singular black-hole models in <cit.>.) Any hypersurface of constant time T of the modified Kasner spacetime (<ref>) defines a 3-dimensional space with non-vanishing extrinsic curvature K_a b d x^a d x^b = T √(1 - λ̅^2/T) d x^2 unless T=λ̅. This specific value implies a hypersurface of time-reflection symmetry, which can be used to glue a time reverse of our solution at T=λ̅. 
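The curvature invariants quoted above are straightforward to reproduce. A minimal sympy sketch (using the standard convention R_{bd} = R^a_{bad} for the Ricci tensor) builds the modified flat Kasner metric and evaluates its Ricci scalar:

import sympy as sp

T, theta, x, y = sp.symbols('T theta x y', positive=True)
lam = sp.symbols('lambda_bar', positive=True)
coords = [T, theta, x, y]
n = 4

g = sp.diag(-1/(1 - lam**2/T), 1, T**2, 1)     # modified flat Kasner line element above
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                                       + sp.diff(g[d, c], coords[b])
                                       - sp.diff(g[b, c], coords[d]))
                           for d in range(n))/2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}, Ricci tensor R_{bd} = R^a_{bad}, Ricci scalar
def riem(a, b, c, d):
    expr = sp.diff(Gamma[a][d][b], coords[c]) - sp.diff(Gamma[a][c][b], coords[d])
    return expr + sum(Gamma[a][c][e]*Gamma[e][d][b] - Gamma[a][d][e]*Gamma[e][c][b]
                      for e in range(n))

ricci = lambda b, d: sum(riem(a, b, a, d) for a in range(n))
R = sp.simplify(sum(ginv[b, d]*ricci(b, d) for b in range(n) for d in range(n)))
print(R)    # lambda_bar**2/T**3, in agreement with the Ricci scalar quoted above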
The classical singularity at T=0 is then replaced by a transition from collapse to expansion. If the interpretation of the classical flat solution as a vacuum space-time is extended to the modified theory, it could suggest a vacuum different from the usual Minkowski one, being approximately flat only for T≫λ̅^2. However, in Section <ref> we will show that drawing such a conclusion only based on homogeneous models in a fixed gauge of internal time would not be justified. For now, we continue our analysis of homogeneous dynamics. §.§.§ Periodic variables We shall now use the modified constraint (<ref>) with constant λ_0 and λ=λ̅, reproducing the above results because this version will serve as a guide to obtaining the dynamical solutions of the other two constraints. We again choose the gauge N^θ = 0 , ε = T . The on-shell conditions are 0 = 1/4 εsin^2 (λ̅ K)/λ̅^2 - cos^2 (λ̅ K)/4 εP_W^2/a^2 + sin (2 λ̅ K)/2 λ̅𝒜/a - ε(W')^2/a^2cos^2 (λ̅ K) , 0 = a K' + P_WW' and the consistency equation ε̇ = 1 can be solved for the lapse function N = λ_0^-1/√(ε)2 λ̅/sin (2 λ̅ K) . Using this result, we obtain ∂_T ( atan (λ̅ K)/λ̅) = {atan (λ̅ K)/λ̅ , H̃ [N] }∝H̃ , which vanishes on-shell. Thus, atan (λ̅ K) / λ̅ = μ where μ is a constant. Because of the canonical transformation involved, the identification (<ref>) changes to W = W , a = ln√(E^x E^y / ε) = ln( a/cos (λ̅ K)) - ln√(ε) . Therefore, sin(λ̅ K)=λ̅μ/a̅/cos(λ̅ K)= λ̅μ/√(ε) e^a . The equations of motion for a and W respectively give 𝒜 = μ∂_T ( ln( a/cos(λ̅ K)) - ln√(T)) = μa , P_W = 2 μ T Ẇ . Using these results, the constraints H_θ=0 and H̃=0 can be rewritten as a' = 2 T W W' , a = - 1/4 T + T (W^2 + (W')^2/μ^2) , where we have used the identification (<ref>). The equation of motion W = {{W , H̃[N]} , H̃[N]}, such that the brackets do act on the lapse as discussed before, can be rewritten as 0 = W + W/T - W”/μ^2 . These equations are identical to the classical ones. The emergent line element is then given by d s^2 = λ_0^-2 e^2 a( - ^2 (λ̅ K) d T^2 + dθ^2 ) + T ( e^2 W d x^2 + e^- 2 W d y^2 ) = λ_0^-2 e^2 a( - d T^2/1-λ̅^2μ^2/(T e^2a) + dθ^2 ) + T ( e^2 W d x^2 + e^- 2 W d y^2 ) . Modified Kasner models are obtained for W=βln T, which implies e^2a∝ T^2β^2-1/2 as in the classical case. The line element then equals ds^2 = λ_0^-2T^2β^2-1/2( - d T^2/1-λ̅^2T^-2β^2-1/2 + dθ^2 ) + T^1+2β d x^2 + T^1-2β d y^2 if we set μ^2 equal to the proportionality factor in e^2a. For constant but non-zero λ, proper time is now given by a hypergeometric function of T, which complicates any further analysis of general Kasner models. It is nevertheless possible to understand the general behavior. To do so, we first introduce a new time coordinate t=T^β^2+3/4, such that dτ∝T^β^2-1/4/√(1-λ̅^2 T^-2β^2-1/2) dT= 1/β^2+3/4 dt/√(1-λ̅^2 t^-2(β^2+1/4)/(β^2+3/4)) according to the time component of (<ref>). For large T, t and τ therefore proceed at almost the same rate, up to a constant rescaling. With respect to proper time, we now have the line element ds^2=- dτ^2+ λ_0^-2 t(τ)^2p_1 dθ^2+ t(τ)^2p_2 dx^2+t(τ)^2p_3 dy^2 with Kasner exponents p_i as in (<ref>), obeying the classical relations, and the inverse of a hypergeometric function (times t) for t(τ). (For λ̅=1, t(τ) is the inverse of t _2F_1(1/2,1/a;1+1/a;t^a) if a=-2(β^2+1/4)/(β^2+3/4)≠-1. If a=-1, t(τ) is the inverse of √((t-1)t)-sinh^-1(t).) 
For large t(τ) such that λ̅≪ T^β^2+1/4=t^(β^2+1/4)/(β^2+3/4), the behavior is close to the classical Kasner dynamics with the same relationship between the Kasner exponents and the conserved quantity β. For smaller t, however, there is a new effect because the relationship between t and τ is not one-to-one, in contrast to the classical solutions. We have dt/ dτ= λ_0(β^2+3/4) √(1-λ̅^2 t^-2(β^2+1/4)/(β^2+3/4))=0 at t=t_λ̅=λ̅^(β^2+3/4)/(β^2+1/4) . At the same value of t, d^2t/ dτ^2 = λ_0 λ̅^2(β^2+1/4) t^-(3β^2+5/4)/(β^2+3/4)/√(1-λ̅^2 t^-2(β^2+1/4)/(β^2+3/4)) dt/ dτ = λ_0^2λ̅^2(β^2+1/4)(β^2+3/4) t^-(3β^2+5/4)/(β^2+3/4) = λ_0^2(β^2+1/4)(β^2+3/4) t_λ̅^-1>0 such that t(τ) has a local minimum at the value t(τ)=t_λ̅. The full dynamics therefore describes non-singular evolution of a collapsing Kasner model connected to an expanding Kasner model with the same exponents. All three spatial directions transition from collapse to expansion at the same time τ(t_λ̅). The behavior of t(τ) is illustrated in Figs. <ref> and <ref>. The special case of the modified flat Kasner model is given by μ=1, β = 1/2, α=a_n = b_n = 0, which implies W = 1/2ln T and a = 0. We have 𝒜=0 and sin(λ̅ K)/λ̅ = 1/√(T) , and the line element d s^2 = λ_0^-2( - ^2 (λ̅ K) d T^2 + dθ^2 ) + T^2 d x^2 + d y^2 . Here, proper time τ(T) can be integrated more easily but its inversion to T(τ) remains complicated. §.§.§ Homogeneous solution: Internal-time gauge Let us now use the inhomogeneous curvature component as an internal time, T_K = K. The two time coordinates are related to each other by sin(λ̅ T_K)/λ̅ = 1/√(T) such that - 2 λ̅^3/sin^3 (λ̅ T_K)cos (λ̅ T_K) d T_K = d T . Substituting in the line element for the modified flat Kasner model, we obtain d s^2 = λ_0^-2( - 4 λ̅^6/sin^6 (λ̅ T_K) d T_K^2 + dθ^2 ) + λ̅^4/sin^4(λ̅ T_K) d x^2 + d y^2 , which is indeed regular at maximum curvature, T_K = π / 2 λ̅, defining a surface of reflection symmetry. We now derive this result by directly solving the equations of motion in the internal-time gauge (<ref>), rather than performing a coordinate transformation. For the homogeneous model, we set P_W̅'=W̅'=a̅'=ε'=K'=𝒜'=N'=0 and assume a vanishing cosmological constant, Λ=0. We note that the modified constraints (<ref>) and (<ref>) are identical in the homogeneous case if the classical values for the functions c_f , α_2,α_3 → 1, q → 0, and Λ_0 →Λ→ 0 are taken, with constant λ=λ̅, ν=ν̅ and λ_0. The results of the present and the following subsections then apply to both cases. We first see that because of homogeneity the local version of the observable (<ref>) is conserved, Ġ=2πṖ_W=0, and we will write the momentum as P_W=2 μβ with constants μ and β. The on-shell condition H_θ=0 is trivially satisfied in this case, while H̃=0 is solved by 𝒜 = - a̅/4 εtan (λ̅ K)/λ̅ + λ̅ (λ̅ K)/4 ε4 μ^2 β^2/a . We then obtain ∂_T ( atan (λ̅ K)/λ̅) = {atan (λ̅ K)/λ̅ , H̃ [N] }∝H̃ , which vanishes on-shell, such that atan (λ̅ K) / λ̅ = μ where μ is a constant. Hence, 𝒜 = ( 4 β^2 - 1 ) μ/4 ε . In combination with the chain rule we obtain the equations dε/ d K = ε̇/K̇ = - 4 λ̅/(1+4 β^2) tan (λ̅ K)ε , d/ d K(sin(ν̅W̅)/ν̅) = - 4 βλ̅/(1+4 β^2) tan (λ̅ K) , solved by ε = c_ε(sin (λ̅ K)/λ̅)^-4/(1+4 β^2) and sin(ν̅W̅)/ν̅ = ln(e^c_wsin (λ̅ K)/λ̅)^-4 β/(1+4 β^2) . For convenience, the integration constants may be redefined as c_ε = (μ T_0^(4β^2-1)/4)^4/(4 β^2+1) and e^2 c_w = e^2α(μ T_0^(4β^2-1)/4)^8 β/(4 β^2+1) . 
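The qualitative behaviour of t(τ) is easy to reproduce numerically. The following sketch (with illustrative parameter values only) integrates dτ/dt for the general modified Kasner case and glues the time reverse at the minimum t_λ̅, exhibiting the transition from collapse to expansion described above:

import numpy as np
from scipy.integrate import quad

beta, lam_bar, lam0 = 0.8, 0.3, 1.0                   # illustrative values only
a = 2*(beta**2 + 0.25)/(beta**2 + 0.75)
t_min = lam_bar**(2.0/a)                              # t_lambda_bar, where dt/dtau = 0

def dtau_dt(t):
    return 1.0/(lam0*(beta**2 + 0.75)*np.sqrt(1.0 - lam_bar**2*t**(-a)))

def tau_of_t(t):                                      # integrable 1/sqrt singularity at t_min
    return quad(dtau_dt, t_min, t)[0]

ts = np.linspace(t_min, 5.0, 200)[1:]
taus = np.array([tau_of_t(t) for t in ts])
tau_full = np.concatenate([-taus[::-1], taus])        # collapsing branch glued at tau = 0
t_full = np.concatenate([ts[::-1], ts])               # t(tau) has its minimum t_min at tau = 0
print(f"t_min = {t_min:.4f}; t(tau = {taus[-1]:.2f}) = {ts[-1]:.2f}")

Plotting t_full against tau_full reproduces the qualitative behaviour shown in the figures referenced above.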
Finally, the lapse function is obtained by solving the consistency equation K̇=1, N = - 4/1 + 4 β^2√(c_ε)/λ_0^2 (ν̅W̅) (sin (λ̅ K)/λ̅)^-2 /(1 + 4 β^2) - 2 . The space-time line element is then given by d s^2 = ^4 (ν̅W̅) (sin (λ̅ T_K)/λ̅)^4/(1+4 β^2)-2 ×( - (4/1 + 4 β^2)^2 c_ε/λ_0^2(sin (λ̅ T_K)/λ̅)^-8 /(1 + 4 β^2) - 2 d T_K^2 + μ^2/c_ε dθ^2 ) + c_ε e^2 c_w(sin (λ̅ T_K)/λ̅)^- 4 ( 2 β + 1 )/(4 β^2+1) d x^2 + c_ε e^-2 c_w(sin (λ̅ T_K)/λ̅)^4 (2 β - 1)/(4 β^2+1) d y^2 where W̅ is implicitly given by (<ref>). §.§.§ Modified flat Kasner solution The Kasner solution μ=1, β=1/2 is given by the simpler metric d s^2 = ^4 (ν̅W̅) ( - 4/λ_0^2(sin (λ̅ T_K)/λ̅)^-6 d T_K^2 + dθ^2 ) + (sin (λ̅ T_K)/λ̅)^- 4 d x^2 + d y^2 , where W̅ can be obtained from inverting its relation with K, sin (λ̅ T_K)/λ̅ = exp(- sin(ν̅W̅)/ν̅) . Taking ν̅→ 0, the Kasner solution in this gauge (<ref>) has the Ricci scalar R = λ̅^2 (sin (λ̅ T_K)/λ̅)^6 , and the Kretschmann invariant K≡ R_μναβ R^μναβ = - λ̅^4/8(sin (λ̅ T_K)/λ̅)^22 . Both expressions are finite at T_K= π / (2λ̅). The model is approximately flat only for T_K≪λ̅, and both curvature invariants vanish in the classical limit λ̅→0. A hypersurface of constant time T_K of the modified Kasner space-time (<ref>) defines a 3-dimensional space with extrinsic curvature K_a b d x^a d x^b = λ_0 λ̅^2 cos (λ̅ T_K)/sin^2 (λ̅ T_K) dθ^2 which vanishes only at the maximum-curvature hypersurface. §.§.§ Flat solution: Non-unique vacuum From the above example one might conclude that the vacuum solution of this theory is different from Minkowski space-time, as suggested for a similar case for instance in <cit.>. It is easy to see that such a statement is incorrect because flat space-time, described by N = 1 , N^θ = 0 , ε = 1 , a̅ = 1 , W̅ = 0 , A = 0 , K = 0 , P_W̅ = 0 , is a solution to the same theory if, to be specific, we take the classical values for all the modification functions except for λ and ν which we leave as arbitrary functions. This solution is excluded from the case of Kasner-like line elements by the assumption that ε can be used as a time variable, such that ε=1 can be obtained only on one spacelike hypersurface but not across an entire space-time region. Nevertheless, flat Minkowski space-time is a solution of the same modified theory in which we obtained our Kasner space-times. Minkowski space-time is relevant because its local behavior describes the background space-time of vacuum states in quantum field theory. According to the general meaning of “vacuum” in particle physics or general relativity, all Kasner models are vacuum solutions because they do not include matter. The case of β=1/2 is special only because, classically, it happens to be locally equivalent to Minkowski space-time and just appears written in non-Cartesian coordinates. The result that this correspondence is not realized in a modified theory only means that there is no longer a Kasner model related to flat Minkowski space-time. It does not mean that Minkowski space-time itself is modified or no longer appears as a solution, as demonstrated by the explicit counter-example of (<ref>). A distinguishing feature of (<ref>) compared with any Kasner solution is that it is not only devoid of matter, but also has a vanishing local gravitational degrees of freedom described by (W̅,P_W̅). We can formalize this property by making use of the definition of an effective stress-energy, obtained from the Einstein tensor of the emergent space-time metric. 
We find that the flat solution (<ref>) has a vanishing net stress-energy tensor, while the modified flat Kasner solution has a non-trivial one, as shown by the non-vanishing Ricci scalar (<ref>). Therefore, the solutions are distinguished from one another by their effective gravitational energy content. From this perspective, the standard Minkowski solution remains the preferred vacuum space-time also in a modified theory. For λ̅→0, the effective stress-energy tensor vanishes, and the modified flat Kasner solution approaches the strictly flat Minkowski solution. The correct identification of a candidate vacuum solution therefore requires an extension of strict minisuperspace models to some inhomogeneity, which tells us that the non-zero W̅ and P_W̅ in Kasner models are homogeneous remnants of a propagating gravitational degree of freedom, and the correct identification of a covariant space-time structure that defines curvature and effective stress-energy. It is also important to have a gauge-invariant treatment that is not built on a fixed gauge choice such as an internal time, as such a choice might restrict the accessible solution space. None of these ingredients had been available in previous models of quantum cosmology. With some choices of modification functions, it might happen that strict Minkowski space-time is no longer a solution or that the zero-mass limit of a black-hole solution differs from Minkowski space-time as seen explicitly in an example in <cit.>. But such a conclusion can not be drawn in a reliable manner in theories based on restriced gauge choices or on incomplete demonstrations of covariance properties. §.§ Constraints compatible with the classical-W̅ limit We now use the constraint (<ref>) in the internal time gauge (<ref>) for the homogeneous case where P_W̅'=W̅'=a̅'=ε'=K'=𝒜'=N'=0. For simplicity we take the classical values for the following modification functions and assume a vanishing cosmological constant, c_f , α_2,α_3 → 1, q → 0, and Λ_0 →Λ→ 0. We also set λ_0, λ=λ̅, and ν=ν̅ constant. With these values, the inhomogeneous component of the emergent spatial metric is given by q̃_θθ = λ_0^-2cos^-4 (ν̅W̅)a̅^2/cos^2 (λ̅ K)1/ε . As before, homogeneity implies that the local version of the observable (<ref>) is conserved, Ġ=0, and we shall write it as G=4πμβ such that P_W̅=2μβ, anticipating an integration constant (μ) that will be introduced in the process of solving equations of motion. The value of β then parameterizes the momentum. The relevant equations of motion for recovering the emergent space-time geometry are given by dln(a̅^2/cos^2(λ̅ K))/ d( sin(λ̅ K) / λ̅) = - 2 λ̅/sin(λ̅ K)( sin^2 (λ̅ K)/λ̅^2cos^2(λ̅ K) a̅^2/cos^2(λ̅ K) + 4 μ^2 β^2 cos(2λ̅ K)/|cos(λ̅ K)|) ×( sin^2(λ̅ K)/λ̅^2cos^2(λ̅ K) a̅^2/cos^2(λ̅ K) + 4 μ^2 β^2 |cos(λ̅ K)|)^-1 as well as dlnε/ d K = - 4 sin(2 λ̅ K)/2 λ̅( sin^2(λ̅ K)/λ̅^2 |cos(λ̅ K)| + 4 μ^2 β^2 cos^2(λ̅ K)/a̅^2)^-1 and d/ d K(sin (ν̅W̅)/ν̅) = - 4 μβcos^2(λ̅ K)/a̅( sin^2(λ̅ K)/λ̅^2 |cos(λ̅ K)| + 4 μ^2 β^2 cos^2(λ̅ K)/a̅^2)^-1 where we have chosen K as an evolution parameter. Equation (<ref>) can be solved exactly, a̅^2/cos^2 (λ̅ K) = μ^2/4λ̅^2/sin^2(λ̅ K)( 1-4 β^2 + √( (1-4 β^2)^2 + 16 β^2/|cos(λ̅ K)|))^2 . The integration constant μ^2 and the sign of the square root have been chosen for the solution to match the classical one, (<ref>), in the limit λ̅→ 0. The ratio (<ref>) appears directly as a factor in the emergent metric component q_θθ, given by (<ref>). 
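The stated matching with the classical solution a̅ = μ/T_K can be checked numerically by evaluating the exact expression above for decreasing λ̅ at fixed K (the values of μ, β and K below are illustrative):

import numpy as np

mu, beta, K = 1.3, 0.8, 0.7                            # illustrative values
for lam_bar in (1e-2, 1e-4, 1e-6):
    s, c = np.sin(lam_bar*K), abs(np.cos(lam_bar*K))
    ratio = (mu**2/4)*(lam_bar**2/s**2)*(1 - 4*beta**2
            + np.sqrt((1 - 4*beta**2)**2 + 16*beta**2/c))**2
    print(lam_bar, ratio, (mu/K)**2)                   # ratio -> (mu/K)**2 as lam_bar -> 0

Since cos^2(λ̅ K) → 1 in this limit, the ratio directly approaches the classical value a̅^2 = μ^2/K^2.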
Near the maximum-curvature hypersurface, defined by K→π / (2λ̅), this expression diverges as (λ̅ K), and its internal-time derivative (<ref>) diverges as ^2 (λ̅ K). Using this, we find that the right-hand side of (<ref>) remains finite at the maximum-curvature hypersurface, dlnε/ d K ≈ - 4 λ̅/sin (λ̅ K) , while that of (<ref>) vanishes. We conclude that both ε and sin (ν̅W̅) remain finite, and hence the homogeneous components q_xx and q_yy are finite too. We complete the gauge fixing by enforcing the consistency equation K̇=1 and solve it for the lapse function, N = - ^2 (ν̅W̅)/λ_0 4 √(ε)(a̅^2/cos^2 (λ̅ K) |cos (λ̅ K)| sin^2 (λ̅ K)/λ̅^2 + 4 μ^2 β^2)^-1a̅^2/cos^2 (λ̅ K) |cos (λ̅ K)| , which is finite at the maximum-curvature hypersurface provided W̅≠ -π/(2 ν̅). Because of the divergence of (<ref>) and its derivative (<ref>), the emergent line element has a singular θ-component at the maximum-curvature hypersurface, and its time-derivatives are singular there too. Thus, neglecting the time-derivatives of the q_xx and q_yy components, a homogeneous line element of the form d s^2 = - N^2 d T_K^2 + q̃_θθ dθ^2 + q_xx d x^2 + q_yy d y^2 , has the Ricci scalar R ≈ - q̇̃̇_θθ/q̃_θθṄ/N^3 - 1/2 N^2( (q̇̃̇_θθ/q̃_θθ)^2 - 2 q_θθ/q_θθ) , which diverges as R ∼^2 (λ̅ T_K) near the maximum-curvature hypersurface, while the Kretschmann scalar takes the form K≈ - R^2/2 q̃_θθ N^2 , which diverges as K∼^3 (λ̅ T_K) near the maximum-curvature hypersurface This constraint, unlike the other two versions considered in this paper, therefore implies a singular geometry at the maximum-curvature hypersurface. § DISCUSSION We have extended emergent modified gravity from spherically symmetric models to polarized Gowdy systems, preserving most of the qualitative features observed in previous publications. In particular, modification functions of the same number and type remain in the classes of modified constraints derived explicitly here, building on a relationship with models of a scalar field coupled to spherically symmetric gravity. Emergent modified gravity therefore is not restricted to spherical symmetry, and it is compatible with different kinds of local degrees of freedom from matter or gravity. One class of models, compatible with the classical limit of the local gravitational degree of freedom, has a set of modification functions such that polarized gravitational waves travel on an emergent space-time geometry just like a minimally coupled scalar field. The existence of these models shows that a non-trivial class of theories in emergent modified gravity has gravitational waves and matter (a minimally coupled massless scalar field propagating on the same geometry) travelling at the same speed. Emergent modified gravity is therefore compatible with strong observational restrictions on the difference of the two speeds <cit.>. Moreover, emergent modified gravity does not require higher time derivatives for non-trivial modifications, and is therefore free of related instabilities <cit.>. Compared with spherically symmetric models, polarized Gowdy systems have a large class of homogeneous solutions that correspond to the full Kasner dynamics of the Bianchi I model. We have derived consistent modifications of this dynamics with the correct classical limit at large volume but different behaviors at small volume. 
Some types of modifications lead to non-singular evolution connecting collapsing and expanding Kasner dynamics, while models compatible with the classical limit for the local gravitational degree of freedom retain the classical big-bang singularity. In the non-singular case, all three spatial directions transition from collapse to expansion at the same time. We demonstrated that the modified Kasner family may no longer include Minkowski space-time, but that a different gauge choice not based on an internal time nevertheless shows that this geometry remains a solution of the modified theory. Discussions of possible vacuum states in a modified theory therefore require access to different gauge choices and cannot be made reliably in a deparameterized setting, as often used in quantum cosmology. The restrictions on inhomogeneous terms in the covariant constraints, imposing the covariance requirement on an emergent space-time metric distinct from the basic phase-space variables, demonstrates the non-trivial nature of modifications or quantizations of the polarized Gowdy model. In particular, a separate modification or quantization of a homogeneous Bianchi model coupled to linearized classical-type inhomogeneity, as proposed for instance in hybrid loop quantum cosmology <cit.>, does not lead to covariant space-time solutions because it is not contained in the general class of consistent models derived here. Modifications of the background dynamics, one of the key ingredients in cosmological models of loop quantum gravity, instead have to be reflected in coefficients of the inhomogeneous terms and in the corresponding emergent line element, as determined by strong covariance conditions. Midisuperspace quantizations of polarized Gowdy and related models, as in <cit.>, would have to take into account the new holonomy behavior found in equation (<ref>) in order to be compatible with a covariant semiclassical limit. The dependence of the holonomies on anisotropies (rather than areas or volumes as previously assumed in models of loop quantum gravity) then implies new phenomenological behaviors. These applications indicate that emergent modified gravity has important implications for classical as well as quantum models of gravity. § ACKNOWLEDGEMENTS The authors thank Manuel Díaz and Aidan Kelly for discussions. This work was supported in part by NSF grant PHY-2206591. 10 Higher M. Bojowald and E. I. Duque, Emergent modified gravity, Class. Quantum Grav. 41 (2024) 095008, [arXiv:2404.06375] HigherCov M. Bojowald and E. I. Duque, Emergent modified gravity: Covariance regained, Phys. Rev. D 108 (2023) 084066, [arXiv:2310.06798] EmergentFluid E. I. Duque, Emergent modified gravity: The perfect fluid and gravitational collapse, Phys. Rev. D 109 (2024) 044014, [arXiv:2311.08616] EmergentScalar M. Bojowald and E. I. Duque, Emergent modified gravity coupled to scalar matter, Phys. Rev. D 109 (2024) 084006, [arXiv:2311.10693] SphSymmMinCoup A. Alonso-Bardají and D. Brizuela, Spacetime geometry from canonical spherical gravity, [arXiv:2310.12951] Regained S. A. Hojman, K. Kuchař, and C. Teitelboim, Geometrodynamics Regained, Ann. Phys. (New York) 96 (1976) 88–135 EffLine M. Bojowald, S. Brahma, and D.-H. Yeom, Effective line elements and black-hole models in canonical (loop) quantum gravity, Phys. Rev. D 98 (2018) 046015, [arXiv:1803.01119] SphSymmEff A. Alonso-Bardají, D. Brizuela, and R. Vera, An effective model for the quantum Schwarzschild black hole, Phys. Lett. 
B 829 (2022) 137075, [arXiv:2112.12110] SphSymmEff2 A. Alonso-Bardají, D. Brizuela, and R. Vera, Nonsingular spherically symmetric black-hole model with holonomy corrections, Phys. Rev. D 106 (2022) 024035, [arXiv:2205.02098] HigherMOND M. Bojowald and E. I. Duque, MONDified gravity, Phys. Lett. B 847 (2023) 138279, [arXiv:2310.19894] EmergentSig M. Bojowald, E. I. Duque, and D. Hartmann, A new type of large-scale signature change in emergent modified gravity, Phys. Rev. D 109 (2024) 084001, [arXiv:2312.09217] EmergentMubar I. H. Belfaqih, M. Bojowald, S. Brahma, and E. I. Duque, Black holes in effective loop quantum gravity: Covariant holonomy terms, [arXiv:2407.12087] OstrogradskiProblem R. P. Woodard, Avoiding Dark Energy with 1/R Modifications of Gravity, Lect. Notes Phys. 720 (2007) 403–433, [astro-ph/0601672] Gowdy R. H. Gowdy, Vacuum spacetimes with two-parameter spacelike isometry groups and compact invariant hypersurfaces: Topologies and boundary conditions, Ann. Phys. 83 (1974) 203–241 EinsteinRosenAsh K. Banerjee and G. Date, Loop quantization of polarized Gowdy model on T^3: Classical theory, Class. Quantum Grav. 25 (2008) 105014, [arXiv:0712.0683] LapseGauge J. M. Pons, D. C. Salisbury, and L. C. Shepley, Gauge transformations in the Lagrangian and Hamiltonian formalisms of generally covariant theories, Phys. Rev. D 55 (1997) 658–668, [gr-qc/9612037] CUP M. Bojowald, Canonical Gravity and Applications: Cosmology, Black Holes, and Quantum Gravity, Cambridge University Press, Cambridge, 2010 SymmRed M. Bojowald and H. A. Kastrup, Symmetry Reduction for Quantized Diffeomorphism Invariant Theories of Connections, Class. Quantum Grav. 17 (2000) 3009–3043, [hep-th/9907042] SphSymm M. Bojowald, Spherically Symmetric Quantum Geometry: States and Basic Operators, Class. Quantum Grav. 21 (2004) 3733–3753, [gr-qc/0407017] SphSymmHam M. Bojowald and R. Swiderski, Spherically Symmetric Quantum Geometry: Hamiltonian Constraint, Class. Quantum Grav. 23 (2006) 2129–2154, [gr-qc/0511108] InhomLattice M. Bojowald, Loop quantum cosmology and inhomogeneities, Gen. Rel. Grav. 38 (2006) 1771–1795, [gr-qc/0609034] IsoCosmo M. Bojowald, Isotropic Loop Quantum Cosmology, Class. Quantum Grav. 19 (2002) 2717–2741, [gr-qc/0202077] LivRev M. Bojowald, Loop Quantum Cosmology, Living Rev. Relativity 11 (2008) 4, [gr-qc/0601085], http://www.livingreviews.org/lrr-2008-4 AnoFree T. Thiemann, Anomaly-Free Formulation of Non-Perturbative, Four-Dimensional Lorentzian Quantum Gravity, Phys. Lett. B 380 (1996) 257–264, [gr-qc/9606088] AnoFreeWeak C. Tomlin and M. Varadarajan, Towards an Anomaly-Free Quantum Dynamics for a Weak Coupling Limit of Euclidean Gravity, Phys. Rev. D 87 (2013) 044039, [arXiv:1210.6869] AnoFreeWeakDiff M. Varadarajan, Towards an Anomaly-Free Quantum Dynamics for a Weak Coupling Limit of Euclidean Gravity: Diffeomorphism Covariance, Phys. Rev. D 87 (2013) 044040, [arXiv:1210.6877] ConstraintsG M. Varadarajan, The constraint algebra in Smolin's G→ 0 limit of 4d Euclidean Gravity, Phys. Rev. D 97 (2018) 106007, [arXiv:1802.07033] LoopSchwarz R. Gambini and J. Pullin, Loop quantization of the Schwarzschild black hole, Phys. Rev. Lett. 110 (2013) 211301, [arXiv:1302.5265] Axion I. H. Belfaqih, M. Bojowald, S. Brahma, and E. I. Duque, [to appear] GowdyQuadratic B. K. Berger, Quantum Cosmology: Exact Solution For The Gowdy T^3 Model, Phys. Rev. D 11 (1975) 2770–2780 KasnerFLat P. Singh, Is classical flat Kasner spacetime flat in quantum gravity?, Int. J. Mod. Phys. 
D 25 (2016) 1642001, [arXiv:1604.03828] MultiMess1 LIGO Scientific, Virgo, Fermi GBM, INTEGRAL, IceCube, IPN, Insight-Hxmt, ANTARES, Swift, Dark Energy Camera GW-EM, DES, DLT40, GRAWITA, Fermi-LAT, ATCA, ASKAP, OzGrav, DWF (Deeper Wider Faster Program), AST3, CAASTRO, VINROUGE, MASTER, J-GEM, GROWTH, JAGWAR, CaltechNRAO, TTU-NRAO, NuSTAR, Pan-STARRS, KU, Nordic Optical Telescope, ePESSTO, GROND, Texas Tech University, TOROS, BOOTES, MWA, CALET, IKI-GW Follow-up, H.E.S.S., LOFAR, LWA, HAWC, Pierre Auger, ALMA, Pi of Sky, DFN, ATLAS Telescopes, High Time Resolution Universe Survey, RIMAS, RATIR, SKA South Africa/MeerKAT Collaborations, AstroSat Cadmium Zinc Telluride Imager Team, AGILE Team, 1M2H Team, Las Cumbres Observatory Group, MAXI Team, TZAC Consortium, SALT Group, Euro VLBI Team, and Chandra Team at McGill University (Abbott B. P.ẽt al.), Multi-messenger Observations of a Binary Neutron Star Merger, Astrophys. J. 848 (2017) L12 MultiMess2 LIGO Scientific, Virgo, Fermi-GBM, and INTEGRAL Collaborations (Abbott B. P. et al.), Gravitational Waves and Gamma-Rays from a Binary Neutron Star Merger: GW170817 and GRB 170817A, Astrophys. J. 848 (2017) L13 MultiMess3 D. A. Coulter, R. J. Foley, C. D. Kilpatrick, M. A. Drout, A. L. Piro, B. J. Shappee, M. R. Siebert, J. D. Simon, N. Ulloa, D. Kasen, B. F. Madore, A. Murguia-Berthier, Y.-C. Pan, J. X. Prochaska, E. Ramirez-Ruiz, A. Rest, and C. Rojas-Bravo, Swope Supernova Survey 2017a (SSS17a), the optical counterpart to a gravitational wave source, Science 358 (2017) 1556–1558 MultiMess4 A. Murguia-Berthier, E. Ramirez-Ruiz, C. D. Kilpatrick, R. J. Foley, D. Kasen, W. H. Lee, A. L. Piro, D. A. Coulter, M. R. Drout, B. F. Madore, B. J. Shappee, Y.-C. Pan, J. X. Prochaska, A. Rest, C. Rojas-Bravo, M. R. Siebert, and J. D. Simon, A Neutron Star Binary Merger Model for GW170817/GRB 170817A/SSS17a, Astrophys. J. 848 (2017) L34 Hybrid M. Martín-Benito, L. J. Garay, and G. A. Mena Marugán, Hybrid Quantum Gowdy Cosmology: Combining Loop and Fock Quantizations, Phys. Rev. D 78 (2008) 083516, [arXiv:0804.1098] Hybrid2 L. J. Garay, M. Martín-Benito, and G. A. Mena Marugán, Inhomogeneous Loop Quantum Cosmology: Hybrid Quantization of the Gowdy Model, Phys. Rev. D 82 (2010) 044048, [arXiv:1005.5654] Hybrid3 M. Martín-Benito, D. Martín-de Blas, and G. A. Mena Marugán, Matter in inhomogeneous loop quantum cosmology: the Gowdy T^3 model, Phys. Rev. D 83 (2011) 084050, [arXiv:1012.2324] PolCylVol D. E. Neville, The volume operator for singly polarized gravity waves with planar or cylindrical symmetry, [gr-qc/0511006] EinsteinRosenQuant K. Banerjee and G. Date, Loop Quantization of Polarized Gowdy Model on T^3: Quantum Theory, Class. Quantum Grav. 25 (2008) 145004, [arXiv:0712.0687] PlaneWavesLoop F. Hinterleitner and S. Major, Toward loop quantization of plane gravitational waves, Class. Quantum Grav. 29 (2012) 065019, [arXiv:1106.1448] PlaneWavesLoop2 F. Hinterleitner, Canonical LQG operators and kinematical states for plane gravitational waves, [arXiv:1703.03757] GowdyCov M. Bojowald and S. Brahma, Covariance in models of loop quantum gravity: Gowdy systems, Phys. Rev. D 92 (2015) 065002, [arXiv:1507.00679] GowdyAbel D. Martín-de Blas, J. Olmedo, and T. Pawlowski, Loop quantization of the Gowdy model with local rotational symmetry, Phys. Rev. D 96 (2017) 106016, [arXiv:1706.05673] GowdyComplex J. Ben Achour and S. Brahma, Covariance in self dual inhomogeneous models of effective quantum geometry: Spherical symmetry and Gowdy systems, Phys. Rev. 
D 97 (2018) 126003, [arXiv:1712.03677] PlaneWavesLoop3 F. Hinterleitner, Symmetry-reduced Loop Quantum Gravity: Plane Waves, Flat Space and the Hamiltonian Constraint, [arXiv:2403.11864]
http://arxiv.org/abs/2407.12378v1
20240717075643
StoX-Net: Stochastic Processing of Partial Sums for Efficient In-Memory Computing DNN Accelerators
[ "Ethan G Rogers", "Sohan Salahuddin Mugdho", "Kshemal Kshemendra Gupte", "Cheng Wang" ]
cs.AR
[ "cs.AR", "cs.AI", "cs.ET" ]
Both authors contributed equally to this work. [1] Iowa State University of Science and Technology Ames Iowa USA 50010 § ABSTRACT Crossbar-based in-memory computing (IMC) has emerged as a promising platform for hardware acceleration of deep neural networks (DNNs). However, the energy and latency of IMC systems are dominated by the large overhead of the peripheral analog-to-digital converters (ADCs). To address such ADC bottleneck, here we propose to implement stochastic processing of array-level partial sums (PS) for efficient IMC. Leveraging the probabilistic switching of spin-orbit torque magnetic tunnel junctions, the proposed PS processing eliminates the costly ADC, achieving significant improvement in energy and area efficiency. To mitigate accuracy loss, we develop PS-quantization-aware training that enables backward propagation across stochastic PS. Furthermore, a novel scheme with an inhomogeneous sampling length of the stochastic conversion is proposed. When running ResNet20 on the CIFAR-10 dataset, our architecture-to-algorithm co-design demonstrates up to 22x, 30x, and 142x improvement in energy, latency, and area, respectively, compared to IMC with standard ADC. Our optimized design configuration using stochastic PS achieved 666x (111x) improvement in Energy-Delay-Product compared to IMC with full precision ADC (sparse low-bit ADC), while maintaining near-software accuracy at various benchmark classification tasks. <ccs2012> <concept> <concept_id>00000000.0000000.0000000</concept_id> <concept_desc>Do Not Use This Code, Generate the Correct Terms for Your Paper</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>00000000.00000000.00000000</concept_id> <concept_desc>Do Not Use This Code, Generate the Correct Terms for Your Paper</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>00000000.00000000.00000000</concept_id> <concept_desc>Do Not Use This Code, Generate the Correct Terms for Your Paper</concept_desc> <concept_significance>100</concept_significance> </concept> <concept> <concept_id>00000000.00000000.00000000</concept_id> <concept_desc>Do Not Use This Code, Generate the Correct Terms for Your Paper</concept_desc> <concept_significance>100</concept_significance> </concept> </ccs2012> StoX-Net: Stochastic Processing of Partial Sums for Efficient In-Memory Computing DNN Accelerators Cheng Wang July 22, 2024 ================================================================================================== § INTRODUCTION Crossbar-based analog in-memory computing (IMC) has demonstrated great potential for Deep Neural Network (DNN) acceleration by achieving high efficiency and massive parallelism at processing matrix-vector-multiplications (MVMs)<cit.>, which dominates most state-of-the-art DNN workloads<cit.>. However, the hardware cost of large-scale IMC systems is dominated by the significant overhead of peripheral analog-to-digital converters (ADCs), which is required to ensure robust data communication in a large-scale system. ADC consumes over 60-80% of the energy and chip area of IMC hardware <cit.>. Due to the large area overhead of high-precision A-D conversion, an ADC must be shared by multiple columns in a crossbar. Such design leads to a severe throughput bottleneck since the processing of the array-level outputs across multiple columns has to be done sequentially. 
While various recent works have aimed to mitigate the ADC bottleneck <cit.>, minimizing ADC overhead in IMC while still maintaining satisfactory inference accuracy remains a significant challenge. Various factors contribute to the difficulty in addressing the ADC bottleneck. (1) First, since the array-level partial sums (PS) are not represented at the application level, the standard quantization approach focusing on activation and weights can not directly address the ADC precision for PS. (2) Second, while quantization-aware training is effective at enabling hardware-friendly models, training with PS quantization desires backward propagation with careful incorporation of array-level variables, which requires re-designing the computational graph of the DNN model. (3) Third, the required ADC bit precision for MVM may vary significantly depending on both the algorithmic attributes (such as sparsity and DNN layer dimension) and hardware attributes (such as array size and bits per memory cell). Accommodating these varying scenarios requires reconfigurability and flexibility in ADC, leading to increased overhead and design complexity. In this work, we propose stochastic processing of the array-level partial sum for efficient IMC leveraging a spin-orbit-torque magnetic tunnel junction (SOT-MTJ) with simple circuitry such as an inverter. To address the accuracy drop due to loss of information at the stochastic PS, a PS-aware training methodology with the stochastic switching behavior of SOT-MTJs is developed. The low-overhead crossbar peripheral based on the proposed spintronic devices/circuits demonstrates significant improvement in energy efficiency and considerable area savings. Our proposed IMC design eliminates the ADC bottleneck and opens up an exciting direction where stochastic computation could play a vital role in designing ultra-efficient non-von Neumann ML accelerators. The major contributions of our work are the following: * We propose StoX-Net, an IMC architecture with aggressively quantized array-level partial sums based on stochastic switching dynamics of SOT-MTJs with simple CMOS peripherals. The ADC bottleneck in IMC is eliminated, leading to significant improvement in hardware efficiency. * We develop a comprehensive framework of PS quantization-aware training that addresses the accuracy degradation due to the stochastic conversion. In particular, both the device-level stochastic MTJ switching behavior, and key architectural attributes including bit slicing and array size, are incorporated into backward propagation. Our model with 1-bit stochastic PS achieves (or slightly exceeds) state-of-the-art accuracy at benchmarking image classification tasks. * We identify that the state-of-the-art quantization of convolutional DNN has severe limitations due to the first convolution layer remaining at high precision. The first convolution layer is typically compute-intensive and dominates the total computation of DNN. To address this challenge, we explore layer-wise inhomogeneous sampling numbers to enable aggressive PS quantization on the first convolution layer. Our inhomogeneous stochastic sampling based on a Monte-Carlo-based sensitivity analysis achieves improvement in accuracy with minimal additional overhead, demonstrating an advantageous trade-off. § BACKGROUND §.§ DNN acceleration with in-memory computing Analog IMC is being extensively explored for accelerating DNN inference to meet the growing computational demand. 
With the weight matrices stored in crossbar arrays, the data movement of MVM processing is drastically reduced, alleviating the von Neumann memory bottleneck. To process a large-scale DNN workload, large matrices and high-precision MVMs need to be partitioned into multiple crossbar arrays. To ensure robust data communication, an analog crossbar IMC macro is connected with other components through digital-to-analog converters (DACs) at the input and ADCs at the crossbar output. The resolution of the DAC at each row is typically designed at 1 bit for area saving, while high-precision input activations will be converted to bit streams over multiple time steps (bit streaming). As for high-precision weights, due to the technological limitation in the number of bits per cell, a full-precision weight matrix will be partitioned into several slices of sub-arrays (bit slicing). The required ADC resolution N for array-level PS processing is N = log_2(N_row) + I + W - 2 , where N_row is the number of activated rows; I is the number of input bits per stream; and W is the number of bits per slice. Since the energy, area, and latency overhead of the ADC increase significantly with bit precision <cit.>, the ADC becomes the bottleneck of efficiency and performance in IMC design. §.§ Stochastic spintronics for machine learning Several emerging NVM technologies, including resistive memory (ReRAM) and magnetic random access memory (MRAM), have been explored for machine learning acceleration. Recent explorations have demonstrated that spintronic devices exhibit sufficient endurance to realize both synaptic weight storage and neural activation <cit.>. Particularly, an SOT-MTJ under an excitation current with varying magnitude will have the magnetization switching probability as a sigmoidal function of the current, which provides a direct emulation of a stochastic neuron <cit.> with significantly higher area/energy efficiency compared to CMOS implementations. Moreover, the separate read/write paths of SOT-MTJs offer immense design flexibility by eliminating the constraint of write current density <cit.>. All-spin neuro-synaptic processors based on stochastic switching of SOT-MTJs have been proposed <cit.>. We will explore SOT-MTJs for efficient PS processing of the crossbar output. §.§ Related work on addressing the ADC bottleneck in IMC Various works have investigated co-designing the IMC hardware architecture and DNN algorithms to reduce the ADC precision. One major thrust focuses on exploiting quantization with hardware-aware training. In <cit.>, binary neural networks exhibit improved efficiency for IMC, but the models are limited to 1-bit representation. Recent works <cit.> implemented bit slicing/streaming to map workloads with multi-bit weights/activations. Aggressively reducing the ADC precision to 1 bit essentially enables the use of sense amplifiers. However, 1-bit PS showed sizeable accuracy degradation even after hardware-aware re-training <cit.>. Moreover, it is important to note that state-of-the-art quantization-aware training models keep the first convolution layer at full precision <cit.>. However, the compute-intensive first convolution layer in image processing can dominate the overall computation workload. Keeping a full-precision first layer severely limits the overall improvement in energy and throughput. Another major thrust is to exploit sparsity through pruning and re-training to enable lower ADC precision <cit.>. Higher sparsity reduces the range of possible values of the MVM output and thus reduces the ADC resolution requirement.
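The resolution requirement above, and the shift-and-add recombination of sliced partial sums, can be captured in a few lines. The helper below follows the stated formula N = log_2(N_row) + I + W - 2; the function and variable names are ours, and the shift-and-add routine covers weight slices only (input bit streams are recombined analogously).

```python
import math

def required_adc_bits(n_rows, input_bits_per_stream, bits_per_slice):
    """ADC resolution needed for a lossless array-level partial sum:
    N = log2(N_row) + I + W - 2 (formula quoted in the text above)."""
    return int(math.log2(n_rows)) + input_bits_per_stream + bits_per_slice - 2

def shift_and_add(partial_sums, bits_per_slice):
    """Recombine per-weight-slice partial sums (most-significant slice first)
    into a full-precision dot product via shift-and-add (Horner form)."""
    acc = 0
    for ps in partial_sums:
        acc = (acc << bits_per_slice) + ps
    return acc

# Illustrative configurations:
print(required_adc_bits(256, 1, 1))  # 1-bit streams, 1-bit cells   -> 8 bits
print(required_adc_bits(256, 1, 2))  # 2-bit cells                  -> 9 bits
print(required_adc_bits(128, 1, 4))  # smaller array, 4-bit cells   -> 10 bits
```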
As a result, sparsity-aware IMC with reconfigurable ADC resolution can improve energy and latency. However, it is important to note that the area of the peripheral ADC circuitry remains large in order to handle the highest bit precision of the reconfigurable design. Such large area overhead still hinders the hardware parallelism in IMC architecture. Compared to related works, our proposed StoX-Net exhibits distinctive characteristics. First, we, for the first time, represent the array-level PS by hardware-inspired stochastic bits, and incorporate the stochastic representation into training with bit slicing. Second, we leverage the sampling of stochastic bits to provide reconfigurability in the conversion precision without sacrificing the area efficiency and hardware parallelism. Third, to achieve better scalability to larger-scale input, we venture into aggressive PS quantization of the first convolution layer and demonstrate a balanced performance of accuracy and hardware efficiency. § IMC WITH STOCHASTIC PARTIAL SUMS The proposed Stox-Net is built on IMC crossbar arrays where current-voltage converters, column-shared MUXs, and peripheral ADCs are replaced by a row of stochastic SOT-MTJs (Fig. <ref>). We first present the device/circuit design of our stochastic PS processing components and then implement stochasticity-aware training of quantized DNNs with bit slicing considered <cit.>. §.§ Stochastic PS processing based on MTJ The stochastic processing based on the current-driven probabilistic switching of SOT-MTJs is shown in Fig. <ref>. The device behavior is simulated using a MATLAB-based macro-spin Landau Lifshitz Gilbert (LLG) equation simulator with the SOT current included. Using an MTJ with tunnel magnetoresistance ratio (TMR) of 350-450% <cit.>, a simple voltage divider circuit combined with a CMOS inverter can behave as a stochastic binary analog to digital converter as shown in Fig. <ref>. The area of the stochastic MTJ converter is obtained based on the layout in Fig. <ref>. The layout is drawn following the λ-based design rules <cit.>, where λ is half the feature size. Considering a 28 nm feature size, the area can be estimated as A = 32λ * 26λ = 0.0163 μm^2. The capability of the stochastic MTJ-converter is illustrated in Fig. <ref>. For simulating the SOT-MTJ, we use a modified version of the STT-SOT-MTJ model developed by the Spintronics Interdisciplinary Center (Spinlib) <cit.>. The MTJ-converter features a 435 kΩ reference MTJ and a free SOT-MTJ device arranged as a voltage divider. The SOT-MTJ receives pulses of 100 μ A and -100 μ A of 2.5 ns each to imitate the crossbar current. The SOT-MTJ switches its state when 100 μ A is supplied and resets with an equivalent negative current. This current modulation alters the SOT-MTJ’s resistance state, affecting the voltage division ratio with the reference resistor. The midpoint voltage of this divider varies due to the SOT-MTJ’s state and is used as the input to an inverter, and the corresponding output has distinct ‘0’ and ‘1’ voltage levels. The sizing of the transistors in the inverter is optimized by minimizing the width to ensure distinct and clear output levels corresponding to the voltage variations from the divider. The MTJ’s resistance is adjusted by varying its dimensions and the thickness of the oxide layer t_ox to achieve a parallel low state resistance R_LRS of 180 kΩ and an anti-parallel high state resistance R_HRS of 970.2 kΩ, with a TMR of 4.4. 
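A behavioral abstraction of the converter just described is given below: the switching probability is modeled as a sigmoidal function of the input current, and the divider-plus-inverter stage is reduced to a Bernoulli draw. This is only a functional sketch, not the LLG/SPICE-level device simulation used above; the current scale and slope are hypothetical fitting parameters, and the multisampling average illustrates the variance reduction discussed later.

```python
import numpy as np

rng = np.random.default_rng(1)

def mtj_switch_prob(i_in, i_scale=100e-6, slope=4.0):
    """Probability that the SOT-MTJ free layer switches for an input current
    i_in. The sigmoidal dependence on current follows the text; i_scale and
    slope are illustrative fitting parameters, not extracted device values."""
    return 1.0 / (1.0 + np.exp(-slope * i_in / i_scale))

def stochastic_convert(i_col, samples=1):
    """Convert a column current to a digital value by sampling the stochastic
    MTJ `samples` times and averaging the binary outputs, interpreted as -1/+1
    as in the training formulation."""
    p = mtj_switch_prob(i_col)
    bits = rng.random(samples) < p           # inverter output: 0 or 1
    return np.mean(np.where(bits, 1.0, -1.0))

print(stochastic_convert(50e-6, samples=1))
print(stochastic_convert(50e-6, samples=8))  # multisampling reduces variance
```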
The SOT-MTJ’s elliptical shape enhances the efficiency and reliability of the switching process by optimizing the magnetic field distribution within the junction. Our simulations reported a write latency of 650.6 ps and read latency of 1.2 ns, indicating rapid response times suitable for high-frequency applications. Energy consumption was measured at 5.695 fJ per conversion cycle by calculating the total power consumption for one period. The device operates over a temperature range of -30°C to 300°C, emphasizing the robustness and potential versatility of the MTJ device in various environmental conditions. As Demonstrated in the following sections, the proposed SOT-MTJ-based converter is orders of magnitude more energy and area-efficient than ADC, leading to significant improvement of system-level hardware efficiency compared to standard IMC. §.§ Hardware-Aware Training of Quantized DNN with Stochastic Partial Sum We map large DNN workloads into crossbars with finite array size and limited bits per cell with bit slicing from <cit.> and input/weight representation from <cit.>. Mapping a convolution layer of kernel size K_h * K_w and C_in input channels using a crossbar array of R_arr rows results in a number of subarrays defined by N_arrs=ceil(K_h*K_w*C_in/R_arr). For bit slicing, we consider 1-, 2-, and 4-bit per memory cell in the StoX-Net design. Based on Algorithm <ref>, a software-level MVM operation with the number of weight and input slices being W_s and A_s, will have A_s*W_s array-level PSs to be shift-and-added (S&A). In addition to workload partitioning, StoX-Net also considers SOT-MTJ samplings in its optimization. To process the crossbar output using the SOT-MTJ, the curve of switching probability versus current is emulated by the tanh function: MTJ(x) = 0 tanh(α x) < rand 1 tanh(α x)≥ rand where α is the sensitivity parameter. Increasing α will make the tanh curve more step-like, approaching a deterministic sense amplifier. The effective sensitivity can be altered by tuning the range of crossbar current when mapping MVM operations to hardware. As shown in Fig. <ref>, the stochastic network is trained to generate a broader distribution, covering more non-ternary values, which makes leveraging the stochastic switching possible. The deterministic 1-bit sense amplifier (SA) model trains to generate a narrower Gaussian distribution around 0 and a concentration at -1 and 1. During the MTJ conversion, the accumulated analog current supplied to the heavy metal layer of SOT-MTJ will be converted to bi-stable states, which are interpreted as (-1, 1) for better representation in training <cit.>. To mitigate the errors due to stochastic bit generation, we leverage multisampling to recover accuracy with some trade-off of hardware efficiency. More SOT-MTJ samples will lead to a better representation of the analog output. We explored no more than 8 samples per conversion since the total conversion energy and latency increase linearly with the number of samples. Additionally, we perform a Monte Carlo simulation on our network's trainable StoX layers to guide an inhomogeneous multisampling scheme. For each layer, we apply a uniform random perturbation to its weights at inference and then measure the significance of the weights of that layer to the model performance, which is represented by the accuracy loss. Fig. 
<ref> shows the first layer being most susceptible to perturbation, indicating the most significance, while layers close to the output classifications are moderately susceptible to error as well. Based on this analysis, we implement a model with mixed layer-wise sampling numbers ("Mix-QF"), where layers with higher sensitivity utilize more samples and layers with lower sensitivity fewer samples. We achieve an accuracy similar to a homogeneous 4-sample network with only a small increase in operations compared to the 1-sample network. For maintaining higher accuracy and decreasing overhead, only layers that require extra samples are given multisampling in our study. During backward propagation, the loss ∂ L/∂ W_l across a single convolutional layer with partial sums follows the high-level representation in Fig. <ref> and is shown in-depth via the following equations: ∂ L/∂ W_l = ∂ L/∂ O_l∑_i=1^N_arrs∑_j=1^samples∂ O_l^i,j/∂ W_l. Where O_l is the layer's final output obtained from the S&A summation of all array-level output O_i,j (each sample i and each subarray j). For each S&A subarray output vector and its respective MTJ samples, the gradient needs to be calculated as: ∂ O_l^i,j/∂ W_l = ∂ O_l^i,j/∂ O_S&A^i,j∂ O_S&A^i,j/∂ O^i,j_MTJ∂ O^i,j_MTJ/∂ O^i_q∂ O^i_q/∂ W^i_q∂ W^i_q/∂ W_bn∂ W_bn/∂ W_l. Equation 3 considers the S&A, MTJ, weight quantization, array-splitting, and normalization algorithms with respect to the layer's weights. We approximate the MTJ's gradient using a straight-through estimator (STE), and clamp values outside its saturation range to avoid exploding gradients. The weight quantization's gradients are similar to that of <cit.>. Considering all of these, the gradient for backward propagation can be reduced to: ∂ L/∂ W = ∂ L/∂ O_l∂ O_l/∂∑ O_arrs^mtjs∂∑ O_arrs^mtjs/W_qW_q/W_bnW_bn/W, as shown in Fig. <ref>. § EXPERIMENTAL RESULTS: FUNCTIONAL ACCURACY AND HARDWARE EFFICIENCY §.§ Evaluation Methodology We evaluate both the functional accuracy and hardware efficiency of the proposed StoX-Net designs, including both a quantized first layer (QF) and a high-precision first layer (HPF), while other subsequent layers adopt the MTJ-based stochastic conversion. All QF models take 8 samples per MTJ conversion in the first layer due to its importance shown by the Monte Carlo analysis in Fig. <ref>. We compare StoX-Net with a standard IMC featuring a high-precision ADC for all layers (HPFA). We also include an IMC design with sparse ADC (SFA) as a strong baseline for hardware comparison, where we reduce the precision of the ADC by 1 from the full precision ADC in HPFA. We investigate the impact of hardware and software configurations, including array sizes, weight bit precision, tanh sensitivity parameter, and the number of MTJ samples across the study. We denote X-bit weight, Y-bit activation, and Z-bits per slice as XwYaZb_s. For example, 4-bit weight, 4-bit activation, and 2-bits per slice are 4w4a2b_s. Unless otherwise specified, the baseline StoX network can be assumed, having characteristics of 4w4a4b_s, α=4, R_arr=256, 1 sample per MTJ, and HPF. Networks will only be described by their difference from this baseline. For example, 1-QF uses the stochastic MTJ converter with 1 sample in all layers except the first, which would have an MTJ converter with 8 samples. 4-HPF uses a high-precision ADC in the first layer and 4 samples of the stochastic converter in all other layers. Layers with high-α will not see multisampling as such hardware is deterministically binary. 
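Returning to the hardware-aware training formulation above, one way to realize the stochastic partial-sum conversion with a clipped straight-through estimator in a standard autodiff framework is sketched below in PyTorch-style Python. The mapping of tanh(αx) to a switching probability, the clipping threshold, and the tensor shapes are our assumptions for illustration; this is not the authors' released implementation.

```python
import torch

class StoXQuant(torch.autograd.Function):
    """Stochastic 1-bit partial-sum quantizer with a clipped
    straight-through estimator (STE) in the backward pass."""

    @staticmethod
    def forward(ctx, ps, alpha=4.0, samples=1):
        ctx.save_for_backward(ps)
        # Assumed normalization of tanh to a switching probability in [0, 1].
        p = 0.5 * (torch.tanh(alpha * ps) + 1.0)
        draws = torch.rand(samples, *ps.shape, device=ps.device)
        bits = torch.where(draws < p, torch.ones_like(draws), -torch.ones_like(draws))
        return bits.mean(dim=0)  # multisampling average of (-1, +1) states

    @staticmethod
    def backward(ctx, grad_out):
        (ps,) = ctx.saved_tensors
        # STE: pass gradients through, clamped where tanh is saturated
        # (|ps| <= 1 is an assumed saturation threshold).
        mask = (ps.abs() <= 1.0).to(grad_out.dtype)
        return grad_out * mask, None, None

# Illustrative usage inside a StoX conv layer:
ps = torch.randn(8, 32, requires_grad=True)   # array-level partial sums
out = StoXQuant.apply(ps, 4.0, 4)             # 4 stochastic samples
out.sum().backward()
```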
For StoX-Net's accuracy measurements, we train Resnet-20 models based on the proposed convolutional methodology in Section <ref>. For CIFAR-10, we train with hyperparameters similar to <cit.>, and for MNIST we lower epochs from 400 to 25. For evaluating hardware efficiency, we simulate a simple IMC architecture similar to that of ISAAC <cit.>, as shown in Fig. <ref>. To account for both positive and negative input activations and weights, we follow the representation described in <cit.>. In this representation, the positive and negative components of the activations are streamed separately. Two bits corresponding to the positive and negative components of each weight bit are stored in a group of two memristor cells. As a result, the current accumulated in each crossbar column, which can be either positive or negative, represents the MVM operation. Hence, the energy and area values for the DAC and crossbar (Xbar) cells are doubled during the hardware evaluation. We create our architecture models using Accelergy/Timeloop <cit.>. We use ReRAM devices with (R_LRS) of 500kOhm, and an R_HRS/R_LRS ratio of 10 <cit.> as the Xbar cells. We use Successive Approximation Register (SAR) ADC with the area and power metrics scaled for 28nm technology based on comprehensive surveys and analysis of ADC architectures <cit.> <cit.>. Table <ref> summarizes key parameters for the hardware components. Our baseline HPFA architecture results were verified to be consistent with previous works <cit.><cit.>. §.§ Functional Accuracy As summarized in Tables <ref> & <ref>, the proposed StoX-Net can reach within 1% of the reference HPF model's accuracy for both MNIST and CIFAR-10 datasets. As shown in Fig. <ref>A, large arrays suffer small accuracy loss as the stochastic conversion is applied to an increased range of possible crossbar output values, losing information. Bit slicing is shown to moderately mitigate the accuracy degradation of quantization (Fig. <ref>B). Most notably, stochastic multisampling consistently mitigates accuracy loss. In Fig. <ref>E, "high-α, high-α QF" implements step-like sense amplifier to quantize all layers including the first conv layer. Comparatively, by enabling a stochastic first layer, "high-α, QF" experiences immediate (over 3.4%) accuracy improvement, with 8 samples for the first StoX conv (see the bright-green and grey bars). Such observation demonstrates that our stochastic MTJ enables an effective quantization of the first conv layer through its increased representational capability. We also observe that while larger α (step-like) might be preferred for 1-time sampling (Fig. <ref>D), higher accuracy can be obtained through multisampling (Table <ref>), such as 4 or 8 samples, of a smaller α (stochastic). The configuration of layer-wise mixed samplings ("Mix-QF") only increases the number of MVM conversions by 14.3% compared to the 1-sample model but reaches 85.5% accuracy near the 4-sample network. As discussed in the following section, although sampling multiple times monotonically increases the energy and latency, our 4-QF configuration that uses 4 samples per MTJ conversion still achieves ∼100x improvement in EDP. §.§ Hardware Efficiency It is important to note that our area-efficient and highly parallel PS processing directly impacts the datapath design pattern of crossbar MVM operation. Fig. <ref> shows the pipelined operations inside a crossbar using an ADC (top) and a stochastic MTJ converter (bottom). 
In standard IMC, the length of each pipeline stage is determined by the longest stage, i.e. the ADC readout of all shared columns in a crossbar. Our optimization alleviates the throughput bottleneck by parallelizing all columns with compact MTJ converters, thereby considerably reducing the length of the pipeline stage (from 128 ns to 1.85 ns in our case). However, increasing the number of samples reduces such throughput benefits and creates a trade-off between latency and accuracy. Our hardware performance evaluation is summarized in Fig. <ref>. Full precision ADC (HPFA) and Sparse design (SFA) are our baselines for comparison. The first observation is that SOT-MTJ converters significantly enhance parallelism without large area overhead, showing large savings in area and EDP. Our proposed hardware design can offer area, energy, and latency reduction of up to 142x, 22x, and 30x, respectively. Our design achieves up to 111x (666x) EDP improvement compared to an IMC with sparse (full-precision) ADC. We further demonstrate in Fig. <ref> that our architecture can be scaled to larger models and workloads while maintaining remarkable efficiency improvement. Moreover, the multisampling feature of our proposed stochastic converter offers the flexibility to trade off EDP gain in exchange for improved accuracy. § CONCLUSION We develop StoX-Net, an efficient IMC design that eliminates the ADC bottleneck by processing array-level partial sum using probabilistic switching of MTJ. The proposed IMC co-design framework reaches close to the state-of-the-art accuracy at CIFAR-10 classifications while achieving over 140x area efficiency and 100-600x improvement in hardware efficiency characterized by EDP. Moreover, leveraging the multisampling capability of stochastic MTJ converters, quantizing the first convolution layer, which is costly in prior quantization-aware training, was demonstrated to have less than 2% accuracy loss. Our work suggests exciting opportunities for utilizing stochastic computation and spintronics for developing next-generation AI hardware accelerators.
http://arxiv.org/abs/2407.13318v1
20240718091919
A new approach to delegate signing rights to proxy signers using isogeny-based cryptography
[ "Kunal Dey", "Somnath Kumar", "Vikas Srivastava", "Sumit Kumar Debnath" ]
cs.CR
[ "cs.CR" ]
[1] Kunal Dey (kunaldey3@gmail.com), [2] Somnath Kumar (somnath1691997@gmail.com), [3] Vikas Srivastava (vikas.math123@gmail.com), [2] Sumit Kumar Debnath (sdebnath.math@nitjsr.ac.in) [1] Department of Computer Science, University of Calgary, Calgary, Canada [2] Department of Mathematics, National Institute of Technology Jamshedpur, Jamshedpur, India [3] Department of Mathematics, Indian Institute of Technology Madras, Chennai, India E-governance is a two-way protocol through which one can use government services, share data and request information. It refers to the use of communication and information technologies to provide government services to the public in an efficient and fast manner. In addition, any document submitted to the e-government system must be authenticated by a government officer using a digital signature scheme. In the context of digital signatures, the proxy signature is an important cryptographic primitive that allows the original signer to delegate signing authority to another signer (the proxy signer). The proxy signature has a number of important applications in the e-government system. There are now a large number of proxy signature schemes. The security of most of them relies on the following hard problems: the discrete logarithm problem and the integer factorization problem. However, a large-scale quantum computer can solve both in polynomial time due to Shor's algorithm. As a consequence, there is a need for a quantum-computer-resistant proxy signature to secure the e-governance system against quantum adversaries. In this work, we propose the first post-quantum isogeny-based proxy signature scheme CSI-PS (commutative supersingular isogeny proxy signature). Our construction is proven to be uf-cma secure under the hardness of the group action inverse problem (GAIP) based on isogeny. A new approach to delegate signing rights to proxy signers using isogeny-based cryptography July 22, 2024 ============================================================================================ § INTRODUCTION Electronic governance, often abbreviated as e-governance, is an application of information and communication technologies to facilitate direct interaction between government and other entities like citizens, businesses and employees. In other words, e-governance aims to use technology to achieve the objective of governance. The basic goals of e-governance are to increase individuals' access to information, connect with business and industry, and make government more accountable, transparent, and effective. Essentially, it performs government functions through information and communication technology. In e-governance, there are four types of government interactions: government-to-citizen (G2C), government-to-government (G2G), government-to-business (G2B), and government-to-employee (G2E). In e-governance initiatives worldwide, various security measures are implemented to protect the data of residents. Each department's representatives are provided with digital signature certificates, allowing them to digitally sign documents for authentication purposes. However, a dilemma arises as to how a government representative can sign a document when they are on leave. That is where proxy signatures come in. A proxy signature is a type of digital signature in which a participant (the original signer) enables a representative (the proxy signer) to sign messages on their behalf. Given the context of the situation, the proxy signature can be employed by the government official to solve the problem.
By making use of such a scheme, he can transfer the signing authority to a proxy signer who can sign on behalf of the government official for a limited time.We demonstrate the above in Figure <ref>. The proxy signature idea was developed by Mambo et al. <cit.> in 1996. A proxy signature should attain the below-mentioned security features <cit.>: * Unforgeability: No one but the proxy signer, not even the original signer, should be able to generate a correct proxy signature. * Identifiability: The proxy signer should be identified after observing the proxy signature by anyone having the authority to access the signature. * Undeniability: After a proxy signer generates a valid proxy signature for the original signer, they cannot later deny having created it. * Verifiability: The verifier can be convinced of the original singer's agreement from the proxy signature. * Distinguishability: In polynomial time, the original signer's valid signature can be distinguished from a valid proxy signature. * Secrecy: Any information about the secret key of the original signer cannot be determined at any stage during proxy signature. * Prevention of misuse: Proxy signer will not be able to sign a message with some proxy key that is not defined in its warrant. Otherwise, it will be identified from its warrant. * Revocability: The proxy signer's signing authority will be disabled automatically on completion of the validity of the period. Then any proxy signature produced by the proxy signer will not be valid. In addition to applications in e-governance, proxy signature find applications in other real-life situations like global distribution networks <cit.>, mobile agent applications <cit.>, etc. The majority of the currently used proxy signature schemes <cit.> are based on hard problems like the discrete logarithm problem and the integer factorization problem. However, they are not protected against attacks by quantum computers since Shor's algorithm <cit.> can efficiently solve them. Consequently, post-quantum cryptography (PQC) is considered a promising alternative believed to offer security against quantum adversaries. There are several candidates of PQC, such as Lattice-based cryptography <cit.>, Multivariate cryptography <cit.>, Isogeny-based cryptography <cit.>, Hash-based cryptography <cit.>, Code-based cryptography <cit.>. Isogeny-based cryptography (IBC) is a relatively new area within post-quantum cryptography (PQC) that offers small public key sizes compared to other PQC candidates. The concept of IBC was first introduced by Couveignes <cit.> in 1997. That work was based on ordinary elliptic curves for which the endomorphism ring is an order of some imaginary quadratic field (IQF). In 2006, Rostovtsev and Stolbunov <cit.> rediscovered the idea and proposed a Diffie-Hellman <cit.> like key exchange (KE) protocol. In 2018, Castryck et al.  <cit.> developed an efficient key agreement protocol Commutative supersingular Isogeny Diffie-Hellman (CSIDH) which uses supersingular elliptic curves defined over a prime finite field. However, instead of the full endomorphism ring, they worked with the 𝔽_p-rational endomorphism ring, which is an order of IQF and hence commutative. Throughout the literature, the emphasis is on isogeny-based protocols relying on class group action <cit.>. Research shows that there are many advantages <cit.> in isogeny-based cryptography, such as the size of the signatures and public keys, which are almost the same as existing practical schemes. 
Another major advantage is its sufficiently low communication cost. In the literature, there are some efficient isogeny based digital signatures like `SeaSign' <cit.>, `CSI-Fish'<cit.>, `SQISign' <cit.> which are based on the `Fiat-Shamir' technique. Additionally, there are isogeny-based identity based signature scheme (IBS) <cit.>, Group signature <cit.> and Ring signature <cit.> and Signcryption <cit.>. However, there is a lack of proxy signatures based on IBC. Consequently, designing a secure and efficient isogeny-based proxy signature is an interesting open challenge. §.§ Our contribution The major contributions of the paper are outlined below. * To the extent of our knowledge, there has been no construction of proxy signature in the context of isogeny-based cryptography. In this paper, we addressed this open challenge by designing the first isogeny-based post-quantum secure proxy signature CSI-PS (Commutative Supersingular Isogeny Proxy Signature). * The security of our proposed design CSI-PS is based on hardness assumption of the group action inverse problem (GAIP). In particular, we show that CSI-PS achieves uf-cma security under the random oracle model. Moreover, CSI-PS also meets the other security requirements, such as identifiability, undeniability, and verifiability. * We observe that the public keys of original signer and proxy signer are of sizes L_0⌈log(p)⌉ bits, original signer's and proxy signer's secret key sizes are L_0⌈n log(2I_0+1)⌉ bits each, sizes of proxy share and proxy signature become M_1M_2⌈log(L_1+1)⌉ + M_1M_2⌈n log(2I_1+1)⌉ bits each. * To determine the most optimal size for the proxy signature, we perform calculations for different parameter choices. We conclude that our proxy signature achieves a size of 3052B with parameters set to (16, 8, 8, 2, 65535, 255). A concrete instantiation of CSI-PS for the parameters set (16, 8, 8, 2, 65535, 255) is provided in Section <ref>. We demonstrate how an original signer can use our proposed CSI-PS design to create a proxy signer with signing authority, and how the proxy signer can then generate a valid proxy signature on a document. * The only cost intensive operation in our design is class group computation. We computed the cost of class group computation in terms of field multiplications. In addition, we calculated the exact number of field multiplications required for public key generation, proxy share generation, and proxy signature. Moreover, we also estimated the time complexity of CSI-PS. The results are summarized in Table <ref>. §.§ Organization of the paper The structure of the paper is outlined as follows: Section <ref> covers the preliminaries to aid readers in understanding our scheme. Section <ref> introduces our scheme, CSI-PS. Security analysis is discussed in Section <ref>, while communication complexities are examined in Section <ref>. A concrete instantiation illustrating the CSI-PS flow is provided in Section <ref>. Finally, we conclude our discussion in Section <ref>. § PRELIMINARIES Notations. Given a finite set X, |X| stands for its cardinality. 𝔽_q denotes the finite field of order q, where q is a power of some prime p. Closure of a field 𝔽_q is denoted by 𝔽_q. [a,b] denotes the set of integers z such that a≤ z≤ b. Negligible function: A function ϵ:ℕ⟶ℝ is called a negligible function, if for every polynomial poly(x) in x, there exists a positive integer N such that ϵ(x)< 1/poly(x) ∀ x>N . 
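The size expressions quoted in the contributions above, together with the parameter derivation given later (M_1 = λ/γ_0, M_2 = γ_0/γ_1, α_0 = nM_1L_1, I_1 = α_0 I_0), can be checked with a short script. The helper below is a sketch under those stated formulas; exact byte counts may differ by a few bytes from the paper's table depending on rounding conventions, and log_p denotes the bit length of the underlying CSIDH prime (about 512 for CSIDH-512).

```python
import math

def csi_ps_sizes(lam=128, gamma0=16, gamma1=8, n=74, I0=5, log_p=512):
    """Sizes (in bytes) of CSI-PS objects for a lambda-bit security level,
    following the formulas stated in the text."""
    M1 = lam // gamma0                   # M1 * gamma0 = lambda
    M2 = gamma0 // gamma1                # M2 = gamma0 / gamma1
    L0 = 2 ** gamma0 - 1
    L1 = 2 ** gamma1 - 1
    I1 = n * M1 * L1 * I0                # alpha0 = n*M1*L1 and I1 = alpha0*I0
    pk_bits = L0 * log_p                                   # L0 curve coefficients
    sk_bits = L0 * math.ceil(n * math.log2(2 * I0 + 1))    # L0 exponent vectors
    sig_bits = (M1 * M2 * math.ceil(math.log2(L1 + 1))
                + M1 * M2 * math.ceil(n * math.log2(2 * I1 + 1)))
    to_bytes = lambda b: math.ceil(b / 8)
    return {"M1": M1, "M2": M2, "I1": I1,
            "pk_B": to_bytes(pk_bits), "sk_B": to_bytes(sk_bits),
            "sig_B": to_bytes(sig_bits)}

print(csi_ps_sizes())  # I1 = 754800; proxy signature of roughly 3 KB
```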
§.§ Elliptic curves and isogenies <cit.> An elliptic curve over a field 𝔽_q is a smooth projective curve of genus 1 having at least one 𝔽_q-rational point. For char(𝔽_q)≠2,3, the short Weierstrass affine equation for an elliptic curve over 𝔽_q is y^2=x^3+cx+d, where the point at infinity is (0:1:0), and c,d∈𝔽_q with 4c^3+27d^2≠0. Throughout the paper, elliptic curves are assumed to be defined over 𝔽_p. According to Hasse's theorem <cit.>, the cardinality |E(𝔽_p)| of E(𝔽_p) satisfies |E(𝔽_p)|=p+1-t with |t|≤ 2√(p). An isogeny χ:E_1→ E_2 is a surjective morphism inducing a group homomorphism from E_1(𝔽_q) to E_2(𝔽_q). Here E_1 and E_2 are called isogenous. Indeed, E_1 and E_2 having the same number of points assures that there exists an isogeny between them. An isogeny sends the identity element Θ of E_1 to the identity element Θ^' of E_2. An endomorphism is a morphism from E to itself that fixes the distinguished point Θ. The set of all endomorphisms on E is denoted by End(E). Note that End(E) is in fact a ring. The Frobenius endomorphism is an endomorphism on E over 𝔽_q and it is defined as π_E: (x:y:z) ⟼ (x^q:y^q:z^q). Another example of an endomorphism is the multiplication-by-n map, defined as [n] : P⟼ [n]P. Its kernel is called the n-th torsion subgroup E[n]. The standard form of an isogeny χ over 𝔽_q is given by χ(x,y)=(α(x)/β(x), γ(x)/δ(x)y), where α, β, γ, δ∈𝔽_q[x], α is co-prime to β and γ is co-prime to δ in 𝔽_q[x]. The degree of χ is denoted by deg(χ) and defined by deg(χ)= max{deg(α), deg(β)}. If (α(x)/β(x))^'≠ 0 then χ is called separable; otherwise, it is called inseparable. Note that the Frobenius endomorphism over 𝔽_p is an inseparable isogeny. The endomorphism [m] for an elliptic curve E/𝔽_p is separable if and only if m and p are co-prime. The j-invariant of y^2=x^3+cx+d is defined by j(E)= 1728 · 4c^3/(4c^3+27d^2). Two elliptic curves over 𝔽_q with the same j-invariant are isomorphic over the algebraic closure. The torsion subgroup E[p] is isomorphic to either ℤ/pℤ or 0. E is called ordinary in the first case, while E is called a supersingular elliptic curve in the second case. If E is supersingular over a finite field of characteristic p, then j(E) lies in 𝔽_p^2. §.§ Endomorphism algebra and ideal class group <cit.> A quaternion algebra over 𝔽_q is an 𝔽_q-algebra with 𝔽_q basis {1,α,β,αβ}, where αβ=-βα and α^2,β^2∈𝔽_q-{0}. Let A, B be two rings and f:A→ B be a ring homomorphism. An A-module structure on B is defined by the scalar multiplication a.b=f(a)b. If B has a ring structure along with an A-module structure via f then B is called an algebra. The endomorphism algebra of E is denoted by End^0(E):= End(E)⊗_ℤℚ, where End^0(E) is a ℚ-algebra. End(E) is a free ℤ-module of rank r. Note that r=1 implies that End^0(E) is isomorphic to ℚ, r=2 implies that End^0(E) is isomorphic to an imaginary quadratic field and r=4 implies that End^0(E) is isomorphic to a quaternion algebra. Note that if E is defined over a finite field, the case r=1 cannot happen. An order ϑ of a ℚ-algebra is a ℤ-lattice of the ℚ-algebra, which is also a subring. For any number field K, the ring of integers ϑ_K is the unique maximal order. The orders in imaginary quadratic number fields are defined by ϑ=ℤ+fϑ_K, where f is an integer which is called the conductor of ϑ. The Frobenius endomorphism π for an elliptic curve E /𝔽_q satisfies the equation π^2-tπ+q=0 with |t|≤2√(q). Here t is called the trace of π and Δ_π=t^2-4q is the discriminant of π.
Therefore, the discriminant Δ_π=t^2-4q of π is negative and we have π∈ℚ(√(Δ_π)) which is an imaginary quadratic field. Let G be a finite commutative group that acts freely and transitively on some set H. A hard homogeneous space (HHS) <cit.> consists of the sets G and H, where the following computational problems are required to be easy: * Computation of the group action in G. * Random selection of an element from G with uniform distribution. * Deciding validity and equality of a representation of H's elements. * Computation of the group action of g∈ G on some h∈ H. and the following problems are assumed to be hard: * Given h,k∈ H, finding g∈ G such that k=gh. * Given h,k,w∈ H with k=gh, finding k^'=gw. Let ϑ be an order of a number field K. With the help of a ϑ-ideal L, we can define a fractional ϑ-ideal as L^'= ω L={ωα: α∈ L } for some ω∈ K^*. A fractional ϑ-ideal L^' is called invertible if there exists a fractional ϑ-ideal L^'' such that L^'L^''=ϑ. The collection of invertible fractional ϑ-ideals forms a group with respect to multiplication of fractional ϑ-ideals, which is defined as L^'N^'=ωω^' L N for two fractional ϑ-ideals L^'= ω L and N^'= ω^' N. Let I represent the group of invertible fractional ϑ-ideals and P denote the subgroup of I consisting of principal fractional ϑ-ideals. The ideal class group is defined by the quotient group cl(ϑ)=I/P. The 𝔽_p-rational endomorphism ring End_p(E) for a supersingular elliptic curve E over 𝔽_p is an order in an imaginary quadratic field. Suppose E_p is the set of isomorphism classes of elliptic curves over 𝔽_p whose 𝔽_p-rational endomorphism ring is isomorphic to ϑ. Then, we have a group action of Cl(ϑ) on E_p which is free and transitive. As a consequence, E_p is a Cl(ϑ)-torsor or principal homogeneous space (PHS). Let E be a supersingular elliptic curve defined over 𝔽_p with the Frobenius endomorphism π, where p ≡ 3 (mod 4) and p>3. The necessary and sufficient condition for End_p(E)=ℤ[π] is that there exists a unique e∈𝔽_p so that E is 𝔽_p-isomorphic to a Montgomery curve: y^2=x^3+ex^2+x, where ℤ[π] is a subring of ℚ(√(-p)). §.§ Commutative supersingular isogeny Diffie-Hellman (CSIDH) <cit.> If E is a supersingular curve over 𝔽_p then its ring of 𝔽_p-rational endomorphisms is an imaginary quadratic order ϑ⊆ℚ(√(-p)) which is denoted by End_p(E). In CSIDH <cit.>, we fix E_0:y^2=x^3+x over 𝔽_p for the prime p=4l_1⋯ l_n-1, where l_1,⋯, l_n are small distinct odd primes. As p ≡ -1 (mod 4), we can say that E_0 is supersingular. Hence |E_0(𝔽_p)|=p+1 ≡ 0 (mod l_i) and we can decompose l_iϑ=J_iJ̄_i, where J_i=(l_i, π-1) and J̄_i=(l_i, π+1). Now we have to assume that the J_i's are evenly distributed in the class group and we expect that an ideal is of the form J_1^a_1⋯ J_n^a_n with (a_1,⋯, a_n) randomly chosen from [-s,s]^n for some natural number s. Thereby, we may represent the ideal J_1^a_1⋯ J_n^a_n as (a_1, ⋯, a_n). It has been observed that it is sufficient to choose s such that 2s+1≥√(|cl(ϑ)|). As the trace of the Frobenius endomorphism is zero for a supersingular elliptic curve over 𝔽_p, its characteristic equation is π^2+p=0 yielding π=√(-p). Thus, End_p(E_0)=ℤ[√(-p)]=ℤ[π]. In other words, E_0 is 𝔽_p-rationally isomorphic to a Montgomery curve E_A:y^2=x^3+ex^2+x, which is represented by its coefficient e by Theorem <ref>. To generate the key, Alice randomly chooses an n-tuple secret key a=(a_1, ⋯, a_n) from [-s,s]^n and computes [a]=[J_1^a_1⋯ J_n^a_n] ∈ cl(ℤ[π]).
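Before continuing with the remaining steps of the key exchange, the exponent-vector sampling just described can be sketched as follows. The class-group action [a]E itself requires the isogeny computation of Algorithm <ref> and is only stubbed out here; this is a toy illustration under CSIDH-512-style parameters, not a CSIDH implementation.

```python
import random

n, s = 74, 5   # CSIDH-512-style parameters: 74 small primes, exponents in [-5, 5]

def sample_exponent_vector(n=n, s=s):
    """Secret key: the exponent vector (a_1, ..., a_n) representing the
    ideal class [J_1^a_1 ... J_n^a_n]."""
    return [random.randint(-s, s) for _ in range(n)]

def group_action(exponents, curve):
    """Placeholder for the CSIDH class-group action [a]E. A real
    implementation walks l_i-isogenies according to each exponent."""
    raise NotImplementedError("isogeny computation elided in this sketch")

a = sample_exponent_vector()
print(len(a), min(a), max(a))
```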
She then computes the group action [a]E_0 using Algorithm <ref>, which is isomorphic to some Montgomery curve E_A with a coefficient f (public key) by Theorem <ref>. Here, Alice's secret key - public key pair is ([a], f). Similarly Bob chooses his key pairs as ([b], g). Next, Alice evaluates [a]E_B=[a]([b]E_0) and Bob computes [b]E_A=[b]([a]E_0). At the end of the protocol, they will share the same key [a][b]E_0 = [b][a]E_0 due to the commutative property of imaginary quadratic field. §.§ Rejection sampling <cit.> In order to prevent any leakage of information of the secret key, we can apply rejection sampling. Let there be two parties P and Q. P selects a secret vector a = (a_1,⋯,a_n) ∈ [-I,I]^n, where I∈ℕ so that ∏_t=1^n J_t^a_t covers almost all the ideal classes in order to make the output distribution uniform. In addition, P chooses another random vector a^' ∈ [-(δ + 1)I , (δ + 1)I]^n where δ∈ℕ. After that, Q randomly chooses a bit b and sends it to P. In the following, P evaluates γ = a^' - a if b=1 and otherwise, γ=a^'. As a consequence, we can filter γ as |γ|≤δ I to prevent any leakage of the secret key. Here the output vector γ is uniformly distributed and as a result it is independent of a. §.§ Hardness assumption Group Action Inverse Problem (GAIP) <cit.>: Given two elliptic curves E and E^' over a same field with E^'=[a]E for [a]∈ cl(ℤ[π_E]), it is hard to find [a]. Here π_E stands for the Frobenius endomorphism on E. §.§ SeaSign <cit.> The signature scheme SeaSign <cit.> consists of three algorithms Key generation, Signature and Verification which are discussed below. * (PK, SK)← Key generation(1^λ): Consider a supersingular elliptic curve E_0:y^2=x^3+x over 𝔽_p where p is a prime. On input of a security parameter λ, the signer randomly chooses a=(a_1,⋯,a_n) ∈ [-I_0,I_0]^n and evaluates E_1=[a]E_0. The public key-secret key pair of the signer is (PK, SK)=(E_1, a). * (σ)← Signature(m, SK, PK): On input of a message m, the signer performs the following steps: * chooses x^i∈_R[-(I_0+I_1),(I_0+I_1)]^n and evaluates X_i=[x^i]E_0 for i=1,2,…,t, * calculates H(X_1, ⋯, X_t, m)=b_1∥⋯∥ b_t, where H:{0,1}^*→{0,1}^t is a cryptographically secure collision resistant hash function, * sets z^k=x^k if b_k=0 and z^k=x^k-a otherwise by ensuring the fact that z^k∈[-I_1,I_1]^n, * outputs the signature as σ=(z_1, z_2, ⋯, z_t, b_1, b_2, ⋯, b_t). * (1   0)← Verification(m, σ, PK): On receiving a signature σ from the sender, a verifier executes the following operations: * computes X^'_k = . [z_k] ℰ, if b_k=0 [z_k] ℰ_1, if b_k=1 }, * evaluates H(X^'_1, ⋯, X^'_t, m)=b^'_1∥⋯∥ b^'_t , * outputs 1 if the equality b_1∥⋯∥ b_t= b^'_1∥⋯∥ b^'_t holds, else outputs 0. <cit.> The above signature scheme is unforgeable under chosen-message attack (uf-cma) relying on the hard problem GAIP in the random oracle model. §.§ Notion of a proxy signature scheme <cit.> In general, a proxy signature involves two entities A (original signer) and B (proxy signer) with the following five algorithms: * (pp) ← Setup (1^λ):- On input the security parameter λ∈ℕ, this algorithm outputs the collection of public parameters pp which are published in a public platform like a bulletin board. * ((PK_A,SK_A), (PK_B,SK_B) ) ← Key generation (pp):- In this algorithm, A and B generate their key pairs (PK_A,SK_A) and (PK_B,SK_B) respectively. * (z_B) ← Proxy share generation (SK_A,d_B):- Let d_B be the attribute of B consisting of the necessary information like DOB, id card number, etc. 
Then A generates a proxy share z_B with the help of SK_A in favor of B and publishes (d_B, z_B) as a warrant in the bulletin board. * (0 or 1) ← Proxy share verification (PK_A, d_B, z_B):- B runs this algorithm to check the validity of the warrant (d_B, z_B). The algorithm outputs 1 if the warrant is valid; otherwise, it outputs 0. * (σ) ← Proxy signature (m,SK_B, d_B,z_B):- The proxy signer B holding a valid warrant, outputs a signature σ on some message using its secret key SK_B. * (0 or 1) ← Proxy signature verification (m,σ, PK_A, PK_B, d_B, z_B):- Given a message-signature pair (m,σ) along with a warrant (d_B, z_B) of the proxy signer B, the verifier checks the validity of the warrant (d_B, z_B). If the warrant is valid, the verifier checks the correctness of (m,σ) using PK_B. It then outputs a deterministic value of 1 if the verification is done correctly; else, it outputs 0. §.§ Existential unforgeability under chosen-message attack (uf-cma) Let us consider a proxy signature scheme consisting of the algorithms: Setup, Key generation, Proxy share generation, Proxy share verification, Proxy signature, Proxy signature verification. The uf-cma security for the proxy signature scheme is defined by a game between an adversary (𝒜) and a challenger (𝒞) for a particular proxy signer with attribute d. Once d is chosen, it remains unchanged throughout the game. In this game 𝒞 generates the key pair for original signer and the proxy signer. In the following, 𝒞 responds to the queries made by 𝒜 over some messages polynomial number of times with d^'. Finally, the 𝒜 has to output a valid message-signature pair (m^⋆, σ^⋆) to win the game, where the message m^⋆ should not be queried before. We denote this experiment by Exp^uf-cma_proxy(1^λ) which is described below in detail. * Setup: 𝒞 runs the Key generation algorithm on input pp to generate a new public key-secret key pairs (PK_O,SK_O) for original signer and (PK_P,SK_P) for proxy signer on input of security parameter λ. * Sign-query: In this phase, 𝒜 queries to 𝒞 of proxy signature for a message m with the attribute d of the proxy signer. 𝒞 first recognizes the warrant (d, z) for d from the bulletin board and runs the algorithm Proxy signature to generate a proxy signature σ which is sent to 𝒜. 𝒜 can query polynomial number of times. * Forgery: 𝒜 outputs a forge signature σ^⋆ for a message m^⋆ which has not been previously queried. 𝒜 succeeds if it can impersonate the proxy signer i.e., Proxy signature verification outputs 1 on input (m^⋆,σ^⋆, PK_O, PK_P, d, z). The advantage of 𝒜 is defined as Adv_𝒜^Exp^uf-cma_proxy(1^λ) = Prob[Exp^uf-cma_proxy(1^λ) = 1] = Prob[𝒜 wins the game]. A prosy signature scheme is said to be uf-cma secure if for any probabilistic polynomial time adversary 𝒜, Adv_𝒜^Exp^uf-cma_proxy(1^λ)≤ϵ(λ) where ϵ(λ) is a negligible function of λ. § PROPOSED ISOGENY BASED PROXY SIGNATURE In this section, we describe our proposed scheme CSI-PS, where the original signer (A) delegates its signing authority to the proxy signer (B). Our approach consists of six algorithms: Setup, key generation, Proxy share generation, Proxy share verification, Proxy signature, Proxy signature verification. During the Setup phase, the original signer generates the public parameters. Key generation algorithm enables A and B to generate their public key-secret key pairs. 
During the Proxy share generation phase, A will generate a proxy share z_B for B by using its secret key and B's attribute d_B consisting of identity and other necessary information, like DOB, identity card number, etc. of B. On receiving a proxy share z_B from A, the proxy signer B checks the validity of the warrant (d_B, z_B) by implementing the algorithm Proxy share verification. During the Proxy signature algorithm, B generates a signature on a message m with its secret key and proxy share z_B. The verifier verifies the correctness of the warrant and validity of the message-signature pair during the Proxy signature verification phase. A comprehensive description is provided below: * (pp) ← Setup (1^λ):- On input of security parameter 1^λ, original signer A selects suitable CSIDH (see <ref>) parameters including one basic elliptic curve E_0:y^2=x^3 +x, one exponent interval [-I_0, I_0], I_0 ∈ ℕ, and parameters α_0, γ_0, γ_1, M_1, M_2 from ℤ . Then A computes one exponent interval I_1=α_0I_0, two branch integers L_0=2^γ_0-1, L_1=2^γ_1-1 and publishes pp = (E_0, α_0,γ_0, γ_1 M_1, M_2, I_0, I_1, L_0, L_1) * ((PK_A,SK_A), (PK_B,SK_B) ) ← Key generation (pp):- A does the following operations for i=1,2,⋯,L_0: * Randomly chooses a^(i)=(a_1^(i),⋯,a_n^(i)) ∈ [-I_0,I_0]^n. * Computes E_i=[a^(i)] E_0. * Publishes the public key PK_A=(E_i)^L_0_i=1 and keeps the secret key SK_A=(a^(i))^L_0_i=1. On the other hand, B executes the following for i=1,2,⋯,L_0: * Randomly selects b^(i)=(b_1^(i),⋯,b_n^(i)) ∈ [-I_0,I_0]^n. * Evaluates Ê_̂î=[b^(i)] E_0. * Sets the public key as PK_B=(Ê_̂î)^L_0_i=1 and the secret key as SK_B=(b^(i))^L_0_i=1. * (z_B) ← Proxy share generation (SK_A,d_B):- The original signer A executes the following steps to generate a proxy share z_B on input SK_A and B's attribute d_B. To do that, A performs the following tasks. * Computes h=H_1 (d_B,pp), where H_1 : {0,1}^*→ (ℤ_L_0+1)^M_1 is a cryptographically secure collision resistant hash function and parses h as integer {h_i}_i=1^M_1. * For i=1,⋯,M_1 and j=1,⋯,M_2 chooses x^(i,j)∈_R[-(I_0+I_1),(I_0+I_1)]^n and evaluates, X_ij=[x^(i,j)] E_h_i. * Determines C=H_2 (d_B, pp, {X_ij}_i=1 j=1^M_1 M_2) for cryptographically secure collision resistant hash function H_2 : {0,1}^*→ (ℤ_L_1+1)^M_1 M_2 and parses it to integers {C_(i,j)}_i=1 j=1^M_1 M_2. * For i=1,⋯,M_1 and j=1,⋯,M_2, sets y^(i,j)=x^(i,j), if C_(i,j) = 0 x^(i,j)+a^(h_i), otherwise * If y^(i,j)∈ [-I_1,I_1]^n then performs the next step; otherwise, starts from step 2. * Outputs the proxy share as z_B= (C, {𝐲^(i,j)}^M_1 , M_2_i=1 j=1) which is delivered to B and publishes (d_B,z_B) as warrant on the bulletin board. * (0 or 1) ← Proxy share verification (PK_A, d_B, z_B):- On input (PK_A, d_B, z_B), the proxy signer B performs the following tasks to check whether the warrant (d_B,z_B) is valid or not: * Determines h=H_1(d_B,pp), and parses h and C into integers as {h_i}_i=1^M_1 with 0≤ h_i≤ L_0 and {C_(i,j)}_i=1 j=1^M_1 M_2 with 0≤ C_(i,j)≤ L_1 respectively. * For i=1,⋯,M_1 and j=1,⋯,M_2 sets X_ij= [y^(i,j)] E_h_i, if C_(i,j) = 0 [y^(i,j)] E_0, otherwise * Determines C^'=H_2(d_B, pp, {X_ij}_i=1 j=1^M_1 M_2). * If C=C^', outputs 1 and gets the assurance of signing authority on behalf of the original signer A; otherwise, outputs 0. * (σ) ← Proxy signature (m,SK_B, d_B,z_B):- On input of a message m ∈{0,1}^⋆, SK_B, d_B and z_B, B executes the following steps to generate a proxy signature σ for m. * Computes h^'=H_1(z_B,pp) and parses h^' as integers {h_i^'}_i=1^M_1 for 0≤ h_i^'≤ L_0. 
* For i=1,⋯,M_1 and j=1,⋯, M_2, chooses ξ^(i,j)∈_R[-(I_0+I_1),(I_0+I_1)]^n and evaluates Y_(i,j)=[ξ^(i,j)]Ê_h_i^'. * Determines D=H_2(z_B, pp, m, {Y_ij}_i=1 j=1^M_1 M_2) and parses it to integers {D_(i,j)}_i=1 j=1^M_1 M_2 with 0≤ D_(i,j)≤ L_1. * For i=1,⋯,M_1 and j=1,⋯,M_2 sets η^(i,j)=ξ^(i,j), if D_(i,j) = 0 ξ^(i,j)+ b^(h^'_i), otherwise * If η^(i,j)∈ [-I_1,I_1]^n then performs the next step; otherwise, starts from step 2. * Outputs the signature as σ={D, {η^(i,j)}_i=1 j=1^M_1 M_2}. * (0 or 1) ← Proxy signature verification (m,σ, PK_A, PK_B, d_B, z_B):- Using (PK_A, d_B, z_B), the verifier initially checks the validity of the warrant (d_B,z_B) of B using the similar approach as discussed in proxy share verification. If the verification fails it outputs 0; otherwise, does the following: * Computes h^'=H_1(z_B,pp), and parses h^' and D into integers as {h_i^'}_i=1^M_1 with 0≤ h_i^'≤ L_0 and {D_(i,j)}_i=1 j=1^M_1 M_2 with 0≤ D_(i,j)≤ L_1, respectively. * For i=1,⋯,M_1 and j=1,⋯,M_2 , sets Y_(i,j)= [η^(i,j)]Ê_h_i^', if D_(i,j) = 0 [η^(i,j)]E_0, otherwise * Evaluates D^'=H_2(z_B, pp, m, {Y_ij}_i=1 j=1^M_1 M_2). * If D=D^', outputs 1; otherwise, outputs 0. §.§ Correctness To check the correctness of the proxy share and the proxy signature, we need to check the correctness of the equalities, C=C^' and D=D^', respectively.C=C^' will hold if and only if X_ij=X_ij for 1 ≤ i ≤ M_1, 1 ≤ j ≤ M_2. Note that X_ij=[x^(i,j)]E_h_i and X_ij= [y^(i,j)] E_h_i, if C_(i,j) = 0 [y^(i,j)] E_0, otherwise. For C_(i,j) = 0, y^(i,j) = x^(i,j) which implies X_ij=[y^(i,j)] E_h_i=[x^(i,j)] E_h_i. On the other hand, if C_(i,j)≠ 0 then y^(i,j) = x^(i,j) + a^(h_i), which implies X_ij=[y^(i,j)] E_0=[x^(i,j) + a^(h_i)]E_0=[x^(i,j)]E_h_i. Thus, the equality C = C^' holds. Similarly, it can be proved that the equality D=D^' holds. § SECURITY ANALYSIS In this section, we discuss the security of our proposed scheme. * Unforgeability of proxy signature: We now prove that our proposed isogeny based proxy signature scheme CSI-PS is uf-cma secure under the hardness of the group action inverse problem of isogeny. H_1 and H_2 are considered as random oracles. We show this by contradiction. If possible let there be an adversary 𝒜 with non-negligible advantage in the uf-cma game. Then we are going to show that using 𝒜 an oracle machine O^𝒜 can be designed to break the isogeny problem by utilizing the outputs of H_1 and H_2. In order to do that, we present a series of games Game^0, Game^1, Game^2, Game^3, where Game^i slightly modifies Game^i-1 for i=1,2,3. Let us denote the success probability of 𝒜 in Game^i as Prob[Game^i]. * Game^0: Game^0 is exactly same as uf-cma game for proxy signature. Therefore, Adv_𝒜^Exp^uf-cma_proxy(1^λ) = Prob[Exp^uf-cma_proxy(1^λ) = 1] = Prob[Game^0]. * Game^1: It is similar to Game^0, except that O^𝒜 replaces output of the the hash function H_1 on (z_B,pp) by a random string of length M_1 from (ℤ_L_0+1)^M_1. If |Prob(Game^1)-Prob(Game^0)| is non-negligible then 𝒜 will be able to distinguish the output distribution of H_1 which is impossible since H_1 is chosen as random oracle. Hence |Prob(Game^1)-Prob(Game^0)| should be negligible, say bounded by ϵ_1(λ). * Game^2: Game^2 is similar to Game^1, except that O^𝒜 substitutes output of the hash function H_2 by a random string chosen from (ℤ_L_1+1)^M_1 M_2. By a similar argument as discussed in Game^1, |Prob(Game^2)-Prob(Game^1)| ≤ ϵ_2(λ) (negligible). 
* Game^3: This is similar to Game^2, except that during the challenge phase when 𝒜 tries to forge a signature of a message m^*, O^𝒜 substitutes the H_1 output by a random tuple from (ℤ_L_0+1)^M_1 and substitutes the H_2 output by a random tuple from (ℤ_L_1+1)^M_1 M_2. Using a similar argument as discussed in Game^1, we may conclude that |Prob(Game^3)-Prob(Game^2)| ≤ ϵ_3(λ) (negligible). Now |Prob(Game^3)-Adv_𝒜^Exp^uf-cma_CSI-PS(1^λ)|= |Prob(Game^3)-Prob(Game^0)| ≤ |Prob(Game^3)-Prob(Game^2)|+|Prob(Game^2)-Prob(Game^1)|+ |Prob(Game^1)-Prob(Game^0)| ≤ ϵ_1(λ)+ϵ_2(λ)+ϵ_3(λ)=ϵ(λ) (say). Therefore, the success probability of 𝒜 in Game^3 is close to Adv_𝒜^Exp^uf-cma_CSI-PS(1^λ). Since we have assumed that Adv_𝒜^Exp^uf-cma_CSI-PS(1^λ) is non-negligible, we can ensure that Prob(Game^3) is also non-negligible. Thereby, with the help of 𝒜 and by controlling H_1, H_2, the oracle machine O^𝒜 can generate two valid transcripts ({Y_ij}, D^(1), η_1^(i,j)) and ({Y_ij}, D^(2), η_2^(i,j)) where D^(1)_(i,j)=0 and D^(2)_(i,j)≠ 0 ∀ i, j. Note that η_1^(i,j)=ξ^(i,j) since D^(1)_(i,j)=0 and η_2^(i,j)=ξ^(i,j)+b^(h^'_i ) since D^(2)_(i,j)≠ 0. Utilizing η_1^(i,j) and η_2^(i,j), O^𝒜 can easily extract the secret SK_B = {b^(i)}_i=1^L_0. Therefore, the group action inverse problem (GAIP) between two elliptic curves is broken, which leads to a contradiction. Thus, the advantage of 𝒜 in the uf-cma game cannot be non-negligible. In other words, our scheme CSI-PS is uf-cma secure. We now discuss the other security properties of our proposed design CSI-PS. * Identifiability: Given a signature σ with the proxy share z_B, the verifier finds (d_B, z_B) from the bulletin board and verifies the correctness of z_B. If it is valid, then from d_B, the identity of the proxy signer is detected. * Undeniability: At the stage of the verification process of the proxy signature, the proxy signer cannot deny their generation of the proxy signature. Here, the proxy signature contains {η^(i,j)}_i=1 j=1^M_1 M_2, which is constructed by using the secret key of the proxy signer. Hence, after generating the proxy signature, the proxy signer cannot deny its generation. * Verifiability: In the verification process, the verifier has to be convinced about the agreement between the original signer and the proxy signer from the proxy signature. To generate the warrant z_B, the secret key of the original signer has been used. Thus, by verifying the correctness of z_B using PK_A, the verifier is convinced about the agreement. * Distinguishability: Anyone authorized to verify a signature can distinguish between a proxy signature and a normal signature. In the isogeny-based signature scheme SeaSign <cit.>, the signer generates a signature on a message using their own secret key, and the verifier confirms the signature using the signer's public key. On the other hand, in a proxy signature scheme, the proxy signer uses the proxy share z_B (generated by the original signer) and their own secret key. The verifier then checks the signature using the proxy share and the public information of both the original signer and the proxy signer. Therefore, the proxy signature and the ordinary signature are distinguishable due to their different structures. * Secrecy: Given the proxy share z_B, it is impossible to derive the original signer's secret key. Deriving the secret key from the warrant would require solving the GAIP problem (see Definition <ref>) for the instances (X_ij, E_h_i), where i=1, 2, …, M_1 and j=1, 2, …, M_2.
Thus, extracting the original signer's secret key is computationally hard. Similarly, it is difficult to extract the proxy signer's secret key from a proxy signature. * Prevention of misuse: The proxy signer can sign a message using the warrant (d_B, z_B) assigned specifically to them. This warrant includes their identity and other necessary information. Therefore, any misuse of the proxy signature can be traced back to the proxy signer through the warrant. * Revocability: Once the time period involved in the warrant (d_B,z_B) is over, automatically the signing authority of the proxy signer is removed. Besides, the original signer can broadcast the announcement of invalidation of the warrant (d_B,z_B) by putting a signed message on the bulletin board. § COMPLEXITY ANALYSIS OF CSI-PS We describe below the communication and computation complexity of our proposed CSI-PS. The public key sizes of the original signer and the proxy signer, the proxy share size, and the proxy signature size all contribute to the scheme's communication costs. Each signer's public key is made up of L_0 number of elliptic curves. According to Theorem <ref>, any elliptic curve can be represented by a single coefficient in 𝔽_p after class group action. Additionally, proxy share generation and proxy signature generation both contains one hash output and M_1M_2 vectors from [-I_1,I_1]^n. In our scheme, we have two hash functions H_1 and H_2. In proxy share generation and proxy signature generation algorithms four hash outputs h, h^', C and D have been used. The lengths of h and h^' are M_1⌈log(L_0+1)⌉ bits. The lengths of C and D are M_1 M_2⌈log(L_1+1)⌉ bits. The communication complexity of our proposed scheme CSI-PS is given in Table <ref>. To achieve λ bits of security we may take M_1⌈log(L_0+1)⌉≥λ and M_1 M_2⌈log(L_1+1)⌉≥λ. Since L_0 = 2^γ_0-1 and L_1 = 2^γ_1-1, we conclude that M_1γ_0≥λ and M_1 M_2γ_1≥λ. In particular, we select our own parameters M_1 and M_2 as M_1 γ_0 =λ, M_1 M_2γ_1=λ. Hence, M_1 = λ/γ_0, M_1 M_2 = λ/γ_1 and M_2 = γ_0/γ_1. We choose α_0=nM_1 L_1. Castryck et al. <cit.> proposed parameters for CSIDH-512 to achieve AES-128 bit security. CSIDH-512 includes the prime of the form p=4l_1l_2… l_n-1 where, n=74, I_73 = 373 and I_74=587, the key bound m is taken as 5 (See subsection <ref> for details of the parameters). In our scheme n=74, I_0 = 5. In order to calculate the sizes of the keys, proxy share and proxy signature, we rely on the different choices of parameters (γ_0, γ_1 M_1, M_2, L_0, L_1) as proposed in <cit.> for λ = 128-bit security level. For some choices of parameters the results are documented in Table <ref>. § CONCRETE INSTANTIATION In this section, we discuss and analyze a concrete instantiation of CSI-PS for 128-bit security level with parameter set (16, 8, 8, 2, 65535, 255) Suppose a government officer A (original signer) is the only person authorized to sign a specific document. In case A is on the leave, they must appoint a representative officer B (proxy signer) to delegate the signing authority. We demonstrate how B will obtain signing rights from A using a toy example. * Here I_0 = 5, n=74, γ_0=16, γ_1=8, M_1=8, M_2=2, L_0=2^16 -1 = 65535, L_1 = 2^8 -1 = 255, I_1=754800 * A selects the vectors a^1, ⋯, a^65535 randomly from [-5,5]^74. * A computes E_1, ⋯, E_65535, where E_i=[a^(i)]E_0, i=1, ⋯, 65535. * B selects the vectors b^1, ⋯, b^65535 randomly from [-5,5]^74. * B computes Ê_̂1̂,⋯, Ê_65535, where Ê_̂î=[b^(i)]E_0, i=1, ⋯, 65535. 
SK_A={a^1, ⋯, a^65535}, PK_A={E_1, ⋯, E_65535} and SK_B = {b^1, ⋯, b^65535}, PK_B = {Ê_1, ⋯, Ê_65535}. The original signer now performs the following actions. * h = H_1(d_B,pp)={h_1, ⋯, h_8}, where h_1, ⋯, h_8 ∈ℤ_65536. * Randomly selects x^(1,1), x^(1,2), x^(2,1), x^(2,2), ⋯, x^(8,1), x^(8,2) from [754795,754805]^74. * Evaluates X_11=[x^(1,1)]E_h_1, X_12=[x^(1,2)]E_h_1, X_21=[x^(2,1)]E_h_2, X_22=[x^(2,2)]E_h_2, ⋯, X_81=[x^(8,1)]E_h_8, X_82=[x^(8,2)]E_h_8. * C=H_2 (d_B, pp, {X_ij}_i=1 j=1^8  2)= {C_(1,1), C_(1,2), C_(2,1), C_(2,2),⋯, C_(8,1), C_(8,2)}⊂ℤ_256. * Computes y^(1,1), y^(1,2), y^(2,1), y^(2,2), ⋯, y^(8,1), y^(8,2) according to Algorithm <ref>. Therefore, the proxy share is z_B= (C, {𝐲^(i,j)}^8  2_i=1 j=1). Using Algorithm <ref>, the representative proxy signer can validate the proxy share with its own credentials. In this way, the signing rights of the original signer are assigned to the representative signer B, who can now sign the respective documents on behalf of the original signer A. We now demonstrate how the representative officer signs a document. * Evaluates h^'=H_1(z_B,pp)={h_1^', ⋯, h_8^'}, where h_1^', ⋯, h_8^'∈ℤ_65536. * Randomly selects ξ^(1,1), ξ^(1,2), ξ^(2,1), ξ^(2,2), ⋯, ξ^(8,1), ξ^(8,2) from [754795,754805]^74. * Computes the elliptic curves Y_11=[ξ^(1,1)]Ê_h_1^', Y_12=[ξ^(1,2)]Ê_h_1^', Y_21=[ξ^(2,1)]Ê_h_2^', Y_22=[ξ^(2,2)]Ê_h_2^', ⋯, Y_81=[ξ^(8,1)]Ê_h_8^', Y_82=[ξ^(8,2)]Ê_h_8^'. * Assume that m represents the content of the document that the representative officer wishes to sign. To proceed, he calculates the hash value D=H_2 (z_B, pp, m, {Y_ij}_i=1 j=1^8  2)= {D_(1,1), D_(1,2), D_(2,1), D_(2,2),⋯, D_(8,1), D_(8,2)}⊂ℤ_256. * Computes η^(1,1), η^(1,2), η^(2,1), η^(2,2), ⋯, η^(8,1), η^(8,2) by the method given in Algorithm <ref> and checks whether η^(1,1), η^(1,2), η^(2,1), η^(2,2), ⋯, η^(8,1), η^(8,2)∈ [-754800,754800]^74. Finally, the proxy signer outputs the proxy signature σ={D, {η^(i,j)}_i=1 j=1^8  2}. The designated verifier can verify the proxy signature using Algorithm <ref>. We now discuss the computation complexity of CSI-PS at the 128-bit security level for the aforementioned parameter set. The most computationally intensive operation in CSI-PS is the class group action computation. We first estimated the cost of one class group computation in terms of field multiplications. We then computed the computational complexity of public key generation, proxy share generation, and proxy signature generation in terms of the total number of field multiplications required. We also estimated the time complexity of CSI-PS by measuring the run time and CPU clock cycles for public key generation, proxy share generation, and proxy signature generation. We ran the C script[<https://yx7.cc/code/csidh/csidh-latest.tar.xz>] on a workstation with an Intel Core i5-8700 2.40GHz processor, 8GB of RAM, and Linux Lite v5.2 OS. The results of our experiments are summarized in Table <ref>. § CONCLUSION This paper presents a promising approach to delegating the signing rights of an original signer to proxy signers using isogeny-based cryptography. The isogeny-based signature scheme SeaSign <cit.> serves as the fundamental building block for our scheme. Our proposed scheme achieves uf-cma security under the assumption that the group action inverse problem (GAIP) between two elliptic curves is hard. Additionally, our scheme satisfies the other security properties of a proxy signature.
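To make the communication-cost figures easy to reproduce, the following short Python sketch (our own illustration, not part of the authors' artifact) derives M_1, M_2, and the proxy signature size from the security parameter and the instantiation parameters (γ_0, γ_1, n, I_1); the exact byte counts may differ by a few bytes from the tabulated values depending on rounding and encoding conventions.

```python
import math

def csi_ps_sizes(lam=128, gamma0=16, gamma1=8, n=74, I1=754800):
    """Illustrative size calculation for CSI-PS (sketch, not the authors' code).
    M1, M2 follow M1*gamma0 = lambda and M1*M2*gamma1 = lambda; the proxy signature
    consists of the hash D (M1*M2*ceil(log2(L1+1)) bits) plus M1*M2 response vectors
    in [-I1, I1]^n (M1*M2*ceil(n*log2(2*I1+1)) bits)."""
    M1 = lam // gamma0
    M2 = gamma0 // gamma1
    L1 = 2 ** gamma1 - 1
    hash_bits = M1 * M2 * math.ceil(math.log2(L1 + 1))
    vector_bits = M1 * M2 * math.ceil(n * math.log2(2 * I1 + 1))
    return M1, M2, (hash_bits + vector_bits) / 8  # size in bytes

print(csi_ps_sizes())  # (8, 2, ~3054.0 bytes), close to the ~3 kB reported below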
In our design, the proxy signature is M_1M_2⌈log(L_1+1)⌉ + M_1M_2⌈n log(2I_1+1)⌉ bits in size. Through direct calculation, we conclude that the proxy signature size is 3052 B for a 128-bit security level with a certain parameter choice. Constructing the proxy signature requires 16 class group operations, or 4,518,784 field multiplications. In particular, our proposed proxy signature scheme, CSI-PS, is the first of its kind built from isogeny-based cryptography. § DECLARATIONS §.§.§ Funding Details This work was supported by the “International Mathematical Union (IMU) and the Graduate Assistantships in Developing Countries (GRAID) Program”. §.§.§ Conflict of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this study. §.§.§ Data availability Data sharing is not applicable to this article as no new data were generated or analyzed to support this research.
http://arxiv.org/abs/2407.12292v1
20240717032409
Any Target Can be Offense: Adversarial Example Generation via Generalized Latent Infection
[ "Youheng Sun", "Shengming Yuan", "Xuanhan Wang", "Lianli Gao", "Jingkuan Song" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Adversarial Example Generation via Generalized Latent Infection Y.Sun et al. Center for Future Media, University of Electronic Science and Technology of China Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China youheng.sun@std.uestc.edu.cn, shengming.yuan@outlook.com, wxuanhan@hotmail.com, lianli.gao@uestc.edu.cn, jingkuan.song@gmail.com Any Target Can be Offense: Adversarial Example Generation via Generalized Latent Infection Youheng Sun1⋆0009-0006-3110-1658 Shengming Yuan1Equal contribution.0009-0003-0183-2976Xuanhan Wang2Corresponding author0000-0002-3881-9658 Lianli Gao10000-0002-2522-6394 Jingkuan Song10000-0002-2549-8322 ================================================================================================================================================================================================================ § ABSTRACT Targeted adversarial attack, which aims to mislead a model to recognize any image as a target object by imperceptible perturbations, has become a mainstream tool for vulnerability assessment of deep neural networks (DNNs). Since existing targeted attackers only learn to attack known target classes, they cannot generalize well to unknown classes. To tackle this issue, we propose Generalized Adversarial attacKER (GAKer), which is able to construct adversarial examples to any target class. The core idea behind GAKer is to craft a latently infected representation during adversarial example generation. To this end, the extracted latent representations of the target object are first injected into intermediate features of an input image in an adversarial generator. Then, the generator is optimized to ensure visual consistency with the input image while being close to the target object in the feature space. Since the GAKer is class-agnostic yet model-agnostic, it can be regarded as a general tool that not only reveals the vulnerability of more DNNs but also identifies deficiencies of DNNs in a wider range of classes. Extensive experiments have demonstrated the effectiveness of our proposed method in generating adversarial examples for both known and unknown classes. Notably, compared with other generative methods, our method achieves an approximately 14.13% higher attack success rate for unknown classes and an approximately 4.23% higher success rate for known classes. Our code is available in <https://github.com/VL-Group/GAKer>. § INTRODUCTION Deep neural networks (DNNs) have significantly advanced the field of artificial intelligence, achieving remarkable success in various domains, including image recognition <cit.>, natural language processing <cit.>, and AIGC <cit.>. Despite great success, DNNs have been shown to be significantly vulnerable to adversarial attacks <cit.>, which misleads DNNs to fail by using adversarial examples, i.e., adding human-imperceptible perturbations into clean images. Thus, it is of great importance to understand the mechanism behind DNNs and design effective assessment methods <cit.> to identify deficiencies of DNNs before deploying them in security-sensitive applications. In terms of the way of attacking, adversarial attacks generally can be divided into two categories. The first one is the untargeted attack <cit.>, where the goal of attackers is to fail DNNs. The second one is the targeted attack <cit.>, where attackers not only fail DNNs but also mislead them to recognize an image as the pre-specific target. 
Since the targeted attack with high flexibility poses a severe threat to security-sensitive applications, it has become a mainstream tool for vulnerability assessment of DNNs. Therefore, in this work, we focus on the study of targeted attacks. Among the target attack methods, two technical branches exist for adversarial example generation. The first one is the iteration-based framework, which produces an adversarial example of each clean image in an iterative manner. This framework has shown to be susceptible to overfitting white-box models, and iteration-based strategy leads to a heavy computational overhead <cit.>. In a different line of this, the second branch is the generator-based framework, which constructs an adversarial example by using a trained generative model and shows a great potential of transferability <cit.>. However, the generator used in existing methods <cit.> is trained to adapt adversarial examples to known target classes only. As depicted in <ref> and <ref>, the trained generator is responsible for either only one class or a set of known classes. When it comes to an unknown target class (, the class not seen during training), previous methods are not capable of generating relatively adversarial examples unless retraining the generator, thus limiting the comprehensive assessment of DNNs. To tackle this, one straightforward solution is to train a generator to adapt a vast number of target classes with a large-scale dataset (, ImageNet-21k). However, as demonstrated in <cit.>, the attack success rate of an adversarial example significantly degrades as the number of known classes increases. Hence, one question arises: how to design a generalized yet efficient assessment in which vulnerability of DNNs can be evaluated by any target classes? To answer the above question, we study a more practical paradigm. As shown in <ref>, any target object could be a good offense, and an adversarial example can be constructed from any target regardless of whether it is a known class or not. To achieve such generalization capability, we argue that extracting the major component of an object is the key to adversarial example generation. Motivated by this, we propose Generalized Adversarial attacKER, termed (GAKer). The core idea behind the GAKer is to contaminate the latent representation of a clean image with the major component of the target object. To equip GAKer with the capability of latent infection, it jointly utilizes the latent representation of a clean image and the major component of the target object to generate the adversarial example. Then, the adversarial example generated from GAKer is optimized to remain visually consistent with the clean image, but the corresponding major component is dominated by the target object. Once trained, the GAKer has the ability to replace major components between two images without depending on specific class or targeted DNN, thus improving the generalization capability of DNN assessment. Comprehensive evaluations across diverse DNNs, encompassing standard models, adversarially trained models, and vision-language foundation models, reveal that the proposed GAKer can effectively generate high-quality adversarial examples regardless of target classes. This demonstrates GAKer's generalization ability, making it a valuable tool for the adversarial robustness assessment of DNNs. 
In summary, three contributions are highlighted: * We propose a novel Generalized Adversarial attacKER, which is a general assessment tool since it can generate adversarial examples from any object. Without satisfying the visual appearance of an image, it can mislead any DNNs by latently changing the major components of an image. * To our knowledge, this work, for the first time, explores the problem of generalized target adversarial attack. Our study reveals that changing the major components of an object is the key to the generalized assessment of DNNs. * Extensive experiments conducted on a wide range of DNNs demonstrate the generalizability of our proposed method for vulnerability assessment, especially under the setting of any targeted class. Particularly, our method has increased the targeted attack success rate by approximately 14.13% for unknown classes and by approximately 4.23% for known classes compared with other generator-based approaches. § RELATED WORK §.§ Iterative Methods Since the discovery of adversarial examples, most iterative methods are proposed, which utilize model gradients to iteratively add adversarial perturbations to specified images. These methods are mainly categorized as gradient-based optimization and input transformation. The gradient-based optimization aims to circumvent poor local optima by employing optimization techniques. MI-FGSM  <cit.> and NI-FGSM <cit.> introduce momentum and Nesterov accelerated gradient into the iterative attack process to enhance black-box transferability, respectively. PI-FGSM <cit.> introduces patch-wise perturbations to better cover the discriminative region. VMI-FGSM <cit.> tunes the current gradient with the gradient variance from the neighborhood. RAP <cit.> advocates injecting worst-case perturbations at each step of the optimization procedure rather than minimizing the loss of individual adversarial points. The input transformation methods also increase adversarial transferability by preventing overfitting to the surrogate model. DI-FGSM <cit.> applies various input transformations to the clean images. SIT <cit.> applies a random image transformation onto each image block to generate a diverse set of images for gradient calculation. SU <cit.> introduces a feature similarity loss to encourage universal learned perturbations by maximizing the similarity between the global adversarial perturbation and randomly cropped local regions. §.§ Generative Methods Another branch of targeted attacks utilizes generators to craft adversarial examples. Compared with iterative-based attacks, generator-based attacks have several characteristics  <cit.>: high efficiency with just a single model-forward pass at test time and superior generalizability through learning the target distribution rather than class-boundary information <cit.>. Thus, many generator-based targeted attack methods are proposed, which can divided into single-target and multi-target generator attacks. Notably, this work focuses on scenarios where black-box models are entirely inaccessible, so query-based generator attacks which requiring extensive querying are not within the scope of discussion. Single-target Generative Methods: Early generative targeted attacks employed a single generator to attack a specific target, primarily aiming to enhance transferability across various models. Pourseed <cit.> proposes a generator capable of producing image diagnostic and image-dependent perturbations for targeted attacks. 
Naseer <cit.> introduced a relativistic training objective to mislead networks trained on completely different domains. Furthermore, Naseer <cit.> matches the perturbed image distribution with that of the target class. TTAA <cit.> captures the distribution information of the target class from both label-wise and feature-wise perspectives to generate highly transferable targeted adversarial examples. Multi-target Generative Methods: However, generator-based methods are confined to single-target attacks, which require training a generative model for each target class, resulting in considerable computational costs and inefficiency. Recently, exemplified by the introduction of the Multi-target Adversarial Network (MAN) framework <cit.>, have revolutionized this landscape. MAN represents a paradigm shift by enabling the generation of adversarial examples across multiple target classes through a unified training process. Yang  <cit.> introduce a significant contribution by leveraging a hierarchical generative network. Through this design, they are able to train 20 generators, each trained for 50 target classes, thereby covering all 1000 classes in the ImageNet dataset. Gao  <cit.> proposes a generative targeted attack strategy named Easy Sample Matching Attack (ESMA), which exhibits a higher success rate for targeted attacks through generating perturbations towards High-Sample-Density-Regions of the target class. §.§ Adversarial Defenses A primary class of defense methods processes adversarial images to break the perturbations. For instance, Guo  <cit.> introduces several techniques for input transformation, such as JPEG compression <cit.>, to mitigate adversarial perturbations. R&P <cit.> employs random resizing and padding to reduce adversarial effects. HGD <cit.> develops a high-level representation guided denoiser to diminish the impact of adversarial disturbances. ComDefend <cit.> proposes an end-to-end image compression model to defend against adversarial examples. NRP <cit.> trains a neural representation purifier model that removes adversarial perturbations using automatically derived supervision. Another approach enhances resilience to attacks by incorporating adversarial examples into the training phase. For instance, Tramèr  <cit.> bolster black-box robustness by utilizing adversarial examples generated from unrelated models. Similarly, Xie  <cit.> integrate feature denoising modules trained on adversarial examples to develop robustness in the white-box model. § METHODOLOGY §.§ Problem Formulation Formally, let x_s denote a clean image, and y is the corresponding label. ℱ_ϕ(·) is the classifier with parameters ϕ. The targeted attack aims to mislead the classifier to output a specific target class from an adversarial example x'_s corresponding to the clean image x_s, as formulated as Eq. <ref>. ℱ_ϕ(x'_s) = y^t, s.t. x'_s-x_s_∞≤ϵ where y^t represents a specific target class and it is often constrained as one of known classes, ϵ is the perturbation constraint. In terms of arbitrary-target attack, it releases the constraint of the specific target class and aims to construct adversarial examples from any target regardless of whether it is from known classes or not. Given an arbitrary target image x_t, the adversarial example generation is formulated as Eq. <ref>: x'_s = min(x_s+ϵ,max(𝒢(x_s, x_t), x_s-ϵ)), where 𝒢_i denotes a trained adversarial generator. 
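The clipping rule above can be written in a few lines of code. The sketch below is our own illustration rather than the authors' implementation: it assumes images normalized to [0,1], so the paper's ε = 16 on the 0–255 scale becomes 16/255, and the final clamp to the valid pixel range is an added assumption.

```python
import torch

def project_to_epsilon_ball(x_clean, x_gen, eps=16 / 255):
    """Clamp the generator output into the L-infinity ball around the clean image,
    mirroring x'_s = min(x_s + eps, max(G(x_s, x_t), x_s - eps))."""
    x_adv = torch.max(x_gen, x_clean - eps)
    x_adv = torch.min(x_adv, x_clean + eps)
    return x_adv.clamp(0.0, 1.0)  # keep a valid pixel range (our assumption)
```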
Compared with the conventional targeted attack, the arbitrary-target attack is more challenging since it requires a generator with a strong generalization capability. §.§ Generalized Adversarial Attacker In this section, we introduce the details of the proposed Generalized Adversarial Attacker (GAKer). The overall pipeline is depicted in <ref>. Given a target object, the GAKer intends to contaminate latent representations of the clean image by utilizing major components of the target. It is worth noting that the target object can be either a one-hot class label or an image with the visual appearance of the object. Existing methods <cit.> use the label index or the one-hot label as the condition for targeted adversarial example generation. However, the one-hot label lacks visual characteristics, which leads a trained generator to memorize class features, thus limiting the generalization capability. To incorporate richer target information, we use images of the target object as input for generating adversarial examples. Next, we divide classes into known and unknown classes, 𝒴 = 𝒴_known∪𝒴_unknown. Then we select the known classes as the training set (t ∼𝒴_known) and generate an adversarial example using a relevant image x_t of the target. The objective function is represented as Eq. <ref>: min_θ𝔼_(x_s ∼𝒳_s, t ∼𝒴_known)[ℒ(x_s, x_t)], where ℒ is the loss function, ℱ_ψ(·) is the pretrained feature extractor, and 𝒢_θ is our arbitrary-target generator (see <ref> for the detailed architecture). To generalize to unknown target classes during the inference phase, we use the cosine distance as the loss function instead of cross-entropy: 𝒟_cos(f'_s, f_t) = 1 - (f'_s · f_t)/(‖ f'_s‖_2 ‖ f_t‖_2), where f'_s=ℱ_ψ(x'_s) and f_t=ℱ_ψ(x_t). In addition, an identical learning objective is used to align the feature of the adversarial perturbation δ=x'_s-x_s with the feature of the target object: ℒ(x_s, x_t) = 𝒟_cos(ℱ_ψ(𝒢_θ(x_s, x_t)), ℱ_ψ(x_t)) + α𝒟_cos(ℱ_ψ(𝒢_θ(x_s, x_t) - x_s), ℱ_ψ(x_t)), where f_δ=ℱ_ψ(δ) and α denotes the weighting hyper-parameter (see Appendix C for the effect of α). The entire training process is independent of specific target classes, enabling adaptation to unknown classes, including those from different datasets. §.§ Latent Infection This section describes how to inject target features into the source image in the latent feature space. There are two major modules in GAKer: the Feature Extractor ℱ_ψ and the Generator 𝒢_θ, as shown in <ref>. The Feature Extractor is adapted from a pretrained model by removing the classification head. It is specialized for extracting feature vectors from input images. During training, the weights of this module are frozen. We employ a UNet <cit.> as the basic architecture of the generator. It is designed to generate adversarial examples from clean images together with features of the target object obtained from the Feature Extractor. [Figure: Schematic diagram of feature insertion into each ResBlock of a UNet. The features are first transformed by the Feature Transform Module (FTM), followed by dimension matching through the Dimension Matching Module (DMM) layers before being integrated into each ResBlock of the UNet.] As depicted in <ref>, the latent infection involves a two-step process. First, features of the target object are processed through the Feature Transform Module (FTM), which adopts a Linear-GELU-Linear sequence to enhance their representational capacity. Second, these enhanced features are combined with the features of the clean image.
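A minimal sketch of the training objective above is given below; it is our own illustration rather than the released code. The generator call signature `generator(x_clean, x_target)` and the reuse of the projection helper from the previous sketch are assumptions, and the frozen feature extractor is simply detached from the computation graph.

```python
import torch
import torch.nn.functional as F

def gaker_loss(feat_extractor, generator, x_clean, x_target, alpha=0.5, eps=16 / 255):
    """Sketch of the GAKer objective: cosine feature distance between the adversarial
    example (and its perturbation) and the target image, weighted by alpha."""
    x_adv = project_to_epsilon_ball(x_clean, generator(x_clean, x_target), eps)
    f_target = feat_extractor(x_target).detach()       # frozen feature extractor
    f_adv = feat_extractor(x_adv)                      # feature of adversarial example
    f_delta = feat_extractor(x_adv - x_clean)          # feature of the perturbation
    d_adv = 1.0 - F.cosine_similarity(f_adv, f_target, dim=-1)
    d_delta = 1.0 - F.cosine_similarity(f_delta, f_target, dim=-1)
    return (d_adv + alpha * d_delta).mean()
```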
Particularly, the Dimension Matching Module (DMM), which consists of a Linear-GELU layer, is used to align dimensions for two features. With the designed architecture, the generator is optimized to extract major components from the target object, and learns to replace the major components of the clean image with that of the target object. § EXPERIMENT §.§ Experimental Settings Datasets. We train our models on the ImageNet training set <cit.>. Correspondingly, we evaluate the performance on the ImageNet val set. Networks. We consider several models, including DenseNet-121 (Dense-121) <cit.>, ResNet-50 (Res-50) <cit.> and VGG-19 as surrogate models. We select various black-box models, i.e., ResNet-152 (Res-152) <cit.>, VGG-19 <cit.>, Inception-v3 (Inc-v3) <cit.>, ViT <cit.>, DeiT <cit.> and CLIP <cit.>, for testing the transferability of attacks. Additionally, we evaluate the proposed method on defense models, including Inc-v3_adv, Inc-v3_ens3, Inc-v3_ens4, IncRes-v2_ens <cit.>, and Large Vision-Language Models (LVLM) such as LLaVA <cit.> and Qwen-VL <cit.>. Baselines. For iterative attacks, we compare our method with MI <cit.> and the advanced method SU <cit.>, which is competitive in target settings. For single-target generative attacks, we choose TTP <cit.> as the method for comparison with our approach. Specifically, we train multiple TTP models to accomplish multi-target attacks. For multi-target generative attacks, we choose HGN <cit.> and ESMA <cit.>. Both of these methods, along with ours, only require training a single generator to perform attacks on multiple targets. Implementation details. In all experiments, the perturbation constraint ϵ is set to 16, the number of known classes N is set to 200, the α is set to 0.5 and the number of samples in each known class M is 325. We train the generator with an AdamW optimizer for 20 epochs. For the MI method, we set the decay factor μ to 1. For the SU method, we perform the combinational attack of DTMI <cit.> and SU <cit.>. For both iterative methods, we employ the logit loss and set the number of iterations to 300 steps. For multi-target generative attacks, such as HGN and ESMA, and our method, only one model needs to be trained to achieve attacks on multiple target classes. Detailed training costs and implementation specifics are provided in Appendix A and Appendix B, respectively. §.§ Main Results Results on Unknown Classes. Compared with existing generator-based attacks, the best innovation of our method is the ability to attack unknown classes. We select 200 classes as known classes and the remaining 800 classes as unknown classes from ImageNet. Then we train the generator with the known classes and evaluate the targeted attack success rate on unknown classes. Notably, only HGN and our method can be evaluated on unknown classes. As shown in <ref>, our method significantly outperforms HGN, highlighting the superior transferability of our approach. For instance, with a substitute model of ResNet-50 and a black-box model of VGG-19, our method achieves a success rate of 41.69% on unknown classes, while HGN only achieves 0.05%. This result underscores the limitation of existing methods in generating targeted adversarial examples for unknown classes, while our method demonstrates effective generalization to such classes. Separate average results on all unknown classes can be found in Appendix D. Results on Known Classes. 
For evaluation on known classes, we compare our method with state-of-the-art iterative-based method (SU), single-target generator-based attacks (TTP), and multi-target generator-based attacks (HGN, ESMA). All multi-target generator-based attacks (HGN, ESMA, and our method) are trained on the same 200 classes. Due to the TTP method requiring training a model for each target class, the cost of training 200 models is prohibitively high. Therefore, we randomly select 10 classes from the 200 classes and train 10 TTP models separately for each substitute model (TTP-10). We then test our method on the same 10 classes (GAKer-10). <Ref> shows the targeted attack success rates on known classes for each method. Compared with the iterative-based method SU, our method achieves higher targeted attack success rates on black-box models. For example, if the substitute model is Dense-121, our method performs 10.38% better than SU on average across different models. For generator-based attacks, our GAKer also achieves a similar attack success rate to the ESMA method, outperforming the HGN method. Notably, our method improves performance on several models by an average of 10.5% and 5.47% over HGN and ESMA, respectively. §.§ Results on Other Models To further evaluate the generalization ability of our method, we also test our method on defense models and Large Vision-Language Models (LVLM). Results on Defense Models. In addition to attacking normally trained models, we evaluate the attack performance of various methods on adversarially trained models when using ResNet50 as the white-box model. As shown in <ref>, our method surpasses the HGN method in attacking targets in unknown and known classes. For example, with the defense model as Inc-v3_adv, we outperform HGN by 6.46% in terms of target success rate on the known classes. On the unknown classes, HGN cannot achieve an attack at all, similar to the performance of the clean samples, whereas our GAKer still exhibits an attack success rate of 5.95%. This result indicates that our generator has discovered more common model vulnerabilities, regardless of whether the model has been specifically trained for defense. Results on Large Vision-Language Models. The growing importance of Large Vision-Language Models (LVLM), including LLaVA <cit.> and Qwen-VL <cit.>, has been noted in recent times. Our research examines how well our method can be adapted for use with these LVLMs. Specifically, we test multiple templates on LVLMs to reduce the impact of prompt bias: * Is there any (origin class / target class) in this image? Please begin answer with `Yes,' or `No,' * Does this image contain any (origin class / target class)? Please begin answer with `Yes,' or `No,' * In this image, is there a (origin class / target class) present? Please begin answer with `Yes,' or `No,' We then assessed the effectiveness of our approach by calculating both the untargeted and targeted attack success rates against LVLMs. We define the untargeted attack success rate as the percentage of adversarial examples that mislead the model into failing to recognize the original class. The targeted attack success rate is defined as the percentage of adversarial examples that are misclassified into the target class. As shown in <ref> and <ref>, our method achieves higher targeted attack success rates on LVLM than HGN. 
For example, when the substitute model is Res-50 and the target is unknown classes, our method achieves 52.60% and 56.45% targeted attack success rates on LLaVA and Qwen-VL, respectively, while HGN only achieves 12.50% and 13.85%. This result demonstrates that even when using a “small” substitute model like Res-50, our method can successfully attack “large” models such as Qwen-VL and LLaVA. We also show some cases on GPT-4V <cit.>, which can be found in Appendix E. §.§ Ablation Study This section discusses how different parameter selections in training dataset construction impact the generator's attack capability. Numbers of Known Classes. <Ref> demonstrates the impact of the number N of known classes on the generator's performance. [Figure: Comparison of targeted attack success rates across a range of known classes. Res-50 serves as the substitute model, while the performance of black-box models, including Res-152, VGG-19, and Dense-121, is evaluated for both known (K) and unknown (U) classes.] Specifically, to mitigate the impact of adding new classes as the number of known classes increases, we evaluate the targeted attack success rate on a common set of 10 known classes and 500 common unknown classes. When N is 10 or 50, the attack success rate on unknown classes is low due to limited training data. With N increased to 500, the white-box success rate reaches 49.45% on 500 unknown classes. Despite higher training costs, performance does not significantly improve with more known classes. Therefore, N is set to 200 to balance performance and training cost. Strategies for Choosing Known Classes. Previous work <cit.> on multi-target generators has highlighted the importance of not only determining the number of target classes but also selecting which target classes to attack. When there are significant differences between the selected target classes, the generator can achieve a better targeted attack success rate. We introduce a similarity greedy algorithm to select a set of classes with the largest feature differences. To simplify, we represent each target class using the average of its image features and measure the similarity using cosine similarity. Specifically, the algorithm starts by randomly selecting an initial feature vector as the first class and adds it to the selected group. It then iteratively selects a vector from the remaining pool that has the lowest average cosine similarity to the vectors in the selected group. We add this selected class to the selected group and repeat the process until the specified number of target classes is reached (a minimal sketch of this selection procedure is given below). [Figure: Comparison of targeted attack transfer success rates under different known classes selection strategies.] We conduct multiple experiments to eliminate the randomness introduced by the initial selection. To validate our selection method's effectiveness, we compare it with a random selection strategy. <Ref> illustrates that our method significantly outperforms random selection on known classes. Compared with the random strategy, we achieve a success rate increase of 16.52% on the black-box model Res-152 when the substitute model is Res-50.
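The greedy selection referenced above can be sketched as follows; this is our own illustration, assuming `class_features` is a NumPy array holding one averaged feature vector per candidate class.

```python
import numpy as np

def greedy_select_classes(class_features, num_select, seed=0):
    """Similarity-greedy selection sketch: repeatedly add the class whose mean feature
    has the lowest average cosine similarity to the already selected classes."""
    feats = class_features / np.linalg.norm(class_features, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(feats)))]            # random initial class
    remaining = set(range(len(feats))) - set(selected)
    while len(selected) < num_select:
        sims = feats @ feats[selected].T                  # cosine similarities
        avg_sim = sims.mean(axis=1)                       # average similarity to the group
        best = min(remaining, key=lambda i: avg_sim[i])
        selected.append(best)
        remaining.remove(best)
    return selected
```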
This demonstrates the necessity of selecting known classes and the effectiveness of our selecting method. Numbers of Sample in Each Known Class. We observe that noisy images, such as those with occlusion or blurring, can decrease performance. To validate this, we employ three selection methods to form groups, each comprising 325 images: low quality (largest classification loss), high quality (lowest loss), and random. <Ref>(a) demonstrates that image quality significantly impacts the attack success rate. Consequently, we prioritize training images by their classification loss, favoring those with the lowest losses. Specifically, we experiment with different numbers of images (M ∈{1, 130, 325, 650, 1300}) for each known class, as illustrated in <Ref>(b). When the number of images (M) is less than 325, the performance is lower, likely because the generator could not capture the full breadth of class characteristics. Conversely, when M exceeds 325, the attack success rate slightly decreased, potentially due to the inclusion of more noisy images. Thus, we ultimately select 325 as the optimal number of samples for each known class. This choice yields a maximum attack success rate of 41.69% for the unknown class in the ResNet-50 model. This finding underscores the equal importance of both the quantity and quality of training samples. § CONCLUSION Generator-based targeted attacks are able to mislead DNNs to any target they have been trained on, showing their dangers. For the first time, we find that the attack also extends to untrained unknown classes, and the extent of its potential harm is revealed. We propose the Generalized Adversarial attacker, which injects target feature into adversarial examples to attack unknown classes. Through comprehensive experiments across standard, defense, and large vision-language models, we demonstrate that our method can effectively attack unknown and known classes across models. We hope our work will draw attention to the potential dangers of generator-based targeted attacks and inspire future research in this area. § SOCIETAL IMPACTS & LIMITATION. Societal Impacts. Previous research on generator-based target attacks requires the attacker to know the target class. Our proposed algorithm allows successful targeted attacks without this information, highlighting the risk of relying solely on dataset and model confidentiality for security. Moreover, our method’s success on unknown classes reveals inherent model vulnerabilities, offering new insights for advancing security. Limitation. While our method validates the possibility of attacking unknown classes using generator-based methods, the gap in target attack success rates between known and unknown classes still exists. In the future, we will focus on analyzing the reasons for this difference. § ACKNOWLEDGEMENTS This study is supported by grants from the National Natural Science Foundation of China (Grant No. 62122018, No. 62020106008, No. U22A2097, No. U23A20315), Kuaishou, and SongShan Laboratory YYJC012022019. It is also supported by the Postdoctoral Fellowship Program of CPSF under Grant Number GZB20240114. splncs04 Any Target Can be Offense: Adversarial Example Generation via Generalized Latent Infection (Supplementary Material) § IMPLEMENT DETAILS During training, we consider samples with small classification loss to be excellent target samples with more prominent features. We continue to use this estimation during testing. Test on Known Classes. 
When evaluating the target attack success rate on known classes, since the attacker entirely constructs the training set of known classes, we select the target sample with the smallest classification loss in the train set as the target for that known class. Test on Unknown Classes. When testing on unknown classes, the attacker obviously cannot access the complete target dataset to select the best sample but can still choose some target samples that are relatively clear and unobstructed. To simulate this process, we select 10 images with relatively small classification losses for each target class from the validation set of ImageNet as targets. During testing, the target samples of unknown classes are randomly selected from these 10 images. § EFFICIENCY ANALYSIS In this section, we analyze the efficiency of different methods. To simulate the scenario of unknown classes, we divide the ImageNet-1k dataset into 5 parts, with each part appearing sequentially. Unlike other generator-based methods that require retraining for each new part, our GAKer can attack any target class with just one training process. As shown in <ref>, our method only requires 1% of the training time compared to TTP, highlighting the remarkable efficiency of our GAKer. § PARAMETER SENSITIVITY ANALYSIS To explore the impact of the parameter α, we conduct several experiments. We train the model with α set to 0, 0.25, 0.5, 0.75, and 1, respectively. Then, we validate the effect of different α settings on the attack success rate. In this experiment, we use ResNet-50 (Res-50) <cit.> as the substitute model and test the target attack success rates (TASR) on known and unknown classes on ResNet-50 (Res-50) <cit.>, ResNet-152 (Res-152) <cit.>, VGG-19 <cit.>, DenseNet-121 (Dense-121) <cit.>. The experimental results are shown in <ref>. Finally, we select α as 0.5, which yields the best experimental results. § FURTHER ANALYSIS ON UNKNOWN CLASSES To further analyze the performance of our GAKer on unknown classes, we visualize the targeted attack success rate (TASR) for all 800 unknown classes in <ref>. Meanwhile, we present the histogram of TASR in <ref>. The statistics show that there is a large variation in TASR among different unknown classes. The TASR for most of the unknown classes exceeds 50%, but some classes are hardly attacked, with a TASR lower than 50%. This phenomenon indicates that different classes have varying levels of difficulty. § RESULTS ON GPT-4V We conduct a series of tests on the GPT-4V <cit.> to showcase the effectiveness of our method when dealing with Large Vision-Language Models. All experiments on GPT-4V are conducted on 13 March 2024. As shown in <ref>, when we ask GPT-4V whether the adversarial example contains the original class or the target class, GPT-4V will deny the former and affirm the latter. Furthermore, when we ask GPT-4V to describe the content of the image by one sentence, the description corresponds with the target rather than the original image. When we ask GPT-4V to "Please generate a similar image", as shown in <ref>, we can see that the generated sample is also similar to the target and utterly unrelated to the original class. This is a strong indication that when dealing with Large Vision-Language Models, our method produces adversarial samples that effectively fool the model. § VISUALIZATION OF ADVERSARIAL EXAMPLES In this section, we show some adversarial examples in <ref>.
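The lowest-classification-loss selection of target exemplars described in the implementation details above can be sketched as follows; this is our own illustration, and the model, batching, and k=10 are assumptions consistent with the text.

```python
import torch

def pick_easy_targets(model, images, labels, k=10):
    """Pick the k images of a given class with the smallest classification loss,
    to be used as target exemplars (sketch of the selection heuristic above)."""
    model.eval()
    with torch.no_grad():
        losses = torch.nn.functional.cross_entropy(model(images), labels, reduction="none")
    return images[torch.argsort(losses)[:k]]
```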
http://arxiv.org/abs/2407.12251v1
20240717013627
Performance Analysis and Blocklength Minimization of Uplink RSMA for Short Packet Transmissions in URLLC
[ "Yixin Zhang", "Wenchi Cheng", "Jingqing Wang", "Wei Zhang" ]
cs.IT
[ "cs.IT", "math.IT" ]
empty Performance Analysis and Blocklength Minimization of Uplink RSMA for Short Packet Transmissions in URLLC Yixin Zhang^†, Wenchi Cheng^†, Jingqing Wang^†, and Wei Zhang^  ^†State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China ^School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, Australia E-mail: {yixinzhang@stu.xidian.edu.cn, wccheng@xidian.edu.cn, jqwangxd@xidian.edu.cn, w.zhang@unsw.edu.au} ================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT Rate splitting multiple access (RSMA) is one of the promising techniques for ultra-reliable and low-latency communications (URLLC) with stringent requirements on delay and reliability of multiple access. To fully explore the delay performance enhancement brought by uplink RSMA to URLLC, in this paper, we evaluate the performance of two-user uplink RSMA and propose the corresponding blocklength minimization problem. We analyze the impact of finite blocklength (FBL) code on the achievable rate region and the effective throughput of uplink RSMA. On this basis, we propose the problem of minimizing the blocklength for uplink RSMA with power allocation under constrained reliability and effective throughput. Then, we present an alternating optimization method to solve this non-convex problem. Simulation results show that different from the infinite blocklength (IBL) regime, the achievable rate region of the uplink RSMA is not always larger than that of uplink non-orthogonal multiple access (NOMA) in the FBL regime. But with the help of our proposed blocklength minimization scheme with power allocation, uplink RSMA can achieve the same achievable rate with a smaller blocklength compared to uplink NOMA, frequency division multiple access (FDMA), and time division multiple access (TDMA) in the FBL regime, showing the potential of uplink RSMA to achieve low delay without time sharing for URLLC. Rate splitting multiple access (RSMA), ultra-reliable and low-latency communications (URLLC), finite blocklength (FBL), blocklength minimization, power allocation. § INTRODUCTION A wide range of real-time applications and services, such as autonomous vehicles, Industrial Internet of Things (IIoT), and augmented reality/virtual reality (AR/VR), are emerging at a fast speed in the upcoming sixth generation (6G) wireless networks <cit.>. Ultra-reliable and low-latency communications (URLLC), as the core service in the fifth generation (5G) communication networks, aims to provide end-to-end (E2E) delay of less than 1 ms for 32-bit packet transmission while ensuring packet error probability of less than 10^-5 <cit.>. To meet the ultra-high quality of service (QoS) demands for various delay-sensitive services, more stringent requirement on delay is put forward as sub-millisecond level in 6G networks <cit.>. One of the key methods to meet the delay requirements of real-time applications is to use finite blocklength (FBL) code for short packet transmissions <cit.>. The traditional Shannon capacity, which is used in the infinite blocklength (IBL) for high-capacity-demanded services, is no longer applicable to short packet transmissions in URLLC. 
The authors derived a closed-form expression of the achievable rate and the decoding error probability in the FBL regime <cit.>. In addition, the design of future 6G networks also requires support for large-scale access to ensure stringent delay and reliability requirements from a large number of devices <cit.>. In the existing network architecture, delay timeout under multiple access (MA) conditions remains a difficult problem, where time-saving and reliable solutions are still very important for future networks. Recently, rate splitting multiple access (RSMA) has been proposed as a promising technology to enhance spectral efficiency (SE), energy efficiency (EE), coverage, QoS, user fairness, and reliability while entailing lower delay, feedback overhead, and complexity <cit.>. RSMA relies on rate splitting (RS) and superposition coding (SC) at the transmitter, as well as successive interference cancellation (SIC) at the receiver. RSMA has been widely studied for downlink systems, showing the SE, EE, and delay enhancement of downlink RSMA <cit.>. As for the uplink RSMA, it was first proposed in <cit.> for the single-input single-output (SISO) multiple access channel (MAC) to achieve every point of the Gaussian MAC capacity region without the need for time sharing and joint encoding-decoding among users. In this way, uplink RSMA can avoid high complexity and overhead, resulting in a relatively high rate and low delay. The throughput improvement of uplink RSMA has been studied in <cit.>. However, the delay performance with FBL code of uplink RSMA has not been studied, which means how to take advantage of RSMA in the URLLC system still needs to be addressed. To fully explore the delay performance improvement brought by RSMA to URLLC, the FBL and RSMA combined analysis and delay minimization with blocklength optimization are highly needed. In order to solve this problem, in this paper we analyze the impact of FBL code on uplink RSMA to obtain the achievable rate region and the effective throughput of uplink RSMA in the FBL regime. Based on the above analysis, we propose the blocklength minimization problem with power allocation for uplink RSMA under constrained reliability and effective throughput. Then, we present an alternating optimization algorithm to solve this non-convex problem. Simulation results show that our proposed uplink RSMA-based blocklength minimization problem with power allocation can use a lower blocklength with a lower delay to achieve the same achievable rate compared to uplink non-orthogonal multiple access (NOMA), frequency division multiple access (FDMA), and time division multiple access (TDMA). The rest of this paper is organized as follows. Section <ref> introduces the two-user uplink RSMA system model. Section <ref> analyzes the performance of uplink RSMA in the FBL regime. Section <ref> presents the uplink RSMA-based blocklength minimization problem with power allocation and the corresponding algorithm. Section <ref> provides the numerical results. Finally, we conclude this paper in Section <ref>. § SYSTEM MODEL 0.27in As shown in Fig. <ref>, we consider a two-user uplink RSMA system consisting of one base station (BS) and two users U1 and U2. In this paper, 1-layer RSMA is adopted to serve U1 and U2. The message W_1 of U1 is split into two sub-messages W_11 and W_12, which can be interpreted as creating 2 virtual users at U1 <cit.>. 
The messages W_11 and W_12 are independently encoded into streams s_11 and s_12, which are then respectively allocated with certain powers P_11 and P_12. Thus, the transmit signal at U1 is given by x_1 = √(P_11)s_11 + √(P_12)s_12. At U2, the message W_2 is directly encoded into s_2. By allocating a certain power P_2, the transmit signal at U2 is given by x_2 = √(P_2)s_2. Thus, the signal received at the BS is given by y = h_1 x_1+ h_2 x_2+ z, where h_i (i=1,2) denotes the channel coefficient of Ui and z denotes the additive white Gaussian noise (AWGN) at the BS with zero-mean and variance σ_n^2. We assume that the decoding order is s_11→ s_2→ s_12. The BS first regards s_12 and s_2 as interference to decode s_11. Thus, the signal to interference noise ratio (SINR) of the first decoded stream s_11, denoted by γ_11, can be expressed as γ_11 = P_11 G_1/P_12 G_1 + P_2 G_2 + σ_n^2, where G_i = |h_i|^2 (i=1,2) denotes the channel gain of Ui. Assuming that s_11 is successfully decoded, the BS removes s_11 and decodes s_2 while treating s_12 as noise. Through SIC, the SINR of the second decoded stream s_2, denoted by γ_22, can be expressed as γ_22 = P_2 G_2/P_12 G_1 + σ_n^2. Next, the BS removes s_2 and decodes s_12 when s_2 is successfully decoded. The SINR of the third decoded stream s_12, denoted by γ_12, can be expressed as γ_12 = P_12 G_1/σ_n^2. § PERFORMANCE ANALYSIS OF UPLINK RSMA IN THE FBL REGIME §.§ Blocklength Structure And Achievable Rate Region In the FBL regime, the blocklength is denoted by n=TB, where T represents the time span (TTI) and B represents the frequency resource occupied by the current block. As illustrated in Fig. <ref>, orthogonal multiple access (OMA), such as FDMA and TDMA, transmits signals in different frequency and time domains, i.e., different signals occupy different blocklengths <cit.>. As for NOMA and RSMA, they transmit signals in different power domains. Thus, NOMA and RSMA can share a common blocklength, which means the total required blocklength of RSMA and NOMA is smaller and leads to a lower delay when the bandwidth is fixed. In the IBL regime, RSMA can reach every point of the Gaussian MAC capacity region with the error probability ε→ 0. Thus, the achievable rate region of uplink RSMA in the IBL regime is given by R_1^ IBL = R_11^ IBL+R_12^ IBL≤ C(γ_1), R_2^ IBL≤ C(γ_2), and R_1^ IBL + R_2^ IBL≤ C(γ_ sum), where C (γ_i) = log_2(1 + γ_i) (i = 1,2, sum) denotes the Shannon capacity, γ_1 =P_1 G_1/P_2 G_2+σ_n^2, γ_2 =P_2 G_2/P_1 G_1+σ_n^2, γ_ sum = P_1 G_1 + P_2 G_2/σ_n^2, and P_1 = P_11+P_12. However, in the case of FBL regime, the error probability no longer approaches to 0 and the blocklength has an impact on the achievable rate. To investigate the suitable RSMA scheme for short packet transmissions, we need to further analyze the impact of blocklength on the achievable rate and capacity region of uplink RSMA. The achievable rate of s_i in the FBL regime, denoted by R_i(n, γ_i), can be approximated as <cit.> R_i(n, γ_i) ≈log_2(1 + γ_i) - √(V_i/n) Q^-1(ε_i)log _2e, i={11, 12, 22}, where V_i=1-(1+γ_i)^-2 and ε_i denote the channel dispersion and the predefined error probability of stream s_i (i =11,12,22), and Q^-1(·) denotes the inverse of Q-function. Thus, the achievable rate region of U1 and U2 in the FBL regime can be expressed as follows: R_1^ FBL≤ C(γ_1) -D_1, R_2^ FBL≤ C(γ_2) -D_2, R_1^ FBL + R_2^ FBL ≤ R_11(n, γ_11) + R_12(n, γ_12) + R_22(n, γ_22) = C(γ_ sum) - D_11 - D_12 - D_22, where D_i= √(1-(1+γ_i)^-2/n) Q^-1(ε_i)log_2e (i = 1,2,11,12,22). 
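For readers who want to reproduce these expressions numerically, the following Python sketch (our own illustration, not the authors' code) evaluates the normal-approximation rate R_i(n, γ_i) and the three RSMA stream SINRs; Q^-1(·) is obtained from the standard normal inverse survival function, and the noise variance defaults to the normalized value used later in the simulations.

```python
import numpy as np
from scipy.stats import norm

def fbl_rate(gamma, n, eps):
    """Normal-approximation achievable rate (bits/channel use) in the FBL regime:
    log2(1+gamma) - sqrt(V/n) * Qinv(eps) * log2(e), with V = 1 - (1+gamma)^(-2)."""
    V = 1.0 - (1.0 + gamma) ** (-2)
    return np.log2(1.0 + gamma) - np.sqrt(V / n) * norm.isf(eps) * np.log2(np.e)

def rsma_sinrs(P11, P12, P2, G1, G2, sigma2=1.0):
    """SINRs of the three decoded streams s_11 -> s_2 -> s_12 in two-user uplink RSMA."""
    g11 = P11 * G1 / (P12 * G1 + P2 * G2 + sigma2)
    g22 = P2 * G2 / (P12 * G1 + sigma2)
    g12 = P12 * G1 / sigma2
    return g11, g22, g12
```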
Proof: The proof is provided in Appendix. §.§ Error Probability And Effective Throughput According to the decoding order, the error probability of W_1, denoted by ε_1, can be expressed as ε_1 = ε_11 + ( 1- ε_11) ε_22 + ( 1- ε_11) ( 1- ε_22) ε_12. Since the reliability requirement in URLLC is relatively small (e.g., 10^-5∼ 10^-9), the product of two error probabilities can be omitted. Thus, the error probability of W_1 can be approximated as ε_1 ≈ε_11 + ε_12 + ε_22. Similarly, the error probability of W_2, denoted by ε_2, can be expressed as ε_2 = ε_11 + ( 1- ε_11) ε_22≈ε_11 + ε_22. Based on the above analysis, the effective throughput of U1 and U2, denoted by T_1 and T_2, can be given as follows: T_1 = ( 1 - ε_1) n [R_11(n, γ_11)+R_12(n, γ_12)], T_2 = ( 1 - ε_2) n R_2(n, γ_22). § UPLINK RSMA-BASED BLOCKLENGTH MINIMIZATION PROBLEM To satisfy stringent delay requirements in URLLC, we propose the uplink RSMA-based blocklength minimization problem with power allocation scheme for short packet transmissions under reliability and effective throughput demands. §.§ Problem Formulation In this paper, we aim to minimize the blocklength while meeting reliability and effective throughput requirements. Thus, the blocklength minimization problem, denoted by P1, can be expressed as P1: min_ P n s.t. P_11 + P_12≤ P_ t, P_2 ≤ P_ t, T_i ≥ T^ th_i, i = {1,2}, N_min≤ n ≤ N_max, where P = [ P_11, P_12, P_2], P_ t denotes the maximum transmit power, T^ th_i denotes the effective throughput threshold of Ui, N_min and N_max denote the minimum and maximum blocklength. Constraint (<ref>) guarantees the effective throughput demands, while the error probability of each stream is predefined to ensure the reliability requirements. The blocklength minimization problem P1 is non-convex due to constraint (<ref>) with coupled optimization variables. §.§ Problem Transformation To deal with the non-convex problem P1, we introduce slack variables δ = [δ_11, δ_12, δ_22] and τ = [τ_11, τ_12], where δ and τ are the lower bounds of SINR and effective throughput, respectively. With the introduced slack variables, P1 can be written as follows: P2: min_ P n s.t. P_11 + P_12≤ P_ t, P_2≤ P_ t, (1-ε_1) n [log_2(1+δ_1i) - E_1ν_1i] ≥τ_1i, i={1,2} (1-ε_2) n [log_2(1+δ_22)-E_2 ν_2] ≥ T^ th_2, P_11 G_1/P_12 G_1 + P_2 G_2 + σ_n^2≥δ_11, P_2 G_2/P_12 G_1 + σ_n^2≥δ_22, P_12 G_1/σ_n^2≥δ_12, τ_11+τ_12≥ T_ th, where E_1 = Q^-1(ε_1)/√(n)log_2e, E_2 = Q^-1(ε_2)/√(n)log_2e, and ν_1i = √(1-(1+δ_1i)^2) (i=1,2 ). Due to the non-convexity of constraints (<ref>)-(<ref>), P2 is still non-convex. Thus, we use the first order Taylor series to approximate the non-convex part in the constraints. Constraints (<ref>) and (<ref>) can be approximated at the point δ^(t) at the t-th iteration as follows: (1-ε_1) n [log_2(1+δ_1i)-E_1{[1-(1+δ_1i^(t))^-2]^1/2 + (δ_1i-δ_1i^(t)) [1-(1+δ_1i^(t))^-2]^-1/2 ×(1+δ_1i^(t))^-3}] ≥τ_1i, i={1,2}, and (1-ε_2) n [log_2(1+δ_22)-E_2{[1-(1+δ_22^(t))^-2]^1/2 + (δ_22-δ_22^(t)) [1-(1+δ_22^(t))^-2]^-1/2(1+δ_22^(t))^-3}] ≥ T^ th_2. Constraints (<ref>) and (<ref>) can be approximated at the point δ^(t) and P^(t) at the t-th iteration as follows: P_12 G_1 + P_2 G_2 + σ_n^2 - P_11 G_1/δ_11^(t) + ( δ_11 - δ_11^(t)) P_11^(t) G_1/(δ_11^(t))^2≤ 0, and P_12 G_1 + σ_n^2 -P_2 G_2/δ_22^(t) + ( δ_22 - δ_22^(t)) P_2^(t) G_2/(δ_22^(t))^2≤ 0. Based on the above first order Taylor series approximations, the non-convex problem P2 can be transformed into a convex problem as follows: P3: min_ P, δ, τ n s.t. 
(<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>). All the constraints are transformed into convex, thus P3 can be solved using the iterative alternating optimization (AO) method. At the t-th iteration, based on the optimal solution obtained from the (t-1)-th iteration ( P^(t-1),δ^(t-1), τ ^(t-1)), solving the convex problem P3 to get the optimal solution of the t-th iteration. Through alternating iteration, the corresponding minimum blocklength can be obtained until the blocklength converges. The detailed iterative algorithm flow is outlined in Algorithm 1. §.§ Comparison to NOMA In order to perform a comparative analysis with uplink RSMA, here we provide the performance analysis for two-user uplink NOMA, FDMA, and TDMA networks. In uplink NOMA, the messages W_1 and W_2 of U1 and U2 are encoded into streams s_1 and s_2. The BS first decodes s_1 of the strongest user U1 while treating s_2 as interference. After decoding s_1, the BS removes it and decodes s_2. Thus, the SINR of s_1 is γ_11^ N = P_1 G_1/P_2 G_2 + σ_n^2 and the SINR of s_2 is γ_22^ N = P_2 G_2/σ_n^2. Based on the SINRs and (<ref>), we can get the achievable rate R_1^ N(n, γ_11^ N) and R_2^ N(n, γ_22^ N) of s_1 and s_2 in NOMA. The corresponding error probability of each stream in NOMA is set as ε_i^ N (i=11,22). According to NOMA decoding order, the error probability of W_1 is ε_1^ N = ε_11^ N + ( 1- ε_11^ N) ε_22^ N≈ε_11^ N + ε_22^ N, and the error probability of W_2 is ε_2^ N = ε_22^ N. Therefore, the effective throughput of U1 in uplink NOMA is T_1^ N = ( 1 - ε_1^ N) n R_1^ N(n, γ_11^ N) and the effective throughput of U2 in uplink NOMA is T_2^ N = ( 1 - ε_2^ N) n R_2^ N(n, γ_22^ N). Based on the above analysis, the blocklength minimization problem for uplink NOMA can be expressed as follows: P4: min_ P^ N n s.t. P_1 ≤ P_ t,(<ref>),(<ref>),(<ref>), where P^ N=[ P_1, P_2]. §.§ Comparison to OMA We assume the bandwidth fraction is α^ F in uplink FDMA and the time fraction is α^ T in uplink TDMA. Thus, the blocklengths of U1 and U2 are n_1^ F =α^ F n and n_2^ F=(1-α^ F)n in FDMA, while n_1^ T =α^ T n and n_2^ T=(1-α^ T)n in TDMA, respectively. As a result, the SINRs of FDMA are given by γ_1^ F = P_1 G_1/α^ Fσ_n^2 and γ_2^ F = P_2 G_2/(1-α^ F) σ_n^2, while the SINRs of TDMA are given by γ_1^ T = P_1 G_1/σ_n^2 and γ_2^ T = P_2 G_2/σ_n^2, respectively. The corresponding error probabilities are given by ε_i^j (i={1,2},j={F,T}). Then, the achievable rate of U1 is R_1^j(n_1^j, γ_1^j) ≈α^j log_2(1 + γ_1^j) - √(V_1^j/n_1^j) Q^-1(ε_1^j)log_2e when j= F for FDMA and j= T for TDMA. Similarly, the achievable rate of U2 is R_2^j(n_2^j, γ_2^j) ≈(1-α^j) log_2(1 + γ_2^j) - √(V_2^j/n_2^j) Q^-1(ε_2^j)log_2e when j= F for FDMA and j= T for TDMA. Thus, the effective throughput is T_i^j = ( 1 - ε_i^j) n_i^j R_i^j(n_i^j, γ_i^j) (i={1,2},j={F,T}). Therefore, the blocklength minimization problem for OMA (j= F for FDMA and j= T for TDMA) can be expressed as follows: P5: min_ P^j n_1^j + n_2^j s.t. P_1 ≤ P_ t,(<ref>),(<ref>),(<ref>), where P^j=[ P_1, P_2] (j={F,T}). § NUMERICAL RESULTS In this section, we evaluate the achievable rate region performance and the proposed blocklength minimization of uplink RSMA. We set the channel gain G_1 = 1 for U1 and G_2 = 0.7 for U2 with G_1>G_2. Without loss of generality, we assume the noise variance σ^2=1. For NOMA, we set NOMA-12 with the decoding order s_1 → s_2 and NOMA-21 with the decoding order s_2 → s_1. 
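Before turning to the figures, a fixed-power sanity check of the blocklength constraints can be written as the short sketch below (our own illustration, reusing the fbl_rate and rsma_sinrs helpers sketched in Section III). It scans for the smallest n meeting both effective-throughput thresholds for a given power split; unlike Algorithm 1 it does not optimize the powers, and it uses the error model ε_1 ≈ ε_11 + ε_12 + ε_22 and ε_2 ≈ ε_11 + ε_22 with a common per-stream error probability ε.

```python
def min_blocklength(P11, P12, P2, G1, G2, T_th1, T_th2,
                    eps=1e-6, n_min=100, n_max=3000, sigma2=1.0):
    """Fixed-power sketch: smallest blocklength n meeting both effective-throughput
    thresholds of uplink RSMA, with per-stream error probability eps."""
    g11, g22, g12 = rsma_sinrs(P11, P12, P2, G1, G2, sigma2)
    eps1, eps2 = 3 * eps, 2 * eps                       # error model of W_1 and W_2
    for n in range(n_min, n_max + 1):
        T1 = (1 - eps1) * n * (fbl_rate(g11, n, eps) + fbl_rate(g12, n, eps))
        T2 = (1 - eps2) * n * fbl_rate(g22, n, eps)
        if T1 >= T_th1 and T2 >= T_th2:
            return n
    return None  # infeasible within [n_min, n_max] for this power split
```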
We set the bandwidth fraction and time fraction as α^ F = α^ T = P_1/P_1+P_2 in FDMA and TDMA. To satisfy the reliability requirements in URLLC, the predefined error probability of each stream is set to ε = 10^-6 to ensure the overall error probability of uplink RSMA is lower than 10^-5. In addition, we set the maximum blocklength N_max=3000 and the minimum blocklength N_min = 100 <cit.>. Figure <ref> depicts the achievable rate region comparison of different MA strategies with different blocklengths as n set to 500, 1000, 2000, and infinite. The achievable rates of U1 and U2 both increase as blocklength increases, and therefore the achievable rate region also expands with the increase of blocklength. In the traditional IBL regime, uplink RSMA can achieve the Gaussian MAC capacity region, while NOMA without time sharing can achieve maximum rate only at one of the users. This is because by changing the transmit power allocation between two data streams s_11 and s_12, RSMA can bridge NOMA-12 (allocate all the transmit power to s_11) and NOMA-21 (allocate all the transmit power to s_12), which leads to a larger rate region without time sharing. FDMA can only reach the capacity region at one point with α^ F = P_1/P_1+P_2, while TDMA cannot reach the capacity region without variable transmit power. However, in the FBL regime, uplink RSMA cannot achieve the Gaussian MAC capacity region and the achievable rate region of RSMA is not always larger than that of NOMA. On the one hand, the error probability no longer approaches 0 with the FBL code, resulting in a decrease in the achievable rate. On the other hand, the signal s_1 is divided into two streams in uplink RSMA, which brings more channel dispersion terms than NOMA, resulting in a decrease in the total achievable rate. According to D = √(V/n) Q^-1(ε)log_2e, it can be observed that as the blocklength increases, the impact of channel dispersion gradually decreases. As shown in Fig. <ref>, the rate region of RSMA can gradually include NOMA with blocklength increasing. Figure 4 shows the blocklength comparison of different MA strategies versus the maximum transmit power with different effective throughput requirements. The blocklength gradually decreases as the maximum transmit power increases. As shown in Fig. <ref>, although RSMA and NOMA can use the common blocklength without dividing it to two users, RSMA can achieve a smaller blocklength. This is because NOMA decodes s_1 first and removes s_1 when decoding s_2, which can guarantee a high rate of s_2 but result in a relatively low rate of s_1. When T^ th_1 is large, U1 needs a larger blocklength to reach T^ th_1 than that of RSMA. In FDMA and TDMA, U1 and U2 cannot share the common blocklength. As a result, the blocklength is divided into U1 and U2 in the frequency domain and time domain, resulting in a relatively large blocklength. It can be seen from Fig. <ref> that the blocklengths of RSMA and NOMA are the same, which means NOMA can achieve the same rate as RSMA with the same blocklength. This is because when the effective throughput requirement is set as T^ th_1<T^ th_2, the guaranteed rate of s_2 can satisfy a larger requirement T^ th_ 2. However, RSMA can adapt to different throughput requirements and achieve the smallest blocklength, which means it is more suitable and flexible than NOMA for heterogeneous networks with different QoS requirements. Figure 5 plots the blocklength comparison of different MA strategies versus the error probability with different maximum transmit powers. 
The blocklength decreases as the error probability increases for the four MA strategies, while the blocklength of RSMA is always lower than that of NOMA, FDMA, and TDMA. In addition, for vertical businesses with different reliability requirements in URLLC, as long as their reliability requirements are met, the minimum blocklength can be selected to achieve the minimum delay. Figure 6 shows the sum of the achievable rates versus the blocklength. RSMA can achieve the highest rate in both IBL and FBL regimes. In the FBL regime, the sum of achievable rates increases with the blocklength increasing. Therefore, the minimum blocklength can be selected as long as the rate requirements are achieved to satisfy low-delay requirements in URLLC. In addition, for a fixed value of blocklength, the achievable rate of RSMA without power allocation (PA) is lower than those of RSMA, NOMA-12, and NOMA-21 with power allocation. This means that through power allocation can RSMA achieve the function of bridging NOMA-12 and NOMA-21, resulting in a higher rate with the same blocklength. § CONCLUSION In this paper, we solved the problem of minimizing the blocklength with the power allocation for uplink RSMA in URLLC. In particular, we analyze the performance of uplink RSMA in the FBL regime, in terms of the achievable rate region and the effective throughput. On this basis, we proposed the uplink RSMA-based blocklength minimization problem under the reliability and effective throughput constraints to satisfy the low-delay and reliability requirements in URLLC. Furthermore, we developed an alternating optimization algorithm to solve this non-convex problem to obtain the optimal power allocation and the minimum blocklength. Numerical results demonstrated that uplink RSMA cannot achieve the Gaussian MAC capacity region in the FBL regime. However, with the help of our proposed blocklength minimization scheme, uplink RSMA can significantly reduce the blocklength to achieve a lower delay compared to uplink NOMA, FDMA, and TDMA, showing the potential of uplink RSMA for URLLC. The achievable rate of U1 can be expressed as follows: R_11(n, γ_11) + R_12(n, γ_12) ≈log_2(1 + γ_11) - D_11 + log_2(1 + γ_12) - D_12 = log_2 (1 + P_11 G_1/P_12 G_1 + P_2 G_2 + σ_n^2) - D_11 + log_2 (1 + P_12 G_1/σ_n^2) - D_12 = log_2 [(P_11 G_1 + P_12 G_1 + P_2 G_2 + σ_n^2/P_12 G_1 + P_2 G_2 + σ_n^2) ·(P_12 G_1 + σ_n^2/σ_n^2) ] - D_11 - D_12 = log_2 [(P_1 G_1 + P_2 G_2 + σ_n^2/σ_n^2)/(P_12 G_1 + P_2 G_2 + σ_n^2/P_12 G_1 + σ_n^2)] - D_11 - D_12 = log_2 (1+P_1 G_1 + P_2 G_2/σ_n^2) - log_2 (1+P_2 G_2/P_12 G_1 + σ_n^2) - D_11 - D_12 = C(γ_ sum) - R_2(n, γ_22) - D_11 - D_12 - D_22. Thus, we have R_11(n, γ_11) + R_12(n, γ_12) + R_2(n, γ_22) = C(γ_ sum) - D_11 - D_12 - D_22. § ACKNOWLEDGMENT This work was supported in part by the Key Area R&D Program of Guangdong Province under Grant 2020B0101110003, in part by the National Key R&D Program of China under Grant 2021YFC3002102, and in part by the Key R&D Plan of Shaanxi Province under Grant 2022ZDLGY05-09. IEEEtran
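As a quick numerical check of the rate decomposition derived in the appendix above (the Shannon parts of R_11, R_12 and R_2 summing to the multiple-access sum capacity C(γ_sum)), the following short snippet verifies the identity for an arbitrary, illustrative choice of powers and gains:

import numpy as np

P11, P12, P2, G1, G2, s2 = 6.0, 4.0, 10.0, 1.0, 0.7, 1.0
P1 = P11 + P12

g11 = P11 * G1 / (P12 * G1 + P2 * G2 + s2)   # SINR of s_11
g22 = P2 * G2 / (P12 * G1 + s2)              # SINR of s_22
g12 = P12 * G1 / s2                          # SINR of s_12
g_sum = (P1 * G1 + P2 * G2) / s2             # sum SINR

lhs = np.log2(1 + g11) + np.log2(1 + g12) + np.log2(1 + g22)
rhs = np.log2(1 + g_sum)
print(np.isclose(lhs, rhs))                  # True: the dispersion-free parts sum to C(gamma_sum)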
http://arxiv.org/abs/2407.12206v1
20240716224349
A Language Modeling Approach to Diacritic-Free Hebrew TTS
[ "Amit Roth", "Arnon Turetzky", "Yossi Adi" ]
cs.CL
[ "cs.CL", "cs.SD", "eess.AS" ]
§ ABSTRACT We tackle the task of text-to-speech (TTS) in Hebrew. Traditional Hebrew text contains diacritics, which dictate how words should be pronounced; modern Hebrew, however, rarely uses them. The lack of diacritics in modern Hebrew means readers are expected to infer the correct pronunciation, and hence the intended phonemes, from context. This imposes a fundamental challenge on TTS systems: accurately mapping text to speech. In this work, we propose to adopt a diacritic-free, language-modeling approach for the task of Hebrew TTS. The model operates on discrete speech representations and is conditioned on a word-piece tokenizer. We optimize the proposed method using in-the-wild, weakly supervised data and compare it to several diacritic-based TTS systems. Results suggest the proposed method is superior to the evaluated baselines considering both content preservation and naturalness of the generated speech. Samples can be found under the following link: <pages.cs.huji.ac.il/adiyoss-lab/HebTTS/> § INTRODUCTION Hebrew, a low-resource language spoken by 9 million people worldwide <cit.>, presents unique challenges that constrain research and product development in speech technology. Specifically, Hebrew is a morphologically rich language, commonly using prefixes and suffixes to modify words' meanings and to add prepositions. On top of that, Hebrew uses diacritics ('Niqqud') to create a one-to-one mapping between text and phonemes. 'Niqqud' is a system of diacritical signs used to represent vowels or to distinguish between alternative pronunciations of letters of the Hebrew alphabet. In practice, modern Hebrew text rarely contains diacritics; diacriticized text is mostly found in specialized materials such as dictionaries, poetry, or children's books. Hence, readers are expected to infer the correct pronunciation, and which phonemes to use, based on familiarity with the language itself. This makes it challenging for text-to-speech (TTS) systems to accurately learn the connection between text and speech. For example, the words mat*AnAh and mat:nEh contain the same characters but have completely different meanings and pronunciations: the first means a `gift', while the second means `conditioning'. As mentioned before, in modern Hebrew writing one will probably not encounter diacritics, and both of the above words will appear as mtnh. As a result, the reader must infer the right pronunciation from context alone. Moreover, when considering spoken language modeling systems in automated pipelines, current Automatic Speech Recognition (ASR) systems, such as Whisper <cit.> and Massively Multilingual Speech (MMS) <cit.>, do not output diacritics in their transcripts; hence TTS systems must either predict them or use a diacritic-free synthesis system. Another major issue that holds back progress in developing AI-based Hebrew TTS systems is the lack of datasets. As Hebrew is considered a low-resource language, public spoken benchmarks hardly exist. Previous efforts in constructing datasets in Hebrew were often relatively small <cit.>. The authors in <cit.> established the Corpus of Spoken Israeli Hebrew (CoSIH) with the goal of compiling a large database of recordings of spoken Israeli Hebrew in order to facilitate and enhance research in the field. Next, the authors in <cit.> released The Map Task Corpus (MaTaCOp) of Hebrew dialogues.
The authors in  <cit.> collected naturally occurring speech and interaction in Modern Hebrew via telephone conversations during the years 2020–2021 and released the HUJI Corpus of Spoken Hebrew (HUJICorpus). More recently, the authors in  <cit.> released SASPEECH, a high-quality single-speaker Hebrew dataset to enhance Hebrew speech synthesis research. Although all of these prior work are important and valuable, the provided benchmarks are relatively small. CoSIH contains ∼12.3 hours of speech, the MaTaCOp corpus contains ∼5.3 hours, the HUJI Corpus has ∼3.8, and SASPEECH which is the largest one contains ∼30 hours of speech. For comparison a modern, contextualized TTS system in English is trained over ∼60k hours <cit.>. Recently, the authors of <cit.> and the authors of  <cit.> released two datasets denoted as ivrit.ai and HebDB respectively. The authors released weakly-supervised speech from local podcasts and provided the first large-scale dataset in Hebrew, which we leveraged to construct the model. Previous attempts were made to construct a TTS system in Hebrew. The Authors of <cit.>, proposed the MMS system. In their study, they develop speech technologies (ASR, TTS, Language ID) in more than 1,000 languages. Their TTS system is based on representation obtained from a pre-trained multi-lingual self-supervised model. Although providing impressive results, their Hebrew TTS system is based on predicting diacritics of the input text. More recently, the authors of <cit.> introduced the Overflow <cit.> model for Hebrew, together with the SPASEECH benchmark. The Overflow model is comprised of neural HMM together with normalizing flows. On top of the Overflow model, the authors in <cit.> suggested using the HiFi-GAN neural vocoder <cit.> to estimate the phase. Similarly to MMS, the system proposed by <cit.> is based on predicting diacritics of the input text, hence is sub-optimal and often produces wrong and unnatural content in the generated speech. Moreover, such dependency makes it difficult for these models to scale to large datasets as they both require predicting diacritics on top of automatically transcribed text. Unlike these methods, the proposed LM approach operates in a Diacritic-free manner, not propagating mistakes from the diacritic prediction models, and better leveraging the context of the input signal. Recent studies in speech and audio representation learning proposed learning discrete representation of the input signal <cit.>. Such representation can be later used for plenty of speech and audio synthesis tasks <cit.>. Specifically, the authors of <cit.> proposed optimizing an LM on top of such discrete speech representation, conditioned on a phonemic representation of the input text for the task of TTS. Following such an approach was found to produce high-quality and natural speech, with the ability to rapidly adapt to new speakers via acoustic prompting. As this approach is contextualized by nature it may serve as the ideal candidate for a Diacritic-free Hebrew TTS system. In this work, we study and propose a Language Modeling approach which operates over discrete representations of the speech signal to construct a Hebrew TTS system. We optimize an acoustic LM over a weakly supervised large-scale dataset containing in-the-wild recordings. We empirically demonstrate that following the LM approach makes the usage of diacritics in Hebrew redundant, hence yielding a diacritic-free approach. 
We study several text representation methods and found that using word-piece tokenization produces the best results overall. Results suggest the proposed method is superior to the evaluated baselines considering both content preservation and generation quality. Code, dataset, and models are publicly available under the following link: <https://pages.cs.huji.ac.il/adiyoss-lab/HebTTS/>. § METHOD Given a dataset D={x_i, y_i} where y_i is an audio sample and x_i is its corresponding transcription. We encode the audio into a sequence of discrete units. We follow the approach proposed by <cit.> and encode the audio using Residual Vector Quantization (RVQ). Formally, E(y) = C^T× N_cb where C represents the acoustic code matrix, where N_cb is the number of codebooks and T is the utterance length. A common paradigm in the TTS field is to represent text in its most basic form, i.e., phonemes <cit.>. As we aim at building a Diacritic-free system we can not use phonemes as text representations. As a result, we use a sub-word tokenizer in the form of word-piece tokenization. Such tokenizer was found beneficial in text encoders such as BERT <cit.>, and more relevant to our setup AlephBERT <cit.>. We experimented with several other tokenizers, however, found the word-piece to provide the best overall results (see Section <ref> for more details). Below we describe both text tokenization and model in more detail. We depict a general overview of the LM-based approach in Fig. <ref>. Text Tokens. We tokenize the text using a word-piece text tokenizer similar to the one proposed by <cit.>. Specifically, we leverage a pre-trained Hebrew text tokenizer that was trained using 98.7M Hebrew sentences. word-piece tokenizers were tested in different models <cit.> and performs similarly to Byte-Pair Encoding <cit.>. Given a training corpus C and a number of desired word-pieces t, the optimization problem is to select t word-pieces such that minimizes the number of word-pieces generated when tokenizing the entire corpus C. We start with a small character vocabulary and special tokens W, and apply merge rules for the elements. iteratively we compute for each pair w_1, w_2 ∈ W a score as seen in equation <ref> and merge the pair with the maximum score getting a new vocabulary W' = W ∪{(w_1, w_2)}. We follow this step with the new vocabulary until |W| = t. score = freq(e_1, e_2)/freq(e_1) × freq (e_2), By dividing the frequency of the pair by the product of the frequencies of each of its parts, the algorithm prioritizes the merging of pairs where the individual parts are less frequent in the vocabulary. Model. Recall, our goal is to produce a Diacritic-free Hebrew TTS system that can handle weakly transcribed spoken data. Hence, we proposed leveraging the abilities of language models to efficiently model long contexts. Inspired by recent LM-based approaches for TTS <cit.>, our model uses an LM approach that operates directly over discrete representation obtained from a pre-trained speech encoder. The model first receives a text prompt, x_i, and a 3-second enrolled recording as an acoustic prompt. We then, encode the acoustic prompt using the same speech encoder E, and process the text using the text tokenizer defined in sub-section <ref>. Recall, that the speech encoder, E, quantizes the utterance using RVQ module, hence it outputs a matrix of size T× N_cb. Meaning that, at each time step we are left with N_cb discrete codes. There are several alternatives in the literature to handle this complex input structure. 
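To make the merge criterion of the word-piece tokenizer above concrete, here is a minimal toy sketch of a single merge step (plain Python; the tiny corpus and initial character vocabulary are invented for illustration and are unrelated to the AlephBERT tokenizer actually used):

from collections import Counter

# Toy corpus already split into initial symbols (the characters of each word).
corpus = [list("mtnh"), list("mtnt"), list("mth")]

def best_merge(corpus):
    # Frequencies of single symbols and of adjacent pairs.
    sym_freq, pair_freq = Counter(), Counter()
    for word in corpus:
        sym_freq.update(word)
        pair_freq.update(zip(word, word[1:]))
    # Word-piece style score: freq(e1, e2) / (freq(e1) * freq(e2)).
    score = {p: f / (sym_freq[p[0]] * sym_freq[p[1]]) for p, f in pair_freq.items()}
    return max(score, key=score.get)

def apply_merge(corpus, pair):
    merged = []
    for word in corpus:
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged.append(out)
    return merged

pair = best_merge(corpus)
print(pair, apply_merge(corpus, pair))

As for the N_cb parallel code streams produced by the speech encoder, several modeling alternatives have been explored in the literature.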
For instance, the authors in <cit.> proposed to predict all codes at each time-step in parallel while introducing a delay pattern to better model the conditional probability distribution. The authors in <cit.> proposed flattening the whole sequence (resulting in a N_cb times larger sequence) and splitting its modeling across two LMs. In this work, we follow the approach proposed in <cit.>. In which, the first codebook, c_,:1, is modeled in an Auto-Regressive (AR) manner following the standard next token prediction framework. Specifically, we concatenate the word-piece tokens with the first codebook from the acoustic prompt, denoted by w, c_≤ t,1, to infer the next acoustic token c_t,1 of the target signal. The rest of the codebooks (2 to N_cb), are modeled using a non-autoregressive (NAR) model, where the network is trained to maximize the acoustic tokens likelihood derived from the i-th quantizer codebook, conditioned on the sum of all representations from previous codebooks. Overall, unlike, prior works which are mainly based on Mel-spectrogram as speech representations, diacritics for text, and relatively small and high-quality amounts of training data. Following the LM approach, allows us to leverage large amounts of in-the-wild recordings, using plain text, and operate on top of discrete learned speech representations. Table <ref> summarizes the main differences between the methods. § DATASET We use both the ivrit.ai dataset <cit.> together the HebDB dataset <cit.>. Both datasets consists of ∼4500 hours of speech gathered from local podcasts (∼1700 from HebDB and ∼2800 from ivrit.ai). These datasets are comprised of spontaneous dialogues, featuring multiple speakers discussing a wide range of topics including economy, politics, sports, culture, science, history, and music, among others. The podcast recordings are full episodes, thus containing lengthy audio tracks and various non-speech elements such as music, environmental sounds, and periods of silence. Such real-world conditions present challenges for model optimization and necessitate preprocessing steps. We apply the same pre-processing pipeline to both ivrit.ai dataset to all the dataset. Initially, we standardize all audio recordings to a consistent 16kHz, mono recordings, using julius [<https://github.com/adefossez/julius>] python package. Subsequently, we employ a Voice Activity Detection (VAD) model, namely  <cit.> to perform a voice activity detection and segment the waveforms into sentences, filtering out activated segments with a minimum duration of 1 seconds, separating audio segments by a minimal silence duration of 100ms and padding both sides of the segmented audio with 30ms of silence. Finally, we automatically transcribe the segmented speech using a pre-trained ASR model, specifically Whisper V2-Large <cit.>. After preprocessing our data, we are left with ∼4500 hours of natural dialogues with weakly labeled transcriptions. § EXPERIMENT SETUP §.§ Implementation details Our model contains 420M parameters and is trained on 8 NVIDIA A30 24GB GPUs with a total batch size of 144,000 acoustic tokens. We optimize the model using EDEN scheduler as used in <cit.> with a starting learning rate of 5 × 10^-2. We train the AR model for 1.2M steps and the NAR for 200k steps. For the audio tokenizer, we use the officially released pretrained version of EnCodec <cit.> sampled at 24Khz to generate acoustic tokens [<https://github.com/facebookresearch/audiocraft/blob/main/docs/ENCODEC.md>]. 
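The acoustic tokens produced by the encoder form a T x N_cb code matrix, and the Method section above factorizes its prediction into an AR stage (first codebook) and a NAR stage (remaining codebooks conditioned on the sum of the previous codebooks' embeddings). The following PyTorch sketch is only a schematic illustration of that factorization under simplifying assumptions: the module sizes, the TinyARNAR name and the plain transformer encoders are ours, and the acoustic prompt handling, positional information and training losses of the actual model are omitted:

import torch
import torch.nn as nn

class TinyARNAR(nn.Module):
    # Schematic AR/NAR split over the T x N_cb code matrix (sizes are illustrative).
    def __init__(self, n_text=52000, n_audio=1024, n_cb=8, d=256):
        super().__init__()
        self.text_emb = nn.Embedding(n_text, d)
        self.audio_emb = nn.ModuleList([nn.Embedding(n_audio, d) for _ in range(n_cb)])
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.ar = nn.TransformerEncoder(layer, num_layers=2)     # causal mask applied in forward
        self.nar = nn.TransformerEncoder(layer, num_layers=2)    # full attention, no mask
        self.ar_head = nn.Linear(d, n_audio)
        self.nar_head = nn.Linear(d, n_audio)

    def forward(self, text_ids, codes):
        # Stage 1 (AR): next-token prediction over codebook 0, prefixed by the text tokens.
        x = torch.cat([self.text_emb(text_ids), self.audio_emb[0](codes[..., 0])], dim=1)
        L = x.size(1)
        causal = torch.triu(torch.ones(L, L, dtype=torch.bool, device=x.device), diagonal=1)
        ar_logits = self.ar_head(self.ar(x, mask=causal))
        # Stage 2 (NAR): codebook i is predicted from the sum of the embeddings of codebooks < i.
        acc = self.audio_emb[0](codes[..., 0])
        nar_logits = []
        for i in range(1, codes.size(-1)):
            h = torch.cat([self.text_emb(text_ids), acc], dim=1)
            nar_logits.append(self.nar_head(self.nar(h)))
            acc = acc + self.audio_emb[i](codes[..., i])
        return ar_logits, nar_logits

model = TinyARNAR()
text = torch.randint(0, 52000, (2, 12))        # word-piece ids
codes = torch.randint(0, 1024, (2, 60, 8))     # RVQ codes: batch x frames x codebooks
ar_logits, nar_logits = model(text, codes)
print(ar_logits.shape, len(nar_logits))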
To improve the quality of the generated audio we use the pre-trained Multi Band Diffusion (MBD) vocoder <cit.>. For tokenization, we use the pretrained word-piece tokenizer of AlephBERT [<https://github.com/OnlpLab/AlephBERT>] with vocabulary size of 52k tokens. We train the model for audio length sequences between 1-18 seconds. We sample the 50 most likely tokens using top_k=50 and temperature = 1. We adopt the following public code [<https://github.com/lifeiteng/vall-e>]. §.§ Evaluation metrics We evaluate the proposed method considering both objective metrics and human study. We consider several axes: (i) content preservation in the form of Word Error Rate (WER), Character Error Rate (CER), and human study; (ii) speaker similarity using a pre-trained speaker verification model; and (iii) overall quality and naturalness via human study. We describe each of the evaluation metrics below. WER and CER. We calculated Word Error Rates (WERs) and Character Error Rates (CERs) between the input text and an automatic transcription generated by an ASR system. Specifically, we run this evaluation using 100 randomly sampled text prompts with diacritics from SASPEECH <cit.> dataset. We remove the diacritics for our model and compare with the transcribed text from Whisper V2-Large <cit.> model which provides state-of-the-art performance. We normalize the text by removing all punctuation from both original and transcribed text. To improve the robustness of the sampling process, we sample three audio generations for each input prompt and select the one with the best WER w.r.t the input text. To calibrate the results with the errors produced by the Whisper model, we additionally calculate WER and CER between the reference and transcribed text of the original recordings. Speaker similarity. For speaker similarity we measure the cosine similarity between the generated speaker and an enrollment set of five different recordings of the person to identify. To compute the cosine similarity we use a state-of-the-art pre-trained speaker verification model <cit.>. This similarity measure was found to be beneficial in prior work <cit.>. Human evaluation. We conduct two different human studies to evaluate the quality of the generated samples. Raters were asked to evaluate the quality of the generated speech considering both generation fidelity and naturalness of the speech signals on a scale between 1 – 5, where 1 has the lowest quality and 5 is the best quality. We evaluate 20 samples from each of the evaluated methods while we enforce at least 15 ratings for each sample. All raters are native Hebrew speakers. Although the Whisper model reaches state-of-the-art performance, its WER in Hebrew is still ∼27. Hence, we additionally ask raters to rate the accuracy between the generated speech and the written text. Same as before raters evaluated the content of the recordings on a scale of 1 – 5, where 1 is the least accurate and 5 has a perfect match. We conduct a human study to evaluate the proposed method against the baseline methods as well as to evaluate the text tokenization method. §.§ Baseline systems We compare the proposed method against two baseline systems: (i) Massively Multilingual Speech (MMS) <cit.> and Overflow <cit.>. The MMS model is based on a multi-lingual wav2vec2.0 <cit.> trained on ∼500k hours from 1,107 languages, while 25 hours in Hebrew. The Overflow model is based on a neural HMM combined with normalizing flows for describing highly non-Gaussian distribution of the acoustics. 
This model was trained on 30 hours of single-speaker, high-quality data obtained from the `Hayot-Kiss' podcast <cit.>. Both methods are based on predicting diacritics using an external model. In both methods, we use the official pre-trained models released by the authors and follow their text pre-processing pipelines exactly. § RESULTS We start by evaluating the proposed method against both MMS and Overflow. Results are summarized in Table <ref>. The proposed method provides superior performance to the evaluated baselines considering both objective metrics and the human study. Notice that following the LM approach for Hebrew TTS additionally allows fast adaptation to new speakers. The proposed method shows minor differences in performance when considering seen and unseen speakers. Interestingly, when considering WER, CER, and speaker similarity, the Overflow method provides comparable performance to ours while being superior to the MMS model. The main difference between the methods is reflected in the naturalness of the generated speech. Moreover, it is worth mentioning that although the WER and CER are comparable across all methods (with MMS achieving worse WER and Overflow achieving worse CER), these are based on automatic transcriptions that do not take pronunciation into account, meaning two different words can be transcribed to the same sequence of characters while reflecting completely different pronunciations. However, when investigating the content metric under the human study we observe larger differences. The effect of the tokenizer. As there is no direct mapping between non-diacritic text and phonemes in Hebrew, it is not clear how one should represent the text for the system. A natural approach would be to use character tokenization (i.e., converting the text into a sequence of characters). Another alternative, which has gained popularity in textual language models, is to use a word-piece tokenizer <cit.>. In this study, we follow the word-piece tokenizer approach. To better evaluate the effect of using different tokenization methods for the input text, we trained two versions of the proposed method using the character and word-piece tokenizers. We additionally experimented with contextualized representations obtained from hidden layers of a pre-trained text encoder model, namely AlephBERT <cit.>. Unfortunately, such text representations perform significantly worse than the other tokenizers, hence we do not report results for them. We measure WER and CER metrics, together with a human study measuring content preservation. Results are presented in Table <ref>. Results suggest that the word-piece tokenizer provides superior performance to the character-based alternative. This result is reflected across all the evaluated metrics; however, similarly to the results in Table <ref>, we observe the largest gap when considering the subjective content preservation study. § CONCLUSION In this work, we demonstrate how language models that operate over discrete speech tokens can act as diacritic-free Hebrew TTS systems. Due to their inherently contextualized nature, language models can better handle the ambiguous pronunciations that arise in the absence of diacritics. We empirically show that following the language modeling approach, trained at scale using weakly transcribed data, yields superior performance to non-contextualized, traditional TTS systems when considering content preservation, naturalness, and similarity to the speaker in the generated samples. Limitations.
As our method is based on an auto-regressive LM, its inference time is relatively long compared to other TTS systems. Moreover, due to its auto-regressive nature, the duration of the generated speech is determined by when the model outputs an end-of-sequence token. Additionally, the model can skip words or invent new ones that did not appear in the text prompt. Although we did not observe such behavior significantly affecting model performance, this lack of controllability imposes another limitation when following the LM approach. Future work. To advance research in the field, more benchmarks are needed. For future work, we aim to tackle this by constructing high-quality, large-scale speech data dedicated directly to synthesis purposes. Acknowledgements This research work was supported by the Israel Innovation Authority, grant number 78563. IEEEtran
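As a supplement to the WER and CER measures used in the evaluation above, the snippet below gives a dependency-free sketch of how such error rates can be computed (a plain Levenshtein distance; the toy strings are illustrative, and the actual evaluation additionally relied on Whisper transcriptions and text normalization):

def edit_distance(ref, hyp):
    # Levenshtein distance between two token sequences (two-row dynamic programming).
    prev_row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur_row = [i]
        for j, h in enumerate(hyp, 1):
            cur_row.append(min(prev_row[j] + 1,              # deletion
                               cur_row[j - 1] + 1,           # insertion
                               prev_row[j - 1] + (r != h)))  # substitution or match
        prev_row = cur_row
    return prev_row[-1]

def wer(ref, hyp):
    ref_w, hyp_w = ref.split(), hyp.split()
    return edit_distance(ref_w, hyp_w) / max(len(ref_w), 1)

def cer(ref, hyp):
    # Character error rate; spaces are ignored here, although conventions vary.
    ref_c, hyp_c = ref.replace(" ", ""), hyp.replace(" ", "")
    return edit_distance(ref_c, hyp_c) / max(len(ref_c), 1)

print(wer("shalom olam tov", "shalom olam"), cer("shalom olam tov", "shalom olam"))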
http://arxiv.org/abs/2407.12540v1
20240717132447
Modified Patankar Linear Multistep methods for production-destruction systems
[ "Giuseppe Izzo", "Eleonora Messina", "Mario Pezzella", "Antonia Vecchio" ]
math.NA
[ "math.NA", "cs.NA", "math.DS", "65L05 (Primary) 65L06, 34B18 (Secondary)" ]
Modified Patankar Linear Multistep methods for production-destruction systems
Giuseppe Izzo^* (https://orcid.org/0000-0003-1412-8702), Department of Mathematics and Applications, University of Naples “Federico II”, Via Cintia, I - 80126 Naples, Italy.
Eleonora Messina^* (https://orcid.org/0000-0003-4545-6266), Department of Mathematics and Applications, University of Naples “Federico II”, Via Cintia, I - 80126 Naples, Italy.
Mario Pezzella^* (https://orcid.org/0000-0002-1869-945X), Institute for Applied Mathematics “Mauro Picone”, C.N.R. National Research Council of Italy, Via P. Castellino, 111 - 80131 Naples, Italy.
Antonia Vecchio^* (https://orcid.org/0000-0003-2746-7535), Institute for Applied Mathematics “Mauro Picone”, C.N.R. National Research Council of Italy, Via P. Castellino, 111 - 80131 Naples, Italy.
^*Members of the INdAM Research group GNCS.
Received February 2024; Accepted July 2024
§ ABSTRACT Modified Patankar schemes are linearly implicit time integration methods designed to be unconditionally positive and conservative. In the present work we extend the Patankar-type approach to linear multistep methods and prove that the resulting discretizations retain, with no restrictions on the step size, the positivity of the solution and the linear invariant of the continuous-time system. Moreover, we provide results on arbitrarily high order of convergence and we introduce an embedding technique for the Patankar weights denominators to achieve it. § INTRODUCTION We address Production-Destruction Systems (PDS) of ordinary differential equations of the form y_i^'(t)=P_i(y(t))-D_i(y(t)), y_i(0)=y_i^0, i=1,…, N, where y(t)=(y_1(t),…,y_N(t))^𝖳∈ℝ^N is the vector of constituents at time t≥ 0. The right-hand side of (<ref>) incorporates the non-negative terms P_i(y(t)) and D_i(y(t)) defined as follows P_i(y)=∑_j=1^Np_ij(y) D_i(y)=∑_j=1^Nd_ij(y), i=1,…, N, where the functions p_ij(y)≥ 0 and d_ij(y)≥ 0 represent, respectively, the rates of the production and destruction processes for each constituent and component. Here, we introduce the matrices P(y)={p_ij(y)}∈ℝ^N× N, D(y)={d_ij(y)}∈ℝ^N× N and equivalently reformulate the PDS (<ref>) as y^'(t)=(P(y(t))-D(y(t)))e, y(0)=y^0, where here, and in the following sections, e=(1,…,1)^𝖳 represents a vector of all ones with the appropriate number of components.
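As a small illustration of this notation (a two-constituent toy system with invented rate functions, not one of the models discussed below), the reformulation y'(t)=(P(y(t))-D(y(t)))e can be coded directly:

import numpy as np

def P(y):
    # Toy production rates: p_12(y) = 0.3*y_2, p_21(y) = y_1*y_2 (illustrative choices).
    return np.array([[0.0, 0.3 * y[1]],
                     [y[0] * y[1], 0.0]])

def D(y):
    # Fully conservative closure: d_ij(y) = p_ji(y).
    return P(y).T

def rhs(y):
    e = np.ones(len(y))
    return (P(y) - D(y)) @ e

y = np.array([0.7, 0.3])
print(rhs(y), rhs(y).sum())   # the components sum to zero, so e^T y(t) is conserved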
The mathematical modeling of various real life phenomena leads to differential systems of the form (<ref>). PDS find relevant applications into chemical <cit.> and biogeochemical <cit.> processes, as well as into epidemic <cit.>, ecosystem <cit.> and astrophysical <cit.> models (see also <cit.> for a comprehensive list of applications). Our investigation is restricted to positive and fully conservative production-destruction systems, for which we assume that y_i^0>0 y_i(t)>0, ∀ t> 0, i=1,…, N and that for each component-wise positive vector z∈ℝ^N, p_ij(z)=d_ji(z) p_ii(z)=d_ii(z)=0, 1≤ i,j≤ N. The positivity condition (<ref>) is guaranteed as long as the initial value problem (<ref>) with positive initial value has a unique solution and the matrix D(y) vanishes as y→0=(0,…,0)^𝖳∈ℝ^N (further discussions on this topic can be found in <cit.>). General theoretical results about the existence, uniqueness and positivity of the solution of a production-destruction system have been outlined in <cit.> and <cit.>. The assumption (<ref>) leads to the linear invariant e^𝖳y(t) and to the following conservation law e^𝖳(y(t)-y^0)=0, ∀ t ≥ 0. As a matter of fact, for a fully conservative production-destruction system, the equality e^𝖳(P(y(t))-D(y(t)))e=0, ∀ t≥ 0, holds true and from (<ref>), (e^𝖳y(t))'=0. When applied to a positive and fully conservative PDS, a numerical method ought to satisfy a discrete counterpart of (<ref>) and (<ref>). However, the requirement for such properties in standard schemes usually yields a severe restriction on the stepsize. This motivates the interest in devising unconditionally positive methods which, provided the initial values are positive, produce positive numerical solutions independently of the steplength. Moreover, we will refer to a numerical method as unconditionally conservative if it retains the linear invariant of the system (<ref>) whatever the stepsize. The development of unconditionally positive and conservative numerical methods for production-destruction differential systems has been addressed in several scientific contributions (see, for instance, <cit.>). Particularly effective in this context is the class of Modified Patankar methods, arising from the manipulations of explicit schemes via the Patankar-trick <cit.>, with the aim of ensuring the desired preservation properties for the numerical solution. The origin of this class traces back to <cit.> where a modification of forward Euler and Heun’s methods resulted in a first and second order positivity-preserving and conservative scheme, respectively. More recently, the same approach has been extended to Runge-Kutta methods and a general definition of Modified Patankar Runge-Kutta (MPRK) schemes has been introduced. In <cit.>, MPRK schemes of second and third order were presented and analyzed. Strong-Stability-Preserving MPRK (SSPMPRK) methods were designed in <cit.> to solve convection equations with stiff source terms, starting from the Shu-Osher form <cit.> of Runge-Kutta discretizations. A comprehensive analysis through center manifold theory of the stability of the MPRK and SSPMPRK schemes was carried out in <cit.> and <cit.>, respectively. The modified Patankar approach was not solely limited to Runge-Kutta methods. A predictor-corrector modified Patankar scheme was presented in <cit.>. The proposed discretization relied on a MPRK method as a predictor and on a modified Patankar three-steps backward differentiation formula for the correction step. 
High order Modified Patankar Deferred Correction (MPDeC) schemes were outlined in <cit.> as effective integrators to mitigate common issues of MPRK discretizations, including oscillations around steady-state solutions and accuracy loss in cases of vanishing initial states. The geometrically conservative non-standard integrators proposed in <cit.>, specifically designed to preserve linear invariants and to ensure the positivity of more general biochemical systems, are effectively employed for addressing production-destruction systems. Two modified Patankar methods based on specific second and third order multistep schemes were presented in <cit.>. The leading purpose of this paper is to broaden the benefits of Patankar-type modifications to linear multistep methods, providing a general numerical framework for achieving high order of convergence. The manuscript is organized as follows: in Section <ref>, the class of unconditionally positive and conservative Modified Patankar Linear Multistep (MPLM) methods is formulated. With Section <ref> a theoretical investigation of the approximation error is performed and general results on arbitrarily high order of convergence are provided. Furthermore, a recursive embedding technique for the efficient computation of the Patankar weight denominators is presented. Numerical experiments are reported in Section <ref>, while some remarks and future perspectives, in Section <ref>, conclude the paper. § THE MODIFIED PATANKAR LINEAR MULTISTEP SCHEME Let α=(α_1,…,α_k)^𝖳∈ℝ^k and β=(β_1,…,β_k)^𝖳∈ℝ^k be the coefficients of an explicit k-steps Linear Multistep (LM) method, with k>1 positive integer. Assume that such a method is convergent of order p≥ 1, which implies that ∑_r=1^kα_r=1 ∑_r=1^k(r^qα_r-qr^q-1β_r)=0, 1≤ q≤ p. Consider h>0, t_n=nh for n≥ 0 and y^n=(y_1^n,…,y_N^n)^𝖳≈y(t_n), for n≥ k. From now on, all the inequalities involving vectors are considered component-wise. In analogy with the scientific literature on Patankar methods, we provide the following definition. Given the LM coefficients α≥ 0 and β≥ 0, the k-steps scheme y_i^n=∑_r=1^kα_ry_i^n-r+h∑_r=1^kβ_r∑_j=1^N(p_ij(y^n-r)y_j^nσ_j^n-d_ij(y^n-r)y_i^nσ_i^n), 1≤ i ≤ N, n≥ k, where y^0,…,y^k-1∈ℝ^N are given and σ_i^n∈ℝ, i=1,…,N, is referred to as a Modified Patankar Linear Multistep (MPLM-k) method if * σ_i^n are unconditionally positive for each i=1,…,N and n≥ k; * σ_i^n are independent of y_i^n for each i=1,…,N and n≥ k. The linearly implicit numerical method (<ref>) is devised by applying to a LM discretization of (<ref>) a modified version of the source term linearization technique <cit.> originally proposed by Patankar in <cit.>, with the aim of designing unconditionally positive and conservative schemes. More specifically, the terms σ_i^n=σ_i^n(y^n-1,…, y^n-k), n≥ k, i=1,…,N, are referred to as Patankar-Weight Denominators (PWDs). Notice that the discrete equation (<ref>) is general enough to encompass the linear methods presented in <cit.>. As a matter of fact, in case of k=3, α=(34,0,14)^𝖳, β=(0,32,0)^𝖳, k=4, α=(1627,0,0,1127)^𝖳, β=(169,0,0,49)^𝖳, (<ref>) coincides with the second and third order schemes of <cit.>, respectively. Throughout this paper we refer, when needed, to the following equivalent compact notation of (<ref>) y^n=∑_r=1^k α_ry^n-r +h∑_r=1^kβ_rM^n-r y^n, n≥ k, where M^n-r=( P(y^n-r)-(D(y^n-r)e) ) (S^n)∈ℝ^N× N, r=1,…,k, with S^n=(1/σ_1^n,…,1/σ_N^n)^𝖳∈ℝ^N, n≥ k. 
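To fix ideas, a single step of the scheme in the compact form (<ref>) amounts to assembling and solving one linear system. The sketch below (NumPy; a bare illustration, not the implementation used for the experiments later on) takes the coefficients α, β, the k previous states, precomputed Patankar weight denominators σ^n and callables for P and D, and returns y^n; the closing lines run it as a one-step modified Patankar-Euler method on a toy conservative system:

import numpy as np

def mplm_step(h, alpha, beta, past, sigma, P, D):
    # past = [y^{n-1}, ..., y^{n-k}], sigma = Patankar weight denominators sigma^n.
    N = len(sigma)
    M = np.eye(N)                                         # M = I - h * sum_r beta_r M^{n-r}
    rhs = np.zeros(N)
    for a_r, b_r, y_prev in zip(alpha, beta, past):
        A = P(y_prev) - np.diag(D(y_prev) @ np.ones(N))   # P(y^{n-r}) - diag(D(y^{n-r}) e)
        M -= h * b_r * (A / sigma)                        # column j scaled by 1/sigma_j, cf. diag(S^n)
        rhs += a_r * y_prev
    return np.linalg.solve(M, rhs)                        # y^n

# One step of the modified Patankar-Euler method (k=1, alpha=beta=(1,), sigma^n = y^{n-1})
# on a toy conservative system; the step stays positive and preserves e^T y for any h > 0.
a = 2.0
P_ = lambda y: np.array([[0.0, y[1]], [a * y[0], 0.0]])
D_ = lambda y: P_(y).T
y_prev = np.array([0.7, 0.3])
y_new = mplm_step(0.5, [1.0], [1.0], [y_prev], y_prev, P_, D_)
print(y_new, y_new.sum())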
As already pointed out, the Patankar-type transformation is adopted to ensure positivity for the approximation of the solution to (<ref>). The following result provides some conditions on the coefficients of (<ref>), or equivalently of (<ref>), which lead to a positive numerical solution regardless of the discretization step-size. Let {y^n}_n≥ k be the approximation of the solution to (<ref>) computed by (<ref>). Suppose that (<ref>) and (<ref>) hold true and assume that * the given starting values y^0,…,y^k-1 are positive. Then, for all h >0 and n≥0, y^n>0. From (<ref>), the numerical method reads M y^n=∑_r=1^k α_ry^n-r, n≥ k, with M=I-h ∑_r=1^kβ_r M^n-r∈ℝ^N× N. From (<ref>), the entries of M^n-r are M^n-r_ii=-∑_j=1^N d_ij(y^n-r)/σ_i^n, M^n-r_ij=p_ij(y^n-r)/σ_j^n, 1≤ i,j≤ N, i≠ j, for n≥ k and r=1,…, k, which implies that M^𝖳 is a Z-matrix, since M_ii=1+h ∑_r=1^kβ_r ∑_j=1^N d_ij(y^n-r)/σ_i>0 M_ji=-h∑_r=1^kβ_r p_ji(y^n-r)/σ_i<0. Furthermore, from (<ref>), M^𝖳 is strictly diagonally dominant since ∑_j=1,j≠ i^N |M_ji|=h∑_r=1^kβ_r ∑_j=1^N d_ij(y^n-r)/σ_i<M_ii, 1≤ i,j≤ N, i≠ j. Therefore M^𝖳 is a M-matrix (see, for instance, <cit.>). It follows that M is a M-matrix as well, it is invertible and M^-1 has all non-negative entries, independently of n≥ k. Consequently y^n=M^-1∑_r=1^k α_ry^n-r and an inductive procedure, starting from <ref>, yields the result. An investigation of the discrete-time level preservation of the invariance property (<ref>) of a positive and fully conservative PDS is carried out with the following lemma. Let {y^n}_n≥ k be the approximation of the solution to (<ref>) computed by (<ref>). Suppose that (<ref>) and (<ref>) hold true and assume that * for the given starting values, e^𝖳y^0=…=e^𝖳y^k-1=η. Then, for each h >0 and n≥ 0, e^𝖳( y^n-y^0)=0. It suffices to show that e^𝖳y^n=η for each n≥ k. The conservativity property (<ref>) yields, independently of 1≤ r ≤ k and n≥ k, ∑_i,j=1^N(p_ij(y^n-r)y_j^nσ_j-d_ij(y^n-r)y_i^nσ_i) =∑_i,j=1^N(p_ij(y^n-r)y_j^nσ_j-p_ji(y^n-r)y_i^nσ_i)=0, then, from (<ref>), e^𝖳y^n=∑_r=1^kα_r(e^𝖳y^n-r). The result is therefore derived from this last relation and the first of (<ref>) by inductive arguments. Lemmas <ref> and <ref> establish conditions for the method (<ref>) to be unconditionally positive and conservative. As a matter of fact, if their hypotheses are fulfilled, the numerical solution computed by the MPLM-k scheme retains, with no restrictions on the step-length h, the properties (<ref>) and (<ref>) of the continuous-time PDS. Therefore to attain these properties we assume, from now on, that the starting values satisfy both <ref> and <ref>. § ERROR ANALYSIS AND CONVERGENCE In this section we analyze the error arising from the approximation of the continuous-time solution to (<ref>) by the numerical methods of the class (<ref>) and investigate the conditions that the Patankar weight denominators have to satisfy for attaining high order of convergence. A recursive practical technique, based on the embedding of different MPLM schemes, is then presented to efficiently compute the PWDs. In order to investigate the consistency of the discretization (<ref>), we address the corresponding local truncation error, here denoted by δ_i(h; t_n)=y_i(t_n)-∑_r=1^kα_ry_i(t_n-r) -h∑_r=1^kβ_r∑_j=1^N(p_ij(y(t_n-r))y_j(t_n)σ_j(y(t_n-1),…,y(t_n-k))-d_ij(y(t_n-r))y_i(t_n)σ_i(y(t_n-1),…,y(t_n-k))), for i=1,…,N and n≥ k. The following result provides a condition on the PWDs which ensures the order p consistency of the MPLM methods. 
(Sufficient Condition) Assume that the given functions describing problem (<ref>) belong to C^p(Ω_0), with p≥ 1 and Ω_0={z∈ℝ^N : 0≤ z_i≤e^𝖳y^0, i=1,…,N}. If the Patankar weight denominators satisfy σ_i(y(t_n-1),…,y(t_n-k)) =y_i(t_n)+𝒪(h^p), i=1,…, N, n≥ k, then the MPLM-k method (<ref>) is consistent with (<ref>), of order p. It suffices to prove that δ_i(h; t_n)=𝒪(h^p+1) for each i=1,…,N and n≥ k. From (<ref>) it follows that y_i(t_n)σ_i(y(t_n-1),…,y(t_n-k))=y_i(t_n)y_i(t_n)+𝒪(h^p)=1+𝒪(h^p), i=1,…, N and the local truncation error defined in (<ref>) reads, for n≥ k, δ_i(h; t_n)=δ_i^LM(h;t_n)+𝒪(h^p+1), i=1,…, N, δ_i^LM(h;t_n)=y_i(t_n)-∑_r=1^kα_ry_i(t_n-r)-h∑_r=1^kβ_r∑_j=1^N(p_ij(y(t_n-r))-d_ij(y(t_n-r))), is the local truncation error of the underlying LM method with coefficients α and β. Finally, since from (<ref>) δ_i^LM(h;t_n)=𝒪(h^p+1), we get the result. Theorem <ref> guarantees the consistency of the MPLM methods when the PWDs are selected to fulfil (<ref>). The following result, whose proof is reported in [sec:Conclusions]Appendix A, states that this condition represents the minimum and the least stringent requirement for attaining the order p consistency, as it constitutes a necessary prerequisite. (Necessary Condition) Assume that the MPLM-k method (<ref>) is consistent with (<ref>) of order p≥ 1. Then the Patankar weight denominators satisfy (<ref>). To investigate the convergence properties of the scheme (<ref>), we first establish some preparatory results. In what follows, given Ω⊂ℝ^N, we denote by Ω^l=Ω×…×Ω, l times. Let l≥ 1 be a positive integer and Ω be a compact subset of ℝ^N. Consider, for 1≤ i,j ≤ N and x=(x^1,…,x^l)∈Ω^l, the functions σ_i(x)∈ C^1(Ω^l), σ_i(x)>0, A(x)={a_ij(x) } a_ij(x)∈ C^1(Ω^l), I-A(x) independently of x. Then, the functions Z: x∈Ω^l →(S(x)) ∈ℝ^N× N, with S(x)=(1/σ_1(x),…,1/σ_N(x))^𝖳∈ℝ^N, F: x∈Ω^l → (I-A(x))^-1∈ℝ^N× N, g: x∈Ω^l → F(x) ∑_r=1^lα_r x^r ∈ℝ^N, are continuously differentiable on Ω^l. The first statement comes from ∂ Z∂ x^v_j(x^1,…,x^l)=-(S̅(x)) (Ŝ^vj(x)), [ j=1,…,N,; v=1,…,l, ] S̅(x)=(1σ_1^2(x),…,1σ_N^2(x))^𝖳∈ℝ^N, Ŝ^vj(x)=(∂σ_1(x)∂ x^v_j,…,∂σ_N(x)∂ x^v_j)^𝖳∈ℝ^N. Denote with A̅(x) the adjoint matrix of I-A(x), whose entries are continuous functions of the coefficients a_ij(x). Because of the assumptions, A̅(x) and D_A : x∈Ω^l → 1/A(x)∈ℝ are well posed and continuously differentiable functions. Therefore, from F(x)=D_A(x)A̅(x), it follows F∈ C^1(Ω^l). Finally, ∂g∂ x^v_j(x^1,…,x^l)=∂ F∂ x^v_j(x)∑_r=1^lα_r x^r+α_v (F_1j(x),…,F_Nj(x))^𝖳 , [ j=1,…,N,; v=1,…,l, ] yields the result. The following result, whose proof is deducted from the arguments in <cit.>, facilitates the analysis of the approximation error of (<ref>). Let l≥ 1 be a positive integer. Consider a sequence of non-negative numbers {a_n}_n∈ℕ_0 and assume that there exist b≥ 0 and c_r≥ 0, r=1,…,l, such that c=∑_r=1^l c_r> 1 and a_n≤ b + ∑_r=1^l c_ra_n-r for n≥ l. Then a_n≤(a^*+bc-1)exp(n(c-1)), n≥ 0, where a^*=max_0≤ j ≤ l-1a_j. For n=0,…, l-1, the bound (<ref>) directly follows from the definition of a^* and the non-negativity of the termes involved. To prove it for n≥ l, we firstly show by induction that a_n≤ c^na^*+b∑_j=0^n-1c^j. When n=l, from a_l≤ ca^*+b and l≥ 1, we get the result. Consider now n>l and assume that the statement holds for j=l,…, n-1. It follows that a_n≤ b+∑_r=1^l c_r(c^n-ra^*+b∑_j=0^n-r-1c^j)≤ b+∑_r=1^l c_r(c^n-1a^*+b∑_j=0^n-2c^j)≤ c^na^*+b∑_j=0^n-1c^j. Therefore, it turns out that a_n≤ c^na^*+b (c^n-1)/(c-1) for n≥ l. 
Finally c^n≤ exp(n(c-1)), which completes the proof. A thorough analysis of the global discretization error of (<ref>) leads to the following convergence result. Let y(t) be the continuous-time solution to (<ref>) for t∈ [0,T], with T>0 and let {y^n}_n≥ 0 be its approximation computed by the k-steps MPLM scheme (<ref>) with h=T/n̅. Define Ω={x∈ℝ^N : μ≤ x_i≤e^𝖳y^0, i=1,…,N}, with μ positive constant. Assume that the given functions describing problem (<ref>) belong to C^p(Ω), with p≥ 1 and that * the starting values satisfy y(t_m)-y^m=𝒪(h^p), m=0,…,k-1; * the PWDs are continuously differentiable functions on Ω^k and satisfy (<ref>). Then, the method (<ref>) is convergent of order p. Because of the properties of the continuous and the numerical solution to (<ref>) outlined in (<ref>) and Lemma <ref>, there exists μ>0 such that y(t) and y^n belong to Ω, for all t≥0 and n=0,…,n̅. Define, for x=(x^1,…,x^k)∈Ω^k and S^n(x)=(1/σ_1^n(x),…,1/σ_N^n(x))^𝖳∈ℝ^N, the functions Φ^nr : x∈Ω^k → (P(x^r)-(D(x^r)e)) (S^n(x) )∈ℝ^N× N G^nr : (x,x^k+1)∈Ω^k+1 → Φ^nr(x) x^k+1∈ℝ^N, [ r=1,…,k,; n=k,…,n̅. ] The global discretization error e(h;t_n)=y(t_n)-y^n then satisfies e(h;t_n)= ∑_r=1^kα_re(h;t_n-r)+h∑_r=1^kβ_r (G^nr(y(t_n-k),…,y(t_n))-G^nr(y^n-k,…,y^n) ) +δ(h;t_n), n=k,…,n̅, where the components of δ(h;t_n)=(δ_1(h;t_n),…,δ_N(h;t_n))^𝖳∈ℝ^N are defined in (<ref>). Because of the regularity assumptions on the known functions and the first result of Lemma <ref>, Φ^nr∈ C^1(Ω^k) and G^nr∈ C^1(Ω^k+1), for each r=1,…,k and n=k,…,n̅. Therefore, from the mean value theorem G^nr(y(t_n-k),…,y(t_n))-G^nr(y^n-k,…,y^n) =J_G^nr(ξ^n-k_r,…,ξ^n_r) [ e(h;t_n-k); ⋮; e(h;t_n) ], where J_G^nr(ξ^n-k_r,…,ξ^n_r)∈ℝ^N× (Nk+N) is the Jacobian of G^nr. Let J_G^nr(ξ^n-k_r,…,ξ^n_r)=( J_G^nr^(0)(ξ^n-k_r,…,ξ^n_r)_N× N | … | J_G^nr^(k)(ξ^n-k_r,…,ξ^n_r)_N× N). Then, for each n=k,…, n̅ and r=1,…,k, J_G^nr^(j)(ξ^n-k_r,…,ξ^n_r)≤J_G^nr(ξ^n-k_r,…,ξ^n_r)≤ J, j=0,…,k, with J positive constant depending on the bounds of the known functions and their derivatives on Ω^k+1. Substituting (<ref>) into (<ref>) leads to the discrete equation e(h;t_n)=∑_r=1^kα_re(h;t_n-r)+h∑_r=1^kβ_r∑_j=0^k J_G^nr^(j)(ξ^n-k_r,…,ξ^n_r) e(h;t_n-j)+δ(h;t_n), for n=k,…, n̅. Denoted B=J∑_r=1^kβ_r, for a sufficiently small h, from (<ref>) e(h;t_n)≤∑_r=1^kα_r+hB1-hBe(h;t_n-r)+0≤ n ≤n̅maxδ(h;t_n)1-hB, n=k,…, n̅. Because of the first relation in (<ref>), ∑_r=1^kα_r+hB/1-hB=1+h (k+1)B/1-hB> 1, so that from Lemma <ref> it follows e(h;t_n)≤(0≤ m ≤ k-1maxy(t_m)-y^m + max_0≤ n ≤n̅δ(h;t_n)h (k+1)B) exp((k+1)TB/1-hB), for 0≤ n ≤n̅ and T=n̅h. Because of the assumptions on the initial values and the order p consistency of (<ref>) (Theorem <ref>), we have max_0≤ n ≤n̅e(h;t_n)≤ C h^p, with C positive constant not depending on h, which yields the result. §.§ The σ-embedding technique Theorem <ref> provides sufficient conditions for the order p convergence of the numerical method (<ref>), but give no clues on how to compute unconditionally positive and conservative Patankar weight denominators satisfying (<ref>). To achieve this goal, here we introduce an embedding technique based on a recursive use of MPLM methods. More specifically, for p=1, we just consider the Modified Patankar Euler (MPE) method (see <cit.> for further details), corresponding to (<ref>) with k=1, α_1=β_1=1, σ_i^n=y_i^n-1, n≥ 1, i=1,…,N, and then investigate MPLM-k schemes for p≥ 2 and k>1. Let σ^n=σ^n(p-1) be PWDs satisfying the condition (<ref>). 
Our approach consists in recursively computing σ^n(p-1) by a k̃≤ k steps, order p-1 convergent MPLM-k̃ method, whose coefficients are denoted by α^(p-1)∈ℝ^k̃ and β^(p-1)∈ℝ^k̃, as follows σ^n(p-1)(y^n-1 (p),…,y^n-k̃(p))= (I-h∑_r=1^k̃β_r^(p-1)M^n-r(p))^-1∑_r=1^k̃α_r^(p-1)y^n-r (p), for n≥ k, where M^n-r(p)=( P(y^n-r(p))-(D(y^n-r(p))e) ) (S^n(p-2))∈ℝ^N× N, r=1,…,k and S^n(p-2)=(1/σ_1^n(p-2),…,1/σ_N^n(p-2))^𝖳∈ℝ^N. Here y^n(p), n≥ k, represents the numerical solution computed by the MPLM-k method. With the following result we prove that the PWDs computed adopting the σ-embedding technique (<ref>) meet the condition (<ref>), which is sufficient for to the convergence of the MPLM-k scheme. Let y(t) be the continuous-time solution to (<ref>) for t∈ [0,T], with T>0 and let {y^n(p)}_n≥ 0 be its approximation computed by the k-steps MPLM scheme (<ref>) with h=T/n̅. Assume that the given functions describing problem (<ref>) belong to C^p(Ω), with p≥ 1 and Ω in (<ref>). Assume that * the starting values satisfy y(t_m)-y^m(p)=𝒪(h^p), m=0,…,k-1; * the σ-embedding strategy is implemented and the PWDs are computed by a k̃-steps, order p-1 convergent MPLM-k̃ scheme as detailed in (<ref>). Then, the PWDs functions σ^n(p-1), n=k,…,n̅, are continuously differentiable on Ω^k̃ and satisfy (<ref>). Therefore, the numerical method (<ref>) is convergent of order p. The case p=1 corresponds to the first order convergent MPE method (<ref>). For p≥ 2, we prove the result by induction. Firstly, we show that the MPLM-k scheme (<ref>) is quadratically convergent (p=2) if the PWDs are computed by the one step Modified Patankar Euler discretization (<ref>). As a matter of fact, the consistency condition (<ref>) comes from <cit.>, since σ^n(1)(y(t_n-1))= (I-hΦ(y(t_n-1)))^-1y(t_n-1)=y(t_n)+𝒪(h^2), n≥ 1, where Φ(x)=(P(x)-(D(x)e)) (1/x_1,…,1/x_N)∈ℝ^N× N. Furthermore, from Lemma <ref>, σ^n(1)∈ C^1(Ω). Therefore, all the hypotheses of Theorem <ref> are fulfilled and the second order convergence is established. Consider p>2 and assume the statement to be true for each s=1,…,p-1. In this case, the PWDs σ^n(p-1), n=k,…,n̅, are computed by an order p-1 convergent, then consistent <cit.>, MPLM-k̃ method, accordingly to (<ref>). Therefore, from Theorem <ref>, the condition (<ref>) directly follows. Furthermore, from the inductive hypotheses, the functions σ^n(p-2)∈ C^1(Ω^k̃), n=k,…,n̅ and hence, from Lemma <ref>, σ^n(p-1)(x)=(I-h∑_r=1^k̃β_r^(p-1)Φ^nr(p-2)(x))^-1∑_r=1^k̃α_r^(p-1)x^r∈ℝ^N, Φ^nr(p-2)(x)=(P(x^r)-(D(x^r)e)) (S^n(p-2)(x))∈ℝ^N× N, S^n(p-2)(x)=(1σ_1^n(p-2)(x),…,1σ_N^n(p-2)(x))^𝖳∈ℝ^N, x=(x^1,…,x^k̃)^𝖳, is continuously differentiable on Ω^k̃ as well. Finally, an application of Theorem <ref> yields the result. Theorem <ref> establishes a practical and general framework to compute the PWDs by embedding different MPLM methods, starting from the modified Patankar Euler discretization. This machinery allows the construction of arbitrarily high order unconditionally positive and conservative numerical methods. However, the σ-embedding technique in (<ref>) is specifically designed for positive PDS and cannot be implemented if any of the components of the initial value is zero. This issue has been addressed and overcome in <cit.> for modified Patankar Runge-Kutta schemes by replacing zero components of the initial value by small quantities like realmin≈2.26· 10^-308. Our numerical experiments of Section <ref> confirm the effectiveness of this expedient also for the modified Patankar linear multistep methods. 
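Before turning to the experiments, the σ-embedding recursion (<ref>) can be sketched in a few lines. The snippet below (NumPy) is a simplified illustration rather than the MATLAB/Julia codes used later: the same k previous states feed every level, the starting values are taken as given positive vectors, and the three-step coefficient pair used for the order-two level is merely one admissible choice with non-negative entries satisfying the order conditions (<ref>):

import numpy as np

def mplm_solve(h, alpha, beta, past, sigma, P, D):
    # Solves (I - h * sum_r beta_r M^{n-r}) y^n = sum_r alpha_r y^{n-r} as in the compact form.
    N = len(sigma)
    M, rhs = np.eye(N), np.zeros(N)
    for a_r, b_r, y_prev in zip(alpha, beta, past):
        A = P(y_prev) - np.diag(D(y_prev) @ np.ones(N))
        M -= h * b_r * (A / sigma)
        rhs += a_r * y_prev
    return np.linalg.solve(M, rhs)

def sigma_embedded(level, h, past, lower_schemes, P, D):
    # level 0 returns the Patankar-Euler choice sigma^n = y^{n-1}; level s >= 1 performs one
    # step of the embedded lower-order scheme, itself weighted by the level s-1 denominators.
    if level == 0:
        return past[0]
    alpha, beta = lower_schemes[level - 1]
    sigma = sigma_embedded(level - 1, h, past, lower_schemes, P, D)
    return mplm_solve(h, alpha, beta, past[:len(alpha)], sigma, P, D)

# Order-two MPLM step whose PWDs come from an embedded Patankar-Euler step.
a = 2.0
P_ = lambda y: np.array([[0.0, y[1]], [a * y[0], 0.0]])
D_ = lambda y: P_(y).T
past = [np.array([0.6, 0.4]), np.array([0.55, 0.45]), np.array([0.5, 0.5])]   # y^{n-1}, y^{n-2}, y^{n-3}
h = 0.1
sigma = sigma_embedded(1, h, past, [([1.0], [1.0])], P_, D_)
y_new = mplm_solve(h, [0.75, 0.0, 0.25], [1.5, 0.0, 0.0], past, sigma, P_, D_)
print(y_new, y_new.sum())   # positive, with the linear invariant e^T y = 1 preserved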
§ NUMERICAL EXPERIMENTS In this section we report some experiments and compare the performance of the method (<ref>) with that of some well-established modified Patankar schemes. Our investigation is conducted employing the modified Patankar linear multistep methods of orders 2 to 6, whose coefficients are reported in Table <ref>. Here, we adopt the notation MPLM-k(p) to indicate a k-steps, order p scheme. The numerical simulations of some test problems performed by the methods (<ref>)-Table <ref> with the σ-embedding technique provide experimental evidence of convergence up to the sixth order. A self-starting embedding procedure is implemented here for computing accurate starting values satisfying <ref> and <ref>. To assess the order of the MPLM-k(p) schemes, we consider the maximum absolute error E(h) and the experimental rate of convergence p̂ defined as follows E(h)=max_0≤ n ≤ T/h‖y^n(ref)-y^n‖_∞, p̂=log_2(E(h)/E(h/2)). For each test problem, the reference solution y^n(ref) in (<ref>) is obtained by the built-in Matlab function with absolute and relative tolerances of ε_mach≈2.22· 10^-16 and ε_mach· 10^2, respectively. A direct comparison with a third order Modified Patankar Runge-Kutta (MPRK3) method (see <cit.> with γ=0.5) and with high order Modified Patankar Deferred Correction (MPDeC) schemes (see <cit.>) highlights the competitive performance of the MPLM-k discretizations. Incidentally, in the case of the MPDeC integrators, the computational effort required to obtain the coefficients is disregarded. All the numerical experiments are conducted on a single machine equipped with an Intel Core i7-7700HQ Octa-Core processor operating at 2.80GHz and supported by 8.00 GB of RAM. Both the MPRK3 and MPLM-k algorithms are implemented and executed using MATLAB (version R2020b). Additionally, the MPLM-k schemes are implemented and executed with Julia (version 1.8.5) and compared against the MPDeC codes available in the repository <cit.>. §.§ Test 1: linear test Our first example consists of the following linear test y^'_1(t)=-a y_1(t)+y_2(t), y^'_2(t)=a y_1(t)-y_2(t), a=5, t∈[0,2] y^0=[ 0.9; 0.1 ], proposed in <cit.>. The system (<ref>) describes the exchange of mass between two constituents and fits the form of a fully conservative PDS, where the production and destruction terms in (<ref>) are given by p_12(y)=y_2, p_21(y)=ay_1, d_12(y)=ay_1, d_21(y)=y_2. Since p_ij(y) is continuously differentiable with bounded derivatives and lim_y→0p_ij(y)=0, for i,j∈{1,2}, the existence of a unique and positive solution to (<ref>) is guaranteed by <cit.>. The approximation of the continuous-time solution to (<ref>), computed by the MPLM-10(6) method with h=2^-5, is reported in Figure <ref>. In compliance with Lemmas <ref> and <ref>, the numerical solution is positive and the linear invariant of the PDS (<ref>) is retained. In Table <ref> we list the maximum errors E(h) on the integration interval [0,2] and the experimental rate of convergence p̂ for all the methods listed in Table <ref>. From Table <ref>, as well as from Figures <ref> and <ref>, it is clear that the experimental order agrees with the theoretical one established in Theorem <ref>. Furthermore, the work-precision diagram of Figure <ref> shows the mean execution time over 10 runs against the approximation error for the MPLM-k(p) methods with 2≤ p≤ 6. It is evident that, for p≥ 4, the methods in Table <ref> are computationally more efficient than the benchmark scheme MPRK3 in terms of the accuracy-computational cost trade-off.
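The convergence diagnostics E(h) and p̂ on this linear test can be reproduced with a short script. The sketch below (NumPy/SciPy) is only illustrative: for brevity it integrates with the first-order modified Patankar-Euler scheme instead of the MPLM-k(p) methods of Table <ref>, and it uses the matrix exponential of the linear system as the reference solution rather than the Matlab solver mentioned above, so the observed rate is close to one:

import numpy as np
from scipy.linalg import expm

a_par = 5.0
A = np.array([[-a_par, 1.0], [a_par, -1.0]])     # the linear test written as y' = A y
y0 = np.array([0.9, 0.1])

def mpe_solve(h, T=2.0):
    # Modified Patankar-Euler with sigma^n = y^{n-1}: unconditionally positive and conservative.
    y, out = y0.copy(), [y0.copy()]
    for _ in range(round(T / h)):
        Prod = np.array([[0.0, y[1]], [a_par * y[0], 0.0]])
        M = np.eye(2) - h * (Prod - np.diag(Prod.T @ np.ones(2))) / y
        y = np.linalg.solve(M, y)
        out.append(y.copy())
    return np.array(out)

def max_error(h, T=2.0):
    ys = mpe_solve(h, T)
    ts = h * np.arange(len(ys))
    ref = np.array([expm(t * A) @ y0 for t in ts])
    return np.max(np.abs(ys - ref))

E1, E2 = max_error(2.0**-5), max_error(2.0**-6)
print(E1, np.log2(E1 / E2))                      # E(h) and the experimental rate p-hat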
The MPLM methods in Table <ref> are then also compared to the modified Patankar deferred correction integrators outlined in <cit.>, utilizing the Julia implementation of the MPDeC schemes provided in <cit.>. For the sake of comparison, we consider the mean relative error taken over all time steps and all constituents, ε(h) = 1/N∑_i=1^N ( √((1/n̅)∑_n=1^n̅(y_i^n(J)-y_i^n)^2) / ( (1/n̅)∑_n=1^n̅y_i^n(J) ) ), n̅=T/h, as defined in <cit.>. Here, the reference solution y^n(J) in (<ref>) is obtained by the built-in Julia solver (implicit Runge-Kutta of variable order between 5 and 13) with absolute and relative tolerances of 10^-14. It turns out that for Test 1, the MPDeC methods exhibit superior performance with respect to the MPLM schemes, as shown in Figure <ref>. §.§ Test 2: nonlinear test For our second test problem, we consider the non-stiff nonlinear system <cit.> y^'_1(t)=-y_1(t)y_2(t)/(y_1(t)+1), y^'_2(t)=y_1(t)y_2(t)/(y_1(t)+1)-ay_2(t), y^'_3(t)=ay_2(t), a=0.3, t∈ [0,30], y^0=[ 9.98; 0.01; 0.01 ], obtained from (<ref>) by taking p_ij(y)=0=d_ji(y) for all combinations of i,j=1,…,3 other than p_21(y) =d_12(y)=y_1y_2/(y_1+1), p_32(y) =d_23(y) =ay_2. The existence and uniqueness of a positive solution to (<ref>) follow from <cit.>. As a matter of fact, given 0<y=(y_1,y_2,y_3)^𝖳≤e^𝖳y^0, ∂ p_ij/∂ y_l(y)≤max{a,e^𝖳y^0} and p_ij(y)→ 0 as y→0, for i,j,l∈{1,2,3}. The PDS (<ref>) models an algal bloom and might be interpreted as a geobiochemical model for the upper oceanic layer in spring, when nutrient-rich surface water is captured in the euphotic zone. During the process, nutrients at time t, y_1(t), are taken up by the phytoplankton y_2(t) according to a Michaelis–Menten formulation. Concurrently, the phytoplankton biomass is converted to detritus y_3(t), at a fixed loss rate a>0, due to the effects of mortality and zooplankton grazing. Thus the system is considered closed and the total biomass remains constant according to the conservation law (<ref>). The outcomes of the numerical integration of (<ref>) by MPLM-10(6) with h=1.88· 2^-5 are shown in Figure <ref>. The experimental results of Table <ref> comply with the theoretical findings for this test as well and, from Figure <ref>, it is clear that for p≥ 3 the MPLM-k(p) methods outperform the MPRK3 discretization. For the sake of brevity, here we omit the plots comparing our methods with the MPDeC schemes, which exhibit superior performance in this test as well. §.§ Test 3: Brusselator test Our next experiment addresses a typical nonlinear chemical kinetics problem modeled by the original Brusselator system <cit.> y^'_1(t)=-k_1 y_1(t), y^'_2(t)=-k_2 y_2(t)y_5(t), y^'_3(t)=k_2 y_2(t)y_5(t), y^'_4(t)=k_4y_5(t), y^'_5(t)=k_1y_1(t)-k_2y_2(t)y_5(t)+k_3y^2_5(t)y_6(t)-k_4y_5(t), y^'_6(t)=k_2y_2(t)y_5(t)-k_3y^2_5(t)y_6(t), y^0=[ 10; 10; 0; 0; 0.1; 0.1 ]. The differential system (<ref>) falls in the form (<ref>), setting p_32(y) = d_23(y) = k_2 y_2 y_5, p_45(y) = d_54(y) = k_4 y_5, p_51(y) = d_15(y) = k_1 y_1, p_56(y) = d_65(y) = k_3 y^2_5 y_6, p_65(y) = d_56(y) = k_2 y_2 y_5, and p_ij(y) = d_ji(y) = 0 for all other combinations of i and j in [1,6]∩ℕ. Resorting to <cit.>, the existence of a unique non-negative solution to (<ref>) can be proved. For this test problem the σ-embedding technique (<ref>) is not suitable for use due to the presence of zero components in the initial value y^0. As already pointed out, we set y^0_3=y^0_4=realmin≈2.23· 10^-308.
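To illustrate how such a production–destruction decomposition is encoded in practice, the following sketch assembles the Brusselator production matrix P(y) and the destruction row sums D(y)e directly from the terms listed above (0-based indices), and replaces the zero components of y^0 by the smallest positive normalized double, as just discussed. The function names are illustrative and this is not the authors' code.

```python
import numpy as np

def brusselator_P(y, k1=1.0, k2=1.0, k3=1.0, k4=1.0):
    """Production matrix P(y) with entries p_ij(y); indices shifted to 0-based."""
    P = np.zeros((6, 6))
    P[2, 1] = k2 * y[1] * y[4]          # p_32 = k2 y2 y5
    P[3, 4] = k4 * y[4]                 # p_45 = k4 y5
    P[4, 0] = k1 * y[0]                 # p_51 = k1 y1
    P[4, 5] = k3 * y[4] ** 2 * y[5]     # p_56 = k3 y5^2 y6
    P[5, 4] = k2 * y[1] * y[4]          # p_65 = k2 y2 y5
    return P

def brusselator_De(y, **rates):
    """Destruction row sums (D(y)e)_i; since d_ij = p_ji, these are the column sums of P."""
    return brusselator_P(y, **rates).sum(axis=0)

# Initial state with zero components replaced by the smallest positive
# normalized double (the 'realmin' expedient mentioned above).
y0 = np.array([10.0, 10.0, 0.0, 0.0, 0.1, 0.1])
y0[y0 == 0.0] = np.finfo(float).tiny
```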
The simulation of the system (<ref>) by the MPLM-10(6) method with h=1.25· 2^-5, assuming t∈ [0,10] and k_l=1, l=1,…,4, results in the numerical solution of Figure <ref>. According to the theoretical findings of Lemmas <ref> and <ref>, both the positivity and the conservativity are guaranteed. Furthermore, the results of Table <ref> and Figures <ref> and <ref> underline the effectiveness of the choice to slightly perturb the initial state. The work precision diagram of Figure <ref> confirms that, assuming the same mean execution time, the MPLM-k(p) methods for p≥ 3 attain higher accuracy than the MPRK3 scheme. Figure <ref> shows, for p≥ 3, a comparison between the MPLM-k(p) and the MPDeC(p) methods in terms of the accuracy-cost trade-off, highlighting the advantages of the former in terms of computational efficiency. Here, the mean error is computed by following (<ref>). The outcomes of the comparison, which differ from those obtained in Test 1 and Test 2, align with the inherent characteristics of each scheme. Specifically, MPDeC methods address multiple linear systems at each step, whereas MPLM methods necessitate an initialization procedure and the calculation of the PWDs. Consequently, for systems with smaller dimensions, the former demonstrate better efficiency. Conversely, as the number of components increases, MPLM methods exhibit reduced computational complexity and become more advantageous. §.§ Test 4: SACEIRQD COVID-19 model Our fourth test problem deals with the modified Susceptible-Infected-Recovered-Dead epidemic model S^'(t)=-(α+(β I(t)+σ A(t))/N_P+η) S(t), A^'(t)=-τ A(t) +ξ E(t), C^'(t)=α S(t)-μ C(t), E^'(t)=((β I(t)+σ A(t))/N_P+η) S(t)+μ C(t)-(γ + ξ)E(t), I^'(t)=τ A(t)+γ E(t) -δ I(t), R^'(t)=λ Q(t), Q^'(t)=δ I(t)-λ Q(t)-k_d Q(t), D^'(t)=k_d Q(t). Originally introduced in <cit.> to analyze COVID-19 data, it incorporates the effect of asymptomatic infections and the influence of containment, isolation and quarantine measures on the spread of the disease. The non-negative state variables of the model (<ref>) represent the sizes at time t (days) of the eight disjoint compartments into which a closed population of N_P individuals is partitioned: susceptible (S), asymptomatic (A), confined (C), exposed (E), infected (I), recovered (R), quarantined (Q), dead (D). The absence of migration turnover in the model leads to the conservation law S(t)+A(t)+C(t)+E(t)+I(t)+R(t)+Q(t)+D(t)=N_P, ∀ t≥ 0. We refer to <cit.> for the details on the physical interpretation of the positive constants α, β, γ, δ, σ, η, τ, ξ, μ, λ and k_d. In order to reformulate the system (<ref>) as a PDS, we introduce the function y(t)=(S(t),A(t),C(t),E(t),I(t),R(t),Q(t),D(t))^𝖳 and the production-destruction terms as follows p_24(y)=d_42(y)=ξ y_4, p_31(y)=d_13(y)=α y_1, p_41(y)=d_14(y)=y_1(η+(β y_5+σ y_2)/N_P), p_43(y)=d_34(y)=μ y_3, p_52(y)=d_25(y)=τ y_2, p_54(y)=d_45(y)=γ y_4, p_67(y)=d_76(y)=λ y_7, p_75(y)=d_57(y)=δ y_5, p_87(y)=d_78(y)=k_d y_7 and p_ij(y) = d_ji(y) = 0 for all other combinations of i,j ∈ [1,8]∩ℕ. In <cit.> the parameters of the model were fitted to the COVID-19 time series datasets of infected, recovered and death cases for different countries. Here, we adopt for the parameters of the model the values provided by the authors for Italy, i.e. N_P=6.046 · 10^7, α=0.0194, β=7.567, μ=2.278 · 10^-6, η=9.180 · 10^-7, σ=1.4633 · 10^-3, τ=1.109 · 10^-4, ξ=0.263, γ=0.021, δ=0.077, λ_0=0.157, λ_1=0.025, k_d0=0.779, k_d1=0.061, y^0=(60459997,0,0,1,1,0,1,0)^𝖳.
Furthermore, we set λ=10^-4λ_0 ∫_0^10^4 e^-λ_1 t dt and k_d=10^-4k_d0∫_0^10^4 e^-k_d1 t dt. The mathematical statements concerning the positivity (Lemma <ref>) and conservativity (Lemma <ref>) of the MPLM methods are confirmed in Figure <ref>, where the numerical simulation of (<ref>) by the MPLM-10(6) for t∈ [0,180] is reported. For this test, to avoid the issues of null components in the initial value, we set y_i^0=realmin≈2.23· 10^-308 for i=2,3,6,8. Furthermore, due to the presence of large values in the components of the solution, we consider the relative maximum error e(h)=E(h)/max_0≤ n ≤ T/h‖y^n(ref)‖_∞, with E(h) defined in (<ref>). From Table <ref> and Figures <ref> and <ref> it is clear that the numerical solution behaves in accordance with the theoretical results and the experimental order of convergence coincides with the expected one. Moreover, the work precision diagram of Figure <ref> confirms, also for this test, the trends observed in Test 3. The efficient integration of (<ref>)-(<ref>) using the MPDeC codes in <cit.> is not feasible and results in a numerical solution exhibiting several NaN values. To overcome this issue, we consider the system (<ref>) with the same parameters in (<ref>) but with modified initial values y̅^0=(60459997,10^-10,10^-10,1,1,10^-10,1,10^-10)^𝖳, y̅̅̅^0=10^4 e. The work precision diagrams associated with the MPLM and MPDeC simulations of Test 4, initialized with y(0)=y̅^0 and y(0)=y̅̅̅^0, are depicted in Figures <ref> and <ref>, respectively. The plotted data indicate that, at least for p>3, MPLM methods require fewer computational resources than MPDeC schemes to achieve a predefined level of accuracy. On the basis of the comparative analysis we performed, the MPLM integrators emerge as more suitable for accurately simulating real-world scenarios of the infectious disease outbreak model (<ref>). Furthermore, even in the unrealistic case of y(0)=y̅̅̅^0, they demonstrate superior efficiency compared to MPDeC methods. §.§ Test 5: spatially heterogeneous diffusion equation With the aim of assessing the performance of MPLM schemes on larger-scale problems, we turn our attention to the following Partial Differential Equation (PDE) ∂ u(x,t)/∂ t-∂/∂ x(𝔇(x) ∂ u(x,t)/∂ x)=0, 0≤ x ≤ L, 0≤ t ≤ T. The PDE (<ref>) serves as a versatile mathematical framework for modeling a broad spectrum of phenomena extending beyond chemical diffusion and heat conduction <cit.>. For instance, in <cit.>, it is derived from the Black–Scholes model, thereby broadening its applications to quantitative finance. Similarly, in <cit.>, the same equation is utilized to describe nutrient uptake by plant root hairs. Furthermore, as argued in <cit.>, it may be applicable in simulating the onset of corrosion in concrete bridge beams due to chloride ion diffusion. Our experiment is conducted within a conservative setting, wherein we define initial conditions and Neumann zero-flux boundary conditions as follows u(x,0)=f(x), ∂ u/∂ x(0,t)=∂ u/∂ x(L,t)=0. In this scenario, the conservation law ∫_0^L u(x,t) dx = ∫_0^L f(x) dx, ∀ t≥ 0, holds true since integration with respect to x of both sides of (<ref>) leads to d/dt∫_0^L u(x,t) dx = 𝔇(L) ∂ u(L,t)/∂ x - 𝔇(0) ∂ u(0,t)/∂ x = 0.
In order to obtain a numerical solution to equation (<ref>)-(<ref>) that preserves positivity and satisfies a discrete equivalent of (<ref>), we introduce a finite volume semi-discretization for the spatial variable (see, for instance, <cit.>) and subsequently employ modified Patankar schemes to integrate the resulting system of ordinary differential equations. §.§.§ Spatial semi-discretization and conservative PDS Let Δ x>0 and {x_j}_j ≥ 0 be a uniform mesh such that x_j=(j+1/2)Δ x represents the center of a one-dimensional cell with edges x_j-1/2 and x_j+1/2. Consider the approximations v_j(t) of the average solution value over each grid cell, defined as v_j(t) ≈ u_j(t)=(1/Δ x)∫_x_j-1/2^x_j+1/2 u(x,t) dx, j=0,…, N_x, with Δ x N_x =L. Integrating the PDE (<ref>) over the j-th cell and dividing by Δ x yields the exact differential rule u^'_j(t)=- 1/Δ x(F_j+1/2(t)-F_j-1/2(t)), j=0,…, N_x, where F_j±1/2(t)= -𝔇(x_j±1/2)∂_x u(x_j±1/2,t) denotes the flux through the edges of the cell. A semi-discrete scheme is then derived by a midpoint approximation of the fluxes in (<ref>). Specifically, for the interior cells (j=1,…, N_x-1), we set v^'_j(t)=(𝔇(x_j+1/2)v_j+1(t)-(𝔇(x_j+1/2)+𝔇(x_j-1/2))v_j(t)+𝔇(x_j-1/2)v_j-1(t))/Δ x^2, while for the boundary ones the conditions (<ref>) become v^'_0(t)= 𝔇(x_1/2)(v_1(t)-v_0(t))/Δ x^2, v^'_N_x(t)= 𝔇(x_N_x-1/2)(v_N_x-1(t)-v_N_x(t))/Δ x^2. Here, fictitious cells with centers x_-1 and x_N_x +1, located just beyond the computational domain, have been introduced for deriving (<ref>). The second order semi-discretization scheme (<ref>)-(<ref>) corresponds to the linear system of ordinary differential equations v^'(t) =(1/Δ x^2)[ - 𝔇_1/2 𝔇_1/2 ; 𝔇_1/2 -𝔇_1/2-𝔇_3/2 𝔇_3/2 ; ⋱ ⋱ ⋱ ; 𝔇_N_x-3/2 -𝔇_N_x-3/2-𝔇_N_x-1/2 𝔇_N_x-1/2; 𝔇_N_x-1/2 -𝔇_N_x-1/2 ]·v(t) =A(Δ x) ·v(t), with v(t)=(v_0(t),…,v_N_x(t))^𝖳∈ℝ^N_x+1 and 𝔇_j=𝔇(x_j), for each j. The following result proves, for the solution to (<ref>), the semi-discrete counterpart of the conservation law (<ref>). Let u(x,t) be the continuous solution to (<ref>)-(<ref>) for (x,t)∈ [0,L]×[0,T], with positive L and T. Let {v_j(t)}_j ≥ 0 be its approximation (in the sense of (<ref>)) computed by (<ref>) with Δ x=L/N_x. Then, independently of Δ x>0, Δ x ∑_j=0^N_x v_j(t)=Δ x ∑_j=0^N_x f(x_j), ∀ t≥ 0. Given f=(f(x_0),…,f(x_N_x))^𝖳, the equality Δ x e^𝖳v(0)=Δ x e^𝖳f directly comes from the initial condition in (<ref>). Therefore, for assuring (<ref>), it suffices to show that (e^𝖳 v(t))^'=0. The matrix A∈ℝ^(N_x+1)× (N_x+1) is symmetric and satisfies Ae=0, hence e^𝖳v^'(t)=e^𝖳 (A v(t))=(A e)^𝖳v(t)=0^𝖳v(t)=0, which yields the result. The differential system (<ref>) can be rewritten as a PDS of the form (<ref>) with y(t)=v(t)∈ℝ^N_x+1, D(y(t))=P(y(t))^𝖳∈ℝ^(N_x+1)× (N_x+1) and P(y)=(1/Δ x^2)[ 0 y_2 𝔇_1/2 ; y_1 𝔇_1/2 0 y_3 𝔇_3/2 ; ⋱ ⋱ ⋱ ; y_N_x-2𝔇_N_x-3/2 0 y_N_x𝔇_N_x-1/2; y_N_x-1𝔇_N_x-1/2 0 ]. Thus, once the equivalence of (<ref>) with a fully conservative PDS is established, the property (<ref>) proved with Theorem <ref> automatically follows from (<ref>). §.§.§ Simulation results For our numerical experiments, we consider the PDE (<ref>)-(<ref>) with L=1, T=60 and initial condition f(x)=2-2sin^2(π2-14). Moreover, a space-dependent diffusion coefficient 𝔇(x)=D_0 (x-2/3)^2 tan^-1(2x-3)/(2x-3)+10^-5, D_0=10^-2, is introduced to simulate heterogeneous diffusion phenomena (see, for instance, <cit.> and references therein). Here, the solution to the PDE is approximated by integrating the PDS (<ref>)-(<ref>) with modified Patankar methods.
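As an illustration of the semi-discretization just described, the sketch below assembles the matrix A(Δx) from the face diffusivities and checks the discrete conservation property used in the proof above (zero row and column sums). The names are illustrative; this is a sketch of the construction described in the text, not the code used for the reported simulations.

```python
import numpy as np

def assemble_A(D_faces, dx):
    """Finite-volume matrix A(dx) such that v'(t) = A v(t).

    D_faces : array (Nx,), diffusivities D(x_{j+1/2}) at the interior faces
    Returns A of shape (Nx+1, Nx+1), tridiagonal, symmetric, with zero row sums.
    """
    n = D_faces.size + 1
    A = np.zeros((n, n))
    for j, d in enumerate(D_faces):      # face between cells j and j+1
        A[j, j] -= d
        A[j, j + 1] += d
        A[j + 1, j] += d
        A[j + 1, j + 1] -= d
    return A / dx ** 2

# Conservation check: A e = 0 and e^T A = 0, hence (e^T v)' = e^T A v = 0.
A = assemble_A(np.linspace(0.01, 0.02, 9), dx=0.1)
assert np.allclose(A.sum(axis=0), 0.0) and np.allclose(A.sum(axis=1), 0.0)
```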
Since the stability investigation of the semi-discretization (<ref>) falls outside the scope of this work, we empirically adjust the temporal step size h to be smaller than the spatial one Δ x. The outcomes of the simulation by the MPLM-7(5) scheme with h=5· 10^-4 and Δ x=5· 10^-3 are presented in Figure <ref>. Table <ref> presents the mean errors, as defined in (<ref>), along with the execution times for the Julia implementations of both the MPLM and MPDeC methods. The numerical residual r(h)=Δ x max_0≤ n≤T/h|∑_j=0^N_x(v_j(t_n)-f(x_j))| on the semi-discrete conservation law (<ref>) is reported there as well. The experimental results reveal that the MPLM methods demonstrate superior performance (cf. Figure <ref>) and exhibit a more regular behavior of the residual r(h). § CONCLUSIONS AND PERSPECTIVES In this manuscript, we introduced accurate linearly implicit time integrators specifically designed for production-destruction differential systems. Notably, the extension of the modified Patankar technique to multistep schemes has resulted in conservative numerical methods which retain, with no restrictions on the discretization steplength, the positivity of the solution and the linear invariant of the system. We carried out a theoretical investigation of the properties of the Patankar weight denominators which ensure the consistency and convergence of the proposed schemes. Additionally, we devised an embedding technique to practically compute the PWDs and achieve arbitrarily high order of convergence. The numerical tests conducted on various problems provided experimental confirmation of the theoretical findings. The comparison with the third-order modified Patankar Runge-Kutta method presented in <cit.> highlighted the superior performance of the proposed MPLM-k(p) integrators. Furthermore, the MPLM-k(p) schemes proved to be competitive with the high order modified Patankar deferred correction discretizations in <cit.>, especially in the case of high-dimensional systems and vanishing initial states. Given the results of this paper, MPLM-k(p) methods demonstrate considerable potential for the numerical integration of production-destruction systems. However, several aspects require deeper analysis, which we plan to address in future work. First of all, the optimization of the coefficient selection for the underlying linear multistep method may be investigated, considering various combinations of k and p to ensure both the positivity constraint and a high order of convergence. The possibility of an extension to include negative coefficients may be considered as well. Furthermore, since the methods of Table <ref> exhibit reduced efficiency on stiff ODE tests such as the Robertson problem <cit.>, a comprehensive MPLM stability analysis becomes necessary. In this regard, the implementation of variable stepsize approaches and local error control strategies may prove to be of high interest. § APPENDIX A. CONSISTENCY NECESSARY CONDITION In Section <ref> we introduced, with Theorem <ref>, a condition on the Patankar weight denominators which leads to order p consistent MPLM schemes. Here, our objective is to prove Theorem <ref>, showing that (<ref>) represents a necessary requirement for consistency as well. To do that, we consider the particular production-destruction system (<ref>) with p_ij(y)=μ y_i^* if i=j^* and j=i^*, and p_ij(y)=0 otherwise, and with d_ij(y)=μ y_i^* if i=i^* and j=j^*, and d_ij(y)=0 otherwise, where i^* and j^* are fixed indices in {1,…,N} and μ is a given positive constant.
It can be proved that y_i^*(t)=exp(-μ t), y_j^*(t)=2-exp(-μ t), and y_l(t)=1 for 1≤ l≤ N, l≠ i^*, l ≠ j^*, is the unique solution of the positive and fully conservative PDS (<ref>)-(<ref>). The following investigation outlines the behaviour of the MPLM discretizations of the class (<ref>) applied to (<ref>)-(<ref>). The order p convergence of the underlying LM discretization and the consistency hypothesis on the MPLM-k method (<ref>) imply δ_i^LM(h;t_n)=𝒪(h^p+1) and δ_i(h;t_n)=𝒪(h^p+1), n≥ k, i=1,…,N, where the local errors δ_i^LM(h;t_n) and δ_i(h;t_n) are defined in (<ref>) and (<ref>), respectively. Subtracting the former from the latter yields ∑_r=1^kβ_r∑_j=1^N(p_ij(y(t_n-r))(1-y_j(t_n)/σ_j(y(t_n-1),…,y(t_n-k)))) - ∑_r=1^kβ_r∑_j=1^N(d_ij(y(t_n-r))(1-y_i(t_n)/σ_i(y(t_n-1),…,y(t_n-k))))=𝒪(h^p), for i=1,…,N and n≥ k. Furthermore, for the particular PDS (<ref>), taking i=i^*, μ∑_r=1^kβ_r y_i^*(t_n-r)(1-y_i^*(t_n)/σ_i^*(y(t_n-1),…,y(t_n-k)))=𝒪(h^p). Therefore, the result comes from the positivity of the system and the arbitrariness of the choice of 1≤ i^*≤ N. § ACKNOWLEDGMENT This work was supported by the Italian MUR under the PRIN 2022 project No. 2022N3ZNAX and the PRIN 2022 PNRR project No. P2022WC2ZZ, and by the INdAM under the GNCS Project E53C23001670001. axelsson1996iterative O. Axelsson, Iterative Solution Methods, Cambridge University Press, 1994. DOI: https://doi.org/10.1017/CBO978051162410010.1017/CBO9780511624100. Bonaventura2017 L. Bonaventura and A. Della Rocca, "Unconditionally Strong Stability Preserving Extensions of the TR-BDF2 Method," Journal of Scientific Computing, vol. 70, no. 2, pp. 859-895, Feb. 2017. DOI: https://doi.org/10.1007/s10915-016-0267-910.1007/s10915-016-0267-9. BURCHARD2006 H. Burchard, K. Bolding, W. Kühn, A. Meister, T. Neumann, and L. Umlauf, "Description of a flexible and extendable physical–biogeochemical model system for the water column," Journal of Marine Systems, vol. 61, no. 3, pp. 180-211, 2006. DOI: https://doi.org/10.1016/j.jmarsys.2005.04.01110.1016/j.jmarsys.2005.04.011. MPEuler H. Burchard, E. Deleersnijder, and A. Meister, "A high-order conservative Patankar-type discretisation for stiff systems of production–destruction equations," Applied Numerical Mathematics, vol. 47, no. 1, pp. 1-30, 2003. DOI: https://doi.org/10.1016/S0168-9274(03)00101-610.1016/S0168-9274(03)00101-6. Burchard2005 H. Burchard, E. Deleersnijder, and A. Meister, "Application of modified Patankar schemes to stiff biogeochemical models for the water column," Ocean Dynamics, vol. 55, no. 3, pp. 326-337, 2005. DOI: https://doi.org/10.1007/s10236-005-0001-x10.1007/s10236-005-0001-x. CAMPOS2021751 E. L. Campos, R. P. Cysne, A. L. Madureira, and G. L. Q. Mendes, "Multi-generational SIR modeling: Determination of parameters, epidemiological forecasting and age-dependent vaccination policies," Infectious Disease Modelling, vol. 6, pp. 751-765, 2021. DOI: https://doi.org/10.1016/j.idm.2021.05.00310.1016/j.idm.2021.05.003. Chartres B. Chartres and R. Stepleman, "A General Theory of Convergence for Numerical Methods," SIAM Journal on Numerical Analysis, vol. 9, no. 3, pp. 476–492, 1972. DOI: https://doi.org/10.1137/070904310.1137/0709043. NSFD_PDS_1 D. Dimitrov and H. Kojouharov, "Dynamically consistent numerical methods for general productive–destructive systems," Journal of Difference Equations and Applications, vol. 17, no. 12, pp. 1721-1736, Dec. 2011. DOI: https://doi.org/10.1080/1023619100378194710.1080/10236191003781947. Bridge M. P. Enright and D. M.
Frangopol, "Probabilistic analysis of resistance degradation of reinforced concrete bridge beams under corrosion," Engineering Structures, vol. 20, no. 11, pp. 960-971, 1998. DOI: https://doi.org/10.1016/S0141-0296(97)00190-910.1016/S0141-0296(97)00190-9. fasham1990nitrogen M. J. R. Fasham, H. Ducklow, and S. M. McKelvie, "A nitrogen-based model of plankton dynamics in the oceanic mixed layer," Journal of Marine Research, vol. 48, pp. 591-639, Aug. 1990. DOI: https://doi.org/10.1357/00222409078498467810.1357/002224090784984678. FORMAGGIA2011 L. Formaggia and A. Scotti, "Positivity and Conservation Properties of Some Integration Schemes for Mass Action Kinetics," SIAM Journal on Numerical Analysis, vol. 49, no. 3, pp. 1267-1288, 2011. DOI: https://doi.org/10.1137/10078959210.1137/100789592. Hairer E. Hairer, S. Norsett, and G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, vol. 8, Springer Berlin Heidelberg, 1993. DOI: https://doi.org/10.1007/978-3-540-78862-110.1007/978-3-540-78862-1. anderHeiden1982 U. An der Heiden and M. C. Mackey, "The dynamics of production and destruction: Analytic insight into complex behavior," Journal of Mathematical Biology, vol. 16, no. 1, pp. 75-101, 1982. DOI: https://doi.org/10.1007/BF0027516210.1007/BF00275162. HENSE20102330 I. Hense and A. Beckmann, "The representation of cyanobacteria life cycle processes in aquatic ecosystem models," Ecological Modelling, vol. 221, no. 19, pp. 2330-2338, 2010. DOI: https://doi.org/10.1016/j.ecolmodel.2010.06.01410.1016/j.ecolmodel.2010.06.014. Higham2008 D. J. Higham, "Modeling and Simulating Chemical Reactions," SIAM Review, vol. 50, no. 2, pp. 347-368, 2008. DOI: https://doi.org/10.1137/06066645710.1137/060666457. Huang2019 J. Huang and C.-W. Shu, "Positivity-Preserving Time Discretizations for Production–Destruction Equations with Applications to Non-equilibrium Flows," Journal of Scientific Computing, vol. 78, no. 3, pp. 1811-1839, Mar. 2019. DOI: https://doi.org/10.1007/s10915-018-0852-110.1007/s10915-018-0852-1. Huang2019II J. Huang, W. Zhao, and C.-W. Shu, "A Third-Order Unconditionally Positivity-Preserving Scheme for Production–Destruction Equations with Applications to Non-equilibrium Flows," Journal of Scientific Computing, vol. 79, no. 2, pp. 1015-1056, May 2019. DOI: https://doi.org/10.1007/s10915-018-0881-910.1007/s10915-018-0881-9. IzginSSPMPRK J. Huang, T. Izgin, S. Kopecz, A. Meister, and C.-W. Shu, "On the stability of strong-stability-preserving modified Patankar-Runge-Kutta schemes," ESAIM: M2AN, vol. 57, no. 2, pp. 1063-1086, 2023. DOI: https://doi.org/10.1051/m2an/202300510.1051/m2an/2023005. GeCo_Stab T. Izgin, S. Kopecz, A. Martiradonna, and A. Meister, "On the dynamics of first and second order GeCo and gBBKS schemes," Applied Numerical Mathematics, vol. 193, pp. 43-66, 2023. DOI: https://doi.org/10.1016/j.apnum.2023.07.01410.1016/j.apnum.2023.07.014. Izgin1 T. Izgin, S. Kopecz, and A. Meister, "Recent Developments in the Field of Modified Patankar-Runge-Kutta-methods," PAMM, vol. 21, no. 1, pp. e202100027, 2021. DOI: https://doi.org/10.1002/pamm.20210002710.1002/pamm.202100027. Izgin3 T. Izgin, S. Kopecz, and A. Meister, "On the Stability of Unconditionally Positive and Linear Invariants Preserving Time Integration Schemes," SIAM Journal on Numerical Analysis, vol. 60, no. 6, pp. 3029-3051, 2022. DOI: https://doi.org/10.1137/22M148031810.1137/22M1480318. Izgin2 T. Izgin, S. Kopecz, and A. 
Meister, "On Lyapunov stability of positive and conservative time integrators and application to second order modified Patankar-Runge-Kutta schemes," ESAIM: M2AN, vol. 56, no. 3, pp. 1053-1080, 2022. DOI: https://doi.org/10.1051/m2an/202203110.1051/m2an/2022031. Joyner C. D. Joyner, "Black-Scholes Equation and Heat Equation," 2016, Honors College Thesis, Georgia Southern University. [Online]. Available: https://digitalcommons.georgiasouthern.edu/honors-theses/548https://digitalcommons.georgiasouthern.edu/honors-theses/548. refId0 J. S. Klar and J. P. Mücket, "A detailed view of filaments and sheets in the warm-hot intergalactic medium - I. Pancake formation," A&A, vol. 522, pp. A114, 2010. DOI: https://doi.org/10.1051/0004-6361/20101404010.1051/0004-6361/201014040. Kopecz2018 S. Kopecz and A. Meister, "On order conditions for modified Patankar–Runge–Kutta schemes," Applied Numerical Mathematics, vol. 123, pp. 159-179, 2018. DOI: https://doi.org/10.1016/j.apnum.2017.09.00410.1016/j.apnum.2017.09.004. Kopecz2018second S. Kopecz and A. Meister, "Unconditionally positive and conservative third order modified Patankar–Runge–Kutta discretizations of production–destruction systems," BIT Numerical Mathematics, vol. 58, no. 3, pp. 691-728, 2018. DOI: https://doi.org/10.1007/s10543-018-0705-110.1007/s10543-018-0705-1. Kopecz2019 S. Kopecz and A. Meister, "On the existence of three-stage third-order modified Patankar–Runge–Kutta schemes," Numerical Algorithms, vol. 81, no. 4, pp. 1473-1484, 2019. DOI: https://doi.org/10.1007/s11075-019-00680-310.1007/s11075-019-00680-3. Kou2005 S. C. Kou, B. J. Cherayil, W. Min, B. P. English, and X. Sunney Xie, "Single-Molecule Michaelis-Menten Equations," The Journal of Physical Chemistry B, vol. 109, no. 41, pp. 19068-19081, 2005. DOI: https://doi.org/10.1021/jp051490q10.1021/jp051490q. Lambert_Vecchio J. D. Lambert, Computational Methods in Ordinary Differential Equations, John Wiley, 1972, Introductory Mathematics for Scientist and Engineers. Bruss R. Lefever and G. Nicolis, "Chemical instabilities and sustained oscillations," Journal of Theoretical Biology, vol. 30, no. 2, pp. 267-284, 1971. DOI: https://doi.org/10.1016/0022-5193(71)90054-310.1016/0022-5193(71)90054-3. Piante D. Leitner, S. Klepsch, M. Ptashnyk, A. Marchant, G. J. D. Kirk, A. Schnepf, and T. Roose, "A Dynamic Model of Nutrient Uptake by Root Hairs," The New Phytologist, vol. 185, no. 3, pp. 792-802, 2010. DOI: https://doi.org/10.1111/j.1469-8137.2009.03013.x10.1111/j.1469-8137.2009.03013.x. LeVeque_2002 R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, Cambridge University Press, 2002. DOI: https://doi.org/10.1017/cbo978051179125310.1017/cbo9780511791253. MARTIRADONNA2020 A. Martiradonna, G. Colonna, and F. Diele, "GeCo: Geometric Conservative nonstandard schemes for biochemical systems," Applied Numerical Mathematics, vol. 155, pp. 38-57, 2020. DOI: https://doi.org/10.1016/j.apnum.2019.12.00410.1016/j.apnum.2019.12.004. Torlo_Rep P. Öffner and D. Torlo, "Remi group / Deferred Correction Patankar scheme. GitLab," Aug. 2019. [Online]. Available: https://git.math.uzh.ch/abgrall_group/deferred-correction-patankar-schemehttps://git.math.uzh.ch/abgrall_group/deferred-correction-patankar-scheme. Torlo2020 P. Öffner and D. Torlo, "Arbitrary high-order, conservative and positivity preserving Patankar-type deferred correction schemes," Applied Numerical Mathematics, vol. 153, pp. 15-34, 2020. DOI: https://doi.org/10.1016/j.apnum.2020.01.02510.1016/j.apnum.2020.01.025. 
patankar1980numerical S. V. Patankar, Numerical Heat Transfer and Fluid Flow, Hemisphere Publishing Corporation (CRC Press, Taylor & Francis Group), 1980. physics3020028 R. Schlickeiser and M. Kröger, "Analytical Modeling of the Temporal Evolution of Epidemics Outbreaks Accounting for Vaccinations," Physics, vol. 3, no. 2, pp. 386-426, 2021. DOI: https://doi.org/10.3390/physics302002810.3390/physics3020028. Semeniuk K. Semeniuk and A. Dastoor, "Development of a global ocean mercury model with a methylation cycle: Outstanding issues," Global Biogeochemical Cycles, vol. 31, no. 2, pp. 400-433, 2017. DOI: https://doi.org/10.1002/2016GB00545210.1002/2016GB005452. Sen2021 D. Sen and D. Sen, "Use of a Modified SIRD Model to Analyze COVID-19 Data," Industrial & Engineering Chemistry Research, vol. 60, no. 11, pp. 4251-4260, Mar. 2021. DOI: https://doi.org/10.1021/acs.iecr.0c0475410.1021/acs.iecr.0c04754. Shu_Form C.-W. Shu and S. Osher, "Efficient implementation of essentially non-oscillatory shock-capturing schemes," Journal of Computational Physics, vol. 77, no. 2, pp. 439-471, 1988. DOI: https://doi.org/10.1016/0021-9991(88)90177-510.1016/0021-9991(88)90177-5. TORLO2022 D. Torlo, P. Öffner, and H. Ranocha, "Issues with positivity-preserving Patankar-type schemes," Applied Numerical Mathematics, vol. 182, pp. 117-147, 2022. DOI: https://doi.org/10.1016/j.apnum.2022.07.01410.1016/j.apnum.2022.07.014. Calore B.-L. Wang and Y.-W. Mai, "Transient one-dimensional heat conduction problems solved by finite element," International Journal of Mechanical Sciences, vol. 47, no. 2, pp. 303-317, 2005. DOI: https://doi.org/10.1016/j.ijmecsci.2004.11.00110.1016/j.ijmecsci.2004.11.001. Warns A. Warns, I. Hense, and A. Kremp, "Modelling the life cycle of dinoflagellates: a case study with Biecheleria baltica," Journal of Plankton Research, vol. 35, no. 2, pp. 379-392, Dec. 2012. DOI: https://doi.org/10.1093/plankt/fbs09510.1093/plankt/fbs095. Molly M. Wolfson, E. R. Liepold, B. Lin, and S. A. Rice, "A comment on the position dependent diffusion coefficient representation of structural heterogeneity," The Journal of Chemical Physics, vol. 148, no. 19, pp. 194901, 2018. DOI: https://doi.org/10.1063/1.502592110.1063/1.5025921. NSFD_PDS_2 D. Wood, D. Dimitrov, and H. Kojouharov, "A nonstandard finite difference method for n-dimensional productive–destructive systems," Journal of Difference Equations and Applications, vol. 21, no. 3, pp. 1-19, Mar. 2015. DOI: https://doi.org/10.1080/10236198.2014.99722810.1080/10236198.2014.997228. zhu2022MPLM F. Zhu, J. Huang, and Y. Yang, "Bound-preserving discontinuous Galerkin methods with modified Patankar time integrations for chemical reacting flows," Communications on Applied Mathematics and Computation, vol. 2023, no. 2, pp. 06, 2023. DOI: https://doi.org/10.1007/s42967-022-00231-z10.1007/s42967-022-00231-z.
http://arxiv.org/abs/2407.12944v1
20240717182134
Enhanced optical properties of MoSe$_2$ grown by molecular beam epitaxy on hexagonal boron nitride
[ "C. Vergnaud", "V. Tiwari", "L. Ren", "T. Taniguchi", "K. Watanabe", "H. Okuno", "I. Gomes de Moraes", "A. Marty", "C. Robert", "X. Marie", "M. Jamet" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Univ. Grenoble Alpes, CEA, CNRS, Grenoble INP, IRIG-Spintec, 38000 Grenoble, France Univ. Toulouse, INSA-CNRS-UPS, LPCNO, 31077 Toulouse, France Univ. Toulouse, INSA-CNRS-UPS, LPCNO, 31077 Toulouse, France National Institute for Materials Science, Tsukuba 305-0047, Ibaraki, Japan National Institute for Materials Science, Tsukuba 305-0047, Ibaraki, Japan Univ. Grenoble Alpes, CEA, IRIG-MEM, 38000 Grenoble, France Univ. Grenoble Alpes, CEA, CNRS, Grenoble INP, IRIG-Spintec, 38000 Grenoble, France Univ. Grenoble Alpes, CEA, CNRS, Grenoble INP, IRIG-Spintec, 38000 Grenoble, France Univ. Toulouse, INSA-CNRS-UPS, LPCNO, 31077 Toulouse, France Univ. Toulouse, INSA-CNRS-UPS, LPCNO, 31077 Toulouse, France Univ. Grenoble Alpes, CEA, CNRS, Grenoble INP, IRIG-Spintec, 38000 Grenoble, France matthieu.jamet@cea.fr. § ABSTRACT Transition metal dichalcogenides (TMD) like MoSe_2 exhibit remarkable optical properties such as intense photoluminescence (PL) in the monolayer form. To date, narrow-linewidth PL is only achieved in micrometer-sized exfoliated TMD flakes encapsulated in hexagonal boron nitride (hBN). In this work, we develop a growth strategy to prepare monolayer MoSe_2 on hBN flakes by molecular beam epitaxy in the van der Waals regime. It constitutes the first step towards the development of large area single crystalline TMDs encapsulated in hBN for potential integration in electronic or opto-electronic devices. For this purpose, we define a two-step growth strategy to achieve monolayer-thick MoSe_2 grains on hBN flakes. The high quality of MoSe_2 allows us to detect very narrow PL linewidths down to 5.5 meV at 13 K, comparable to those of encapsulated exfoliated MoSe_2 flakes. Moreover, sizeable PL can be detected at room temperature as well as clear reflectivity signatures of A, B and charged excitons. Enhanced optical properties of MoSe_2 grown by molecular beam epitaxy on hexagonal boron nitride M. Jamet July 22, 2024 ================================================================================================ Hexagonal boron nitride (hBN) has been identified as a key material to encapsulate 2D materials and enhance their electronic properties such as carrier mobility <cit.> or photoluminescence (PL) <cit.>. Moreover, owing to its flatness, inertness and large bandgap, hBN constitutes an ideal substrate for the van der Waals epitaxy of other 2D materials like transition metal dichalcogenides (TMD) of general formula MX_2 with M=Mo, W and X=S, Se. It also exhibits a 3-fold symmetry matching that of TMDs in the 2H phase, which could prevent the formation of twin domains and boundaries and thus help reach high-quality TMD monolayers. Previous works reported the molecular beam epitaxy (MBE) growth of MoSe_2 monolayers on hBN flakes exfoliated from the bulk crystal <cit.>. In Ref., Poh et al. have grown MoSe_2 monolayers on hBN at two different temperatures, 250°C and 500°C, at deposition rates of 0.6 and 1.3 ML/h, respectively, while keeping a constant Se:Mo ratio of 20. They found that the film grown at 500°C exhibits highly oriented large grains that coalesce to form a continuous film. They could measure the photoluminescence at room temperature, showing the neutral exciton line with a full width at half maximum (FWHM) of ≈60 meV. In Ref., Pacuski et al.
have shown very narrow photoluminescence lines of MBE grown MoSe_2 monolayer (6.6 meV for the neutral exciton line at 10 K) demonstrating the ability of MBE to synthesize high quality TMD monolayers with electronic properties comparable to or even better than that of exfoliated TMD flakes. In this work, the authors proceeded in multi-steps alternating growths at 300°C to favor nucleation and annealings at 750°C to improve the crystal quality and clean the surface from the second layer. The Se:Mo ratio was between 100 and 1000. Here, we adopted another growth strategy. The full growth of one MoSe_2 layer on exfoliated hBN flakes was achieved at high temperature (800°C) in order to target both high crystalline quality and a limited formation of bilayers. However, due to the ultralow surface energy of hBN, the nucleation rate is negligible at such high temperature. To circumvent this issue, we optimized the nucleation step at 300°C with a low deposition rate. By this, we obtained a reasonable density of monolayer-thick nuclei at the hBN surface and we could proceed with the growth of the MoSe_2 monolayer at 800°C at very low deposition rate. We obtained neutral exciton photoluminescence linewidths of the order of 5.5 meV at 13 K, comparable to the one obtained in Ref. as well as clear signature of A, B and charged excitons in the reflectivity spectrum. Finally, we detected a sizeable photoluminescence signal at room temperature comparable to the one of mechanically exfoliated flakes with a FWHM of the order of 40 meV <cit.>. This work demonstrates the potential of MBE to grow high quality TMD monolayers on large areas constituting model systems to further study optically proximity effects. The sample preparation starts with the mechanical exfoliation of hBN flakes from high quality crystals (NIMS Tsukuba) using a dry-stamping technique <cit.> onto a 4×4 mm SiO_2(90nm)/Si substrate. The surface is covered with tens of hBN flakes in average with varying thickness (few nm up to 100 nm) and size (few μm up to 200 μm). Prior to the growth, the substrate is heated in ultrahigh vacuum (UHV) at 800°C during one hour to desorb all the contaminations from the surface, in particular the organic ones left by the mechanical exfoliation process. The base pressure in the molecular beam epitaxy reactor is in the low 10^-10 mbar range. Molybdenum is evaporated using an electron gun. The deposition rate of Mo controlled by a quartz balance monitor is kept in the 0.0025-0.005 Å/s range. The Se partial pressure (measured at the sample position thanks to a retractable gauge) is 10^-6 mbar giving a Se:Mo ratio in the 20-40 range. Considering this large Se:Mo ratio and the volatile character of Se, the deposition rate of Mo mostly determines the deposition rate of MoSe_2. The deposited MoSe_2 films grown at low temperature (<800°C) are systematically annealed at 800°C under Se flux (at 10^-6 mbar) to improve the crystalline quality. The deposition temperature is varied between 200°C and 800°C. After the MBE growth, a second hBN layer (top hBN, a few nm thick) is transferred on top of the structure by repeating the dry-stamping exfoliation technique. Tens of samples of encapsulated MoSe_2 monolayers have been fabricated and their properties were investigated by optical spectroscopy. Photoluminescence (PL) and differential reflectivity measurements were performed in a home built micro-spectroscopy set-up around a closed-cycle, low vibration attoDry cryostat with a temperature controller (T = 4–300 K). 
For PL, a HeNe laser (λ=633 nm) was used for excitation with a typical power of 30 μW and integration time of 30 seconds. The white light source for reflectivity measurements is a halogen lamp with a stabilized power supply. The emitted and/or reflected light was dispersed in a spectrometer and detected by a Si-CCD camera. The excitation spot diameter is of the order of 1 μm. We first calibrate the MoSe_2 deposition rate on graphene/SiC since we thoroughly studied this system in previous works <cit.>. In this case, the deposition temperature was 300°C followed by in situ annealing during 15 minutes at 800°C under Se flux. The Mo deposition rate is 0.005 Å/s and the Se partial pressure 10^-6 mbar. The completion of one monolayer of MoSe_2 (1 ML_Gr) with 100% coverage corresponds to an equivalent Mo thickness of 2.8 Å. Using the exact same growth conditions on hBN flakes, we obtain the layer morphology shown in Fig. <ref>a (resp. Fig. <ref>b-c) for 1 ML_Gr on SiO_2 (resp. hBN). On SiO_2, we find 100 % coverage as expected with very small MoSe_2 grains with diameters ranging from 10 to 20 nm. However, the coverage on hBN flakes is only ≈42 % with monolayer MoSe_2 grains of ≈170 nm in diameter. A small fraction of second layer can also be observed in Fig. <ref>c on top of the monolayer grains. Taking into account the first and second layers, the total amount of deposited MoSe_2 on hBN flakes as measured by AFM is only ≈0.55 ML_Gr. Hence, the sticking coefficient of MoSe_2 monomers (if the reaction between Mo and Se atoms occurs in the gas phase) and/or the residence time of MoSe_2 monomers on hBN are clearly less than on graphene and SiO_2. Moreover, the low nucleation density demonstrates the much higher monomers mobility on hBN than on graphene or SiO_2. In order to reach 100 % coverage, we deposited 4.5 ML_Gr as shown in Fig. <ref>d. In this case, MoSe_2 grains coalesce but they are 5 monolayers thick and exhibit a "pyramid" like shape. Based on this result, we then vary the growth temperature T_g with the objective to obtain 100 % coverage of monolayer MoSe_2. First, in order to improve the coverage, we lowered T_g from 300°C down to 200°C keeping the same Mo deposition rate and Se flux to increase the nucleation density. However, as shown in the inset of Fig. <ref>a, as-grown MoSe_2 grains exhibit highly dendritic shape which is detrimental to reach 100 % coverage. Annealing up to 800°C does not change drastically the dendritic shape of the grains. In comparison, as grown grains at 300°C show a compact shape in Fig. <ref>a. We thus consider that T_g=300°C is the minimum growth temperature to obtain compact MoSe_2 grains. The second objective is to stabilize a single layer of MoSe_2 on hBN to study optical properties (only monolayer exhibits direct bandgap). Starting from T_g=300°C, we first study the effect of annealing to suppress the second and third MoSe_2 layers in Fig. <ref>a. The result is shown in Fig. <ref>b: annealing only smoothens the grain edges thanks to higher monomer mobility but the second and third MoSe_2 layers remain on top of the grains. The second strategy consists then to increase T_g to avoid the formation of multilayers during the growth. However, when increasing the growth temperature above 400°C, the nucleation density decreases down to zero and hBN flakes are no more covered with MoSe_2 after depositing 1 ML_Gr (not shown). 
Finally, we followed a two-step method consisting in depositing 1 ML-thick compact MoSe_2 grains at 300°C (nucleation step) and growing the rest of the film up to 1 ML MoSe_2 at higher temperature to avoid the formation of multilayers (growth step). For this purpose, we carried out a thorough study of MoSe_2 nucleation at T_g=300°C varying the content of deposited MoSe_2 in ML_Gr as shown in Fig. <ref>. It should be noted that all the samples of Fig. <ref> were grown at 300°C and annealed at 800°C during 15 minutes to smoothen the grain edges. Increasing the total amount of deposited MoSe_2 up to 2 ML_Gr in Fig. <ref>a-d clearly increases the thickness of the grains from 1 to 3 ML as plotted in Fig. <ref>e. We conclude that a deposited thickness less than ≈0.7 ML_Gr is necessary to obtain monolayer thick grains even though their density is very low (2 per 500×500 nm^2 in Fig. <ref>a). To ensure monolayer thick MoSe_2 grains, we first deposited 0.1 ML_Gr of MoSe_2 at 300°C and study the effect of growth temperature T_g on the grain thickness and morphology in Fig. <ref>. The results are shown in Fig. <ref>a-b (resp. Fig. <ref>c-d) for T_g=600°C (resp. 750°C). Considering the very low sticking coefficient and/or residence time of MoSe_2 monomers at the hBN surface at these temperatures, we deposited thicker films of 6.1 ML_Gr at 600°C and 6.5 ML_Gr at 750°C respectively. For T_g=600°C (resp. 750°C), the grains exhibit an average thickness of 5 ML (resp. 4 ML). In order to still reduce the grain thickness down to 1 ML, we selected the highest growth temperature of 800°C. The first remarkable observation is the sharp decrease of the Raman A_1g peak full width at half maximum (FWHM) above T_g=600°C down to ≈1.1 cm^-1 for T_g=800°C as shown in Fig. <ref>e-f. It demonstrates the superior crystalline quality of MoSe_2 grains grown at 800°C on hBN flakes. The results for T_g=800°C are summarized in Fig. <ref>. The AFM image of Fig. <ref>a-c (resp. Fig. <ref>d) corresponds to sample 1 (resp. sample 2). In sample 1 (resp. sample 2), MoSe_2 was deposited during 1.5 hour (resp. 3 hours) corresponding to a total thickness of 12 ML_Gr (resp. 24 ML_Gr) after 0.1 ML_Gr (resp. 0.2 ML_Gr) nucleation step. The growth step was carried out at T_g=800°C with a Mo deposition rate of 0.0025 Å/s and a Se flux of 10^-6 mbar. In Fig. <ref>a and  <ref>d, we observe 500 nm and 1 μm large monolayer thick MoSe_2 grains on hBN flakes. They are covered with clusters (bright spots) and lines of height 8 nm and 1 nm respectively which most probably correspond to Mo-rich nanostructures forming due to the low Se:Mo atomic ratio at the surface of hBN flakes at 800°C. However, we could not decrease (resp. increase) the Mo deposition rate (resp. Se flux) any further with our setup. Averaging over larger areas, the coverage is ≈5 % for sample 1 and ≈10 % for sample 2 meaning that the full coverage with 1 ML thick MoSe_2 would take approximately 30 hours. In Fig. <ref>c, we show the AFM image of a second flake of sample 1 where the coverage is clearly higher with the coexistence of MoSe_2 mono- and bilayers along with Mo-rich nanoclusters and nanowires. This illustrates the coverage and thickness dispersion observed from flake to flake. At this stage, we can only speculate that this is due to the hBN flake surface quality in terms of point defects or steps. Fig. 
<ref>e-h illustrate the effect of increasing the deposited thickness at the nucleation step to 0.7 ML_Gr corresponding to the 1 ML-2 ML thickness transition in the graph of Fig. <ref>e as well as the dispersion of coverage from flake to flake. The MoSe_2 grain thickness varies from 1 to 3 ML and the coverage from ≈34 % to almost 100 %. In summary, ultralow thickness at the nucleation step (0.1-0.2 ML_Gr) and high temperature growth of 800°C are necessary to stabilize monolayer thick MoSe_2 grains on hBN flakes. In the following, we study the optical properties of the samples grown in these conditions. Fig. <ref>a presents the photoluminescence spectrum for a hBN/MoSe_2/hBN structure at T=13 K. It evidences clearly two peaks corresponding respectively to the recombination of neutral exciton (X^0) and charged exciton (X^T), in agreement with previous results <cit.>. Remarkably the neutral exciton PL shows a narrow linewidth. The Full Width at Half Maximum (FWHM) of the neutral exciton line is ≈5.5 meV, demonstrating the high quality of the structure. Although a little larger, this line width is comparable with that obtained with hBN encapsulated MoSe_2 monolayer obtained by mechanical exfoliation <cit.>. The PL spectra do not change much for different points of the sample (see two typical points in Fig. <ref>a). This demonstrates the good growth homogeneity on this hBN flake. Note that the slight change of the PL peak of the neutral and charged exciton observed in Fig. <ref>a also occurs in exfoliated samples (probably induced by small local strain variations). Using the same excitation conditions, ML MoSe_2 flakes exfoliated from the bulk crystal exhibit almost two orders of magnitude larger PL intensity. However, as shown in Fig. <ref>, the monolayer coverage is very low in our samples, of the order of few percents, which can partly explain this difference in PL intensity. In Fig. <ref>c, a reasonable PL intensity at room temperature is obtained and shown at two different positions on the hBN flake. The FWHM is typically 40 meV. For comparison, we show in Fig. <ref>e the PL spectrum at T=5 K of hBN encapsulated MoSe_2 grown at 300°C and annealed at 800°C corresponding to the sample of Fig. <ref>b. This spectrum was recorded with a laser energy of 100 μW and an integration time of 60 seconds. The PL spectrum FWHM for the neutral exciton is ≈20 meV demonstrating that growth at high temperature greatly narrows the PL spectrum following the same trend as the FWHM of the A_1g Raman peak in Fig. <ref>e. PL spectra FWHM as a function of temperature for three different samples are shown in Fig. <ref>g. For hBN encapsulated MoSe_2 grown at 800°C (in blue), the FWHM varies from 5.5 meV at 13 K to 40 meV at room temperature. For hBN encapsulated MoSe_2 grown at 300°C and annealed at 800°C (in red), the FWHM varies from 20 meV at 5 K to 70 meV at 250 K. Finally, for non-encapsulated MoSe_2 monolayer directly grown on SiO_2/Si <cit.> (in green), the PL FWHM varies from 45 meV at 5 K to 80 meV at room temperature. We can conclude that high temperature growth on hBN flakes drastically improves the optical properties of MoSe_2. Fig. <ref>h displays the differential reflectivity spectrum of a sample grown at high temperature. The peaks corresponding to the absorption of A and B neutral exciton are observed together with the absorption of the charged exciton at lower energy (oscillation below 1.65 eV). The PL peaks of both the neutral and charged exciton are shown in the inset. 
This confirms that the MoSe_2 monolayer is doped. In summary, we studied the growth of MoSe_2 on hBN flakes by van der Waals epitaxy in order to obtain monolayer coverage and intense and narrow-linewidth photoluminescence. We found a PL FWHM of 5.5 meV at 13 K and 40 meV at 300 K for the neutral exciton line. These results represent an improvement compared to previous works by MBE. MoSe_2 grown by MBE at low temperature (300°C-500°C) shows PL FWHM of 6.6 meV at 10 K (Ref. ) and 60 meV at 300 K (Ref. ). The linewidth is comparable to the one of MoS_2 grown by chemical vapor deposition with a PL FWHM of 5 meV at 4 K (Ref. ) but still remains a factor two larger than the one of exfoliated MoSe_2 flakes encapsulated in hBN (≈2 meV at 4 K in Refs. ). To obtain these results, we had to address two main constraints. First, the very low sticking coefficient and/or residence time of MoSe_2 monomers at the surface of hBN flakes during the growth leads to very low or even zero nucleation density. To circumvent this issue, we systematically started with a nucleation step at low substrate temperature to obtain monolayer-thick MoSe_2 grains. Second, the growth mainly proceeds in a multilayer mode at low and medium substrate temperatures. In order to limit the formation of MoSe_2 bilayers, we thus carried out the growth step at very high temperature (800°C). Following this two-step method, we achieved the growth of high quality MoSe_2 monolayers with partial coverage exhibiting narrow-linewidth photoluminescence spectra at low temperature and sizeable signal at room temperature. We also demonstrated clear A and B neutral exciton and charged exciton signatures in reflectivity measurements at low temperature. This work demonstrates that molecular beam epitaxy in the van der Waals regime of TMDs on hBN provides high quality crystals comparable to exfoliated ones over large areas. It paves the way for the study of proximity effects in large area and high crystalline quality van der Waals heterostructures. The authors acknowledge the support from the European Union’s Horizon 2020 research and innovation Programme under grant agreement No 881603 (Graphene Flagship), No 829061 (FET-OPEN NANOPOLY) and No 101079179 (DYNASTY). The French National Research Agency (ANR) is acknowledged for its support through the ANR-18-CE24-0007 MAGICVALLEY and ESR/EQUIPEX+ ANR-21-ESRE-0025 2D-MAG projects. The LANEF framework (No. ANR-10-LABX-0051) is acknowledged for its support through the project 2DMAT. § DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request. * § REFERENCES
http://arxiv.org/abs/2407.12550v1
20240717133113
UniTE: A Survey and Unified Pipeline for Pre-training ST Trajectory Embeddings
[ "Yan Lin", "Zeyu Zhou", "Yicheng Liu", "Haochen Lv", "Haomin Wen", "Tianyi Li", "Yushuai Li", "Christian S. Jensen", "Shengnan Guo", "Youfang Lin", "Huaiyu Wan" ]
cs.LG
[ "cs.LG" ]
§ ABSTRACT Spatio-temporal (ST) trajectories are sequences of timestamped locations, which enable a variety of analyses that in turn enable important real-world applications. It is common to map trajectories to vectors, called embeddings, before subsequent analyses. Thus, the quality of embeddings is very important. Methods for pre-training embeddings, which leverage unlabeled trajectories for training universal embeddings, have shown promising applicability across different tasks, thus attracting considerable interest. However, research progress on this topic faces two key challenges: a lack of a comprehensive overview of existing methods, resulting in several related methods not being well-recognized, and the absence of a unified pipeline, complicating the development of new methods and the analysis of existing ones. To overcome these obstacles and advance the field of pre-training of trajectory embeddings, we present UniTE, a survey and a unified pipeline for this domain. In doing so, we present a comprehensive list of existing methods for pre-training trajectory embeddings, which includes methods that either explicitly or implicitly employ pre-training techniques. Further, we present a unified and modular pipeline with publicly available underlying code, simplifying the process of constructing and evaluating methods for pre-training trajectory embeddings. Additionally, we contribute a selection of experimental results using the proposed pipeline on real-world datasets. Spatio-temporal data mining, trajectory embedding, pre-training, self-supervised learning. § INTRODUCTION A spatio-temporal (ST) trajectory is a sequence of sampled (location, time) pairs that captures the movement of an object. Figure <ref> shows an example vehicle trajectory 𝒯=⟨ (l_1,t_1), (l_2,t_2), …, (l_6,t_6)⟩, from which different movement-related information can be extracted, e.g., driving behavior, route preferences, and travel speed on the road segments traversed. Such information plays a crucial role in various Intelligent Transportation System (ITS) applications, including traffic condition prediction <cit.>, routing <cit.>, trajectory prediction <cit.>, anomaly detection <cit.>, etc. <cit.>. Efficient utilization of trajectory data in ITS applications increasingly relies on the use of machine learning to automate the extraction of movement information. While traditional machine learning approaches like random forests <cit.> often struggle to capture the complex interdependencies within spatio-temporal data, and while manual feature engineering is often resource-intensive, deep learning approaches, such as recurrent neural networks <cit.>, excel at extracting information from trajectories. Still, the success of deep learning approaches depends on the availability of accurate and comprehensive trajectory embeddings, i.e., d-dimensional latent vectors that represent trajectories. To train trajectory embeddings, the end-to-end approach <cit.> integrates embedding training with task-specific supervision. This approach is often preferred because it is relatively simple to implement. However, this approach requires large-scale labeled trajectory data and suffers from limited transferability across tasks, as indicated in Figure <ref>. In contrast, the pre-training approach <cit.> represents a promising alternative.
By training trajectory embeddings using task-invariant self-supervised tasks, the training can utilize the wealth of unlabeled trajectory data available, and the resulting embeddings can be shared across different tasks to improve overall effectiveness and efficiency, as indicated in Figure <ref>. Given the potential benefits, the pre-training approach has attracted increasing interest. Despite the growing interest, research on the pre-training of trajectory embeddings faces two key challenges that, if addressed, will accelerate advances. (1) Lack of a comprehensive survey. Pre-training is adopted widely for trajectory representation learning. While some methods are designed explicitly for learning universal trajectory embeddings that are shared across various downstream tasks <cit.>, many methods that focus on one specific downstream task also implicitly employ pre-training techniques for learning trajectory embeddings. For example, some methods use an auto-encoding pre-training framework to measure trajectory similarity and determine distances between trajectories within the embedding space <cit.>. However, such methods that use pre-training techniques implicitly are not widely recognized as pre-training methods. Consequently, the potential application of their resulting embeddings across different downstream tasks remains underexplored. Existing surveys and research on the pre-training of trajectory embeddings tend to focus on methods explicitly designed for learning universal embeddings, thus not considering the full scope of relevant methods. (2) Lack of a unified pipeline. Methods for the pre-training of trajectory embeddings are varied and often described and implemented using disparate frameworks. For example, methods based on contrastive learning <cit.> initiate the process by augmenting trajectories into categories of targets and positive and negative samples. These samples are subsequently transformed into latent embeddings by one or several encoders. The pre-training employs a contrastive loss function on these embeddings. In contrast, auto-encoding methods <cit.> initially utilize an encoder-decoder pair, followed by pre-training that applies a reconstruction loss to the output of the decoder. Moreover, the assessment of the effectiveness of embeddings is carried out on diverse tasks and under different experimental settings. Notably, most implicit methods for the pre-training of trajectory embeddings are evaluated using a single downstream task. These variations in methodologies and evaluation criteria complicate the analysis of existing approaches and represent a barrier to the straightforward development and implementation of new methods. To address these challenges and accelerate research on the pre-training of trajectory embeddings, we present a survey and a unified pipeline named Unified Trajectory Embeddings (UniTE). Initially, we present an extensive review of current methods used for the pre-training of trajectory embeddings, covering both explicit and implicit approaches. Following this, we present a unified and modular pipeline designed to standardize the implementation of existing methods and streamline the development of new ones. Additionally, the proposed pipeline facilitates the use of embeddings in diverse downstream tasks, allowing for straightforward evaluation and comparison of different methods. The underlying code is publicly available at <https://github.com/Logan-Lin/UniTE>. Related Surveys.
Despite the growing interest in the pre-training approach to trajectory representation learning and the increasing number of contributions on the pre-training of trajectory embeddings, there is a lack of surveys on this specific topic. While some existing surveys <cit.> provide a broad introduction to techniques, methodologies, and tasks utilized in trajectory data mining, they offer only limited coverage of the specific subject of pre-training. Recent surveys <cit.> have delved deeper into the field of trajectory deep learning, but again touch only briefly on the topic of the pre-training of trajectory embeddings. Surveys also exist that focus on specific aspects of trajectory data management or applications, such as trajectory prediction <cit.>, trajectory similarity computation <cit.>, and travel time estimation <cit.>. However, these surveys do not provide a detailed examination of the pre-training of trajectory embeddings. In summary, our main contributions are as follows: * We enrich the domain of the pre-training of trajectory embeddings by proposing the first comprehensive survey and unified pipeline on this topic. * We conduct an extensive survey spanning existing methods for the pre-training of trajectory embeddings, covering both explicit and implicit approaches. * We introduce a unified and modular pipeline for standardizing the implementation and evaluation of methods for the pre-training of trajectory embeddings. * We release the code and report on extensive experiments on several real-world trajectory datasets to illustrate the utility of UniTE. Overall, we hope that this survey and pipeline can accelerate research on trajectory pre-training. § PRELIMINARIES §.§ Pre-training of Embeddings Methods for the pre-training of embeddings <cit.> aim to equip models with prior knowledge of data features before adopting them for specific tasks. The main objective of pre-training embeddings is to encode input data into dense vectors that represent the underlying features in a more abstract and easily understandable form for downstream machine learning models. This is achieved by training the model on a pretext task, such as predicting the next word in a sentence in natural language processing (NLP) tasks <cit.> or recognizing objects in images without labels in computer vision (CV) tasks <cit.>. This process enables downstream models to develop a generalized understanding of the data, which can greatly enhance their performance on downstream tasks, even when given relatively small amounts of task-specific data. The pre-training of embeddings has gained widespread popularity in different domains, particularly in the NLP and CV domains. In NLP, models like word2vec <cit.> and BERT <cit.> utilize pre-trained embeddings to achieve state-of-the-art performance on tasks such as text classification, question answering, and language generation. In CV, techniques such as pre-trained Convolutional Neural Networks (CNNs) <cit.> are employed for image classification, object detection, and more, by learning from large-scale image datasets like ImageNet. §.§ Definitions A spatio-temporal trajectory records the movement of an object during a certain time span. Formally, a trajectory is represented as 𝒯=⟨ (l_1,t_1), (l_2,t_2), …, (l_N,t_N) ⟩. Here, N is the length of the trajectory and l_i=(lng_i, lat_i) represents the location of the i-th point. The timestamp t_i captures the time when the i-th point was recorded.
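To make the notation concrete, the following is a minimal sketch of how a trajectory and a trajectory encoder interface might be represented in code. This is a sketch assuming Python/NumPy; the coordinates, dimensionalities, and the placeholder random-projection "encoder" are purely illustrative and are not part of the UniTE codebase.

import numpy as np

# A trajectory T = <(l_1, t_1), ..., (l_N, t_N)> stored as an (N, 3) array,
# where each row holds (longitude, latitude, timestamp).
trajectory = np.array([
    [-8.6291, 41.1579, 1408039037.0],
    [-8.6301, 41.1585, 1408039052.0],
    [-8.6312, 41.1593, 1408039067.0],
])

N = trajectory.shape[0]        # trajectory length N
locations = trajectory[:, :2]  # l_i = (lng_i, lat_i)
timestamps = trajectory[:, 2]  # t_i

# A trajectory encoder f_theta maps a variable-length trajectory to a
# fixed-length embedding z_T in R^d. A random linear projection of simple
# per-trajectory statistics stands in for a learned encoder here.
d = 8
rng = np.random.default_rng(0)
W = rng.normal(size=(3, d))        # placeholder parameters theta
z_T = trajectory.mean(axis=0) @ W  # z_T has shape (d,) for any N
print(z_T.shape)                   # (8,)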
A trajectory dataset 𝕋 is a set of trajectories, where each trajectory 𝒯∈𝕋 has been collected in a specific geographical region and time frame. Given a trajectory 𝒯, its embedding is a fixed-length vector z_𝒯∈ℝ^d, where d is the dimensionality of the embedding. A trajectory encoder f_θ with learnable parameters θ is often used to map variable-length trajectories to their embeddings, i.e., f_θ(𝒯) = z_𝒯. A road network is modeled as a directed graph 𝒢=(𝒱, ℰ), where 𝒱 is a collection of nodes v_i that correspond to either an intersection of road segments or the end of a segment and ℰ is a set of edges s_i∈ℰ, each of which corresponds to a road segment connecting two nodes. An edge s_i=(v_j,v_k) is characterized by its starting and ending nodes. §.§ Problem Statement Pre-training of Trajectory Embeddings. Given a trajectory dataset 𝕋, a method for the pre-training of trajectory embeddings aims to develop a trajectory encoder f_θ that maps a trajectory 𝒯 to its embedding vector z_𝒯. The encoder is optimized using a specific pre-training objective. The optimization process can be formulated as follows: θ^* = argmin_θ∑_𝒯∈𝕋ℒ(f_θ(𝒯)), where ℒ is the pre-training objective, designed to be independent of any specific task. The trained trajectory encoder f_θ^* is then applied to downstream tasks, either through fine-tuning or using unsupervised schemes. § SURVEY ON THE PRE-TRAINING OF TRAJECTORY EMBEDDINGS We present a comprehensive inventory of existing methods for the pre-training of trajectory embeddings. This inventory includes explicit methods, designed to create universal trajectory embeddings for use in different downstream tasks, and implicit methods, crafted for specific downstream tasks while adopting pre-training techniques to acquire task-invariant embeddings. Considering the wide range of initial applications for these methods, we arrange them based on the pre-training frameworks they employ, as shown in Figure <ref>. We proceed to provide a concise overview of the features and implementation of these methods. §.§ word2vec-based Methods A classical language model from the NLP domain, word2vec <cit.>, employs a two-layer neural network to derive word embeddings from text corpora. The word2vec method operates under the distributional hypothesis, which assumes that words found in similar contexts tend to share similar meanings <cit.>. The method actualizes this hypothesis through two models: Continuous Bag-of-Words (CBOW) <cit.> and Skip-Gram <cit.>. The CBOW model predicts a target word w based on its context C(w) of surrounding words, while the Skip-Gram model reverses this approach, aiming to predict the context C(w) of a target word w. This approach is also applicable to the analysis of spatio-temporal trajectories, where locations with similar contexts are believed to perform similar functions <cit.>. Drawing inspiration from word2vec's success in semantic information capture and the parallel between sentences and trajectories, several trajectory embedding methods incorporate word2vec. It should be noted that these methods mostly focus on learning embeddings for individual locations in trajectories. We can subsequently aggregate these embeddings to represent trajectories as a whole. A straightforward aggregation method is mean pooling, where each feature dimension of the trajectory embedding vector is the mean of the same feature dimension of all point embeddings within the trajectory. §.§.§ FVTI FVTI <cit.> is designed to detect anomalies in trajectory data.
It converts trajectories into embedding vectors, which are then used to calculate trajectory similarity. We focus on the trajectory embedding aspect of FVTI, which leverages the word2vec method. Given a trajectory 𝒯=⟨ (l_1,t_1), …, (l_N,t_N) ⟩, FVTI treats each trajectory point (l_i,t_i) similarly to how a word w_i is treated in a sentence, aiming to pre-train an embedding vector z_i for each point using word2vec. The embedding process is divided into two phases: a trajectory pre-processing phase and a embedding model construction phase. During the trajectory pre-processing phase, FVTI identifies three key features for each trajectory point: range, range rate, and speed. It then quantizes each continuous feature value into a discrete token by assigning it to one of a set of predefined intervals. Consequently, each trajectory point is conceptualized as a word comprised of these three discrete tokens, transforming an entire trajectory into a sentence. During the embedding model construction phase, FVTI employs the CBOW model of word2vec, employing a one-word context window. Each unique word constructed as above is assigned an embedding vector. §.§.§ GCM Similar to FVTI, GCM <cit.> is designed for detecting anomalies in trajectories. GCM maps trajectories to embedding vectors, which then serve as the basis for a binary classification to determine the presence of an anomaly. We consider the trajectory embedding component of GCM, which utilizes the word2vec model as its backbone. Like FVTI, GCM treats each trajectory point (l_i,t_i) in a trajectory 𝒯 as a word w_i in a sentence. It begins by partitioning the geographical region covered by all trajectories using a uniform grid, with each grid cell being linked to an embedding vector. Each trajectory point (l_i,t_i) is then converted into a discrete token based on its cell, taking the cell's embedding vector as its own. Following this, GCM employs the Skip-Gram architecture of word2vec to train embedding vectors of trajectory points. For a given trajectory 𝒯, the model is trained to optimize the probability of predicting the middle points (l_2,t_2), (l_3,t_3), …, (l_N-1,t_N-1) given the initial point (l_1,t_1) and the final point (l_N,t_N). §.§.§ POI2Vec POI2Vec <cit.> is designed to learn location representations and is also inspired by word2vec. It generates an embedding vector for each location, which can subsequently be used for analyzing locations and trajectories. POI2Vec leverages the CBOW model of word2vec and integrates spatial correlations between locations. The geographical region covered by all trajectories is hierarchically divided into spatial cells, with each cell corresponding to a node in the binary tree utilized for the hierarchical softmax calculation in CBOW. Locations are assigned to the tree based on the cell they belong to. POI2Vec then trains the embedding vectors for these locations by applying word2vec to the given trajectories, similar to the methodology used in FVTI. §.§.§ TALE TALE <cit.> is another location representation learning method based on word2vec. TALE learns embedding vectors for locations to support tasks such as trajectory prediction and location classification. TALE utilizes the CBOW model of word2vec while incorporating temporal correlations of locations based on their visit times in trajectories. A day is partitioned into equal time spans, each corresponding to a time node in a multi-branch tree. 
Each node also serves as the root of a Huffman sub-tree, built based on the frequencies of the locations visited during that time span. The tree structure is then used in the hierarchical softmax calculation in CBOW. Finally, TALE trains the embedding vectors of locations by applying word2vec to the trajectories. §.§ Masked Language Model-based Methods Masked Language Models (MLMs) have emerged as a sophisticated approach in the field of self-supervised learning. Their use in BERT <cit.> has advanced the state-of-the-art in NLP, providing a more nuanced understanding of language semantics and syntax. The fundamental concept behind MLMs involves randomly selecting a subset of tokens (words or subwords) within a sentence and replacing them with a special token (often represented as "[MASK]"). The objective of the model is then to predict the masked tokens, using only the surrounding unmasked tokens as context. This task prompts the model to acquire a deep representation of language that captures both the immediate context of a word and its broader linguistic function. BERT also introduces the Next Sentence Prediction (NSP) pretext task, often used in conjunction with MLMs. NSP involves providing the model with pairs of sentences and training it to determine whether the second sentence logically follows the first. During this task, the model receives one pair where the second sentence is a true continuation and another pair with a randomly selected sentence from the corpus. The model's goal is to predict whether the second sentence is a valid continuation of the first. This task helps the model develop an understanding of contextual relationships and coherence across sentences, enhancing its ability to comprehend and generate text in a coherent and contextually appropriate manner. Beyond NLP, MLMs and NSP have found application in domains such as spatio-temporal location and trajectory embedding. §.§.§ CTLE CTLE <cit.> is a multi-task pre-training framework that utilizes MLM to learn a contextual embedding for each location in a trajectory. These embeddings are then leveraged for trajectory prediction. In line with the MLM approach, CTLE undergoes pre-training by randomly selecting trajectory points (l_i,t_i) from a trajectory 𝒯 and replacing their location l_i and time t_i with a special mask token. The trajectory, with masked trajectory points, is fed to a Transformer-based trajectory encoder. The encoder generates a output sequence with embedding vectors corresponding to the masked trajectory points, which are then used to predict the original location and time features. Following pre-training, CTLE can be employed to obtain embeddings for the trajectory points in a trajectory. The embedding of an entire trajectory is obtained by applying mean pooling to the sequence of trajectory point embeddings. §.§.§ Toast Toast <cit.> is a road network representation method built on BERT <cit.> that aims to learn embeddings of both road segments and trajectories. Toast utilizes the MLM and NSP tasks from BERT for pre-training. It initially employs a set of random walks on a road network 𝒢 to generate sequences of road segments. These sequences are then used with the Skip-Gram variant of word2vec to obtain preliminary embeddings of the road segments. To apply BERT to trajectories, Toast employs a Transformer-based trajectory encoder that takes the preliminary road segment embeddings as input. The sequence of road segments corresponding to a trajectory 𝒯 is obtained by map-matching 𝒯 onto 𝒢. 
Toast then executes the MLM task by masking and reconstructing a portion of the road segments in a map-matched trajectory and performs the NSP task by determining whether a random walk sequence on 𝒢 corresponds to one of the trajectories in the dataset. Finally, the embedding of a trajectory is calculated by mean pooling the output from the trajectory encoder. §.§ Auto-encoding-based Methods The Auto-Encoding (AE) framework <cit.> is a fundamental concept in self-supervised learning. Its primary aim is to efficiently encode unlabeled data. This framework consists of two main components: an encoder and a decoder. Its training objective is to replicate its input in its output as closely as possible. The encoder compresses input data into compact low-dimensional vectors. Next, the decoder attempts to reconstruct the original data from the compressed low-dimensional vectors, thus recovering the original information from the reduced vector embeddings. This process encourages the encoder to identify and preserve the most critical features in the input vectors for its reconstruction. The AE framework enjoys broad application across numerous domains and facilitates dimensionality reduction <cit.>, denoising <cit.>, anomaly detection <cit.>, and the creation of generative models <cit.>. It is also useful for the pre-training of trajectory embeddings, mapping variable-length sequential data to fixed-length vectors. §.§.§ DTC DTC <cit.> is a deep learning-based trajectory clustering method. It learns embeddings of trajectories, which are then used for k-means clustering. To learn the embeddings, DTC utilizes the AE framework in two phases: a preprocessing phase and a pre-training phase. In the preprocessing phase, DTC adopts a procedure similar to that used by GCM. This involves partitioning the geographical region using a regular grid and mapping each trajectory point to a discrete token assigned to the grid cell it belongs to. During the pre-training phase, DTC employs the AE framework with an encoder and a decoder that are both recurrent neural networks (RNNs) <cit.>. Trajectories that are represented by sequences of discrete tokens constructed in the preprocessing phase are treated as the input to the encoder and are the reconstruction targets of the decoder. The objective of this phase is to optimize the accuracy of the reconstructed trajectories. The outputs of the encoder are the trajectory vector embeddings. §.§.§ trajectory2vec Like DTC, trajectory2vec <cit.> employs the AE framework to derive trajectory embeddings to facilitate clustering. Here, the encoder and decoder are LSTM networks <cit.>. The trajectories are prepared as input to the encoder by extracting their moving behavior sequences. This is done using a sliding window technique, where the window has a set width and moves across a trajectory with an offset that is half the window's width. In each window, features such as the time interval, changes in position, speed, and the rate of turn are extracted. These features represent the moving behavior sequence of the trajectory and are then fed into the AE framework to learn the embedding vector of the trajectory. §.§.§ TremBR TremBR <cit.> leverages the AE framework for the pre-training of trajectory embeddings that are applicable to a variety of downstream tasks. This method encompasses three stages: map-matching, road segment embedding, and trajectory embedding. Initially, it maps each trajectory 𝒯 onto the road network 𝒢 using map-matching.
Each point in a map-matched trajectory is associated with a specific road segment and timestamp. In the next stage, TremBR pre-trains embeddings for the road segments using the CBOW architecture of word2vec, applying it to sequences of road segments obtained from the map-matched trajectories. The embedding vector of a road segment and its timestamp serve as the representation of a trajectory point. During the trajectory embedding stage, these point representations are arranged into sequences and fed into the encoder. The model performs pre-training with the objective of reconstructing the features of the map-matched trajectories accurately. The embeddings generated by the encoder are the final trajectory embeddings, ready for use in downstream tasks. §.§.§ CAETSC CAETSC <cit.> is a deep learning-based method for computing the similarity between trajectories. It maps trajectories to embedding vectors, which are then utilized for similarity computations. The process of CAETSC includes two steps: converting trajectories into image representations and then transforming these images into low-dimensional representations. The first step partitions the geographical region by a uniform grid. Each trajectory 𝒯 is then represented as an image, with each pixel corresponding to a grid cell. A pixel is marked to indicate whether the trajectory passed through its grid cell. In the second step, CAETSC employs the AE framework and implements an encoder and a decoder based on multi-layer Convolutional Neural Networks (CNNs) <cit.>. The image of the trajectory generated in the first step serves as the input for the encoder and is also the target output for the decoder. The embedding vectors produced by the encoder are the trajectory embeddings. §.§ Variational Auto-encoding-based Methods The Variational Auto-Encoder (VAE) framework <cit.>, building on the vanilla AE framework introduced in Section <ref>, incorporates elements from variational Bayesian methods. Unlike the AE framework that encodes input data into fixed embeddings directly, the VAE framework treats the data as a distribution in the embedding space. This means that a VAE encoder model a compressed embedding as a multivariate Gaussian distribution and that a VAE decoder reconstructs data by sampling from this distribution. This approach not only makes it possible to generate new data but also improves the framework's ability to learn meaningful embeddings. The versatility of the VAE framework has led to its application in diverse domains, ranging from generating synthetic images and text <cit.> to modeling complex distributions <cit.>. Several methods employ the VAE framework for learning robust trajectory embeddings. §.§.§ GM-VSAE GM-VSAE <cit.> aims to detect anomalous trajectories by leveraging the VAE framework to model trajectory data as Gaussian distributions in the embedding space. It exploits the ability of VAEs to generate data, enabling the online detection of anomalies in trajectories by comparing the original and generated trajectories. We consider the trajectory embedding process utilized by GM-VSAE. Employing the VAE framework, GM-VSAE utilizes an encoder and a decoder based on LSTM <cit.> networks. Given a trajectory 𝒯, the encoder processes the sequence of locations l_i in 𝒯, mapping this input into an embedding space characterized by a multivariate Gaussian distribution. The decoder then samples an embedding from this distribution and attempts to reconstruct the trajectory's locations. 
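The sampling step just described is usually realized with the reparameterization trick. The following is a minimal, generic sketch of a VAE-style trajectory encoder that maps a sequence of point features to a Gaussian distribution in the embedding space and draws an embedding from it. This is a sketch assuming PyTorch; it illustrates the general mechanism used by VAE-based methods rather than the exact GM-VSAE implementation, and all names and dimensionalities are illustrative.

import torch
import torch.nn as nn

class GaussianTrajectoryEncoder(nn.Module):
    """Encodes a sequence of point embeddings into a Gaussian over R^d."""

    def __init__(self, in_dim: int, hidden_dim: int, d: int):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, d)      # mean of q(z | T)
        self.to_logvar = nn.Linear(hidden_dim, d)  # log-variance of q(z | T)

    def forward(self, point_feats: torch.Tensor):
        # point_feats: (batch, N, in_dim), e.g., embedded grid tokens of the l_i
        _, (h_n, _) = self.rnn(point_feats)
        h = h_n[-1]                                # (batch, hidden_dim)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

enc = GaussianTrajectoryEncoder(in_dim=16, hidden_dim=64, d=32)
z, mu, logvar = enc(torch.randn(4, 20, 16))  # 4 trajectories of 20 points each
# The KL term of the ELBO regularizes q(z | T) towards the standard normal prior.
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()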
The model is jointly optimized by the reconstruction loss and regularization loss inherent to the VAE framework. §.§.§ TrajODE TrajODE <cit.> harnesses the VAE framework to learn trajectory embeddings that are versatile across multiple downstream applications. To employ the VAE framework, TrajODE innovates by developing an encoder and a decoder that utilize Neural Ordinary Differential Equations (NeuralODEs) <cit.>. It introduces a novel spatio-temporal ODE approach to build its encoder and decoder, where the hidden states of an LSTM network are updated via an ODE solver across successive trajectory points and further updated by a spatio-temporal gating mechanism. The encoder maps each trajectory 𝒯 into an embedding space of multivariate Gaussian distributions. This embedding is then refined using a Continuous Normalizing Flow (CNF) <cit.>. Subsequently, the decoder draws an embedding vector from this refined distribution to reconstruct the original trajectory. §.§ Denoising Auto-encoding-based Methods The denoising auto-encoder (DAE) <cit.> framework advances the core concepts established in the AE framework introduced in Section <ref>. Unlike AE, whose aim is merely to recreate the input from compressed data representations, the DAE framework goes a step further. It aims to do so in a manner that is robust to noise. This is typically achieved by intentionally introducing noise into the data fed into the encoder, while the decoder is tasked with restoring the original, uncorrupted data. This equips DAEs with improved feature extraction and representation learning capabilities that are especially valuable in tasks that involve denoising. The utility of the DAE framework extends to the pre-training of trajectory embeddings, including in environments where trajectories may be noisy or sparse. §.§.§ t2vec The t2vec <cit.> deep learning-based method targets trajectory similarity computation. It adopts the DAE framework to learn robust trajectory embeddings, which in turn enable accurate similarity computation, even when dealing with noisy and sparse trajectories. This method's trajectory embedding process is divided into two primary stages: handling varying sampling rates and noise, and learning trajectory embeddings. Initially, t2vec randomly drops points from a trajectory 𝒯 and introduces spatial noise sampled from a Gaussian distribution into each trajectory point (l_i,t_i) in 𝒯. Subsequently, it maps each trajectory point to a discrete token by partitioning the geographical region according to a uniform grid, similar to the technique used by GCM (see Section <ref>). In the following stage, t2vec implements an encoder and a decoder based on RNNs <cit.> to obtain embedding vector representations of trajectories. The encoder focuses on condensing trajectories, while the decoder reconstructs the original trajectories. A novel spatial proximity-aware loss function is proposed to pre-train the model. §.§.§ Robust DAA Robust DAA <cit.> is a trajectory clustering method that employs the DAE framework to learn robust trajectory embeddings suitable for clustering tasks. In a preprocessing step analogous to CAETSC (see Section <ref>), each trajectory 𝒯 is converted into an image representation, where pixels denote movement pattern features, such as straight lines and curves. Following this, Robust DAA employs the DAE framework, utilizing an encoder and a decoder based on the attention mechanism <cit.>.
This process decomposes the image representation of a trajectory into two components: 𝒯_D, which contains the parts of the trajectory that the auto-encoder can effectively model, and 𝒯_N, which contains the noisy and irregular segments of the trajectory. The objective is to precisely reconstruct 𝒯_D while minimizing the impact of 𝒯_N. §.§.§ TrajectorySim TrajectorySim <cit.> targets trajectory similarity computation while using the DAE framework. The implementation of TrajectorySim is akin to that of t2vec, described in Section <ref>. To improve the model's robustness to varying sampling rates and spatio-temporal noise, TrajectorySim randomly drops points from a given trajectory 𝒯 and introduces spatio-temporal Gaussian noise to each trajectory point. For discretizing each trajectory point, TrajectorySim partitions the geographical region according to a uniform grid and also partitions the temporal span into uniform intervals. It then uses an RNN-based encoder-decoder pair, similar to t2vec, to compress trajectories into embedding vectors. For pre-training, TrajectorySim employs a spatio-temporal proximity-aware loss function. §.§ Contrastive learning-based Methods Contrastive learning <cit.> is a machine learning approach that has become prominent in self-supervised learning. It focuses on differentiating between similar (positive) and dissimilar (negative) data point pairs in order to learn embeddings. Essentially, the goal is for the model to learn to recognize and distinguish these pairs, thereby effectively learning meaningful embeddings. This method has proven to be highly effective across several domains, including in CV and NLP <cit.>. Additionally, its application in learning trajectory embeddings has been explored with promising results. §.§.§ PreCLN PreCLN <cit.> incorporates contrastive learning to enhance the pre-training of trajectory embeddings for trajectory prediction tasks. The embedding learning component of PreCLN has two main parts: a trajectory augmentation module and a dual-view contrastive learning module. The former generates two perspectives for each trajectory 𝒯, called the grid view and the map-matched view. The grid view translates the trajectory into a sequence of discrete tokens, employing a procedure similar to that of GCM. The map-matched view presents the map-matched counterpart of 𝒯, by applying a map-matching algorithm. Additionally, PreCLN utilizes techniques like multi-hop sampling and segmentation on both views to enrich the dataset. The dual-view contrastive learning module leverages a Transformer-based encoder <cit.> to process these views into embeddings. The method is designed to train the model to distinguish between matching views of the same trajectory (positive pairs) and contrasting views from different trajectories (negative pairs). §.§.§ TrajCL TrajCL <cit.> utilizes contrastive learning to pre-train trajectory embeddings for use in trajectory similarity computation. To generate two views for a trajectory, TrajCL employs trajectory augmentation, trajectory feature enhancement, and trajectory encoding. During trajectory augmentation, a trajectory is modified into two variants using strategies including point shifting, point masking, trajectory truncation, and trajectory simplification. In trajectory feature enhancement, the location features in the augmented trajectories are mapped into discrete grid cells, similar to the approach discussed in Section <ref>. 
Additionally, the spatial angles between consecutive pairs of trajectory points are calculated. The trajectory encoder, implemented using self-attention, is responsible for obtaining the embeddings of the two trajectory variants. The encoder is then pre-trained using contrastive learning to maximize the similarity between the embeddings of the two variants. §.§.§ START START <cit.> integrates contrastive learning with the MLM task to learn trajectory embeddings that support a variety of downstream tasks. In the contrastive learning phase, START first map-matches each trajectory onto the road network. It then employs several trajectory augmentation techniques, including trimming, random masking, and feature corruption. A Transformer-based encoder <cit.> is used to map each augmented trajectory into an embedding vector. The training process focuses on differentiating between augmentations from the same trajectory and those from different trajectories. Furthermore, START integrates the MLM task with the aim of predicting the masked trajectory points. The model is jointly trained using the contrastive learning loss and the MLM loss. §.§.§ LightPath LightPath <cit.> is a path representation learning method. It combines contrastive learning and denoising auto-encoding for pre-training. Given a trajectory, LightPath map-matches it onto the road network to produce a sequence of road segments. To enhance computational efficiency, some of the segments in the map-matched trajectory are randomly removed with a certain drop ratio before the trajectory is fed into LightPath's Transformer-based trajectory encoder. For pre-training, LightPath employs contrastive learning by generating two views of a trajectory with different drop ratios. These two views are then fed into the main and auxiliary encoders to compute embeddings for contrastive learning. Additionally, LightPath utilizes denoising auto-encoding by reconstructing the original map-matched trajectory from the trajectory with removed segments using a Transformer-based decoder. §.§.§ MMTEC MMTEC <cit.> incorporates a novel pre-training loss inspired by information entropy theory into the contrastive learning framework to learn general trajectory embeddings that can enhance the performance of a wide range of tasks. The process begins with generating two distinct views for a given trajectory 𝒯: a travel semantics view and a continuous spatio-temporal view. The travel semantics view is derived using an attention-based encoder applied to the map-matched counterpart of 𝒯. The continuous spatio-temporal view is obtained using a continuous trajectory encoder, which is based on Neural Controlled Differential Equations (CDE) <cit.>. MMTEC's training objective, the Maximum Multi-view Trajectory Entropy Coding, aims to maximize the information entropy of trajectory embeddings while maintaining consistency between the two views. § A UNIFIED AND MODULAR PIPELINE To standardize the implementation and evaluation of methods for the pre-training of trajectory embeddings, we propose the UniTE pipeline. This pipeline modularizes pre-training methods into five key types of components: dataset, preprocessor, model, pre-training process, and downstream adapter, as shown in Figure <ref>. We provide a detailed presentation of these components, which can be combined to implement the methods presented in Section <ref>. §.§ Dataset The dataset component is the core component of UniTE, offering real-world spatio-temporal trajectories for analysis, embedding pre-training, and evaluation.
Each dataset 𝕋 of trajectories is accompanied by contextual information such as the road network 𝒢 that covers the region covered by the trajectories in 𝕋. Table <ref> lists trajectory datasets used frequently in studies of pre-training methods along with information and statistics. The Chengdu and Xian datasets, released by Didi[<https://gaia.didichuxing.com/>], include GPS trajectories of taxis operating in Chengdu and Xian, China. The Porto dataset, made available on Kaggle[<https://www.kaggle.com/competitions/pkdd-15-predict-taxi-service-trajectory-i/data>] for a taxi trajectory prediction contest, contains GPS trajectories of taxis in Porto, Portugal. T-Drive <cit.>, published by Microsoft, comprises GPS trajectories of taxis in Beijing, China. LaDe <cit.>, released by Cainiao, includes trajectories from last-mile deliveries in five cities. Foursquare-TKY and Foursquare-NYC, released by Foursquare[<https://sites.google.com/site/yangdingqi/home/foursquare-dataset>], are check-in trajectory datasets recording visits to locations in Tokyo, Japan, and New York City, USA, respectively. Gowalla, another check-in trajectory dataset released by Gowalla[<https://www.kaggle.com/datasets/bqlearner/gowalla-checkins>], covers global check-in records. Geolife, a trajectory dataset collected and released by Microsoft[<https://www.microsoft.com/en-us/research/publication/geolife-gps-trajectory-dataset-user-guide/>], consists of data recorded by mobile devices of 182 users during their daily activities. CFD, CSC, and YRE are vessel trajectory datasets used in the CAETSC study <cit.> that record the movement of vessels at sea. Some datasets include unique tokens that can be utilized by the tokenization preprocessor, such as road segments in taxi datasets or points of interest (POIs) in check-in datasets. Road segment data for taxi datasets can be obtained from publicly available services like OpenStreetMap[<https://www.openstreetmap.org/>], and POI information for check-in datasets can be retrieved from location-based services like AMap[<https://lbs.amap.com/api/javascript-api-v2>]. §.§ Preprocessor The preprocessor component is tasked with converting raw trajectory data into a structured format that is ready for encoding and decoding. This involves a variety of operations, ranging from straightforward normalization to more intricate map-matching techniques. The framework supports seven types of preprocessing operations. §.§.§ Normalization Preprocessor This preprocessor scales the continuous raw features within trajectories to a uniform range (such as [0,1] or [-1,1]), thus solving the problem that the value ranges of different features are too different. If the values of some features are too large and exceed by far the values of other features, the results of model training will be dominated by such features, and useful information contained in features with small values will be missed. The two primary methods employed are min-max normalization and z-score normalization. For a raw feature x, min-max normalization first subtracts its minimum value and then divides it by the range of x: x_norm = (x - x_min) / (x_max - x_min), where x_min and x_max represent the minimum and maximum values of x respectively, and the normalized result x_norm is in range [0, 1]. The z-score normalization makes the mean and standard deviation of the results 0 and 1 respectively, which facilitates rapid training. 
The formula is as follows: x_norm = (x - x̅) / STD(x), where x̅ is the average value of x and STD(x) is its standard deviation. §.§.§ Tokenization Preprocessor This preprocessor converts each point in a trajectory into a discrete token. Considering a 1-dimensional feature x, we define a series of buckets as: ⟨ (-∞, b_1), [b_1, b_2), [b_2,b_3), …, [b_M, +∞) ⟩ The token for x is determined by the bucket it falls into. For multi-dimensional features, each dimension is tokenized independently, upon which tokens are merged to form compound tokens. For instance, a point l_i has its longitude and latitude tokenized separately using Equation <ref>, upon which the tokens are combined into a dual-token representation. §.§.§ Pixelation Preprocessor This preprocessor converts a trajectory into an image, and can be viewed an extended case of the tokenization preprocessor applied to 2-dimensional points. It first partitions the spatial regions using a uniform grid with W× H cells. These cells are analogous to pixels in an image; thus, each trajectory is depicted as a 3D image T with dimensionality W× H× C, where C represents the number of channels. The value of a pixel T_i,j is determined by the characteristics of the trajectory points located in the cell corresponding to the pixel. This includes information such as the presence of the trajectory within cell (indicated by a mask), the timestamp, the speed, and the rate of turn. §.§.§ Sliding-Window Preprocessor This preprocessor captures segments of a trajectory using a window of predefined length and step size, transforming the raw data into fixed-length samples. For example, a window of duration δ includes trajectory points as follows: ⟨ (l_i,t_i), (l_i+1, t_i+1), …, (l_i+j,t_i+j) ⟩ s.t. t_i+j-t_i ≤δ∧ t_i+j+1-t_i > δ Points in these samples are then aggregated to compute higher-order features such as total distance traveled, time traveled, speed, and turning rate. §.§.§ Augmentation Preprocessor Trajectory representation learning often requires substantial amounts of trajectory data to work well. However, sufficient data is not always readily available. Data augmentation addresses this shortage by artificially increasing the cardinality of a dataset. This process involves generating new data points from existing ones, either by perturbing the data or using deep learning models to create new data in the latent space of the original data. This preprocessor generates multiple versions of a trajectory using two main augmentation techniques. Simulating varying conditions involves adding noise to trajectory features to mimic different traffic conditions. Specifically, a feature x is augmented by introducing Gaussian noise with standard deviation σ: x' = x + δ_x, δ_x ∼𝒩(0, σ) Dropout entails removing features or points from a trajectory to create modified versions. This can be achieved through several methods: * Eliminating a segment of a trajectory: For a trajectory 𝒯=⟨ (l_1,t_1), (l_2,t_2), …, (l_N,t_N) ⟩, we randomly select a starting point (l_s,t_s) and a segment length N' and then remove the corresponding subsequence: ⟨ (l_s,t_s), (l_s+1,t_s+1), …, (l_s+N'-1,t_s+N'-1) ⟩ s.t. 1 ≤ s ≤ N ∧ N' < N ∧ 1 ≤ N' ≤ N-s+1 * Substituting trajectory features with a mask token or zeros at random: A trajectory feature x can be masked or zeroed with probability p ∈ [0,1). A masking function mask(·) can generate the replaced trajectory features x_mask: x_mask = mask(x) = { x if k > p [m] or 0 otherwise., where k ∼ U(0,1) is a random number and [m] is the mask token. 
* Randomly omitting points: Each point (l_i, t_i) in a trajectory can be randomly deleted with probability p ∈ [0,1). * Resampling a trajectory at a longer interval: Given a trajectory 𝒯 sampled with a time interval t, a new trajectory 𝒯' can be obtained by sampling 𝒯 with time interval t', where t'>t, thus yielding a shorter trajectory. During resampling, trajectory points are typically aggregated within each target interval. §.§.§ Map-Matching Preprocessor GPS location data in a trajectory often does not align accurately with the road due to a range of technical issues. Therefore, it is beneficial to apply map-matching <cit.> to GPS data to associate it with the road network. Map-matching involves comparing a trajectory with the underlying road network to find the most likely road-network path of the trajectory, thereby transforming the GPS data into path data. This preprocessor uses map-matching algorithms to align a trajectory with a road network 𝒢. Each location l_i in the trajectory is mapped to an index s_i, indicating the road segment that is closest to l_i. Thus, the GPS location l_i is translated into the road segment s_i, allowing the current position of the trajectory to be described using roads and intersections. §.§.§ Spline Preprocessor The spline preprocessor is designed to reconstruct the continuous dynamics of trajectories using spline functions <cit.>. A spline function consists of segmented polynomials defined over a subspace, adhering to certain continuity conditions. Given n+1 timesteps t_0, t_1, …, t_n, where t_0 < t_1 < … < t_n, these steps are referred to as knots. For a specified integer k ≥ 0, a k-order spline function with knots t_0, t_1, …, t_n is a function S that meets two criteria: (1) S is a polynomial of order n ≤ k in each interval [t_i-1, t_i], and (2) S has a continuous derivative of order k-1 over [t_1, t_n]. Specifically, the preprocessor converts a given trajectory 𝒯 into a cubic Hermite spline T, ensuring that T_t_i = l_i for each trajectory point (l_i, t_i). Sequence of Input Trajectory Features can be obtained by applying one or multiple preprocessors on one trajectory 𝒯: X_𝒯=⟨ x_1, x_2, …, x_N ⟩, where x_i is one step of processed features. §.§ Model The model components constitute the majority of the learnable elements in a method for the pre-training of trajectory embeddings. The model components in UniTE include the feature embedder, encoder, decoder, and embedding postprocessor. Further, different instances of the components are provided; for example, different encoder and decoder component instances are provided for different neural architectures; see Figure <ref>. Among these, instances of the encoder component are responsible for mapping preprocessed trajectories to an embedding space. Optionally, a corresponding decoder component instance reconstructs trajectories from the embeddings. §.§.§ Feature Embedder Instances of the feature embedder component focus on mapping a preprocessed feature of a trajectory point into the embedding space. This allows for the establishment of information in trajectory points, facilitating the modeling of correlations between trajectory points in subsequent encoders and decoders. Below we introduce the different instances of the feature embedder component. FC embedder employs a fully-connected (FC) network to convert a feature x into a d-dimensional embedding e. 
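As an illustration of how the normalization, tokenization, and augmentation preprocessors described above operate on raw trajectories, consider the following sketch. It assumes Python/NumPy; the bounding box, grid resolution, drop probability, and synthetic data are illustrative choices and are not taken from the UniTE code.

import numpy as np

def minmax_normalize(x: np.ndarray) -> np.ndarray:
    """Min-max normalization of a raw feature column to [0, 1]."""
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def grid_tokenize(locations: np.ndarray, bbox, grid=(64, 64)) -> np.ndarray:
    """Map (lng, lat) points to discrete tokens of a uniform W x H grid."""
    (lng_min, lng_max, lat_min, lat_max), (W, H) = bbox, grid
    col = np.clip(((locations[:, 0] - lng_min) / (lng_max - lng_min) * W).astype(int), 0, W - 1)
    row = np.clip(((locations[:, 1] - lat_min) / (lat_max - lat_min) * H).astype(int), 0, H - 1)
    return row * W + col          # one token id per trajectory point

def drop_points(trajectory: np.ndarray, p: float, rng) -> np.ndarray:
    """Augmentation: randomly omit each point with probability p."""
    keep = rng.random(len(trajectory)) > p
    keep[0] = keep[-1] = True     # keep the endpoints so the trip is preserved
    return trajectory[keep]

rng = np.random.default_rng(0)
traj = np.column_stack([
    rng.uniform(-8.70, -8.55, 50),   # synthetic longitudes
    rng.uniform(41.10, 41.20, 50),   # synthetic latitudes
    np.arange(50) * 15.0,            # timestamps at a 15 s sampling interval
])
tokens = grid_tokenize(traj[:, :2], bbox=(-8.70, -8.55, 41.10, 41.20))
t_norm = minmax_normalize(traj[:, 2])
augmented = drop_points(traj, p=0.2, rng=rng)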
The process for a single-layer FC embedder is described by: FC(x) = g( W x+ b), where W and b represent the embedder's weight and bias and g is a non-linear activation function. To increase the embedding capacity, multiple layers of FC embedders can be stacked. Index-fetching embedder forms a d-dimensional embedding vector for each unique token identified by the tokenization preprocessor mentioned in Section <ref>. This effectively yields an embedding matrix, with each row corresponding to the embedding of a specific token. The embedding vector for a discrete token x is retrieved from the x-th row of the matrix. Word2vec embedder pre-trains embedding vectors for a set of discrete tokens using word2vec <cit.>. It requires sequences of discrete tokens as training data, where the sequences capture contextual correlations between the tokens. Such sequences can be derived directly from trajectories. The sequences of road segments obtained by map-matching trajectories is an example. Alternatively, they can be generated through algorithms like random walks in the road network. After word2vec training, this embedder functions similarly to the index-fetching embedder, with each discrete token being assigned a pre-trained embedding vector. Fourier embedder utilizes learnable Fourier features <cit.> to transform a continuous feature x into a d-dimensional embedding e. The transformation is given as follows: Fourier(x) = 1/√(d)[cos wx ∥sin wx], where w∈ℝ^d/2 serves as a learnable mapping vector, ∥ denotes vector concatenation. This embedding technique uses the periodic nature of trigonometric functions to preserve periodicity in features. Sequence of Embedding Vectors is generated by applying one or more instances of embedder component on the sequence X_𝒯 of input trajectory features calculated in Equation <ref>. This resulting sequence of embedding vectors is as follows: E_𝒯 = ⟨ e_1, e_2, …, e_N ⟩, where each e_i represents an embedding vector created by instances of embedder component. Subsequently, the sequence E_𝒯 can be processed by instances of encoder and decoder components. §.§.§ RNN-based Encoder and Decoder RNNs <cit.> are particularly adept at handling sequential data, making them well-suited for analyzing trajectories. In an RNN-based encoder, a sequence E_𝒯 of embedding vectors is processed, and its final hidden state is considered as the embedding z_𝒯 of the corresponding trajectory 𝒯. This process can be expressed as follows: z_𝒯 = RNN( E_𝒯) Similarly, in an RNN-based decoder, the trajectory embedding z_𝒯 is used to reconstruct the trajectory sequence, typically in an auto-regressive manner. This process is defined as follows: 𝒯 = RNN( z_𝒯), where z_𝒯 acts as the initial hidden state for the RNN decoder. Both the encoder and decoder networks can employ one of three RNN variants: vanilla RNN, LSTM, or GRU. Vanilla RNN comprises an input, a hidden, and an output layer that can be extended in the temporal dimension. The functioning of a vanilla RNN at time t is given as follows: h_t =g_1( W e_t+ U h_t-1+ b) y_t = g_2( V h_t), where e_t is the input embedding vector of the input layer, h_t is the output of the hidden layer, y_t is the output of the output layer, g_1 and g_2 are non-linear activation functions, and W, U, V, and b are weights and biases. An RNN variant, the LSTM <cit.> neural architecture addresses the issues of exploding or vanishing gradients encountered in vanilla RNNs. 
LSTM introduces three gates to regulate the flow of information: a forget gate f, an input gate i, and an output gate o. The formulations of these gates at step t are as follows: f_t =σ( W_f e_t+ U_f h_t-1+ b_f) i_t =σ( W_i e_t+ U_i h_t-1+ b_i) o_t =σ( W_o e_t+ U_o h_t-1+ b_o), where σ denotes the Sigmoid activation function. Additionally, a new cell state c_t is introduced to retain historical information up to the current time step and nonlinearly pass information to the hidden state h_t. The cell state c_t and hidden state h_t are computed using the following equations: c_t =f_t⊙ c_t-1+i_t⊙c̃_t h_t =o_t⊙tanh( c_t), where ⊙ is the element-wise product, tanh is the Tanh activation function, c_t-1 is the cell state at the previous time step, and c̃_t is the candidate memory cell computed as follows: c̃_t=tanh( W_c e_t+ U_c h_t-1+ b_c) The GRU <cit.> neural architecture is a simpler RNN variant than the LSTM. It incorporates an update gate z and a reset gate r. Unlike the LSTM, GRU combines the cell state and the output into a single state h without introducing additional memory cells. The update gate z_t determines the amount of information that the current state h_t is to retain from the previous state h_t-1 and how much new information it is to receive from the candidate state h̃_t. The reset gate r_t decides whether the calculation of h̃_t depends on h_t-1. The operation of the GRU at step t is expressed as follows: z_t =σ( W_z e_t+ U_z h_t-1+ b_z) r_t =σ( W_r e_t+ U_r h_t-1+ b_r) h̃_t =tanh( W_h e_t+ U_h (r_t⊙ h_t-1)+ b_h) h_t =(1-z_t)⊙ h_t-1 + z_t⊙h̃_t In an RNN-based encoder, the final hidden state h_N is considered the encoded embedding vector z_𝒯 of the input sequence E_𝒯, with N representing the sequence length. Conversely, in an RNN-based decoder, the sequence of outputs ⟨ y_1, y_2, …, y_N ⟩ is viewed as the reconstructed trajectory 𝒯. §.§.§ Transformer-based Encoder and Decoder The advanced self-attention mechanism in Transformers <cit.> enables them to understand complex spatio-temporal relationships in trajectories. A Transformer-based encoder processes a sequence E_𝒯 of embedding vectors and generates a memory sequence M_𝒯 of the same length. To derive the embedding z_𝒯, a pooling operation is applied to M_𝒯. This process is formulated as follows: M_𝒯 = Transformer(E_𝒯) z_𝒯 = Pool(M_𝒯) Next, a Transformer-based decoder aims to reconstruct the trajectory sequence from M_𝒯 by using M_𝒯 as the query in the Attention mechanism. Thus, we get: 𝒯 = Transformer(M_𝒯, 𝒯_src), where 𝒯_src is the source trajectory guiding the generation of 𝒯. Multi-head attention is the key component of the conventional transformer architecture, enabling the network to focus on every token within the input sequence. To implement multi-head attention, the input is first projected to three matrices, query Q, key K, and value V, through linear transformations. This process in Equations <ref> and <ref> is formulated as follows: [Q_i, K_i, V_i] = E_𝒯[W_i^Q, W_i^K, W_i^V] [Q_i, K_i, V_i] = [M_𝒯W_i^Q, E_𝒯_srcW_i^K, E_𝒯_srcW_i^V], where E_𝒯_src is the embedding sequence of 𝒯_src, W_i^Q, W_i^K ∈ℝ^d × d_QK, and W_i^V ∈ℝ^d × d_V are the transformation matrices. Queries Q_i and keys K_i have the same dimensionality d_QK, while values V_i have dimensionality d_V. In practice, we usually set d_QK = d_V = d. The output matrix of the attention mechanism of the i-th head is then obtained as follows: Attention(Q_i, K_i, V_i) = softmax(Q_i K_i^T/√(d))V_i, where softmax is row-wise softmax normalization. 
To incorporate multiple aspects of correlation, multi-head attention first concatenates the output of multiple attention heads into a long vector, and then multiplies the concatenated vector by a weight matrix W^O, which acts as a fully-connected layer, to obtain the final output. This process is formulated as follows: MultiHeadAtt(X) = Concat( Attention(Q_1, K_1, V_1), …, Attention(Q_h, K_h, V_h))W^O, where h is the number of heads. Following the multi-head attention module, the Transformer extracts deeper features using a Feed Forward network that usually includes two linear layers and a non-linear activation function, mapping data to high-dimensional spaces and then to low-dimensional spaces. This process is formulated as follows: FFN(x) = W_2(g(W_1 x + b_1)) + b_2, where W_1, W_2, b_1, and b_2 represent the weights and biases of the linear layers, and g is a non-linear activation function. Moreover, Transformers use residual connections in each module separately. That is, the output of each transformer layer is: z_i^multi = LayerNorm(z_i-1 + MultiHeadAtt(z_i-1)) z_i = LayerNorm(z_i^multi + FFN(z_i^multi)), where z_i-1 and z_i represent the input and output of the i-th transformer layer, respectively. In a Transformer-based encoder, the output of the last transformer layer is regarded as the memory sequence M_𝒯. Conversely, in a Transformer-based decoder, the output of the last transformer layer is fed into a fully-connected prediction module to produce the reconstructed trajectory 𝒯. §.§.§ CNN-based Encoder and Decoder Convolutional Neural Networks (CNNs) <cit.> are well-suited for capturing intricate spatial patterns, making them suitable for analyzing trajectories with complex spatial features. In the context of trajectory analysis, a CNN-based encoder processes the image representation T of a trajectory 𝒯, which has been preprocessed by the pixelation preprocessor as discussed in Section <ref>. By employing multi-layered CNNs in conjunction with fully-connected and pooling layers, the encoder produces an embedding z_𝒯 as follows: z_𝒯 = CNN(T) Next, the CNN-based decoder reconstructs the trajectory image T from its embedding z_𝒯 by essentially reversing the encoding process. This decoding operation mirrors the structure of the encoder: T = CNN(z_𝒯) To elaborate further, the CNN network in Equation <ref>, comprising two convolution layers, a pooling layer, and a fully-connected layer, can be defined as follows: CNN(T) = FC(Pool( W_2 * g( W_1 * T))), where * denotes the convolution operation, Pool is the pooling operation, FC represents the fully-connected layer, W_1 and W_2 are convolution kernels, and g is a non-linear activation function. §.§.§ ODE-based Encoder and Decoder The NeuralODE family <cit.> represents a novel approach to capturing the continuous dynamics of data, thus offering a new perspective on trajectory modeling. Building upon the foundations laid by the RNN-based encoder discussed in Section <ref>, the ODE-based encoder updates its hidden states not only at each step of an input embedding e_i, but also between these steps through the use of an ODE solver. The update process is described as follows: h_i-1' = ODESolve( h_i-1, (t_i-1,t_i)), where h_i-1' is the newly updated hidden state, which is then further processed by the RNN cell. Similar to the RNN-based encoder and decoder, the ODE-based encoder compresses the embedding sequence E_𝒯 into a trajectory embedding z_𝒯, and the ODE-based decoder reconstructs trajectory 𝒯 from z_𝒯.
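To make the encoder abstraction in this section concrete, the following is a minimal sketch of an RNN-based (GRU) encoder that compresses an embedding sequence E_𝒯 into z_𝒯, alongside a Transformer-based encoder that mean-pools its memory sequence. This is a sketch assuming PyTorch; class names, dimensionalities, and hyperparameters are illustrative rather than the actual UniTE implementation.

import torch
import torch.nn as nn

class GRUTrajectoryEncoder(nn.Module):
    """RNN-based encoder: the final hidden state serves as z_T."""

    def __init__(self, in_dim: int, d: int):
        super().__init__()
        self.rnn = nn.GRU(in_dim, d, batch_first=True)

    def forward(self, E: torch.Tensor) -> torch.Tensor:
        # E: (batch, N, in_dim) sequence of point embeddings e_1, ..., e_N
        _, h_n = self.rnn(E)
        return h_n[-1]                     # (batch, d) trajectory embeddings

class TransformerTrajectoryEncoder(nn.Module):
    """Transformer-based encoder: mean-pool the memory sequence M_T into z_T."""

    def __init__(self, d: int, heads: int = 4, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, E: torch.Tensor) -> torch.Tensor:
        M = self.encoder(E)                # (batch, N, d) memory sequence
        return M.mean(dim=1)               # pooled (batch, d) embeddings

E = torch.randn(8, 32, 64)                 # 8 trajectories, 32 points, 64-dim features
z_rnn = GRUTrajectoryEncoder(in_dim=64, d=64)(E)
z_trf = TransformerTrajectoryEncoder(d=64)(E)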
CDE-based Encoder is an expanded variant of the ODE-based encoder based on NeuralCDE <cit.>. It employs a spline T, derived from the spline preprocessor introduced in Section <ref>, and performs integration over this spline across time from t_1 to t_N. This integration, carried out with a specially parameterized CDE integral kernel, produces the trajectory embedding z_𝒯. §.§.§ Embedding Postprocessor The embeddings calculated by the instances of encoder component can be optionally transformed by utilizing the embedding postprocessor component. Below we present different instances of the component. Variational postprocessor maps the output of an encoder to a multi-dimensional Gaussian space. Given an embedding z, it uses two fully-connected networks to compute the mean and variance of a Gaussian distribution: μ = W_μ z + b_μ, σ = W_σ z + b_σ, where W_μ, W_σ, b_μ, and b_σ are weights and biases. The processed embedding is then obtained from the computed Gaussian distribution: z' ∼𝒩( z'|μ, σ) NF postprocessor, also called the normalizing flows postprocessor, transforms simple probability distributions into more complex ones through a series of invertible and differentiable mappings. This method leverages the concept of normalizing flows to enhance the expressiveness of probabilistic models. At its core, a normalizing flow (NF) <cit.> consists of a sequence of invertible functions g_1, g_2, …, g_K. Each of these functions, also called transformations, is designed to be bijective and differentiable, ensuring that the overall transformation remains invertible and that the change in probability density can be tracked through the Jacobian determinant. Given an initial simple distribution, typically a multivariate Gaussian distribution 𝒩( z' | μ, σ) from the above variational postprocessor, the NF Postprocessor applies these transformations sequentially to derive a more complex distribution. This process can be stated as follows: z” = g_K ∘ g_K-1∘⋯∘ g_1( z'), where z' is a sample drawn from the initial Gaussian distribution. The transformed variable z” now follows a more complex distribution that is potentially better suited for subsequent downstream tasks. §.§ Pre-training Process The pre-training process is a vital component of the UniTE pipeline, ensuring that the learnable parameters of models are trained in a self-supervised manner, yielding trajectory embeddings that are useful for downstream tasks. We consider two key components that support the pre-training process: the loss function and pre-trainer components. Loss function component instances are designed to optimize the learnable parameters by penalizing deviations from desired behaviors. Loss functions can be classified broadly into reconstruction, contrastive, and regularization loss functions. Pre-trainer component instances play a crucial role in integrating various elements to effectively pre-train trajectory embeddings. We classify these into generative, contrastive, and hybrid types. §.§.§ Reconstruction Loss Reconstruction loss functions quantify the difference between the original input trajectories and their reconstructed counterparts as produced by the decoder. This metric is crucial for ensuring that both the encoder and decoder are effectively capturing and retaining important information contained in trajectories. The particular function used for calculating reconstruction loss varies based on the characteristics of the features being reconstructed. 
MSE and MAE loss, i.e., Mean Squared Error and Mean Absolute Error loss, are employed frequently for monitoring the reconstruction of continuous features. Given a reconstructed feature x̂ and the ground truth x, these losses are defined as follows: MSE(x, x̂) = (x - x̂)^2, MAE(x, x̂) = |x - x̂| Cross-entropy loss is used for the reconstruction of discrete tokens. It involves comparing the predicted probability distribution of tokens p̂(x) against the actual token x and is defined as follows: CE(p̂(x),x) = -p̂(x)_x + log(∑_c^C exp(p̂(x)_c)), where C is the total count of unique tokens. Distance loss is tailored for the reconstruction of spatial data. It computes the loss based on the geometric distance between predicted coordinates l̂_̂î and the ground truth l_i, often using the shortest path on the Earth's surface or on the road network as the distance function. §.§.§ Contrastive Loss Contrastive loss functions are used to refine embeddings by differentiating between similar (positive) and dissimilar (negative) pairs of data points. This technique aims to group embeddings of similar trajectories closely together and separate dissimilar trajectories. InfoNCE loss <cit.> enhances a model by maximizing the mutual information between positive pairs in contrast with a selection of negative samples. Given a set of trajectories {𝒯_1,𝒯_2, …, 𝒯_B} and a target trajectory 𝒯_i, a positive pair of embeddings z_𝒯_i and z'_𝒯_i is calculated, usually representing two different augmentations of 𝒯. Meanwhile, the embeddings z'_𝒯_j, j≠ i for the other trajectories are considered as negative samples. The goal of the InfoNCE loss is to effectively distinguish positive and negative samples, formulated as follows: ℒ_InfoNCE = -logexp( z_𝒯_i z'_𝒯_i^⊤/τ)/∑_j=1^B exp( z_𝒯_i z'_𝒯_j^⊤/τ), where τ is the temperature parameter. MEC loss <cit.> utilizes the principle of maximum entropy from information theory to direct the learning of general trajectory embeddings. Given a set of trajectories {𝒯_1,𝒯_2, …, 𝒯_B}, two sets of their embeddings Z^(1)∈ℝ^B× d and Z^(2)∈ℝ^B× d are produced through different preprocessing, embedding, and encoding steps. The MEC loss is then defined as follows: ℒ_MEC = B+d/2log ( I_B + d/Bϵ^2 Z^(1) Z^(2)^⊤), where I_B is an identity matrix of dimensionality B and ϵ is the upper bound of the decoding error. §.§.§ Regularization Loss Regularization loss functions are employed to prevent overfitting by encouraging the model to learn more generalized embeddings. These losses are typically applied to the learned embeddings or the latent states of models, serving to constrain the complexity of the model and improve its generalization capabilities. L1 and L2 loss are widely used for regularization purposes, and are also known as Lasso and Ridge regularization, respectively. They impose penalties according to the magnitude of the parameters. Given a trajectory embedding z_𝒯, these losses are calculated as follows: ℒ_L1 = ∑_i | z_𝒯_i|, ℒ_L2 = √(∑_i z_𝒯_i^2), where z_𝒯_i represents the i-th dimension of z_𝒯. The L1 loss encourages sparsity by driving many of the parameters to zero, while the L2 loss prevents large weights by penalizing the square of the magnitude of the parameters. ELBO loss is an essential component in the Variational Autoencoder (VAE) framework for regularizing the learned distribution of embeddings. The Evidence Lower Bound (ELBO) is used to approximate the likelihood of the data under the model. 
The ELBO loss consists of two main terms: a reconstruction loss and the Kullback-Leibler (KL) divergence. Given a trajectory 𝒯 and its corresponding embedding vector z_𝒯, the ELBO loss is defined as follows: ℒ_ELBO = 𝔼_q( z_𝒯|𝒯)[ log p(𝒯| z_𝒯) ] - KL( q( z_𝒯|𝒯) ∥ p( z_𝒯) ), where q( z_𝒯| x) is the variational posterior, p(𝒯| z_𝒯) is the likelihood, and p( z_𝒯) is the prior distribution of the embedding. The first term, 𝔼_q( z_𝒯|𝒯)[ log p(𝒯| z_𝒯) ], represents the reconstruction loss, which ensures that the model can accurately reconstruct the trajectory from the embedding vector. The second term, KL( q( z_𝒯|𝒯) ∥ p( z_𝒯) ), is the KL divergence, which regularizes the latent space to match a prior distribution (commonly a standard normal distribution). In summary, the L1 and L2 loss functions are applied to the model parameters to enforce sparsity and prevent large weights, respectively, while the ELBO loss function in VAEs regularizes the learned embeddings by balancing reconstruction accuracy and the regularity of the latent space distribution. §.§.§ Generative Pre-trainer The generative pre-trainer aims to enhance embeddings by determining some segments of trajectory data based on other segments. This process entails either reconstructing a trajectory from a modified or condensed form, or predicting future segments based on past segments. Given a trajectory 𝒯, one or several of the preprocessors mentioned in Section <ref> are utilized to generate a modified, augmented, or feature-enhanced version of the trajectory. This version is then processed through an encoder and a decoder. The encoder condenses the trajectory into an embedding, while the decoder reconstructs segments or the entire trajectory. The pre-trainer supervises the learnable parameters, applying the reconstruction loss metrics detailed in Section <ref> to the reconstructed trajectory. This pre-training procedure is repeated for each trajectory 𝒯 in the trajectory dataset 𝕋. §.§.§ Contrastive Pre-trainer The contrastive pre-trainer aims to refine embeddings by differentiating between similar (positive) and dissimilar (negative) pairs of trajectories. This method is dependent on being able to generate meaningful positive and negative examples. For a given trajectory 𝒯, instances of preprocessor component as those covered in Section <ref> are employed to produce multiple trajectory augmentations. These augmented versions are then fed into the encoders to extract their embeddings. Following this, the pre-trainer applies the contrastive loss metrics specified in Section <ref> to these embeddings to guide the learning of parameters. This process is performed iteratively for every trajectory 𝒯 in 𝕋. §.§.§ Hybrid Pre-trainer Hybrid pre-trainers amalgamate the methodologies of generative and contrastive pre-trainers, aiming to benefit from both reconstruction and discrimination tasks. It adheres to the procedures established by the generative and contrastive pre-trainers, while combining these through a loss that is a weighted summation of the losses from each pre-trainer type. §.§ Downstream Adapter The downstream adapter functions as an intermediary in-between pre-trained trajectory embeddings and their application in downstream tasks. It customizes the universal embeddings resulting from the pre-training to suit particular tasks, potentially enhancing the effectiveness of embeddings through fine-tuning. 
Moreover, the adapter offers a standardized and detailed approach for assessing and benchmarking the performance of different pre-training methods across different downstream tasks. §.§.§ Destination Prediction Adapter This adapter is dedicated to the task of forecasting the destination of a trajectory. When calculating a trajectory 𝒯's embedding z_𝒯, the last L points of 𝒯 are omitted. A fully-connected network then uses this embedding to predict the destination point's road segment s_N. The cross-entropy loss is applied to the predicted segment to refine the learnable parameters. Evaluation metrics for this adapter include Acc@1, Acc@5, Recall, and F1. Acc@1 and Acc@5 measure the percentages of correct top-1 and top-5 predictions, respectively. The Recall and F1 metrics calculate the recall and F1 scores for each class label—in this case, each road segment—and average the scores across all classes. §.§.§ Arrival Time Estimation Adapter Designed for estimating the arrival time at a destination from a trajectory, this adapter also omits the final L points of a trajectory 𝒯 when calculating the embedding z_𝒯. A fully-connected network employs the embedding to forecast the destination point's arrival time t_N. The Mean Absolute Error (MAE) loss or the Mean Squared Error (MSE) loss is used on the prediction to fine-tune parameters. Evaluation metrics for this adapter include MAE, Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE). §.§.§ Trajectory Classification Adapter This adapter is designed to predict the class label of a trajectory, such as its driver ID. Given a trajectory 𝒯, its entire sequence is used to compute its embedding vector z_𝒯. A fully-connected network then uses z_𝒯 to predict the class label of 𝒯. The model's parameters are fine-tuned using the cross-entropy loss. Evaluation Metrics for this adapter are the same as for the destination prediction adapter, as both perform classification. §.§.§ Similar Trajectory Search Adapter This adapter targets the unsupervised task of identifying the trajectory in a set of trajectories that is most similar to a target trajectory. Given a target trajectory 𝒯_t and a set of candidate trajectories {𝒯_1, 𝒯_2, …, 𝒯_B}, the similarity between the target's embedding z_𝒯_t and each candidate's embedding z_𝒯_i is calculated using cosine similarity. The trajectory with the highest similarity is deemed the most similar trajectory. §.§.§ Training Strategy There are two sources of supervision in the pipeline: the pre-training process covered in Section <ref> and the fine-tuning process guided by the adapters just presented. We incorporate three training strategies into the pipeline: * w/o finetune. This strategy reduces the emphasis on fine-tuning embedding models. After their pre-training, the parameters in the embedding models are kept fixed. * w/o pretrain. This strategy bypasses the pre-training of embedding models. The parameters in the embedding models are initialized randomly and updated directly using the task-specific loss function in the adapters. * full. This strategy includes both pre-training and fine-tuning. The parameters in the embedding models are first learned through their pre-training processes and then further refined using the task-specific loss functions in the adapters. It is important to note that the parameters in the prediction networks are always updated using the task-specific loss function in the adapters, regardless of the strategy. 
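To illustrate how the three strategies interact with the pre-training and fine-tuning stages, the sketch below wraps them in a small amount of control logic; the pretrain/finetune interfaces and names are hypothetical and do not correspond to UniTE's actual API.

def run_pipeline(embedding_model, adapter, pretrainer, train_data, strategy="full"):
    # strategy is one of: "full", "w/o pretrain", "w/o finetune"
    if strategy in ("full", "w/o finetune"):
        pretrainer.pretrain(embedding_model, train_data)   # self-supervised stage
    if strategy == "w/o finetune":
        for p in embedding_model.parameters():             # keep embedding parameters fixed
            p.requires_grad = False
    # the adapter's prediction network is always trained with the task-specific loss
    adapter.finetune(embedding_model, train_data)
    return embedding_model, adapter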
§.§ Building Existing Methods with UniTE UniTE is designed to modularize the implementation of existing and new methods for the pre-training of trajectory embeddings. Table <ref> shows how existing methods introduced in Section <ref> are realized with the modules in UniTE. § EXPERIMENTS To validate and illustrate the use of UniTE, we implement existing methods with UniTE and perform experiments to evaluate their effectiveness using the datasets and downstream adapters included in UniTE. §.§ Settings We conduct experiments on the Chengdu and Porto datasets, described in Table <ref>. For consistency, both datasets are standardized to a sampling interval of 15 seconds. Trajectories with fewer than six points are excluded. We sort the trajectories by their start time and split the datasets into training, evaluation, and testing sets in an 8:1:1 ratio. Both pre-training and fine-tuning are performed on the training set. The fine-tuning process includes an early-stopping mechanism that allows up to 10 epochs of tolerance, based on evaluation metrics from the evaluation set. The final metrics are calculated using the testing set. We evaluate a selection of existing trajectory embedding methods introduced in Section <ref>, implementing them in the UniTE pipeline. In terms of downstream adaptors, discussed in Section <ref>, we use destination prediction, arrival time estimation, and trajectory classification adapters. These cover downstream tasks in three representative scenarios: incomplete sub-trajectory, incomplete trajectory features, and complete trajectory. All the three types of training strategies covered in Section <ref> are applied in the experiments. §.§ Overall Performance Tables <ref> to <ref> compare the overall performance of selected methods at destination prediction, arrival-time estimation, and trajectory classification. Comparison within the same training strategy provides insights into the effectiveness of the learnable components and pre-trainers used. The "w/o finetune" strategy, which fixes the trainable parameters after pre-training, highlights the effectiveness of the chosen pre-trainer. We present the following observations: * The contrastive pre-trainer, which focuses on contrasting different views of one or multiple trajectories, enhances the performance of trajectory embeddings on tasks that rely on the global aspect of a full trajectory, such as trajectory classification. * The generative pre-trainer, which targets the reconstruction of the spatio-temporal features of a trajectory, improves the performance of trajectory embeddings on tasks that rely on local spatio-temporal correlations, such as trajectory prediction. * The hybrid pre-trainer, combining the benefits of both generative and contrastive pre-training, enables trajectory embeddings to perform competitively across multiple types of tasks, as demonstrated by START and LightPath. Discrepancies between different types of pre-trainers are also discussed in other studies on the pre-training of embeddings <cit.>. The "w/o pretrain" strategy learns a method's parameters end-to-end for a specific task, with the performance of embeddings influenced primarily by the preprocessor and learnable components. We observe the following: * Methods using RNN- or Transformer-based encoders generally outperform those using CNN-based encoders at destination prediction, as this task emphasizes modeling sequential correlations and accurately capturing spatial information of trajectories. 
* Methods that emphasize time features exhibit superior performance at arrival time estimation, which benefits from temporal information. Comparison between different training strategies evaluates the effectiveness of the pre-training and fine-tuning processes. We observe the following: * The "full" strategy, which involves pre-training followed by fine-tuning, aligns with most methods for the pre-training of trajectory embeddings and yields optimal performance in most cases. Pre-training helps methods gain a universal understanding of trajectories, while the fine-tuning process further adjusts the methods to fit the specific task better. * PreCLN, START, and LightPath perform better using the "w/o finetune" strategy at trajectory classification compared to the other two strategies. This may be due to the task-specific labels being of low quality, such as the uneven distribution of class labels in trajectory classification. This observation highlights one of the benefits of pre-training, which is to enhance performance when task-specific labels are insufficient to support effective end-to-end training. §.§ Efficiency Table <ref> reports on the efficiency metrics for the comparison methods. The model size and embedding time are affected primarily by the complexity of the preprocessors and learnable components within a method. Additionally, the pre-training time depends on the specific pre-trainer used by a method. We observe that methods combining tokenization or map-matching preprocessors with index-fetching feature embedders tend to have larger model sizes, as each token or road segment is assigned an embedding vector. Methods employing Transformer-based encoders generally exhibit higher pre-training and embedding times compared to those using RNN-based encoders, due to the higher computational costs of Transformers. Some methods, such as t2vec and TrajectorySim, show relatively high pre-training times, primarily because their preprocessors take longer to run. Overall, understanding these efficiency metrics is crucial for selecting an appropriate method based on the available computational resources, the requirements of the task, and the preferred balance between accuracy and efficiency. § CONCLUSION We present UniTE, a comprehensive survey and a unified pipeline aimed at accelerating advances in methods for the pre-training of trajectory embeddings. The survey compiles an extensive list of existing methods, including those explicitly targeting universal trajectory embeddings and those that implicitly employ pre-training techniques tailored for specific tasks. The unified pipeline standardizes the implementation and evaluation of pre-training methods, facilitating the reproduction of existing methods and the development of new ones. Together, the survey and pipeline offer a thorough academic and technical resource, which we hope will accelerate research in this field. § ACKNOWLEDGMENTS This work was supported by the National Natural Science Foundation of China (No. 62272033). Yan Lin received the B.S. degree in computer science from Beijing Jiaotong University, Beijing, China, in 2019. He is currently working toward the Ph.D. degree in the School of Computer and Information Technology, Beijing Jiaotong University. His research interests include spatio-temporal data mining and representation learning. Zeyu Zhou received the B.S. 
degree in mathematics and applied mathematics from Beijing Jiaotong University, Beijing, China, in 2022. He is currently working toward the M.S. degree in the School of Computer and Information Technology, Beijing Jiaotong University. His research interests focus on deep learning and data mining, particularly their applications in spatio-temporal data mining. Yichen Liu received the B.S. degree in computer science from Beijing Jiaotong University, Beijing, China, in 2023. She is currently working toward the M.S. degree in the School of Computer and Information Technology, Beijing Jiaotong University. Her research interests focus on deep learning and data mining, especially their applications in spatio-temporal data mining. Haochen Lv received the B.S. degree in computer science from Beijing Jiaotong University, Beijing, China, in 2024. He is currently working toward the Ph.D. degree in the School of Computer and Information Technology, Beijing Jiaotong University. His research interests focus on spatio-temporal data mining and spatio-temporal graphs. Haomin Wen received the B.S. degree in computer science and technology from Beijing Jiaotong University, Beijing, China, in 2019, where he is currently pursuing the Ph.D. degree in computer science with the School of Computer and Information Technology. His current research interests include spatial-temporal data mining and intelligent transportation technology. Tianyi Li received the Ph.D. degree from Aalborg University, Denmark, in 2022. She is an assistant professor at the Department of Computer Science, Aalborg University. Her research concerns primarily data management and analytics, intelligent transportation, machine learning, and database technology. Yushuai Li received the Ph.D. degree in control theory and control engineering from Northeastern University, Shenyang, China, in 2019. He is currently an assistant professor at the Department of Computer Science, Aalborg University. His research interests include machine learning, digital twin, digital energy, and intelligent transportation systems. Christian S. Jensen received the Ph.D. degree from Aalborg University in 1991 after 2 1/2 years of study at the University of Maryland, and he received the Dr.Techn. degree from Aalborg University in 2000. He is a Professor at the Department of Computer Science, Aalborg University. His research concerns primarily temporal and spatio-temporal data management and analytics, including indexing and query processing, data mining, and machine learning. Shengnan Guo received the Ph.D. degree in computer science from Beijing Jiaotong University, Beijing, China, in 2021. She is an associate professor at the School of Computer and Information Technology, Beijing Jiaotong University. Her research interests focus on spatial-temporal data mining and intelligent transportation systems. Youfang Lin received the Ph.D. degree in signal and information processing from Beijing Jiaotong University, Beijing, China, in 2003. He is a professor with the School of Computer and Information Technology, Beijing Jiaotong University. His main fields of expertise and current research interests include big data technology, intelligent systems, complex networks, and traffic data mining. Huaiyu Wan received the Ph.D. 
degree in computer science and technology from Beijing Jiaotong University, Beijing, China, in 2012. He is a professor with the School of Computer and Information Technology, Beijing Jiaotong University. His current research interests focus on spatio-temporal data mining, social network mining, information extraction, and knowledge graphs.
http://arxiv.org/abs/2407.13714v1
20240718171059
Mapping Inter-City Trade Networks to Maximum Entropy Models using Electronic Invoice Data
[ "Cesar I. N. Sampaio Filho", "Rilder S. Pires", "Humberto A. Carmona", "José S. Andrade Jr" ]
physics.soc-ph
[ "physics.soc-ph", "physics.data-an" ]
Departamento de Física, Universidade Federal do Ceará, 60451-970 Fortaleza, Ceará, Brazil Centro de Análise de Dados e Avaliação de Políticas Públicas, Instituto de Pesquisa e Estratégia Econômica do Ceará, 60822-325, Fortaleza, Ceará, Brazil Laboratório de Ciência de Dados e Inteligência Artificial, Universidade de Fortaleza, 60811-905 Fortaleza, Ceará, Brazil Email: soares@fisica.ufc.br § ABSTRACT We analyze the network of transactions among cities based on the electronic invoice database for the municipalities in the Ceará state, Brazil. This database consists of approximately 3.7 billion records, each containing 43 fields of information, registered between the years 2016 and 2019. All the transactions are grouped into a single dataset and represented as an asymmetrical adjacency matrix corresponding to a directed graph with connections weighted by the number of transactions among cities. Due to the large size of the Ceará state, 148,894.442 km^2 <cit.>, its unequal distribution of wealth, and spatially heterogeneous population density, we initially determine communities of cities based on the mutual intensity of their trades and then verify to what extent their economic interests reflect what we define as "community cohesiveness". For the first task, we use the Infomap algorithm to detect the partition which provides the shortest description length and captures the optimal community structure of the network in terms of its associated flow dynamics. Surprisingly, the partition identified has five modules, whose two-dimensional geographical projections are all simply-connected domains, i.e., consisting of single pieces without holes. Having described the topological properties of the transaction network, we proceed with the analysis of our database from the perspective of traded products by building bipartite structures represented in terms of adjacency matrices between municipalities and products, considering both the contexts of selling and buying. We then make use of the revealed comparative advantage (RCA) concept, widely used in foreign trade analyses, to define a non-monetary and binary activity index that is capable of distinguishing the relative advantage of a city in a class of goods or services as evidenced by trade flows. Finally, through the pairwise Maximum Entropy Model, we associate with each of the largest communities previously characterized a corresponding binary Ising-like Hamiltonian model. The local fields and couplings computed for a given community are those that best reproduce the average product activities of its cities as well as the statistical correlations between the product activities of all pairs of its cities. 
In an analogy with critical phenomena, our results reveal that each community operates at a "temperature" that is close to the corresponding "critical point", suggesting a high degree of "economic cohesiveness" in its trade network of cities. § INTRODUCTION Economic geography investigates the way different regions and countries are interconnected through trade and investment, as well as how these relationships affect growth, development and inequality. Understanding the complex relationships between the enormous number of economic activities, people, firms and places requires the definition of metrics necessarily involving dimensionality reduction techniques that together are referred to as Economic Complexity (EC). The Economic Complexity Index (ECI) <cit.> and the Economic Fitness Index (EFI) <cit.> are two examples of these metrics. Both of them are based on the examination of the intrinsic interdependencies between countries and regions by exploring international trade data. In this way, researchers have gained insight, for example, into which countries are important hubs in the global trade network and how products can be compared in terms of their relative complexity and distinctiveness. Through the concept of proximity between products, Hidalgo and Hausmann <cit.> proposed that the export basket of developing countries should expand more efficiently, from an economic point of view, to new products that are "close" to the ones already being exported. One key concept present in all these approaches is that the rarity of the products a country exports, their technological complexity and diversity should be closely related to the country's installed infrastructure, such as transportation network, energy systems and intellectual capital <cit.>. Using the Revealed Comparative Advantage (RCA) index <cit.> to construct a bipartite network of countries and products, it is possible to quantify the complexity and diversity of products, so that countries' export baskets and their Gross Domestic Product (GDP) per capita can be empirically related, aiming to forecast countries' growth <cit.>. However, it is important to note that there are many other factors influencing a country's GDP per capita, including, for example, its natural resources, political stability, extreme events and more. In this way, while the export basket indeed represents a crucial factor impacting economic growth, it is certainly not the only one. In Ref. <cit.> the authors review how machine learning techniques apply to economics with the goal of understanding the systemic interactions that influence various socioeconomic outcomes, and discuss how big data and machine learning are instrumental in this emerging field of economic complexity. Brummitt et al. <cit.> introduced a machine learning technique, which they named Principal Smooth-Dynamics Analysis (PriSDA), to explore the dynamics of economic growth. Their findings emphasized that product diversity, particularly in more sophisticated products, serves as a significant driver of income growth. In a parallel vein, Albora et al. <cit.> employed supervised learning techniques on the UN-COMTRADE database, using the Harmonized System 1992 classification (HS) <cit.>. 
Their research highlights that the ability to forecast the introduction of new products is essential for effective economic planning. In Ref. <cit.>, the authors introduce a novel approach by applying convolutional neural networks to high-resolution satellite imagery. This method is used to estimate economic livelihood indicators, such as consumption expenditure and asset wealth, in five developing African countries: Nigeria, Tanzania, Uganda, Malawi, and Rwanda. Although EC metrics were initially proposed based on international trade data, they were further developed and applied to non-export data sets and subnational entities. Operti et al. <cit.> introduced a novel algorithm they termed Exogenous Fitness, as an evolved form of the previously established Fitness metric <cit.>. Focused on Brazilian states, they assessed regional competitiveness through the export basket. By combining Exogenous Fitness scores with GDP per capita, the authors distinguish between two economic regimes among these states: one with high predictability and another with low predictability. The study also compares Exogenous Fitness rankings to those from Endogenous Fitness and the Economic Complexity Index, offering a comprehensive view of regional economic dynamics. In this work we apply concepts of EC at the scale of municipalities using electronic invoice trade data. To make an analogy with international trade, we treat sales from one municipality to others as exports. One relevant consideration that naturally arises is that, at this scale, it is not guaranteed that the traded products are in fact produced locally. As a consequence, the spatial correlations at the state level between exports and installed capabilities are expectedly weaker, which led us to approach the trade network from a partitioned point of view, i.e., by considering the potential formation of communities. Community structure plays a pivotal role in understanding the intricate dynamics of complex networks, encompassing diverse domains such as social and biological networks. Its significance lies in the substantial implications it holds for the propagation of information, distribution of resources, and spreading of influence within the network <cit.>. In this study, we initially investigate the phenomenon of community formation induced by the exchange of products between municipalities in the trade network. By examining this interplay, we aim to shed light on the underlying mechanisms by which the "flow" of products induces the formation of communities. Further, in order to understand the economic patterns of commercial activities at the scale of cities, we infer "pairwise interactions" between pairs of cities through the products they trade. Particularly, using the binary municipality-product matrices of the communities detected from the network of transactions, we find that the product baskets of different cities are often strongly correlated, both for sales and purchases. We therefore exploit these statistical correlations and the average activities of the cities using the Maximum Entropy Model (MEM) developed in information theory. This method provides a conceptual framework based on statistical physics models for representing a given natural process in terms of "interactions" between its elementary units using experimental data <cit.>. 
More precisely, the principle of maximum entropy encapsulates the core concept behind the Inverse Ising Problem solution, or the so-called Boltzmann machine, wherein an underlying "Hamiltonian" associated with a given complex system can be inferred from the observed statistical correlations among its constituent parts. In this way, the MEM has been applied to systems that can be mapped to Ising-like models, that is, models in which the interacting elements are in an active or inactive state, i.e., a network of dipole moments with spin states that are up or down under the action of an external field and their mutual interactions. In a neuronal network, for example, interactions between pairs of neurons that react to some stimuli are deduced from their firing patterns <cit.>. MEMs have also been successful in the characterization of protein-protein interactions <cit.> and genetic interaction networks from gene expression patterns <cit.>. Other complex systems have been analyzed in terms of the Boltzmann machine, for example the collective responses exhibited by flocks of birds <cit.> and, more recently, the emergence of collective behavior from the eye movement patterns of a group of people while watching commercial videos <cit.> or reading texts <cit.>. In Ref. <cit.>, Bury conducts an analysis of stock market data utilizing a maximum entropy model. Within this framework, market indices are conceptualized as time-dependent binary spin states, each characterized by either bullish or bearish behavior. The author focuses on two distinct financial systems: one that encompasses eight European indices, and another that consists of a selection of stocks from the Dow Jones index. Employing criteria outlined in Ref. <cit.>, the presence of criticality in these finite systems is investigated. Specifically, a system is deemed to be approaching a critical state if a peak is observed in the temperature-dependent variance of the likelihood, also known as the heat capacity, near its operational point. The study concludes that neither system functions in a strictly critical state. The European indices generally operate in proximity to criticality, except during market downturns. In contrast, the Dow Jones system is found to function significantly far from criticality. This paper is organized as follows. In Section II, we present the network of transactions between municipalities in the Ceará state, in the Northeast of Brazil. In Section III, we employ a community detection scheme based on the flow dynamics in the network of transactions to identify regions in the state that are organized by the strength of the local trade, quantified by the number of commercial transactions between different municipalities. In Section IV, we apply the MEM to investigate the internal "cohesiveness" of the largest communities, focusing on the Ising-like models deduced from the product baskets for sales and purchases of their corresponding cities. Finally, in Section V we present the general conclusions of the work. § NETWORKS OF TRANSACTIONS The Ceará state, in northeastern Brazil, has 184 municipalities, with an estimated population of 9,240,580 people. The network of transactions between pairs of cities in Ceará is built from a database of all electronic invoices <cit.> registered in the state between the years 2016 and 2019, which contains approximately 3.7 billion records, each one corresponding to a traded product, with 43 fields of information. 
Precisely, we consider all the transactions of products that the cities in Ceará carried out among themselves and with the other cities of Brazil, thus disregarding all transactions circumscribed to a given city. We then applied a thorough process of standardizing and sanitizing the data and deflated all monetary values starting from January 2016, corresponding to our database's first month. Furthermore, we only considered transactions with values larger than 2000 USD. Once these preprocessing steps are completed, we group all the transactions into a single dataset and build an asymmetrical adjacency matrix corresponding to a directed graph with connections weighted by the number of transactions between pairs of cities. At this point, with the purpose of characterizing the economic dynamics of the Ceará state in the time window from 2016 to 2019, we proceed with the definition of a complex network based on the data of only internal sales and purchase operations among cities. The network corresponds to a single, densely connected component with C=185 nodes representing the municipalities, and 14479 directed-weighted edges. The direction of an edge is from the city that sells to the city that buys one or more given products. To each directed edge between a pair of cities i and j, a weight is associated that corresponds to the total number of selling operations from i to j, n_ij, or from j to i, n_ji, performed during the time window. From this network, we can then compute the numbers of internal selling and internal buying connections of each city i, k^S_i and k^B_i, respectively, as well as the sum of their corresponding weights, W_i^S=∑_j=1^k_i^S n_ij and W_i^B=∑_j=1^k_i^B n_ji. Figures <ref>b and <ref>c show the dependence of the weights of the nodes on their degrees in a double logarithmic plot. As depicted, the relationship between these two variables seems to be well described in terms of a power-law model for both the selling and the buying data sets. In order to attenuate the effect of the dispersion of the data points on the parameter estimation, we utilized the RANSAC algorithm <cit.>, which statistically identifies the outliers and fits the model considering only the inlier points. Therefore, among the data points obtained for all cities relating the total number of their internal sales transactions (out-transactions), W^S, with the total number of their internal sales connections (out-degree), k^S, we select only the inliers (green circles) to perform a least-squares fit to the power law, W^S≃(k^S)^β, and find the exponent β=1.87± 0.03. Following the same procedure for the inliers in the plot of the in-transactions, W^B, against the in-degree, k^B, the least-squares fit to the power law, W^B≃(k^B)^β, gives the exponent β=2.19± 0.01. The values of these exponents imply that the weights of the cities' connections grow disproportionately faster than their corresponding degrees. Such a superlinear scaling behavior contrasts with the linear one, W(k) ∼ k, expected for the case in which the weights of the edges n are statistically uncorrelated with the degree of the nodes from where they depart (selling) or to where they arrive (buying) <cit.>. In order to illustrate this condition, we performed additional calculations preserving the degrees of each node i, k_i^S and k_i^B, but shuffling the values of the weights n_ij between randomly chosen pairs of edges in the network. In this way, strong correlations, if present in the original network, should disappear. 
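A minimal sketch of this degree-preserving null model, assuming the directed trade network is stored as a networkx.DiGraph whose edges carry the number of operations in an attribute we call "weight" (both the storage format and the attribute name are our own assumptions):

import random

def shuffle_edge_weights(G, seed=None):
    # Keep the edge set, and hence every k_i^S and k_i^B, unchanged,
    # but permute the weights n_ij at random among the existing edges.
    rng = random.Random(seed)
    H = G.copy()
    weights = [data["weight"] for _, _, data in H.edges(data=True)]
    rng.shuffle(weights)
    for (u, v), w in zip(H.edges(), weights):
        H[u][v]["weight"] = w
    return H

In the shuffled network, the total weight of a node should grow roughly linearly with its degree, which is the comparison made next.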
Indeed, the results shown in Fig. <ref> of the supplemental material indicate that the effect of suppressing strong correlations is to recover a linear relation between the total node weights and their degrees. Given Ceará state's vast area, alongside its uneven wealth distribution and diverse population density patterns, our approach is twofold. First, we aim to identify clusters of cities by examining the intensity of their mutual trades. Subsequently, we will assess how closely their economic interests align with what we term community "cohesiveness". For the initial task, we have chosen the Infomap algorithm <cit.>, since it is compatible with the most relevant and inherent feature of the system under investigation here, namely, a network of flows, in particular, flows of financial resources based on trade operations among cities. Accordingly, this flow-based algorithm makes use of an information-theoretic method to detect network communities with a very high computational performance. Figure <ref>a shows the map of the Ceará state in Brazil colored according to the communities detected via the Infomap algorithm. The algorithm identifies a partition with the minimal average per-step description length of the random walk. Interestingly, the two-dimensional geographical projections of the resulting five modules clearly show that they are all simply-connected domains, i.e., well-delimited single pieces without holes <cit.>. This result demonstrates the consistency and corroborates the adequacy of the flow-based algorithm, which is in evident contrast with other approaches relying on pairwise interactions and the network formation process, as is the case, for example, with the generalized modularity <cit.>. For comparison, we show in Fig. <ref>b the communities detected using the stochastic block model <cit.>. Clearly, the large number of communities, their heterogeneity in space, and the lack of contiguity among their constituent pieces indicate that the method is unable to capture the economic dynamics embedded in the trade network of sales and buying among cities. In Fig. <ref> we show the trade share matrix after clustering the municipalities using Infomap, with the communities delimited by black continuous lines. The matrix has color-coded entries according to the number of transactions between a pair of nodes, from blue to red, corresponding to low and high values, respectively. Moreover, it is non-symmetrical, with rows corresponding to the cities that sell and columns to the cities that buy. In each community, we highlighted the leading cities in terms of population and monetary resources. These cities, including Fortaleza, the capital of Ceará, present a wide spectrum of transactions that virtually extends to all other cities in the state. § COMMUNITY DETECTION Having studied the topological properties of the networks of transactions, we next analyze the database of electronic invoices from the perspective of the traded products. To this end, we build bipartite networks, one for selling and another for buying, in which the two types of nodes are the cities of the Ceará state and the products they trade, sell or buy, respectively, with any other city in the country. Following <cit.>, the products are identified in the electronic invoices using the HS <cit.>. 
The presence of a link in these networks should reflect the relative importance of the selling or buying transactions of a given city c involving the product p in a balanced context of all transactions among all cities and all products traded in the system. This is achieved here by adopting the concept of the Revealed Comparative Advantage (RCA) index <cit.>, quantified in terms of the complexity of products and the diversification of the cities' baskets of traded products. More precisely, in order to determine the relevant transactions, we adopt the following model of activation of products: let q_c,p^S be the monetary value associated with a city c selling a product p to a different municipality in the state of Ceará or any other city in Brazil. Thus T_c^S=∑_p^P q_c,p^S corresponds to the total sales of the city c, the summation running over all P products traded in the state. On the other hand, the amount T_p^S = ∑_c^C q_c,p^S adds up to the total sales of a product p, the summation now running over all C municipalities in the state of Ceará. Likewise, the quantity T_tot^S=∑_c,p^C,P q_c,p^S amounts to the total monetary value associated with the sales of all products P by all cities C in the state during the investigated time window. From these three quantities, we can compute the quotient of quotients, Q_c,p^S=(q_c,p^S/T_c^S)/(T_p^S/T_tot^S), where the numerator corresponds to the relative value of the sales of product p by city c, compared to all sales of this city. The denominator is the relative value of all sales of the product p to all sales of the state. Equation (<ref>) is the usual definition of an element of the RCA matrix <cit.>, which we here extend to the trades among municipalities. If Q_c,p^S > 1, it means that product p is relatively more important to the sales of city c than the relative importance of all sales of this product by all cities. In this case, we say that the municipality c is a "seller" of product p. We apply the same reasoning to Q_c,p^B, computed from the monetary values associated with the purchases of product p by the municipality c. In this case, if Q_c,p^B > 1, it means that buying product p is relatively more important to the city c than the relative importance of all purchases of this product by all cities of the state, so that the municipality c is a "buyer" of product p. From the RCA matrix for sales, Q^S, obtained using Eq. (<ref>), we can construct the corresponding binarized municipality-product matrix σ^S, with the activity elements defined by σ_c,p^S = +1 if Q_c,p^S ≥ 1 and σ_c,p^S = -1 if Q_c,p^S < 1, for each city c=1, 2, 3, … , C and product p=1, 2, 3, … , P. The same transformation is applied to the RCA matrix for buying, Q^B, so that the binary matrix σ^B with activity elements σ_c,p^B can also be obtained. The diversity of the basket of products for sales of a given city c, defined as D_c^S=∑_p^P(σ_c,p^S+1)/2, corresponds to the number of relevant products the city sells, while the ubiquity for sales of a given product p, defined as U_p^S=∑_c^C(σ_c,p^S+1)/2, corresponds to the number of cities whose baskets of relevant products for sales include the product p. In the same fashion, the diversity of the basket for buying of a city c, D_c^B, and the ubiquity for buying of a product p, U_p^B, can be readily obtained from the elements of the binary matrix σ^B. 
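The construction of the RCA matrix, its binarization, and the diversity and ubiquity counts translate directly into a few array operations. A sketch with NumPy, where q is a hypothetical C x P array holding the monetary values q_c,p (the function and variable names are ours):

import numpy as np

def rca_activity(q):
    # q: (C, P) array of monetary values; assumes every city and every
    # product has at least one transaction, so no division by zero occurs.
    T_c = q.sum(axis=1, keepdims=True)          # total trade of each city
    T_p = q.sum(axis=0, keepdims=True)          # total trade of each product
    T_tot = q.sum()                             # total trade in the state
    Q = (q / T_c) / (T_p / T_tot)               # RCA matrix Q_c,p
    sigma = np.where(Q >= 1, 1, -1)             # binarized activities in {-1, +1}
    diversity = ((sigma + 1) // 2).sum(axis=1)  # D_c: relevant products per city
    ubiquity = ((sigma + 1) // 2).sum(axis=0)   # U_p: cities trading each product
    return Q, sigma, diversity, ubiquity

The same routine applies to both selling and buying values, yielding Q^S, σ^S and Q^B, σ^B, respectively.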
Figure <ref> shows the raster plots of the municipality-product matrices σ^B and σ^S, with the cities sorted in descending order of their respective diversity (from top to bottom), and the products sorted in descending order of their respective ubiquity (from left to right). Upon examination, the municipality-product matrix for buying transactions (Fig. <ref>a) is markedly denser than the one for selling transactions (Fig. <ref>b). This observation is straightforward to grasp. First notice that both matrices share the same set of municipalities and the same product basket. The observed asymmetry reveals the diversification of consumption behaviors. Specifically, municipalities and their inhabitants engage in purchasing a wide range of products, many of which are not produced locally. On the other hand, production trends are far more specialized. Municipalities tend to focus on manufacturing specific products, influenced by factors such as the availability of resources, specialized expertise, labor considerations, and historical contexts. When focusing on commerce, economies of scale become evident. For some products, distribution is more efficient when undertaken by a small number of municipalities dealing in significant volumes. Such disparities in transactional behaviors are responsible for the observed asymmetry. For the case under study, the most frequent classes of products are "Appliances for Agriculture", "Rice Cultivation" and "Mineral Fuels" for buying, while "Building Bricks", "Mineral Water", and "Fruit and Vegetable Conserves" are the most frequent classes for selling. Fortaleza, Juazeiro do Norte, Maracanaú, Quixadá, and Eusébio are the cities with the most diverse baskets of products for selling transactions, while Fortaleza, Juazeiro do Norte, Maracanaú, Sobral, and Caucaia have the highest diversities in the case of buying. § MAXIMUM ENTROPY MODEL In this study we aim to investigate the statistics of the product activities of the cities from the perspective of a pairwise Maximum Entropy Model. We follow the approach proposed in Refs. <cit.> to build an Ising-like model for a given community, which is capable of replicating the average activities and pairwise correlations in terms of the product baskets for each city in this community. We then evaluate the behavior of the resulting model in the analogous framework of the corresponding thermal equilibrium properties. Accordingly, for a given community, from the observed product series of the activity of a city i, as defined in Eq. (<ref>), we can calculate its product-average activity as ⟨σ_i⟩^obs = 1/P∑_p=1^Pσ_i,p, as well as the covariances between the product series of the activities (for selling or buying) of each pair of cities i and j, Cov_ij^obs= ⟨σ_iσ_j⟩^obs - ⟨σ_i⟩^obs⟨σ_j⟩^obs, where ⟨σ_iσ_j⟩^obs =1/P∑_p=1^Pσ_i,pσ_j,p. Moreover, to model the observed activities, we consider that the σ's correspond to Ising-like variables on a fully connected network of C sites. Therefore, in analogy with Statistical Mechanics, σ = σ_1,p,…, σ_C,p would be descriptive of "the configuration of the community" with respect to a given product p. The probability distribution P(σ) that represents our system is the one that maximizes the entropy, S = -∑_σ P(σ)lnP(σ ), while reproducing our observations, i.e., ⟨σ_i⟩^obs for all C cities and all C(C-1)/2 values of Cov_ij^obs. 
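Computing these constraints of the maximum entropy fit amounts to product-averaging the rows of the binary matrix; a short NumPy sketch, where sigma is the C x P activity matrix of one community (the function name is ours):

import numpy as np

def observed_statistics(sigma):
    # sigma: (C, P) array with entries in {-1, +1}
    mean_act = sigma.mean(axis=1)                   # <sigma_i>^obs
    pair_corr = (sigma @ sigma.T) / sigma.shape[1]  # <sigma_i sigma_j>^obs
    cov = pair_corr - np.outer(mean_act, mean_act)  # Cov_ij^obs
    return mean_act, cov

These two arrays are the only quantities the inverse problem described next has to reproduce.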
Given these two additional constraints, it can be readily shown <cit.> that the form of P(σ) is the Boltzmann probability distribution at a temperature T=T_0=1, P(σ) = 1/Z e^-ℋ(σ)/T, where Z = ∑_σ e^-ℋ( σ)/T is the partition function and ℋ is analogous to a Hamiltonian with the same form as the Ising model, ℋ(σ) = -∑_i=1^C h_i σ_i - ∑_i,j > i^C J_ijσ_iσ_j. It should be noted that this model is derived directly from empirical data through the maximum entropy principle, rather than being presumed as a simplified representation of the underlying dynamics. As such, this method constitutes a precise mapping and not merely a figurative comparison. Equations (<ref>) and (<ref>) together are designed to calculate, based on the observed inter-city correlations, the likelihood of all possible states across the entire network of cities. This represents a baseline model, implying that the actual network might exhibit greater complexity than what the maximum entropy model indicates, but certainly not less. This Ising-like correspondence naturally leads us to interpret h_i as the action of a local external stimulus on the product activity of the city i, analogous to a "random field", and J_ij as a "coupling coefficient" between cities i and j. Such pairwise couplings or interactions between the product activities of cities give rise to the observed correlations between them. At this point, we compute the local fields h_i and the interactions J_ij by directly solving the inverse problem given by Eq. (<ref>). The local fields h_i and interaction constants J_ij are obtained through the following iterative scheme: J_ij (n+1) = J_ij (n) - η (n) [Cov_ij^MC-Cov_ij^obs], h_i (n+1) = h_i (n) - η (n) [⟨σ_i⟩^MC - ⟨σ_i⟩^obs], where n is the iteration parameter and we start with n=1 and h_i(n=1)=0. The covariance Cov_ij^MC between two sites i and j of the Ising-like network of Eq. (<ref>) is given by Cov_ij^MC = ⟨σ_iσ_j⟩^MC - ⟨σ_i⟩^MC⟨σ_j⟩^MC, where the statistical average ⟨⋯⟩^MC is obtained by performing a Monte Carlo simulation of the model Eq. (<ref>) at temperature T_0 = 1 using h_i (n) and J_ij (n). The function η (n) is a learning rate which decays like 1/n^0.4 <cit.>. Typically, we iterate until n = 80000. Once we infer the values of h_i and J_ij that best reproduce the observed product-average activities ⟨σ_i⟩^obs and covariances Cov_ij^obs, while maximizing the entropy, the Boltzmann probability distribution of Eq. (<ref>) characterizes the statistics of the product activities of the cities composing a given community dataset. By solving Eqs. (<ref>) and (<ref>) simultaneously, we obtain, for each community and for both buying and selling cases, the corresponding local fields h_i and coupling constants J_ij. As shown in Fig. <ref>, the distributions of the fields h_i indicate that they are predominantly negative, with those for selling transactions being systematically more negatively skewed than those for buying. Comparatively, as shown in Fig. <ref>, the distributions of the coupling interaction constants J_ij are symmetrically centered around zero. Additionally, for all three communities they can be accurately characterized as Gaussians with mean values close to zero, but with standard deviations that are systematically smaller for buying operations than for selling. To assess the efficacy of the Ising-like model described by Eq. (<ref>) in replicating measured averages derived from observational data, we compare in Fig. 
<ref> the final product-averaged magnetizations obtained from the MEM, ⟨σ_i⟩^MC, with their observational counterparts, ⟨σ_i⟩^obs, for each city i within the three largest communities, labeled I, II, and III, respectively. The same is shown in Fig. <ref>, but for the covariances Cov_ij^MC against Cov_ij^obs between every pair of cities i and j in the aforementioned communities. Clearly, the concordance between the model-generated and observational metrics is excellent across all examined cases. To provide a more conclusive test of the model, we compared the three-point activity correlations generated by the model, T^MC_ijk, with those observed, T^obs_ijk, where T_ijk = ⟨(σ_i-⟨σ_i⟩)(σ_j-⟨σ_j⟩)(σ_k-⟨σ_k⟩)⟩. As depicted in Fig. <ref>, both the model's predictions and the observed triplet correlations exhibit a strong correlation across all cases, that is, for both the buying and selling scenarios in the three communities. Moreover, the quantitative alignment between them is also quite satisfactory, with relative errors, ⟨(T^MC_ijk - T^obs_ijk)/T^obs_ijk⟩, of -0.33×10^-3%, -0.093%, and 0.12% for the buying scenarios, and -0.54%, 0.004%, and -0.062% for the selling scenarios, in relation to Communities I, II, and III, respectively. After demonstrating that the Ising-like model adequately represents the product-averaged properties of the three largest communities in both selling and buying scenarios, we delve deeper into the implicit thermodynamic analogy introduced by the MEM methodology <cit.>. Within this framework, using the learned J_ij and h_i parameters along with Eqs. (<ref>) and (<ref>), we conduct Monte Carlo simulations at temperatures T other than the operational temperature T_0=1. Given the magnetization for each configuration, defined as M({σ})=|1/C∑_i^Cσ_i|, the order parameter for a ferromagnetic phase transition can be represented by the ensemble average at a constant temperature T. This is mathematically expressed as: M(T) = ⟨ M({σ}) ⟩_T^MC. In this equation, the statistical average ⟨⋯⟩_T^MC is calculated using Monte Carlo simulations at temperature T. Additionally, considering the energy per city for each configuration as E({σ})=1/Cℋ({σ}), the fluctuation-dissipation theorem leads to the following expression for the specific heat: C(T) = 1/T^2( ⟨ E({σ})^2⟩ - ⟨ E({σ}) ⟩^2). In Figs. <ref>a and <ref>b of the supplemental material, we present the average magnetization M(T) as a function of temperature T for the buying and selling activities of the three largest communities, respectively. The magnetization starts to decrease rapidly near the temperature T=1.0, indicating a change in behavior that is analogous to a phase transition from ordered to disordered spin configurations, with critical temperatures T_c that, for all cases, are close to the operational temperature T_0. The macroscopic ordering diminishes for all analyzed communities as the temperature ascends. Notably, at temperatures significantly above the critical ones, the magnetizations observed for the selling curves of all three communities saturate at larger values compared to the buying cases. This behavior can be attributed to the local field distributions showcasing a pronounced skewness toward negative values in the selling scenario, as shown in Fig. <ref>, thereby mitigating the extent of its order-disorder transition. Figures <ref>a and <ref>b depict the temperature-dependent specific heat, C(T), computed for the three largest communities in relation to their buying and selling product baskets, respectively. 
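The curves discussed below can be estimated from the inferred parameters with a short Metropolis routine; the sketch is ours, with simplifying choices (single-spin-flip updates and fixed sweep counts) that are not taken from the paper:

import numpy as np

def metropolis_thermo(h, J, T, n_sweeps=5000, n_burn=1000, seed=None):
    # h: (C,) local fields; J: (C, C) symmetric couplings with zero diagonal.
    rng = np.random.default_rng(seed)
    C = h.size
    s = rng.choice([-1, 1], size=C)
    E_samples, M_samples = [], []
    for sweep in range(n_sweeps):
        for _ in range(C):                       # one sweep = C attempted flips
            i = rng.integers(C)
            dE = 2.0 * s[i] * (h[i] + J[i] @ s)  # energy cost of flipping spin i
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i] = -s[i]
        if sweep >= n_burn:
            E = (-(h @ s) - 0.5 * s @ J @ s) / C  # energy per city, E({sigma})
            E_samples.append(E)
            M_samples.append(abs(s.mean()))
    E_samples = np.array(E_samples)
    M_T = float(np.mean(M_samples))               # order parameter M(T)
    C_T = float(E_samples.var() / T**2)           # specific heat from fluctuations
    return M_T, C_T

Sweeping T around T_0 = 1 and locating the peak of C(T) provides the estimate of T_c discussed next.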
In both scenarios, each curve peaks at a temperature T_c, which, while exceeding the operating temperature T_0=1, remains in close proximity to it (refer to Table <ref> for specific numerical values). Given this observation, it is reasonable to assert that the learning dynamics of the six Boltzmann machines predominantly take place within the "critical region". This "critical temperature" T_c serves as a threshold, distinguishing the ordered phases (for T < T_c) from the disordered phases (for T > T_c). A comparison between Figs. <ref>a and <ref>b indicates that the energy fluctuations for buying are more pronounced than for selling. Also shown in Figs. <ref>a and <ref>b is the dependence of the specific heat on temperature, obtained after randomly shuffling the product series associated with the buying and selling activities of Community I, respectively. This random shuffling can significantly reduce the intrinsic correlations present in the sequence of "spins". Notably, under these conditions, the pronounced peak in the specific heat becomes significantly subdued, effectively undermining the ferromagnetic phase transition originally observed in the system. § CONCLUSIONS In summary, we have shown that the directed-weighted trade network among the municipalities exhibits a nontrivial dependence of the weights of the nodes on their degrees. From the least-squares fit of the sales inlier data to the power law, W^S∼(k^S)^β, we obtained β = 2.07 ± 0.01, while the fitting of the purchasing inlier data to W^B∼(k^B)^β resulted in β = 1.77 ± 0.02. In the topological analysis, the use of the Infomap algorithm has revealed five communities within the trade network among municipalities. Strikingly, the two-dimensional geographical projections of these modules clearly show that they are all simply-connected domains. This reflects that the trade network is inherently characterized by the flow of products. This finding highlights the lasting significance of geographic proximity in shaping trade patterns, even in the era of globalized economies. Such contiguous trade communities, identified purely based on trade flow data, suggest that regional economic interactions are strongly influenced by spatial factors. This has significant implications for regional economic policy, emphasizing the potential for more targeted and regionally coherent economic strategies. For comparison, at the time of writing, the Ceará state has 14 administrative macro-regions that were established by extensive studies taking into account technical criteria related to natural resources, social solidarity, and polarization around urban centers <cit.>. The contrast between these findings and the results from other algorithms like the stochastic block model not only reinforces the uniqueness of Infomap's approach in capturing the flow-based nature of trade but also points to the nuanced complexities inherent in trade network analysis. This understanding can guide future policy decisions and economic planning, emphasizing the relevance of local and regional contexts in economic networks. Finally, conceptualizing cities and their trading patterns in terms of buying and selling products as spin systems has enabled us to harness well-developed theories from physics to interpret complex economic behaviors. 
Specifically, for the purchasing trade operations, we employed a Boltzmann machine that uses a Hamiltonian analogous to that of a Spin-Glass model, which maximizes the system's entropy subject to the observed "magnetizations" and "spin-spin correlations". Our findings indicate that the system operates very close to a critical point, poised for a phase transition from ordered states ("ferromagnetic"), where cities exhibit clustered buying behaviors, to disordered states ("paramagnetic"), where the decision to buy a given product appears random. The same analysis applies to selling operations. Being close to a critical point is particularly revealing, since critical points are hallmarked by scale-invariant fluctuations, suggesting that our economic system is on the verge of a shift, highly sensitive to both external and internal disruptions. Furthermore, this proximity to a critical point implies that minor changes in a single city's economic strategy (or product purchases) could ripple through and potentially catalyze large-scale alterations in the broader economic landscape. Such a dynamic signifies a marketplace characterized by rich, interconnected activities with an inherent potential for rapid evolution. These insights could have profound implications for understanding the resilience and adaptability of economies as well as for informing strategic economic policies. § ACKNOWLEDGEMENTS We thank the Brazilian agencies CNPq, CAPES, FUNCAP and the National Institute of Science and Technology for Complex Systems (INCT-SC) for financial support. § REFERENCES [ce_area] IBGE, Ceará: Cidades e Estados, https://www.ibge.gov.br/cidades-e-estados/ce.html (2023), accessed 2023-09-02. [Hidalgo2009] C. A. Hidalgo and R. Hausmann, "The building blocks of economic complexity," Proc. Natl. Acad. Sci. U.S.A. 106, 10570 (2009). [Hidalgo2021] C. A. Hidalgo, "Economic complexity theory and applications," Nat. Rev. Phys. 3, 92 (2021). [Tacchella2012] A. Tacchella, M. Cristelli, G. Caldarelli, A. Gabrielli, and L. Pietronero, "A new metrics for countries' fitness and products' complexity," Sci. Rep. 2, 723 (2012). [Tacchella2013] A. Tacchella, M. Cristelli, G. Caldarelli, A. Gabrielli, and L. Pietronero, "Economic complexity: Conceptual grounding of a new metrics for global competitiveness," J. Econ. Dynam. Control 37, 1683 (2013). [Hidalgo2007] C. A. Hidalgo, B. Klinger, A.-L. Barabasi, and R. Hausmann, "The product space conditions the development of nations," Science 317, 482 (2007). 
Hausmann, title title The product space conditions the development of nations, https://doi.org/10.1126/science.1144581 journal journal Science volume 317, pages 482 (year 2007)NoStop [Balassa(1965)]Balassa1965 author author B. Balassa, title title Trade liberalisation and “revealed” comparative advantage, https://doi.org/10.1111/j.1467-9957.1965.tb00050.x journal journal Manchester Sch. volume 33, pages 99 (year 1965)NoStop [Balland et al.(2022)Balland, Broekel, Diodato, Giuliani, Hausmann, O'Clery, and Rigby]Balland2022 author author P.-A. Balland, author T. Broekel, author D. Diodato, author E. Giuliani, author R. Hausmann, author N. O'Clery, and author D. Rigby, title title The new paradigm of economic complexity, https://doi.org/10.1016/j.respol.2021.104450 journal journal Res. Pol. volume 51, pages 104450 (year 2022)NoStop [Brummitt et al.(2020)Brummitt, Gómez-Liévano, Hausmann, and Bonds]Brummitt2020 author author C. D. Brummitt, author A. Gómez-Liévano, author R. Hausmann, and author M. H. Bonds, title title Machine-learned patterns suggest that diversification drives economic development, https://doi.org/10.1098/rsif.2019.0283 journal journal J. R. Soc. Interface volume 17, pages 20190283 (year 2020)NoStop [Albora et al.(2023)Albora, Pietronero, Tacchella, and Zaccaria]Albora2023 author author G. Albora, author L. Pietronero, author A. Tacchella, and author A. Zaccaria, title title Product progression: a machine learning approach to forecasting industrial upgrading, https://doi.org/10.1038/s41598-023-28179-x journal journal Sci. Rep. volume 13, pages 1421 (year 2023)NoStop [Asakura(1993)]ASAKURA1993 author author H. Asakura, title title The harmonized system and rules of origin, @noop journal journal J. World Trade volume 27, pages 5 (year 1993)NoStop [Organization(2017)]wco2017 author author W. C. Organization, https://www.wcoomd.org/en/topics/nomenclature/instrument-and-tools/hs-nomenclature-2017-edition.aspx title Hs nomenclature 2017 edition (year 2017), note accessed 2023-05-12NoStop [Jean et al.(2016)Jean, Burke, Xie, Davis, Lobell, and Ermon]Jean2016 author author N. Jean, author M. Burke, author M. Xie, author W. M. Davis, author D. B. Lobell, and author S. Ermon, title title Combining satellite imagery and machine learning to predict poverty, https://doi.org/10.1126/science.aaf7894 journal journal Science volume 353, pages 790 (year 2016)NoStop [Operti et al.(2018)Operti, Pugliese, Andrade, Pietronero, and Gabrielli]Operti2018 author author F. G. Operti, author E. Pugliese, author J. S. Andrade, author L. Pietronero, and author A. Gabrielli, title title Dynamics in the fitness-income plane: Brazilian states vs world countries, https://doi.org/10.1371/journal.pone.0197616 journal journal PLoS ONE volume 13, pages e0217034 (year 2018)NoStop [Cristelli et al.(2013)Cristelli, Gabrielli, Tacchella, Caldarelli, and Pietronero]Cristelli2013 author author M. Cristelli, author A. Gabrielli, author A. Tacchella, author G. Caldarelli, and author L. Pietronero, title title Measuring the intangibles: A metrics for the economic complexity of countries and products, journal journal PLoS ONE volume 8, https://doi.org/10.1371/journal.pone.0070726 10.1371/journal.pone.0070726 (year 2013)NoStop [Girvan and Newman(2002)]Girvan2002 author author M. Girvan and author M. E. J. Newman, title title Community structure in social and biological networks, https://doi.org/10.1073/pnas.122653799 journal journal Proc. Natl. Acad. Sci. U. S. A. 
volume 99, pages 7821 (year 2002)NoStop [Bathelt et al.(2004)Bathelt, Malmberg, and Maskell]Bathelt2004 author author H. Bathelt, author A. Malmberg, and author P. Maskell, title title Clusters and knowledge: local buzz, global pipelines and the process of knowledge creation, https://doi.org/10.1191/0309132504ph469oa journal journal Prog. Hum. Geog. volume 28, pages 31 (year 2004)NoStop [Peixoto(2014)]Peixoto2014 author author T. P. Peixoto, title title Efficient monte carlo and greedy heuristic for the inference of stochastic block models, https://doi.org/10.1103/PhysRevE.89.012804 journal journal Phys. Rev. E volume 89, pages 012804 (year 2014)NoStop [Bialek(2012)]bialek2012biophysics author author W. Bialek, @noop title Biophysics: searching for principles (publisher Princeton University Press, year 2012)NoStop [Cocco et al.(2009)Cocco, Leibler, and Monasson]Cocco2009 author author S. Cocco, author S. Leibler, and author R. Monasson, title title Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods, https://doi.org/10.1073/PNAS.0906705106/SUPPL_FILE/APPENDIX_PDF.PDF journal journal Proc. Natl. Acad. Sci. U. S. A. volume 106, pages 14058 (year 2009)NoStop [Tkacik et al.(2009)Tkacik, Schneidman, au2, and Bialek]Tkacik2009 author author G. Tkacik, author E. Schneidman, author M. J. B. I. au2, and author W. Bialek, http://arxiv.org/abs/0912.5409 title Spin glass models for a network of real neurons (year 2009), https://arxiv.org/abs/0912.5409 arXiv:0912.5409 [q-bio.NC] NoStop [Shlens et al.(2006)Shlens, Field, Gauthier, Grivich, Petrusca, Sher, Litke, and Chichilnisky]Shlens2006 author author J. Shlens, author G. D. Field, author J. L. Gauthier, author M. I. Grivich, author D. Petrusca, author A. Sher, author A. M. Litke, and author E. J. Chichilnisky, title title The structure of multi-neuron firing patterns in primate retina, https://doi.org/10.1523/JNEUROSCI.1282-06.2006 journal journal J. Neurosci. volume 26, pages 8254 (year 2006)NoStop [Tang et al.(2008)Tang, Jackson, Hobbs, Chen, Smith, Patel, Prieto, Petrusca, Grivich, Sher, Hottowy, Dabrowski, Litke, and Beggs]tang2008maximum author author A. Tang, author D. Jackson, author J. Hobbs, author W. Chen, author J. L. Smith, author H. Patel, author A. Prieto, author D. Petrusca, author M. I. Grivich, author A. Sher, author P. Hottowy, author W. Dabrowski, author A. M. Litke, and author J. M. Beggs, title title A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro, https://doi.org/10.1523/JNEUROSCI.3359-07.2008 journal journal J. Neurosci. volume 28, pages 505 (year 2008)NoStop [Mora et al.(2015)Mora, Deny, and Marre]Mora2005 author author T. Mora, author S. Deny, and author O. Marre, title title Dynamical criticality in the collective activity of a population of retinal neurons, https://doi.org/10.1103/PHYSREVLETT.114.078105/FIGURES/3/MEDIUM journal journal Phys. Rev. Lett. volume 114, pages 078105 (year 2015)NoStop [Lotfi et al.(2020)Lotfi, Fontenele, Feliciano, Aguiar, Vasconcelos, Soares-Cunha, Coimbra, Rodrigues, Sousa, Copelli, and Carelli]Lotfi2020 author author N. Lotfi, author A. J. Fontenele, author T. Feliciano, author L. A. Aguiar, author N. A. D. Vasconcelos, author C. Soares-Cunha, author B. Coimbra, author A. J. Rodrigues, author N. Sousa, author M. Copelli, and author P. V. 
Carelli, title title Signatures of brain criticality unveiled by maximum entropy analysis across cortical states, https://doi.org/10.1103/PHYSREVE.102.012408/FIGURES/11/MEDIUM journal journal Phys. Rev. E volume 102, pages 012408 (year 2020)NoStop [Ioffe and Berry II(2017)]Ioffe author author M. L. Ioffe and author M. J. Berry II, title title The structured ‘low temperature’ phase of the retinal population code, @noop journal journal PLoS Comput. Biol. volume 13, pages e1005792 (year 2017)NoStop [Morcos et al.(2011)Morcos, Pagnani, Lunt, Bertolino, Marks, Sander, Zecchina, Onuchic, Hwa, and Weigt]Morcos2011 author author F. Morcos, author A. Pagnani, author B. Lunt, author A. Bertolino, author D. S. Marks, author C. Sander, author R. Zecchina, author J. N. Onuchic, author T. Hwa, and author M. Weigt, title title Direct-coupling analysis of residue coevolution captures native contacts across many protein families, https://doi.org/10.1073/pnas.1111471108 journal journal Proc. Natl. Acad. Sci. U. S. A. volume 108, pages E1293 (year 2011)NoStop [Weigt et al.(2009)Weigt, White, Szurmant, Hoch, and Hwa]Weigt2009 author author M. Weigt, author R. A. White, author H. Szurmant, author J. A. Hoch, and author T. Hwa, title title Identification of direct residue contacts in protein-protein interaction by message passing, https://doi.org/10.1073/pnas.0805923106 journal journal Proc. Natl. Acad. Sci. U. S. A. volume 106, pages 67 (year 2009)NoStop [Stein et al.(2015)Stein, Marks, and Sander]Stein2015 author author R. R. Stein, author D. S. Marks, and author C. Sander, title title Inferring pairwise interactions from biological data using maximum-entropy probability models, https://doi.org/10.1371/journal.pcbi.1004182 journal journal PLoS Comput. Biol. volume 11, pages e1004182 (year 2015)NoStop [Lezon et al.(2006)Lezon, Banavar, Cieplak, Maritan, and Fedoroff]Lezon2006 author author T. R. Lezon, author J. R. Banavar, author M. Cieplak, author A. Maritan, and author N. V. Fedoroff, title title Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns, https://doi.org/10.1073/PNAS.0609152103/SUPPL_FILE/09152FIG9.JPG journal journal Proc. Natl. Acad. Sci. U. S. A. volume 103, pages 19033 (year 2006)NoStop [Locasale and Wolf-Yadlin(2009)]Locasale2009 author author J. W. Locasale and author A. Wolf-Yadlin, title title Maximum entropy reconstructions of dynamic signaling networks from quantitative proteomics data, https://doi.org/10.1371/journal.pone.0006522 journal journal PLoS ONE volume 4, pages e6522 (year 2009)NoStop [Bialek et al.(2012)Bialek, Cavagna, Giardina, Mora, Silvestri, Viale, and Walczak]Bialek2012 author author W. Bialek, author A. Cavagna, author I. Giardina, author T. Mora, author E. Silvestri, author M. Viale, and author A. M. Walczak, title title Statistical mechanics for natural flocks of birds, https://doi.org/10.1073/pnas.1118633109 journal journal Proc. Natl. Acad. Sci. U. S. A. volume 109, pages 4786 (year 2012)NoStop [Bialek et al.(2014)Bialek, Cavagna, Giardina, Mora, Pohl, Silvestri, Viale, and Walczak]Bialek2014 author author W. Bialek, author A. Cavagna, author I. Giardina, author T. Mora, author O. Pohl, author E. Silvestri, author M. Viale, and author A. M. Walczak, title title Social interactions dominate speed control in poising natural flocks near criticality, https://doi.org/10.1073/PNAS.1324045111/SUPPL_FILE/PNAS.201324045SI.PDF journal journal Proc. Natl. Acad. Sci. U. S. A. 
volume 111, pages 7212 (year 2014)NoStop [Burleson-Lesser et al.(2017)Burleson-Lesser, Morone, DeGuzman, Parra, and Makse]Burleson2017 author author K. Burleson-Lesser, author F. Morone, author P. DeGuzman, author L. C. Parra, and author H. A. Makse, title title Collective behaviour in video viewing: A thermodynamic analysis of gaze position, https://doi.org/10.1371/journal.pone.0168995 journal journal PLoS ONE volume 12, pages 1 (year 2017)NoStop [Torres et al.(2021)Torres, Sena, Carmona, Moreira, Makse, and Andrade]Debora2021 author author D. Torres, author W. R. Sena, author H. A. Carmona, author A. A. Moreira, author H. A. Makse, and author J. S. Andrade, title title Eye-tracking as a proxy for coherence and complexity of texts, https://doi.org/10.1371/journal.pone.0260236 journal journal PLoS ONE volume 16, pages e0260236 (year 2021)NoStop [Bury(2013)]Bury2013 author author T. Bury, title title A statistical physics perspective on criticality in financial markets, https://doi.org/10.1088/1742-5468/2013/11/P11004 journal journal J. Stat. Mech. volume 2013, pages P11004 (year 2013)NoStop [Mora and Bialek(2011)]Mora2011 author author T. Mora and author W. Bialek, title title Are Biological Systems Poised at Criticality?, https://doi.org/10.1007/s10955-011-0229-4 journal journal J. Stat. Phys. volume 144, pages 268 (year 2011)NoStop [da Fazenda(2003)]nfe2023 author author M. da Fazenda, https://www.nfe.fazenda.gov.br/portal/principal.aspx title Portal da nota fiscal eletrônica (year 2003), note accessed 2023-05-12NoStop [Fischler and Bolles(1981)]Fischler1981 author author M. A. Fischler and author R. C. Bolles, title title Random sample consensus, https://doi.org/10.1145/358669.358692 journal journal Commun. ACM volume 24, pages 381 (year 1981)NoStop [Chum and Matas(2008)]Chum2008 author author O. Chum and author J. Matas, title title Optimal randomized ransac, https://doi.org/10.1109/TPAMI.2007.70787 journal journal IEEE Trans. Pattern Anal. Mach. Intell. volume 30, pages 1472 (year 2008)NoStop [Newman(2004)]Newman2004 author author M. E. Newman, title title Analysis of weighted networks, https://doi.org/10.1103/PHYSREVE.70.056131/FIGURES/3/MEDIUM journal journal Phys. Rev. E volume 70, pages 9 (year 2004)NoStop [Barrat et al.(2004)Barrat, Barthélemy, Pastor-Satorras, and Vespignani]Barrat2004 author author A. Barrat, author M. Barthélemy, author R. Pastor-Satorras, and author A. Vespignani, title title The architecture of complex weighted networks, https://doi.org/10.1073/PNAS.0400087101/ASSET/7970A6FD-0C68-49D2-AB37-938124DCEAFA/ASSETS/GRAPHIC/ZPQ0080439150007.JPEG journal journal Proc. Natl. Acad. Sci. U. S. A. volume 101, pages 3747 (year 2004)NoStop [Rubinov and Sporns(2010)]Rubinov2010 author author M. Rubinov and author O. Sporns, title title Complex network measures of brain connectivity: Uses and interpretations, https://doi.org/10.1016/J.NEUROIMAGE.2009.10.003 journal journal NeuroImage volume 52, pages 1059 (year 2010)NoStop [Boccaletti et al.(2014)Boccaletti, Bianconi, Criado, del Genio, Gómez-Gardeñes, Romance, Sendiña-Nadal, Wang, and Zanin]Boccaletti2014 author author S. Boccaletti, author G. Bianconi, author R. Criado, author C. I. del Genio, author J. Gómez-Gardeñes, author M. Romance, author I. Sendiña-Nadal, author Z. Wang, and author M. Zanin, title title The structure and dynamics of multilayer networks, https://doi.org/10.1016/J.PHYSREP.2014.07.001 journal journal Phys. Rep. 
volume 544, pages 1 (year 2014)NoStop [Rosvall and Bergstrom(2008)]Rosvall2008 author author M. Rosvall and author C. T. Bergstrom, title title Maps of random walks on complex networks reveal community structure, https://doi.org/10.1073/pnas.0706851105 journal journal Proc. Natl. Acad. Sci. U. S. A. volume 105, pages 1118 (year 2008)NoStop [Rosvall and Bergstrom(2011)]Rosvall2011 author author M. Rosvall and author C. T. Bergstrom, title title Multilevel compression of random walks on networks reveals hierarchical organization in large integrated systems, https://doi.org/10.1371/JOURNAL.PONE.0018209 journal journal PLoS ONE volume 6, pages e18209 (year 2011)NoStop [Alzahrani and Horadam(2016)]Alzahrani2016 author author T. Alzahrani and author K. J. Horadam, title title Community detection in bipartite networks: Algorithms and case studies, in https://doi.org/10.1007/978-3-662-47824-0_2 booktitle Complex Systems and Networks: Dynamics, Controls and Applications, editor edited by editor J. Lü, editor X. Yu, editor G. Chen, and editor W. Yu (publisher Springer Berlin Heidelberg, year 2016) pp. pages 25–50NoStop [Nasser and Vuorinen(2020)]Nasser2020 author author M. M. S. Nasser and author M. Vuorinen, title title Conformal invariants in simply connected domains, https://doi.org/10.1007/s40315-020-00351-8 journal journal Lect. Notes. Math. volume 20, pages 747 (year 2020)NoStop [Papamichael and Kokkinos(1981)]Papamichael1981 author author N. Papamichael and author C. A. Kokkinos, title title Two numerical methods for the conformal mapping of simply-connected domains, @noop journal journal Comput. Methods Appl. Mech. Eng. volume 28, pages 285 (year 1981)NoStop [Fortunato and Hric(2016)]Fortunato2016 author author S. Fortunato and author D. Hric, title title Community detection in networks: A user guide, https://doi.org/10.1016/J.PHYSREP.2016.09.002 journal journal Phys. Rep. volume 659, pages 1 (year 2016)NoStop [Bernard et al.(2007)Bernard, Redding, and Schott]Bernard2007 author author A. B. Bernard, author S. J. Redding, and author P. K. Schott, title title Comparative advantage and heterogeneous firms, https://doi.org/10.1111/J.1467-937X.2007.00413.X journal journal Rev. Econ. Stud. volume 74, pages 31 (year 2007)NoStop [Schneidman et al.(2006)Schneidman, Berry, Segev, and Bialek]Schneidman2006 author author E. Schneidman, author M. J. Berry, author R. Segev, and author W. Bialek, title title Weak pairwise correlations imply strongly correlated network states in a neural population, https://doi.org/10.1038/nature04701 journal journal Nature volume 440, pages 1007 (year 2006)NoStop [Tkačik et al.(2014)Tkačik, Marre, Amodei, Schneidman, Bialek, and Berry]Tkacik2014 author author G. Tkačik, author O. Marre, author D. Amodei, author E. Schneidman, author W. Bialek, and author M. J. Berry, title title Searching for collective behavior in a large network of sensory neurons, https://doi.org/10.1371/JOURNAL.PCBI.1003408 journal journal PLoS Comput. Biol. volume 10, pages e1003408 (year 2014)NoStop [Tkačik et al.(2015)Tkačik, Mora, Marre, Amodei, Palmer, Berry, and Bialek]Tkacik2015 author author G. Tkačik, author T. Mora, author O. Marre, author D. Amodei, author S. E. Palmer, author M. J. Berry, and author W. Bialek, title title Thermodynamics and signatures of criticality in a network of neurons, https://doi.org/10.1073/PNAS.1514188112/SUPPL_FILE/PNAS.201514188SI.PDF journal journal Proc. Natl. Acad. Sci. U. S. A. 
volume 112, pages 11508 (year 2015)NoStop [Bialek et al.(2013)Bialek, Cavagna, Giardina, Mora, Pohl, Silvestri, Viale, and Walczak]Bialek2013 author author W. Bialek, author A. Cavagna, author I. Giardina, author T. Mora, author O. Pohl, author E. Silvestri, author M. Viale, and author A. Walczak, title title Social interactions dominate speed control in driving natural flocks toward criticality, https://doi.org/10.1073/pnas.1324045111 journal journal Proc. Natl. Acad. Sci. U. S. A. volume 111, pages 7212 (year 2013)NoStop [Nguyen et al.(2017)Nguyen, Zecchina, and Berg]Nguyen2017 author author H. C. Nguyen, author R. Zecchina, and author J. Berg, title title Inverse statistical problems: from the inverse ising problem to data science, https://doi.org/10.1080/00018732.2017.1341604 journal journal Adv. Phys. volume 66, pages 197 (year 2017)NoStop [Ataliba et al.(2014)Ataliba, Barreto, Sarquis, and Menezes]Ataliba2014 author author F. Ataliba, author F. D. Barreto, author A. Sarquis, and author B. D. Menezes, www.ipece.ce.gov.br title Desenvolvimento econômico do Ceará: Evidências recentes e reflexões (year 2014)NoStop
http://arxiv.org/abs/2407.12495v1
20240717112309
Pseudomode expansion of many-body correlation functions
[ "Alexander Teretenkov", "Filipp Uskov", "Oleg Lychkovskiy" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
http://arxiv.org/abs/2407.13471v1
20240718124613
Accelerating structure search using atomistic graph-based classifiers
[ "Andreas Møller Slavensky", "Bjørk Hammer" ]
physics.chem-ph
[ "physics.chem-ph", "cond-mat.mtrl-sci" ]
hammer@phys.au.dk Center for Interstellar Catalysis, Department of Physics and Astronomy, Aarhus University, Aarhus C, DK‐8000 Denmark § ABSTRACT We introduce an atomistic classifier based on a combination of spectral graph theory and a Voronoi tessellation method. This classifier allows for the discrimination between structures from different minima of a potential energy surface, making it a useful tool for sorting through large datasets of atomic systems. We incorporate the classifier as a filtering method in the Global Optimization with First-principles Energy Expressions (GOFEE) algorithm. Here it is used to filter out structures from exploited regions of the potential energy landscape, whereby the risk of stagnation during the searches is lowered. We demonstrate the usefulness of the classifier by solving the global optimization problem of 2-dimensional pyroxene, 3-dimensional olivine, Au_12, and Lennard-Jones LJ_55 and LJ_75 nanoparticles. Accelerating structure search using atomistic graph-based classifiers Bjørk Hammer July 22, 2024 ===================================================================== § INTRODUCTION In recent years, machine learning (ML) models have become increasingly relevant in materials science. They can be used to predict various properties of different atomistic compositions at the level of density functional theory (DFT) or above, but at a fraction of the computational cost, enabling for large-scale simulations that would not be possible using first-principle methods. They have been used to aid global optimization (GO) algorithms <cit.>, perform molecular dynamics simulations <cit.>, calculate phase diagrams <cit.>, and predict infrared spectra <cit.>. Large datasets of structural configurations and calculated properties are required in order to train reliable ML models. These can be generated on-the-fly using active learning methods<cit.>, or collected from the resulting structures from GO searches, but these tasks can be quite cumbersome and time consuming. Recently, several databases have been constructed and made publicly available<cit.>, potentially reducing the time required to obtain stable and well-behaving ML models. When dealing with large datasets, a reliable classifier can be useful in order to distinguish structures from each other. This can be relevant to ensure large diversity in a training set for a ML model in order to avoid overfitting. For example, after a GO search<cit.>, a lot of structures from the same energy basin are often found, and it would be convenient to filter out a subset of these structures from such a collection without decreasing its quality. To construct such a classifier, a computational method for encoding and describing a given structure is needed. Through these descriptions, it should be possible to quantify whether two structures are identical or not. In the context of ML, there exist different methods to encode structures in order to obtain a model which is invariant to certain transformations, i.e., translations and rotations. Examples of different categories of descriptors are global feature descriptors<cit.>, where a single representation is constructed for the entire structure, and local feature descriptors<cit.>, where each atom in a configuration is given its own representation. 
Recently, graph representations<cit.> have become a popular way to represent structures, and graph neural networks produce state-of-the-art results in many ML tasks.<cit.> From these representations, a measure of similarity between configurations can be constructed, which can be used to ensure a large diversity in the training set for the ML models<cit.>. Another case for having a strong classifier is to guide certain GO algorithms. Some algorithms, such as evolutionary algorithms (EA) <cit.>, rely on sampling a subset of previously evaluated structures to create a population, in this paper referred to as a sample. From these structures, new configurations are created to advance the search. Relying on a sample of low-energy structures can potentially lead to stagnation in the search, since the sampled structures might not contain the required configurational diversity that is needed to explore the potential energy surface (PES) and find the global energy minimum (GM) structure. In such cases, a classifier can be used to quantify the usefulness of certain structures and eventually filter unwanted configurations out of the sample. Similar ideas have previously been used to improve other GO algortihms, such as in EA<cit.>, where the sample is kept diverse through distance criteria, and in minima hopping<cit.>, where the molecular dynamics temperature is gradually increased when the search is stuck in an energy funnel. This can increase the explorative nature of the search, hopefully leading to a more robust algorithm. In this work, we propose a classifier that is based on spectral graph theory<cit.>, which is used to generate simple graph representations of structures. We construct a modified adjacency matrix, which describes the existence of bonds between atoms within a given structure. We combine this with a Voronoi tessellation method<cit.> to improve its stability towards subtle changes to atomic positions. The Voronoi tessellation method has previously been used to describe coordination numbers in crystals <cit.>, where it was shown to be robust to small changes in the atomic positions, and to extract features for crystal structure representations<cit.>. This classifier is discontinuous, enabling us to easily distinguish between structures by directly comparing their representations. We employ our classifier in the Global Optimization with First-principles Energy Expressions (GOFEE) algorithm<cit.>, where we use it to filter out unwanted structures from the search to improve the explorative capabilities of the algorithm. We show that this improves the performance of GOFEE in terms of finding the GM structure of different atomistic systems, demonstrating the usefulness of the classifier for solving GO problems. The paper is outlined as follows: First, the need for a reliable classifier is presented by considering a dataset of different two-dimensional pyroxene (MgSiO_3)_4 configurations. The classifier is based on a modified adjacency matrix combined with Voronoi tessellation, and its performance on this dataset is investigated. Secondly, the problem of stagnation in GO algorithms is discussed, and a structural filtering method based on the classifier is proposed. Third, the GOFEE algorithm is explained, and the filtering method is incorporated into its sampling protocol. Fourth, the filtering method is used to find the global energy minimum structure of a two-dimensional pyroxene test system. 
Finally, we use the method to solve three-dimensional global optimization problems, namely finding the global energy minimum structure of olivine (Mg_2SiO_4)_4, Au_12 and Lennard-Jones LJ_55 systems. We also demonstrate the usefulness of the method by solving the LJ_75 system using a modified GOFEE algorithm. The methods presented in this paper have been implemented in the Atomistic Global Optimization X (AGOX)<cit.> Python code, which is available at https://gitlab.com/agox/agoxhttps://gitlab.com/agox/agox. § METHODS In the first part of this section, we introduce methods to construct a classifier that can distinguish between structures from different energy basins. Subsequently, we discuss how this classifier can be used in a global optimization setting, where it guides the search towards unexplored regions of the PES. §.§ Filtering structures from a dataset To illustrate the task of classifying structures, we start by considering 6 distinct two-dimensional pyroxene (MgSiO_3)_4 compounds, as illustrated in Fig. <ref>(a). Each of these 6 structures are randomly perturbed three times, resulting in 18 structures in total. These 18 structures are then locally optimized in a ML energy landscape, and the trajectory from each structure relaxation is stored in a database, which results in N=627 configurations. The ML energy landscape is the Gaussian Process Regression model used in Sec. <ref>. We calculate the Fingerprint descriptor<cit.> of these structures and plot them in a two-dimensional space described by the first two components of a principal component analysis (PCA), which is shown in Fig. <ref>(b). See Sec. <ref> for more details about this descriptor. The colors represent which of the 6 energy basins each structure belongs to. We note that several of these structures have very similar descriptors, which stems from the fact that a lot of structures are very similar near the end of a local relaxation. Below we introduce methods that can be used to classify structures from such a dataset. These methods are based on a simple graph representation of structures, which can be used to obtain a description of a given configuration. §.§.§ Modified adjacency matrix classifier We start by describing a classifier that is based on spectral graph theory<cit.>. Here, a structure is described by an adjacency matrix, where the nodes correspond to the atoms, and the edges correspond to bonds between them. The adjacency matrix A^k is defined as A_ij^k = 0, if i=j, 1 , if d_ij≤ d_k, 0, otherwise, where d_ij is the Euclidean distance between atoms i and j, and d_k=k · (r_ cov,i + r_ cov,j) is a cutoff distance, where r_ cov,i is the covalent radius of atom i, and k is a hyperparameter. By including the covalent radii of the atoms in the cutoff distance, the method is able to characterize bonds between different types of atoms. If two atoms are separated by a distance which is less than the specified cutoff distance, a value of 1 is added to the adjacency matrix, indicating that the two atoms are bound to each other. A problem with the adjacency matrix A^k is that it does not take the atomic type of the individual atoms into account. For example, there is no way to distinguish between an A-A-B and an A-B-A configuration, where A and B are different atomic species. To solve this, we introduced a modified version of the adjacency matrix, B^k, B_ij^k = Z_i, if i=j, 1 , if d_ij≤ d_k , 0, otherwise, where Z_i is the atomic number of atom i. 
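As a minimal illustration, the sketch below builds B^k with NumPy, using the covalent radii tabulated in ASE, and computes the sorted-eigenvalue descriptor G formalized in the next paragraph. The value k=1.1 is an arbitrary placeholder, and the sketch is only meant to mirror the definitions above; the actual AGOX implementation may differ in its details.

import numpy as np
from ase.data import covalent_radii   # covalent radius (in Angstrom) indexed by atomic number

def modified_adjacency(positions, numbers, k=1.1):
    # B^k: atomic numbers on the diagonal, 1 for pairs with d_ij <= k * (r_cov,i + r_cov,j), 0 otherwise.
    positions = np.asarray(positions, dtype=float)
    numbers = np.asarray(numbers, dtype=int)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    cutoff = k * (covalent_radii[numbers][:, None] + covalent_radii[numbers][None, :])
    B = (d <= cutoff).astype(float)
    np.fill_diagonal(B, numbers)
    return B

def spectral_descriptor(positions, numbers, k=1.1, decimals=3):
    # Descriptor G: sorted eigenvalues of the real, symmetric B^k, rounded to three decimals.
    return np.round(np.sort(np.linalg.eigvalsh(modified_adjacency(positions, numbers, k))), decimals)

def same_basin(structure_a, structure_b, k=1.1):
    # Two structures are treated as identical when their eigenvalue spectra match exactly.
    Ga, Gb = spectral_descriptor(*structure_a, k=k), spectral_descriptor(*structure_b, k=k)
    return Ga.shape == Gb.shape and bool(np.all(Ga == Gb))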
We note that the matrix B^k is not invariant with respect to swapping indices of the atoms. Since B^k is real and symmetric, its spectrum of eigenvalues only consists of real numbers. This can be used to define a descriptor G of a structure, consisting of N_A atoms, as G = [λ_1, λ_2,…,λ_N_A], where λ_1 ≤λ_2 ≤…≤λ_N_A are the sorted eigenvalues of B^k. To avoid problems with numerical precisions, the eigenvalues are rounded to three decimal places. This descriptor is only based on interatomic distances, meaning that it is invariant with respect to translations and rotations of a given structure, and it is invariant with respect to the indexation of the atoms. Using this descriptor, we consider two configurations to be the same if they have identical spectra of eigenvalues, and this will be used to classify structures. We note that for a structure containing N_A atoms, the descriptor G will contain only N_A eigenvalues, which is less than the 3N_A-6 degrees of freedom of the system. This means that the descriptor is not guaranteed to be unique for a given structural configuration. While this is not ideal when extracting unique structures from a dataset, we have not seen any problems due to this in practice, and for the GO problems considered later on in this paper, this descriptor works well in terms of finding the GM structure. Having designed a simple classifier, we can apply it to the dataset using different scaling parameters, k, for the cutoff distance. The results are shown in Fig. <ref>(c). It can be seen that the classifier is not able to obtain only one structure from each energy basin, and that the quality of the classifier depends heavily on the value of k. This can be explained by the discontinuity of the modified adjacency matrix B^k. Small changes to the position of atoms, that are separated by a distance close to the cutoff distance, can change the matrix B^k, and therefore also the descriptor G. This is illustrated in Fig. <ref>(d) and (e), where bonds between atoms appear or disappear during the local relaxation of structures. Thus, basing the classification of structures only on the distance between atoms is not a strong enough criterion if one needs to extract exactly one structure from each energy basin. §.§.§ Voronoi tessellation In order to get rid of the problem with atoms that are separated by a distance close to the cutoff distance, we need a more robust way of describing bonds between atoms. Looking at the structures like those in Fig. <ref>(d) and (e), we often observe that a major problem for the modified adjacency matrix is to determine the existence of bonds between Mg-Mg, Mg-Si, or Si-Si pairs that are separated by oxygen atoms. To tackle this problem, we want to divide space around each atom into distinct regions and quantify how much two atoms actually "see" each other. To do this, we proceed by employing a Voronoi tessellation method together with the modified adjacency matrix. This method allows for environment aware bond criteria by considering the local environment around each atom. It works by dividing space into regions, where each region consists of all points that are closest to some prespecified points, which in this case are the positions of the atoms. The regions are separated by planes that are placed midway between atoms. These planes are referred to as Voronoi faces, and they can be used to determine if two atoms are bound to each other. 
In this work, we have formulated the following criteria for a bond between atoms: * The atoms share a Voronoi face. * A straight line between the two atoms passes through this Voronoi face. * The Voronoi face must be large enough to enclose a circle, centered on the line through the face, that spans a certain solid angle. Figure <ref>(a) illustrates the Voronoi faces for a Mg atom and the atoms around it as grey lines. The Mg atom is bound to a silicon atom and three oxygen atoms, indicated by the black lines. The Mg and the Si atoms only share a small Voronoi face, which reflects that these atoms barely "see" each other, since they are screened from each other by the neighboring oxygen atoms. This is the reason for introducing the third criterion. By placing a circle on the Voronoi face, centered on the potential bond, we can control how large a Voronoi face must be in order for a bond to be valid. The size of the circle is determined by a hyperparameter θ_C, which we refer to as a cutoff angle. By introducing the hyperparameter θ_C, we are now able to tune the classifier to be more or less sensitive to changes in the structure. Figures <ref>(c) and (d) show two examples of how the cutoff angle is used to determine whether a bond is invalid or valid, respectively. The result of this criterion, using different values of θ_C for three choices of k, is shown in Fig. <ref>(b). The number of structures in the dataset is greatly reduced, and the classifier is much less sensitive to the choice of scaling parameter k for the cutoff distance d_k. Using this method, we are able to reduce the number of structures down to 6 for certain values of k and θ_C, meaning that we have a more reliable way of distinguishing between structures from different energy basins. In AGOX, the Voronoi tessellation method has been implemented in a simple and computationally fast manner, where the Voronoi faces are added iteratively between atoms that are bound to each other according to the modified adjacency matrix B^k. If two atoms do not obey the three criteria above, the bond between them is removed from B^k. For computational convenience and speed, we only sample N_p=8 points on the circumference of the circle and check that they do not intersect other Voronoi faces. These points are sampled uniformly on the circle in a deterministic manner, such that the method always returns the same descriptor for a given structure. For the atomic systems considered in this paper, the Voronoi tessellation method is the most time consuming part, compared to setting up and diagonalizing the adjacency matrix, when constructing a descriptor. However, both these tasks are much less computationally costly compared to other parts of global optimization algorithms, for example DFT calculations. §.§ Filtering of structures during global optimization search Some GO algorithms rely on sampling a subset of known low-energy structures to produce new structures, which introduces the risk of stagnation, because the search keeps exploiting known regions of the PES. This is illustrated in Fig. <ref>(a). Here, the two sample members are always picked from the same two energy basins, and the search is prone to stagnate. To avoid this problem, a GO algorithm requires a reliable way to distinguish between structures and filter away any unwanted subset of these, which is illustrated in Fig. <ref>(b). To achieve this, we employ the classifier introduced in Sec. <ref>. 
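Circling back to the three bond criteria listed earlier in this section, a compact geometric sketch of the third (solid-angle) check is given below. It relies on the fact that a point on the bisector plane of atoms i and j lies on their shared Voronoi face exactly when no third atom is closer to that point than i and j are. The interpretation of θ_C as a half-angle seen from atom i and the choice of N_p=8 sampled points are assumptions made for this sketch; the AGOX implementation may handle the geometry differently.

import numpy as np

def solid_angle_check(i, j, positions, theta_c=0.3, n_points=8):
    # Criterion 3: a circle of half-angle theta_c (radians), centred where the straight i-j line
    # crosses the bisector plane, must lie entirely on the Voronoi face shared by atoms i and j.
    p = np.asarray(positions, dtype=float)
    a, b = p[i], p[j]
    axis = (b - a) / np.linalg.norm(b - a)
    mid = 0.5 * (a + b)                      # where the i-j line crosses the bisector plane
    radius = 0.5 * np.linalg.norm(b - a) * np.tan(theta_c)
    # two unit vectors spanning the bisector plane
    trial = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, trial)
    u = u / np.linalg.norm(u)
    v = np.cross(axis, u)
    others = np.delete(p, [i, j], axis=0)
    if len(others) == 0:
        return True
    for phi in np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False):
        q = mid + radius * (np.cos(phi) * u + np.sin(phi) * v)
        # q is equidistant from i and j; it leaves the shared face if some third atom is closer.
        if np.min(np.linalg.norm(others - q, axis=1)) < np.linalg.norm(q - a):
            return False
    return True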
By tracking the descriptors of all structures that are found during the search, the algorithm is able to make a 'family tree'-like history of all the structures and use it to filter out unwanted structures to avoid certain scenarios to occur. Figure <ref> shows schematic examples of the history of a search and such unwanted scenarios. The grey nodes correspond to structures, and the arrows indicate which structure was modified to generate another structure, establishing a parent-child relation between structures. In Fig. <ref>(a), the search is doing fine in terms of exploring different structures. Figure <ref>(b) shows a situation where a chain of structures with the same descriptor G has been produced. This means that the search makes new structures that are very similar to the parent structure, but which has a slightly lower energy. This situation can be seen in Fig. <ref>(a), where the same two local minima are being refined over and over again. This is an unwanted behavior of the sampler, since this causes the algorithm to focus too much on few parts of the PES. Therefore, we want to filter away structures with the descriptor G to avoid this situation. Figure <ref>(c) shows another situation where the descriptor G_1 has been used to produce five new structures. Since the same structure with description G_1 enters the sample multiple times, it means that all structures generated from it have a higher energy and will not be picked as a sample member. We therefore want to filter out structures with this specific G_1 descriptor from entering the sample, since no new structures with lower energy can be produced from them. Finally, in Fig. <ref>(d), a situation is depicted where a structure with the descriptor G_2 has been used to produce a structure with the descriptor G_1. This indicates that the structure with descriptor G_2 is close to the structures with descriptor G_1 in the configurational space, meaning that the search is still focusing on a region of the PES that has already been exploited. Since we have removed structures with the descriptor G_1, we also want to filter out structures with the descriptor G_2. To avoid these unwanted situations from Fig. <ref> to occur, we require that structures with a descriptor G do not produce more than N_ max new structures, and that they do not produce new structures with already removed descriptors. By doing so, we allow the algorithm to leave a region of the PES that it has exploited, and focus on new, unknown regions, which is illustrated in Fig. <ref>(b). This filtering method will be referred to as the Family Tree Filtering (FTF) method. § THE GOFEE ALGORITHM Now that we have identified scenarios we want to avoid during a search, we can test the FTF method in a GO algorithm. In this work, we focus on the Global Optimization with First-principles Energy Expressions (GOFEE) algorithm. GOFEE is an iterative search method that combines elements from EAs with ML methods. In each iteration, a subset of previously found structures are sampled. These sampled structures, also referred to as parents, are modified to create N_C new structures. The new structures are locally optimized in an uncertainty-aware machine learned energy model, and the structure with lowest confidence bound (= predicted energy minus some degree of predicted uncertainty), according to this ML model, is evaluated in the target potential, i.e., DFT or LJ. More details about the GOFEE algorithm can be found in the Appendix, Sec. <ref>. 
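Before turning to the sampling details, a minimal sketch of the family-tree bookkeeping behind the FTF method described above is shown here. Descriptors are assumed to be hashable (for instance, the rounded eigenvalue spectrum converted to a tuple), and the two rules below are one possible reading of the criteria stated above; the bookkeeping in AGOX may be more elaborate.

class FamilyTreeFilter:
    # A descriptor is banned from future sampling once it has parented N_max offspring,
    # or once it parents a structure whose descriptor is already banned.
    def __init__(self, n_max=5):
        self.n_max = n_max
        self.n_offspring = {}      # descriptor -> number of structures generated from it
        self.banned = set()

    def register(self, parent_descriptor, child_descriptor):
        # Call once for every newly generated structure.
        count = self.n_offspring.get(parent_descriptor, 0) + 1
        self.n_offspring[parent_descriptor] = count
        if count >= self.n_max or child_descriptor in self.banned:
            self.banned.add(parent_descriptor)

    def filter(self, structures, descriptors):
        # Applied to the pool of evaluated structures before the k-means sampling step.
        return [s for s, G in zip(structures, descriptors) if G not in self.banned]

# e.g. G = tuple(spectral_descriptor(positions, numbers)) from the sketch above would serve as a key.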
The sampling of known structures to obtain the parent structures is based on the k-means algorithm. This sampling method was introduced in Ref. merte_2022, where it was shown to increase the performance of the GOFEE algorithm. In this method, all evaluated structures with an energy E>E_ min+5 eV are removed from the pool of potential sample structures, where E_ min is the lowest energy found so far. The remaining structures are clustered into N_k clusters, where N_k is a predefined hyperparameter equal to the desired number of sampled structures. Finally, the sample of parent structures is constructed by extracting the lowest energy structure from each of the N_k clusters. A schematic one-dimensional example of this is shown in Fig. <ref>(a) with N_k=2 and at different times of the search. When the FTF method is employed in GOFEE, it will be used to filter out unwanted structures before the k-means sampling takes place, such that the sample keeps evolving during the search as illustrated in Fig. <ref>(b). We resort to success curves to compare the performance of the GOFEE algorithm with and without the FTF method. To construct a success curve, a GOFEE search is started 100 times with the same settings but different initial atomistic configurations. For each of these 100 searches, the initial configurations were constructed by randomly placing the atoms in a large confinement cell<cit.>. The success curve s(x) then measures the proportion of these individual searches that have found a structure with the same descriptor as the GM structure after x number of single-point calculations. We will compare success curves from different searches that either uses the FTF method or not. Not using the FTF method corresponds to setting N_ max=∞, which we will use as the notation for such searches. § APPLICATION TO A 2-DIMENSIONAL TEST SYSTEM This section shows the results of introducing the FTF method when searching for the GM structure of the 2-dimensional pyroxene (MgSiO_3)_4 system. The target potential was a pretrained Gaussian Process Regression<cit.> (GPR) model, trained on 314 pyroxene structures. A small sample size N_k=2 was used in order to visualize the evolution of the sample during searches. The success curves with and without the filtering can be seen in Fig. <ref>(a). The figure shows that employing the FTF method increased the performance of GOFEE in terms of finding the GM structure. Figure <ref>(b) shows the final success after 2000 target evaluations for different choices of N_ max. It can be seen that the performance of GOFEE using the FTF method greatly depends on the choice of N_ max. If N_ max is too low, e.g. 3, the final success is lower compared to larger values of N_ max, but it is still better than not using the filtering at all. This shows that filtering is indeed useful for the GOFEE algorithm. The evolution of the sample from different searches can be compared to investigate the behavior of the sample members. In Fig. <ref>(a), the evolution of the sample in one of the successful GOFEE searches, which used the FTF method, is shown. The GM structure was found after 884 target evaluation, while the second best configuration was found after 1112 target evaluations. This demonstrates that the search is indeed capable of exploring various regions of the PES during a single search when employing the FTF method, which is the desired outcome of adding the method to the GOFEE algorithm. This can be compared to samples from GOFEE searches that did not employ the FTF method. 
Figure <ref>(b) shows the evolution of the sample of an unsuccessful GOFEE search. Clearly, up to iteration 900, the sample consisted of progressively lower energy structures, but from then on the sample did not evolve, meaning that the search was stuck in some region of the PES. Fig. <ref>(c) shows the sample from a successful GOFEE search. The GM structure was obtained in less than 300 iterations, but the sample did not change at all during the remaining part of the search, except for some energy reduction of the already found structures. Even though this search was successful, it did not find the second-best structure, shown in Fig. <ref>(a), indicating that the search could not escape these low-energy regions of the PES. § FURTHER APPLICATIONS Now that we have demonstrated the effect of adding the filtering method to GOFEE, we will apply it to solve more interesting problems. In this section, GOFEE will be used to find the GM structure of a 3-dimensional olivine compound and a Au_12 configuration. The target potential will be DFT using the Perdew-Burke-Ernzerhof (PBE)<cit.> exchange-correlation functional and a LCAO<cit.> basis using the GPAW<cit.> module. In both cases, the structures were placed in a 20×20×20 Å^3 box. A sample size of N_k=5 was used in both cases. To further show the usefulness of the FTF method in the GOFEE algorithm, we employ the computationally cheap Lennard-Jones potential to search for larger compounds. The GOFEE algorithm is used to find the GM structure of the LJ_55 system. After that, we turn off the ML part of GOFEE in order to solve an even larger problem, namely the LJ_75 system. §.§ Olivine compound In this section, GOFEE was used to solve the problem of finding the GM structure of a 3-dimensional olivine (Mg_2SiO_4)_4 compound<cit.>. Nanosized silicate compounds are found to be abundant in the Interstellar Medium<cit.>, and several properties of these systems have been computationally studied<cit.>. Figure <ref> shows the success curves for finding both the GM structure and the second-lowest energy structure. By using the filtering method before the sampling, nearly all searches were able to find both the GM and the second-lowest energy structure during the same search. Some of the searches that did not employ the filtering method could also find the GM, but in 1/3 of the searches the GM was not found at all. On the other hand, almost all of them eventually found the second-best structure, but much later compared to the searches that used the filtering process. This could indicate that these searches spent too many resources exploring regions of the PES that were not relevant for finding the GM structure. §.§ Small gold configuration GOFEE was used to solve the global optimization problem of finding the global energy minimum structure of the Au_12 cluster. Small gold systems have previously been investigated<cit.>. It has been shown that the GM structures of the smaller gold clusters tend to be planar, while low-energy three-dimensional configurations exist at the same time. On the other hand, the GM configurations of the larger compounds often appear to be three-dimensional. This fact potentially increases the complexity of the problem, since the GOFEE algorithm might get stuck searching for 3-dimensional configurations while the GM is actually planar, and vice versa. The planar GM structure for this global optimization problem is shown in the inset in Fig. <ref>.
We allow the search to be fully three-dimensional in order to increase the difficulty of the problem. Fig. <ref> shows that the filtering method improves the performance of the GOFEE algorithm. §.§ LJ_55 To further examine the performance of the FTF method, we wanted to test it on a larger system. To do this, we used the Lennard-Jones potential, since the energy evaluations are fast to perform compared to DFT, and because the GM structures of various LJ systems are well-known<cit.>. We therefore applied the method to the LJ_55 system. This system has three highly symmetric minima, making it a potentially difficult problem to solve. The success curves for different searches can be seen in Fig. <ref>. The figure demonstrates that the GOFEE algorithm greatly benefits from using the filtering method, since the problem can be solved almost every time within 2000 single-point evaluations. §.§ LJ_75 We now turn our attention to the LJ_75 system<cit.>. For a system of this size, the configurational space is much larger compared to the LJ_55 system, meaning that a large number of energy evaluations are required to solve the problem. At the same time, such evaluations using the Lennard-Jones formula are quite fast, meaning that the machine learning part of GOFEE quickly becomes the time-dominating factor. To combat this problem, we modified the GOFEE algorithm such that no ML is used during the search. All structures are locally optimized directly in the LJ potential instead, making it possible to perform longer global optimization searches. More details about the modified GOFEE algorithm can be found in the Appendix, Sec. <ref>. The success curves for solving this problem can be seen in Fig. <ref>. Without the FTF method, the problem is very difficult for the modified GOFEE algorithm to solve, indicating that the search potentially becomes stuck in unwanted regions of the PES. Using the FTF method greatly improves the chance of finding the GM configuration, demonstrating that the method increases the performance of the GO algorithm. § CONCLUSION We have proposed a method for classifying structures from different energy basins in a dataset. This method employs spectral graph theory and the Voronoi tessellation method to make it robust to small variations in the structures, and we have shown its capability of distinguishing structures from different energy minima. The classifier was introduced to the GOFEE algorithm as a filtering mechanism for the sampling of known structures. By doing so, the algorithm was able to abandon regions of the PES that it had exploited and thereby avoid stagnation. We have shown that the filtering method greatly improves the performance of the GOFEE algorithm when searching for an olivine (Mg_2SiO_4)_4, a Au_12, and a LJ_55 system, and that a modified GOFEE algorithm benefits from using the filtering method when solving the LJ_75 system. § ACKNOWLEDGEMENTS This work has been supported by VILLUM FONDEN through an Investigator grant, project no. 16562, and by the Danish National Research Foundation through the Center of Excellence “InterCat” (Grant agreement no: DNRF150). § DATA AVAILABILITY The data that support the findings of this study and the corresponding AGOX scripts are available at https://gitlab.com/aslavensky/classifier_paper. The AGOX code can be obtained from https://gitlab.com/agox/agox.
The documentation for the AGOX module can be found at https://agox.gitlab.io/agoxhttps://agox.gitlab.io/agox. § REFERENCES § APPENDIX §.§ Fingerprint descriptor To train a machine learning model, a translational and rotational invariant descriptor of atomic systems is required. In GOFEE, the Fingerprint descriptor<cit.> by Valle and Oganov is used. Here, the radial part of the descriptor is evaluated as F_AB(r) ∝∑_i,j1/r_ij^2exp(- (r-r_ij)^2/2 l_r^2) , r<R_r 0 , r ≥ R_r and the angular part is calculated as F_ABC(θ) ∝∑_i,j,k f_c(r_ij) f_c(r_ik) exp(- (θ - θ_ikj)^2/2l_θ^2 ), where A, B, and C are atomic types, R_r=6 Å is a radial cutoff distance, l_r=0.2 Å, l_θ=0.2 rad, and f_c is a cutoff function. For a given structure, all two-body (radial) and three-body (angular) combinations of the atomic types are calculated and binned into 30 bins. These bins are concatenated into a single feature vector that describes the whole atomistic system. §.§ The GOFEE algorithm In this work, we focus on the GOFEE algorithm<cit.>. GOFEE combines elements from EAs and machine learning (ML) to reduce the required number of target potential evaluations required to find the GM structure. To avoid spending a lot of computational resources on doing local optimizations of new structures in the target potential, which could become time-consuming when using DFT, GOFEE employs a ML surrogate model for the structure relaxation. The surrogate model, which is trained on-the-fly, is constructed using Gaussian Process Regression (GPR)<cit.>. Given a database of training data X, which is a matrix consisting of the Fingerprint representations<cit.> of the structures, and a vector of their corresponding energies E, the GPR model can predict the energy of a new structure x_*, E_GPR(x_*), and the model's uncertainty of the predicted energy, σ_GPR(x_*), as E_GPR(x_*) = K(x_*,X) C (E- μ) + μ(x_*), σ_GPR(x_*) = K (x_*,x_*) - K(x_*,X) C K(X,x_*), where μ is a vector containing the prior values of each training structure, K is the kernel function, and C = [K(X,X) + σ_n^2 I]^-1, where σ_n=10^-2 eV acts as regularization. In GOFEE, the kernel K is given as K(x_i,x_j) = θ_0(1-β) exp( - 1/2[x_i - x_j/λ_1]^2 ) + θ_0 βexp( - 1/2[x_i - x_j/λ_2]^2 ) where β=0.01 is the weight between the two Gaussians, θ_0 is the amplitude of the kernel, and λ_1>λ_2 are length scales. The prior μ is given by μ(x) = E̅ + 1/2∑_ab( 1 Å/r_ab + 1Å - 0.7r_CD,ab)^12eV, where r_ab is the distance between atoms a and b, r_CD,ab is the sum of their covalent radii, and E̅ is the mean energy of the training data. Using these quantities, a lower confidence bound (LCB) energy expressions can be formulated, E_ LCB(x_*) = E_GPR(x_*) - κσ_GPR(x_*), where κ=2 is a hyperparameter that controls the trade-off between exploration and exploitation of the PES. This energy model is used in GOFEE to do local optimization of structures. More details about the GPR model can be found in Ref. GOFEE2022. The GOFEE algorithm starts out by making completely random structures, evaluating their energies, adding them to a database, and construct the initial surrogate model. From then on, it iteratively samples the database, makes new structures, and improves the surrogate model in its attempt of finding the GM structure. GOFEE can be summarized as follows: * Sample N_k parent structures from the database of previously evaluated structures. * Modify these parent structures to obtain N_C new structures. * Locally optimize all N_C new structures in the LCB landscape. 
* Pick the structure with the lowest energy predicted by the LCB energy expression. * Evaluate its energy in the target potential. * Add the new structure and its energy to the database. Retrain the surrogate model. * Repeat steps 1-6 until a certain number of iterations has been done. §.§ Modified GOFEE algorithm The difference between the GOFEE algorithm and the modified version of it, used to solve the LJ_75 system, is that the modified GOFEE algorithm does not employ ML. This choice was made to investigate the performance of the FTF method for a large well-known problem. The modified GOFEE algorithm can be summarized as follows: * Sample N_k parent structures from the database of previously evaluated structures. * Modify these parent structures to obtain N_C new structures. * Optimize all N_C structures in the LJ potential and add them to the database. * Repeat steps 1-3 until a certain number of iterations has been done.
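Tying the appendix together, the sketch below evaluates the GPR posterior mean, its uncertainty, and the lower-confidence-bound acquisition defined by the equations above using plain NumPy. The kernel amplitude, the two length scales, and the constant-mean prior are placeholders (the paper's prior additionally contains a pairwise repulsive term that requires the atomic positions), so this illustrates the formulas rather than reproducing the AGOX implementation.

import numpy as np

def double_gaussian_kernel(Xa, Xb, theta0=1.0, beta=0.01, l1=20.0, l2=2.0):
    # K(x, x') = theta0*(1-beta)*exp(-|x-x'|^2 / (2 l1^2)) + theta0*beta*exp(-|x-x'|^2 / (2 l2^2)), with l1 > l2.
    d2 = np.sum((Xa[:, None, :] - Xb[None, :, :]) ** 2, axis=-1)
    return theta0 * ((1.0 - beta) * np.exp(-0.5 * d2 / l1**2) + beta * np.exp(-0.5 * d2 / l2**2))

def gpr_lcb(X_train, E_train, X_query, prior, kappa=2.0, sigma_n=1e-2):
    # E_GPR(x*) = K(x*,X) C (E - mu) + mu(x*),  sigma_GPR^2(x*) = K(x*,x*) - K(x*,X) C K(X,x*),
    # with C = [K(X,X) + sigma_n^2 I]^-1, and E_LCB = E_GPR - kappa * sigma_GPR.
    K = double_gaussian_kernel(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    k_star = double_gaussian_kernel(X_query, X_train)
    mean = k_star @ np.linalg.solve(K, E_train - prior(X_train)) + prior(X_query)
    var = (np.diag(double_gaussian_kernel(X_query, X_query))
           - np.einsum("ij,ji->i", k_star, np.linalg.solve(K, k_star.T)))
    std = np.sqrt(np.clip(var, 0.0, None))
    return mean, std, mean - kappa * std

# Toy usage with random fingerprints in place of the real descriptor vectors:
rng = np.random.default_rng(1)
X_tr, E_tr = rng.standard_normal((40, 60)), rng.standard_normal(40)
prior = lambda X: np.full(len(X), E_tr.mean())
E_hat, sigma, lcb = gpr_lcb(X_tr, E_tr, rng.standard_normal((5, 60)), prior)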
http://arxiv.org/abs/2407.12897v1
20240717153310
NeuroSynth: MRI-Derived Neuroanatomical Generative Models and Associated Dataset of 18,000 Samples
[ "Sai Spandana Chintapalli", "Rongguang Wang", "Zhijian Yang", "Vasiliki Tassopoulou", "Fanyang Yu", "Vishnu Bashyam", "Guray Erus", "Pratik Chaudhari", "Haochang Shou", "Christos Davatzikos" ]
q-bio.QM
[ "q-bio.QM", "stat.ML" ]
§ BACKGROUND & SUMMARY Empowered by widely available open datasets and challenges, in the past decade, machine learning algorithms have surpassed human-level performance on many non-trivial computer vision and natural language understanding tasks including face recognition, object detection, question answering, and translation <cit.>. Translating the success of artificial intelligence tools from natural image or language applications to the medical domain holds immense potential for advancing automated diagnosis and precision medicine. However, medical data usually contains sensitive patient information, which is subject to stringent privacy regulations from hospitals <cit.>. Consequently, machine learning research in medicine is contained within individual institutes or hospitals, constrained by relatively small sample sizes due to resource limitations and resulting in datasets that lack diversity in terms of patient demographics and pathology representation. Models trained on such less-representative medical data are subject to sample selection bias and may lack generalizability across clinical cohorts <cit.>. This underscores the need for collaborative efforts to access larger and more diverse medical datasets, ensuring the development of robust and trustworthy AI models in healthcare. To address such shortages of data specifically in the context of neuroimaging, numerous international consortia have been established. Examples include ENIGMA <cit.> and iSTAGING <cit.>, which primarily compile structural, functional imaging, and genetic data from tens of thousands of individuals. However, the usage of the data is still restricted to each consortium's participants. To broaden accessibility and enhance research potential, generative models in machine learning have garnered attention. Techniques such as kernel-density estimation (KDE) <cit.> and generative adversarial networks (GANs) <cit.> have shown promise in synthesizing data that faithfully represents the underlying probability distributions of real-world data. This capability opens avenues for generating synthetic datasets that closely resemble actual patient data, thereby augmenting the pool available for analysis and model training.
Recently, these models have been utilized to produce synthetic medical data and have proven valuable in many medical applications including chest X-ray screening and skin lesion detection <cit.>. Consequently, these synthetic datasets can be openly shared and distributed without concerns regarding data privacy. Furthermore, having access to synthetic data that accurately represents a large population can significantly benefit downstream classification models, especially when working with a limited number of labeled examples. By leveraging synthetic data, we can bridge the gap between the available labeled samples and the diverse real-world scenarios, improving the robustness and generalization of our models. In many studies involving MRI (Magnetic Resonance Imaging), brain structure is commonly summarized by region-of-interest (ROI) volumes <cit.>, which are derived from structural T1-weighted MRI scans. ROI volumes are robust brain features that have been validated in many applications including disease diagnosis and prognosis <cit.>, progression modelling <cit.>, and pathology subtype discovery <cit.>. In this paper, we present NeuroSynth, a collection of generative models that generate normative ROI data over the adult lifespan (age range: 22 to 90 years) for different demographic groups categorized by race and sex. Furthermore, using these models, we offer a dataset of 18,000 synthetic neuroimaging samples representing a diverse global healthy adult population. Notably, we also provide the generative models alongside the dataset to enable users to customize their data synthesis. The dataset includes participants' demographic information, such as sex, age and race, which can be beneficial for research focusing on mitigating algorithmic bias and promoting fairness. Our approach to generating synthetic data involved training generative models, specifically KDE models, on 34,000 subjects from the iSTAGING consortium <cit.> to synthesize both brain anatomical ROI volumes and their associated demographics. In the experiments, we assess the quality of the synthetic data from both qualitative and quantitative perspectives. For example, we create plots highlighting the similarity between the real and synthetic data distributions across all ages groups. In terms of quantitative evaluation, we use statistical tests and machine learning models to show the similarity in distributions between synthetic and real data. We also examine the fidelity of the synthetic data by performing covariates prediction including sex or race classification and age regression tasks. Finally, we show the utility of the synthetic data in real world applications such as brain age gap prediction <cit.>. NeuroSynth, a comprehensive structural brain imaging generator and dataset, aims to significantly contribute to advancing machine learning research within the field of neuroimaging. In particular, NeuroSynth can facilitate: 1) Local Disease Population Comparisons: Researchers can leverage NeuroSynth to compare their own patient populations to NeuroSynth's normative dataset. To ensure robust inference that is invariant to inter-site differences, researchers should first use part of their control data to harmonize their measures with those of NeuroSynth. 2) Brain Age Prediction Models: NeuroSynth serves as a resource for training brain age prediction models. These models have been used to detect the effects of neurodegenerative and neuropsychiatric disorders on the brain. 
In the technical validation section, experiments using NeuroSynth have explored the relationship between brain age residuals and cognitive scores.Beyond brain age prediction, the synthetic data generated by NeuroSynth can also be leveraged to train other machine learning models and adapt them for smaller-scale studies through techniques like transfer learning or domain adaptation. 3) Enriching Healthy Controls in Classification Studies: The dataset enhances the healthy control class for discriminative analysis and disease subtyping research. For instance, in an Alzheimer’s disease diagnosis experiment, NeuroSynth has been utilized for data augmentation, leading to more robust results. 4) Synthetic Data Generator Models: Beyond providing the NeuroSynth dataset containing 18,000 samples, researchers also have access to the synthetic data generator models. These pre-trained models allow customization of brain volume synthesis based on factors such as sex, age range, and race groups. Our goal is to continue expanding NeuroSynth with additional covariates, including genetic risk factors, cognitive scores, and biomarker data. This expansion will enable users to enrich their own datasets by synthesizing highly specific brain ROI measures tailored to their individual studies. § METHODS In the following subsections we describe i) the real dataset used to construct the NeuroSynth generative model, and ii) the approach used for training the generative model for synthetic data generation. Real data For our real data, we use the iSTAGING consortium <cit.> that consolidated and harmonized imaging and clinical data from multiple cohorts spanning a wide age range (22 to 90 years). Our data consists of multimodal neuroimaging and demographic measures taken from subjects labeled as cognitively normal in the iSTAGING consortium. Specifically, the neuroimaging measures are the 145 anatomical brain ROI volumes (119 ROIs in gray matter, 20 ROIs in white matter and 6 ROIs in ventricles) from baseline scans extracted using a multi‐atlas label fusion method <cit.>. To mitigate site effects, ComBat-GAM harmonization <cit.> was applied to these 145 ROI volumes while accounting for age, sex and intracranial volume (ICV). The demographic measures including subjects' age, sex, and race were accounted in the synthesis. Employing a stratified approach, subjects in the real data were grouped based on race and sex covariates, resulting in six subsets or categories: white male, white female, black male, black female, asian male, and asian female. Before fitting the generative model, within each category, the feature vector of 145 ROI volumes along with the age variable is normalized using the mean and the standard deviation. Mean and standard deviation values for each category were retained to facilitate the back transformation of synthesized data from normalized space to the original space. Generative model training We employ a separate non-parametric kernel density estimation (KDE) model for each category, using a Gaussian kernel to delineate the joint probability density of age and the 145 ROI volumes. The selection of the Gaussian kernel is crucial; its smooth, bell-shaped profile is well-suited for generating a continuous representation of the underlying distribution from discrete observations. Within this framework, each data point is represented by a local Gaussian density surface centered at the observed location. 
Collectively, the overall density is smoothly estimated using composite functions of these local densities, based on the discrete empirical data. This approach facilitates the estimation of the multivariate joint probability density function for age and ROI volumes within each distinct category. Generating new synthetic data involves a two-step process: model fitting and sampling. Initially, a multivariate KDE model is fitted to the real data of each sex and race combination with carefully tuned bandwidth parameters. We then generate new data points by sampling from this refined distribution. Each synthesized vector is of size 146; the first 145 elements correspond to the ROI volumes, and the last element represents age. The sampling process introduces randomness, ensuring that while the synthetic data are not exact replicas of the original, they adhere to the same statistical properties. Hyper-parameter selection The bandwidth parameter determines the width of the Gaussian kernel and plays a crucial role in KDE. A smaller bandwidth results in a more granular estimate with greater detail, reflecting minor variations in the data distribution, whereas a larger bandwidth leads to a smoother and more generalizable estimate. By carefully selecting the bandwidth, the KDE model can effectively navigate the trade-off between overfitting, where the model captures excessive noise, and underfitting, where important features of the data distribution are overlooked. To identify the optimal bandwidth for the kernel in Kernel Density Estimation (KDE), we conducted a grid search spanning bandwidths from 0.5 to 1. The selection criterion for the optimal bandwidth was the maximization of the log-likelihood score across these values. For the majority of the categories analyzed, the optimal bandwidth converged at approximately 0.7. The search space of the bandwidth is constrained to 0.5 to 1 because our dataset is normalized (∼ N(0,1)). A small bandwidth will overfit the training data, since it is going to fit a narrow Gaussian kernel around any point; whereas a large bandwidth, close to 1, will result in an overly smoothed density estimate that fails to maintain the correlation between age and the 145 ROIs. The model implementation and the bandwidth grid-search are carried out using the scikit-learn library <cit.>. § DATA RECORDS The NeuroSynth dataset and model are hosted at https://huggingface.co/spaces/rongguangw/neuro-synthhttps://huggingface.co/spaces/rongguangw/neuro-synth. The "neurosynth_dataset.csv" file contains 18,000 synthetic samples with 3000 samples allocated per combination of race and sex, a deliberate sampling strategy aimed at ensuring thorough representation across demographic groups. Each sample comprises of 148 features (145 brain ROI volumes along with age, sex, and race information). See Table. <ref> for an example of the synthetic dataset. We enhance the utility of our dataset by providing users with access to both the synthetic data and the fitted Kernel Density Estimation (KDE) models. This provision equips researchers with the ability to generate synthetic data that aligns precisely with their research requirements. § TECHNICAL VALIDATION We conduct a comprehensive validation of the synthetic data through two levels of analysis. 
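To make the generation pipeline described in the Methods and Data Records sections concrete before turning to the validation analyses below, the following minimal Python sketch fits a per-category Gaussian KDE to the normalized [ROI volumes, age] matrix and samples synthetic vectors. The function names, array shapes, and the cross-validated form of the bandwidth search are illustrative assumptions and not an excerpt from the released code.

import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

def fit_category_kde(roi_volumes, age, bandwidths=np.linspace(0.5, 1.0, 11)):
    # roi_volumes: (n_subjects, 145) harmonized ROI volumes for one race-by-sex category
    # age: (n_subjects,) ages; both are jointly z-normalized, as described in Methods
    X = np.column_stack([roi_volumes, age])            # (n, 146)
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xz = (X - mu) / sd
    # Bandwidth restricted to [0.5, 1] and chosen by maximizing the log-likelihood
    # score (shown here in a cross-validated form, which is a slight variation)
    search = GridSearchCV(KernelDensity(kernel="gaussian"),
                          {"bandwidth": bandwidths}, cv=5)
    search.fit(Xz)
    return search.best_estimator_, mu, sd

def sample_category(kde, mu, sd, n_samples=3000, seed=0):
    # Draw synthetic [ROI volumes, age] vectors and undo the normalization
    Z = kde.sample(n_samples=n_samples, random_state=seed)  # (n_samples, 146)
    return Z * sd + mu

In practice one such model would be fitted for each of the six race-by-sex categories, with the retained mean and standard deviation used to map samples back to the original scale.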
In the first level, we employ a combination of statistical and machine learning techniques to ensure that the synthetically generated brain ROI volume data closely mirrors the distribution observed in real ROI volume data across demographic variables such as age, sex, and race. Subsequently, at the second level, we aim to validate the practical use of synthetic data in real-world scientific applications, such as brain age gap estimation and data augmentation. Through these rigorous analyses, we aim to ascertain the accuracy, reliability, and utility of synthetic data in both replicating real data distributions and facilitating meaningful applications in scientific research and clinical practice. §.§ Assessing Fidelity: Statistical and Machine Learning Analysis Comparing Synthetic and Real Data Distributions In this section, we examine synthetic samples generated from the KDE models in comparison with a held-out dataset containing real samples from the iSTAGING consortium <cit.>. Given that the synthetic data spans the entire adult lifespan and covers all combinations of race and sex, Fig.1a serves to visually illustrate the similarity in distributions between real and synthetic data across several ROIs. From Fig. 1a it is evident that for these two representative examples, i.e. 3rd ventricle and right hippocampus, there is significant overlap between the generated data samples and the real data samples. Furthermore, the age trend observed in the synthetic data roughly mirrors that observed in the real data. To further evaluate the fidelity of the ROI volume distributions, univariate statistical analysis was conducted via linear regression of each ROI volume on group (real versus synthetic) while adjusting for age as a covariate  <cit.>. Bonferroni correction <cit.> was applied to adjust for multiple comparisons, resulting in a significance threshold of α = 0.05/145 = 0.0003. Fig. 1b presents the count of ROIs, out of the total 146 ROIs, with statistically significant differences in group means for every combination of race and sex. Additionally, to assess the quality of the multivariate ROI volume distributions, we use the support vector machine (SVM) <cit.> classifier that is trained to differentiate between synthetic and real samples. An area under the receiver operating characteristic curve (AUC) <cit.> of 0.5 would suggest that the classifier is making random predictions and cannot distinguish between the two groups. Therefore, if the SVM fails to differentiate synthetic samples from real samples, we can infer that the generated dataset is indistinguishable from the real dataset. Fig. 1c summarizes the findings across different race and sex categories. While there remains some classification capability (AUC > 0.5 in white males, white and black females), suggesting slight differences between the two distributions, subsequent analyses under practical applicability section demonstrate that this discrepancy does not confound or significantly impact the utility of the synthetic dataset. Since the data for each combination of race and sex was generated from different KDE models, it is critical to assess whether the relationship between ROI volumes and covariates (age, sex, race) in the synthetic data aligns with observations from real data. To carry out this analysis, we build machine learning models for covariate prediction, specifically we use gradient boosted trees (implemented using XGBoost library <cit.>). 
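As a concrete illustration of the real-versus-synthetic discriminability check summarized above (the covariate-prediction models are described next), the sketch below estimates a cross-validated AUC for an SVM separating the two groups; the RBF kernel and the standardization step are illustrative assumptions, since the exact classifier settings are not restated here.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def real_vs_synthetic_auc(X_real, X_synth, cv=5):
    # AUC near 0.5 means the SVM cannot tell real from synthetic ROI vectors
    X = np.vstack([X_real, X_synth])
    y = np.concatenate([np.zeros(len(X_real)), np.ones(len(X_synth))])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()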
These models are trained to predict covariates based on ROI volume data, performing tasks such as age regression, sex classification, and race classification. By comparing the performance of the models trained on synthetic data with those trained on the real data, we can determine if synthetic data is a suitable substitute for real data. Sex and race classification performance is evaluated using metrics such as accuracy, balanced accuracy, and AUC. Age regression performance is evaluated using mean absolute error (MAE) and Pearson's correlation <cit.> between predicted age and ground truth age. Fig.<ref> shows the results for covariate prediction, we used 5-fold cross-validation during analysis. While models trained on real data generally outperform those trained on synthetic data, it is noteworthy that the performance of models trained on synthetic data is comparable to their real data counterparts. These results suggest that synthetic data can serve as a valuable alternative to real data, when the latter is not available. The next section delves into the practical applicability of synthetic data, exploring its potential benefits and limitations in real-world scenarios. §.§ Practical Applicability In addition to validating the quality of the generated data, we conducted comprehensive assessments to showcase the practical applications of synthetic data across various scenarios. This included its efficacy in augmenting training datasets for disease classification and deriving clinically meaningful estimates of brain age gaps. In this section, we utilized the ADNI study as a held-out dataset to assess the aforementioned properties. The synthetic data was derived from a KDE model retrained on the remaining studies, following the steps outlined in Methods. §.§.§ Data augmentation Deep learning methods usually require a large number of training samples, which are laborious and costly to obtain, especially for brain MRI studies. Moreover, datasets focusing on specific diseases may sometimes lack a sufficient number of healthy controls. To address this, we investigated the feasibility of using synthetic data to supplement the normal control (CN) group. We centered our analysis on the mild cognitive impairment (MCI) or Alzheimer's disease (AD) classifications. Leveraging data from the ADNI study <cit.>, we evaluated whether synthetic data based data augmentation can help improve classification performance. We divided the ADNI dataset, allocating 500 cognitively normal participants for training, 368 for testing, and evenly dividing 1101 MCI and 419 AD participants between training and testing sets. To ensure the robustness of our findings, we conducted fifty distinct random splits. For both CN vs MCI and CN vs AD classifications, we trained the SVM algorithm on the training set and assessed its performance on the test set using the AUC metric. Notably, the training sets comprised different proportions of CN data from both real and synthetic datasets, allowing us to evaluate the specific contribution of synthetic data to performance enhancement (Fig. 3). Figure 3 illustrates that a reduced number of training samples from the normal control group (500 v.s. 100) results in lower MCI and AD classification performances, underscoring the importance of augmenting the healthy control dataset. Supplementation of the CN set with synthetic normal control data progressively improved classification AUCs, although the rate of improvement slowed as the proportion of synthetic data within the CN set increased. 
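The following sketch illustrates the style of augmentation experiment described above, padding the cognitively normal training class with synthetic controls before fitting a CN-versus-AD SVM; the sample counts, kernel choice, and function signature are illustrative and do not reproduce the paper's exact fifty-split protocol.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

def augmented_cn_vs_ad_auc(X_cn_real, X_cn_synth, X_ad_train, X_test, y_test,
                           n_real_cn=100, n_synth_cn=400, seed=0):
    # Train a CN-vs-AD classifier with the control class padded by synthetic CN samples.
    # y_test: 1 for AD, 0 for CN in the held-out split.
    rng = np.random.default_rng(seed)
    cn_real = X_cn_real[rng.choice(len(X_cn_real), n_real_cn, replace=False)]
    cn_synth = X_cn_synth[rng.choice(len(X_cn_synth), n_synth_cn, replace=False)]
    X_train = np.vstack([cn_real, cn_synth, X_ad_train])
    y_train = np.concatenate([np.zeros(n_real_cn + n_synth_cn),
                              np.ones(len(X_ad_train))])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    return roc_auc_score(y_test, clf.decision_function(X_test))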
§.§.§ Brain age gap estimation Brain age gap<cit.>, the difference between predicted brain age and actual chronological age, indicates deviations from normal brain aging and proves important for assessing neurological health. Utilizing large-scale synthetic control data can potentially enhance the development of age-prediction models, offering more reliable and clinically relevant brain age gap estimations. We trained XGBoost regression models on both synthetic and real control data, and used the trained models for calculating brain age gaps for CN and MCI/AD participants in the held-out ADNI dataset. We will refer to brain age gaps derived using the synthetic data model as OOD brain age gaps and refer to brain age gaps derived using the real data model as IND brain age gaps. To mitigate biases inherent in predicted brain age estimates, we applied Cole’s method <cit.> before calculating the brain age gaps. Further, to examine their effectiveness in indicating cognitive decline and underlying neuropathology, we examined their correlations with Mini Mental State Examination (MMSE) scores — a widely utilized cognitive assessment for measuring cognitive impairment. As shown in Fig. 4, both OOD and IND brain age gaps show no significant correlations with MMSE among CN participants. However, among MCI/AD participants, they exhibit significant correlations (p<0.0001). Interestingly, the Pearson's correlation coefficient for OOD brain age gaps (ρ=-0.235) is similar to that observed for IND brain age gaps(ρ= -0.304). This further underscores the potential of regression models trained on extensive synthetic datasets to provide brain age gap estimation with increased clinical significance. § USAGE NOTES By employing rigorous statistical and machine learning analyses, we have demonstrated that the synthetic data generated by NeuroSynth closely aligns with real data distributions across various demographic variables, including age, sex, and race. Our evaluations, detailed in the technical validation section, indicate that the multivariate distributions learned by the model accurately preserve covariate effects, ensuring that the synthetic data maintains the integrity of the original data's demographic characteristics. The practical utility of NeuroSynth has been illustrated through various applications, showing its potential to serve as either a substitute or complement to real data. In scenarios with class imbalances in real datasets, we have shown how researchers can expand their sample sizes with synthetic data and improve the performance of machine learning models. We have also show the utility of NeuroSynth in brain age predictions. As it is designed to emulate a reference dataset representing a global healthy population across the human lifespan, researchers can leverage NeuroSynth to compare their small cohorts to the reference population, gaining insights into age-related changes and deviations from expected brain development trajectories, thus enhancing our understanding of neurodevelopmental processes and age-related neurodegeneration. Moreover, synthetic data holds significant potential for harmonization efforts in integrating and comparing datasets from diverse sources. Synthetic data can be specifically tailored to generate covariate-matched control data, ensuring that external studies can be effectively harmonized with existing datasets. 
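Returning briefly to the brain-age-gap analysis above (the discussion of harmonization continues below), a minimal sketch of that pipeline is given here: an XGBoost age-regression model trained on control data, a linear bias correction fitted on the controls, and a Pearson correlation of the resulting gaps with MMSE. The particular form of the Cole-style correction and the hyperparameters are illustrative assumptions rather than the paper's exact settings.

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor

def brain_age_gap_vs_mmse(X_controls, age_controls, X_clinical, age_clinical, mmse_clinical):
    # Age regression trained on (synthetic or real) controls
    model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(X_controls, age_controls)
    # Bias correction fitted on controls: predicted age ~ chronological age
    # (ideally on a held-out control set rather than the training controls)
    pred_ctrl = model.predict(X_controls)
    lr = LinearRegression().fit(np.asarray(age_controls).reshape(-1, 1), pred_ctrl)
    slope, intercept = lr.coef_[0], lr.intercept_
    # Corrected brain age and brain age gap for the clinical group
    corrected = (model.predict(X_clinical) - intercept) / slope
    gap = corrected - np.asarray(age_clinical)
    r, p = pearsonr(gap, mmse_clinical)
    return gap, r, p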
This approach enables researchers to create control groups with similar demographic characteristics as that of their study population, thereby reducing confounding effects and then utilizing a harmonization technique <cit.> to align their external data to the synthetic matched control data. This enhances the robustness of comparative analyses. While we have demonstrated the potential uses of synthetic data, there are certain limitations that end-users should consider when using NeuroSynth. The quality of synthetic samples is influenced by the selection of the bandwidth parameter in KDE, and although the synthetic data statistically resembles real data, it may contain noise due to kernel smoothing. This noise might affect the results of any analysis performed with NeuroSynth, so users should interpret their results with caution. Additionally, when integrating their own data with NeuroSynth, users should be aware of potential site-related differences that might impact their analyses. There are techniques available that can help the user harmonize their data to NeuroSynth<cit.> to mitigate site-related effects and facilitate more robust analyses. In conclusion, NeuroSynth represents an advancement in the generation and application of synthetic neuroimaging data, providing a valuable resource for the neuroimaging community. By continuing to expand NeuroSynth with additional covariates, including genetic risk factors, cognitive scores, and biomarker data, we aim to further enhance its utility and applicability in diverse research contexts. This work underscores the importance of synthetic data in advancing neuroimaging research, promoting data accessibility, and ensuring the development of robust, generalizable machine learning models in healthcare. § CODE AVAILABILITY All our model building and analysis were carried out in python. All our data transformations are described in detail in the methods section. Statistical analyses were conducted via online python packages, statsmodels 0.8.0, SciPy 1.6.3, NumPy 1.16.6 and pandas 0.21.0. Our machine learning experiments were conducted with version 1.1.3 of the Scikit-learn library[https://scikit-learn.org]. The trained KDE models are available at https://huggingface.co/spaces/rongguangw/neuro-synthhttps://huggingface.co/spaces/rongguangw/neuro-synth. Sample code for model training and data generation is available at https://huggingface.co/spaces/rongguangw/neuro-synth/blob/main/script/synthetic_data_generation.ipynbhttps://huggingface.co/spaces/rongguangw/neuro-synth/blob/main/script/synthetic_data_generation.ipynb. § ACKNOWLEDGEMENTS The iSTAGING consortium is a multi-institutional effort funded by the National Institute on Aging with grant number RF1AG054409. Data used in preparation of this article were in part obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: https://adni.loni.usc.edu/wpcontent/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf. 
ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California. § AUTHOR CONTRIBUTIONS STATEMENT Conceptualization, Methodology, Formal Analysis, Investigation & Writing: Sai Spandana Chintapalli, Rongguang Wang, Zhijian Yang, and Vasiliki Tassopoulou; Conceptualization, Methodology & Formal Analysis: Fanyang Yu and Vishnu Bashyam; Conceptualization: Guray Erus; Conceptualization & Review: Pratik Chaudhari; Review, Editing & Supervision: Haochang Shou and Christos Davatzikos. § COMPETING INTERESTS The authors declare no competing interests.
http://arxiv.org/abs/2407.13760v1
20240718175801
Neural Network Tire Force Modeling for Automated Drifting
[ "Nicholas Drake Broadbent", "Trey Weber", "Daiki Mori", "J. Christian Gerdes" ]
eess.SY
[ "eess.SY", "cs.AI", "cs.SY" ]
Neural Network Drifting (AVEC '24) Nicholas Drake Broadbent et al. (AVEC '24) Nicholas Drake Broadbent, Trey Weber, Daiki Mori, and J. Christian Gerdes Stanford University, Stanford CA 94305, USA {ndbroadb,tpweber,dmori,gerdes}@stanford.edu Neural Network Tire Force Modeling for Automated Drifting (AVEC '24) Nicholas Drake Broadbent Trey Weber Daiki Mori J. Christian Gerdes July 22, 2024 ====================================================================== § ABSTRACT Automated drifting presents a challenge problem for vehicle control, requiring models and control algorithms that can precisely handle nonlinear, coupled tire forces at the friction limits. We present a neural network architecture for predicting front tire lateral force as a drop-in replacement for physics-based approaches. With a full-scale automated vehicle purpose-built for the drifting application, we deploy these models in a nonlinear model predictive controller tuned for tracking a reference drifting trajectory, for direct comparisons of model performance. The neural network tire model exhibits significantly improved path tracking performance over the brush tire model in cases where front-axle braking force is applied, suggesting the neural network’s ability to express previously unmodeled, latent dynamics in the drifting condition. § INTRODUCTION The maneuvering capability of a vehicle is fundamentally limited by the friction between the tires and the road. Vehicle operation at the friction limits may require large lateral and longitudinal tire slip, a regime that can be difficult to model accurately in the presence of parameter variation <cit.>. This is due in part to the many empirically defined characteristics of tire material composition (e.g. coefficient of friction between tire and road, cornering stiffness of the tire, thermal properties) and the geometry of tire and suspension subassemblies (e.g. camber, caster, and toe angles) that can significantly impact the overall vehicle dynamics <cit.>. The resulting force and moment computations of physics-based models are sensitive to the precise representation of these dynamic characteristics, particularly when operating in coupled slip regions at the limits of handling <cit.>. Recently, autonomous racing and drifting have emerged as challenge problems for demonstrating precise vehicle control at the friction limits. While interesting problems on their own, the insights gained from automated racing and drifting also lay the foundation for future automated systems that could improve safety. To succeed, controllers must reliably control tire forces in these nonlinear, coupled slip regions. Autonomous drifting, in particular, poses challenges as the tires are not only operating in these coupled slip regions but also heating and disintegrating over the course of the test <cit.>. Many examples in the literature have shown that an autonomous vehicle, with thoughtful modeling and control design, can drift. Velenis was one of the first to develop a controller for automated drifting, stabilizing the vehicle around an unstable cornering equilibrium with a large sideslip angle <cit.>. Subsequent control approaches by other authors have extended this result to path tracking and, with the use of front axle braking, simultaneous velocity control, demonstrated on full scale test vehicles <cit.>. The success of these approaches, applied to a variety of autonomous drifting problems, suggest that automated vehicles could harness these dynamics for greatly increased maneuverability. 
Perhaps surprisingly, front axle braking while drifting poses an even more difficult modeling problem than the rear axle. Unlike the rear tires, the front tires are not always saturated and the tires are not coupled through a locked differential. Front suspension geometry while drifting with large steering angles further complicates modeling the coupled front tire forces. This is particularly true with dedicated drifting vehicles such as Takumi, our automated 2019 Toyota Supra built to Formula Drift specifications. Takumi features a custom front wheel alignment designed for high-performance drifting (-7 +/- 0.3 degrees camber and 6 +/- 0.3 degrees caster at 0 degrees steering angle). This setup creates effects in the coupled slip behavior that can be difficult to model, since the tire contact patch changes size and location based on steering angle. Artificial intelligence offers a chance to address some of these challenges. Djeumou et al. developed front and rear tire force models for drifting using neural ordinary differential equations and neural-ExpTanh parameterization, ensuring physical accuracy by constraining predictions to a family of solutions and capturing higher-order effects from vehicle data. Compared to a nonlinear model predictive controller using the Fiala brush tire model, their models significantly improved tracking, smoothed control inputs, and sped up computation time in experiments <cit.>. Notably, their approach focused on steering and drive torque and did not include the front axle braking necessary for independent speed control. Given the particular challenges with modeling front axle tire force generation under braking, we propose a neural network for predicting front tire lateral force that makes no prior assumptions about the shape of the resulting tire curve (or constraining predictions accordingly), relying exclusively on capturing these dynamics with vehicle data. Comparing the performance to that of the Fiala brush model in an experimental setup similar to that of Djeumou et al, the learning-based model achieved significantly better overall trajectory tracking performance with no increase in computational complexity. Deeper analysis of the results highlights the importance of training data coverage of the state space and potential opportunities for extending this approach to learn higher-order effects. § EXPERIMENTAL SETUP §.§ Neural Network Model Development We structure the input layer of the neural network around the same terms that define lateral tire force generation within the Fiala brush model, as shown in Fig. <ref>. We label vehicle states (yaw rate, velocity, and sideslip angle) and control inputs (steering angle and braking force) with raw measurements from the vehicle. The corresponding normal and lateral forces are labeled with estimates provided by an unknown input observer. The data used to train the neural network features a combination of automated and manual drifting, amounting to approximately 30 minutes recorded up to one month before these comparative experiments took place. One dataset, featuring automated drifting with instances of front axle braking collected the day before these experiments, is held out of the training data in order to iteratively tune the hyperparameters of the model including batch size, training epochs, activation function, and number of hidden elements. 
The resulting neural network consists of a three-layer feedforward architecture with 8 elements in the first hidden layer, 16 elements in the second hidden layer and tanh activation functions in both hidden layers. While quite small by neural network standards, this model size corresponds to a roughly 35ms average solve time, approximately equivalent to that of the physics-based tire model used for comparisons. Therefore, a model of this size represents a drop-in replacement for a physical tire model. Training proceeds by cycling through mini-batches of 1000 samples over 1000 epochs, with loss optimization governed by the Adam optimizer and mean squared error loss function. §.§ Trajectory Generation The same tire force observer that generates front tire lateral force labels for the neural network training assists in fitting front and rear axle tire parameters. In addition to fully defining the Fiala brush model that served as the point of comparison for these experiments, these tire parameters and the resulting model are used in computing the offline reference trajectory, similar in approach to Weber <cit.>. This trajectory features a 15 meter radius circle path with a constant sideslip angle of -40 degrees. By incorporating front axle braking, the target velocity decreases from the equilibrium value without the use of brakes (V_sol) with each revolution of the map (lap 1: V_des = V_sol, lap 2: V_des = 0.95 · V_sol, lap 3: V_des = 0.875 · V_sol), allowing us to compare model performance in the condition of increasing front axle longitudinal force (lap 1: F_xf,ref = 0 N, lap 2: F_xf,ref = 1000 N, lap 3: F_xf,ref = 2150 N). §.§ Control Architecture Nonlinear Model Predictive Control (NMPC) can handle multi-input, multi-output systems with nonlinear dynamics and constraints on both states and inputs while predicting future system behavior. These properties are advantageous in trajectory tracking for automated drifting, as exhibited by both Goel and Weber <cit.>. The implementation of NMPC for these experiments is very similar to that of the latter contribution, with a similar cost function (reformulated as a velocity tracking problem) and slightly different costs. The baseline physics-based MPC incorporates a Fiala brush front tire model. The neural network MPC (NNMPC) features an otherwise identical control framework with the same rear tire model and the learning-based front tire lateral force model as a drop-in replacement for the Fiala brush tire model. § RESULTS AND DISCUSSION While both controllers slightly undershoot desired velocity after initiation, NNMPC is able to respond to the error more quickly and with less oscillation, as shown in Fig. <ref>a. This is consistent throughout the run, whereas the physics-based MPC tends to respond to changes in desired speed more slowly, incurring a higher frequency of large absolute velocity errors in the process. This hesitation persists in sideslip angle tracking as well, where physics-based MPC shows some greater deviation from the desired -40 degree sideslip while negotiating control in the other states, as shown in Fig. <ref>b. Conversely, NNMPC is able to more quickly achieve and maintain the desired sideslip angle, leading to higher frequencies of small absolute sideslip angle errors in the process. NNMPC’s trend of high performance in the velocity states translates well to path tracking performance, where it exhibits a relatively low mean and max absolute lateral error, as shown in Fig. <ref>c. 
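For concreteness (the comparison of tracking results continues below), a minimal PyTorch sketch of the front tire lateral force network and its training loop is given here. The hidden-layer sizes, tanh activations, mini-batch size, epoch count, Adam optimizer, and MSE loss follow the description above; the specific input feature list, learning rate, and tensor shapes are our illustrative assumptions.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class FrontTireForceNet(nn.Module):
    # Three-layer feedforward net: 8 and 16 hidden units with tanh, scalar output.
    # Assumed inputs: yaw rate, velocity, sideslip angle, steering angle,
    # front brake force, and front normal force (6 features).
    def __init__(self, n_inputs=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 8), nn.Tanh(),
            nn.Linear(8, 16), nn.Tanh(),
            nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)

def train(model, X, y, epochs=1000, batch_size=1000, lr=1e-3):
    # X: (N, n_inputs) tensor of features, y: (N, 1) tensor of observed lateral forces
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
    return model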
In contrast, the physics-based control appears to cause Takumi to slowly slide out from the desired path as the experiment progresses. This trend is consistent with the fact the tire temperature increases throughout the experiment and proportionally reduces friction, as shown by Kobayashi <cit.>. Conversely, the neural network-based model does not rely upon explicit tire parameterization for the front axle. The neural network may potentially be underfitting these temperature-dependent friction dynamics by generalizing to tire force generation characteristics that are indicative of a wide range of tire temperatures. NNMPC’s comparatively strong performance trends in both velocity and path state tracking appears to yield an overall reduced steering control effort required to maintain the drift equilibrium throughout the maneuver, as shown in Fig. <ref>d. However, if we decompose the stages of this experiment further into the drift initiation region (s = 90.7:112.5) and steady state equilibrium region (s = 112.5:435.3), we gain further insights into the advantages and disadvantages of each respective modeling approach—particularly when we focus into the initiation region dynamics, as shown in the insets of Fig. <ref>. For example, while it may appear that NNMPC is outperforming physics-based MPC in the initiation region, the mean absolute errors of velocity, sideslip angle, and steering angle are higher with NNMPC than with physics-based MPC—and the percent difference is significant (49%, 31%, and 26%, respectively). This is in stark contrast to the steady state equilibrium region of the experiment, where the mean absolute errors of velocity, sideslip angle, lateral error, and steering angle are lower with NNMPC than with physics-based MPC, where the percent difference is significant once again (41%, 46%, 55%, and 53%, respectively). One explanation for this behavior may be found in the way in which the neural network was trained. Of the approximately 30 minutes of data used to train the model, less than 5% can be prescribed to the drift initiation region. This imbalance in data representation can potentially lead to a bias toward solely capturing the dynamics of the steady state equilibrium region. Since the gradients calculated from the dominant region will have a greater influence on the network's parameter updates, this can cause the model to prioritize minimizing the loss in the steady state equilibrium region at the expense of capturing the dynamics of the drift initiation region, further exacerbating the imbalance represented in the data. Another explanation for this behavior may be rooted in how the features and targets were labelled and synchronized. Latencies inherent in observers such as the one used to label tire force targets can cause a temporal misalignment between the observed states and the actual system states—impairing the neural network's ability to learn the correct temporal patterns and dynamics of the system. This may be particularly crucial in the drift initiation region, where the vehicle is highly dynamic, undergoing comparatively far greater velocity state derivatives (in yaw rate, velocity, and sideslip angle) than those indicative of the steady state equilibrium region. § CONCLUSION This investigation presents a novel neural network architecture for predicting front tire lateral forces as a substitute for traditional physics-based models, with a specific focus on autonomous vehicle drifting maneuvers. 
Through comparative experimentation using a full-scale automated vehicle, we demonstrated that the neural network model significantly enhances path tracking performance, especially under conditions involving front-axle braking forces. The implications of this study are significant for the development of advanced control systems in autonomous vehicles, particularly those designed to operate in extreme conditions. As we continue to build trust and understanding in machine learning techniques, we may be able to achieve higher levels of precision and reliability in vehicle dynamics modeling, paving the way for safer and more efficient autonomous driving technologies. This research may be extended in several ways to ultimately achieve similar closed-loop performance in trajectories of increasing complexity. Since observer latency and temporal misalignment of labeled data may have been an issue with this approach, we are currently investigating approaches with target labeling that rely solely upon vehicle-collected measurements to potentially eliminate this behavior. Additional performance enhancements can conceivably be obtained with the inclusion of additional relevant states as input to the neural network (e.g. tire temperature, in order to capture temperature-dependent dynamics) or simply expanding the complexity of the network itself as computational limitations allow. 9 Svendenius Svendenius, J.: Tire Modeling and Friction Estimation. Lund University. (2007) Pacejka Pacejka, H. B.: Tire and Vehicle Dynamics. 3rd ed., Butterworth-Heinemann (2012) Kobayashi Kobayashi, T., Weber, T. P., Gerdes, J. C.: Trajectory Planning Using Tire Thermodynamics for Automated Drifting. IEEE Intelligent Vehicles Symposium, (2024) Velenis Velenis, E., Katzourakis, D., Frazzoli, E., Tsiotras, P., Happee, R.: Steady-state drifting stabilization of RWD vehicles. Control Engineering Practice. 19, (2011) Goel Goel, T.: In Complete Control; Simulataneous Path, Speed, and Sideslip angle Control of a Drifting Automobile. Stanford University, Stanford, CA (2022) Djeumou Djeumou, F., Goh, J., Topcu, U., Balachandran, A.: Autonomous Drifting with 3 Minutes of Data via Learned Tire Models. In: ICRA, IEEE, London (2023) Weber Weber, T. P., Gerdes, J. C.: Modeling and Control for Dynamic Drifting Trajectories. IEEE Transactions on Intelligent Vehicles, (2023)
http://arxiv.org/abs/2407.12257v1
20240717015934
Compound Expression Recognition via Multi Model Ensemble for the ABAW7 Challenge
[ "Xuxiong Liu", "Kang Shen", "Jun Yao", "Boyan Wang", "Minrui Liu", "Liuwei An", "Zishun Cui", "Weijie Feng", "Xiao Sun" ]
cs.CV
[ "cs.CV" ]
Abbreviated paper title Hefei University of Technology Hefei, China {liuxuxiong, shenkang, yaojun, wangboyan}@mail.hfut.edu.cn {2022171285,anliuwei,liumr}@mail.hfut.edu.cn sunx@hfut.edu.cn,wjfeng@hfut.edu.cn Compound Expression Recognition via Multi Model Ensemble for the ABAW7 Challenge Xuxiong Liu1 Kang Shen1 Jun Yao Boyan Wang Liuwei An Zishun Cui Minrui Liu Xiao SunWeijie Feng July 22, 2024 =================================================================================================== § ABSTRACT Compound Expression Recognition (CER) is vital for effective interpersonal interactions. Human emotional expressions are inherently complex due to the presence of compound expressions, requiring the consideration of both local and global facial cues for accurate judgment. In this paper, we propose an ensemble learning-based solution to address this complexity. Our approach involves training three distinct expression classification models using convolutional networks, Vision Transformers, and multiscale local attention networks. By employing late fusion for model ensemble, we combine the outputs of these models to predict the final results. Our method demonstrates high accuracy on the RAF-DB datasets and is capable of recognizing expressions in certain portions of the C-EXPR-DB through zero-shot learning. § INTRODUCTION Facial Expression Recognition (FER) holds a significant position in the field of Artificial Intelligence, as it enables computers to better convey human emotional information, supplementing the crucial role of voice in emotional communication in real life. However, traditional facial expression recognition techniques are typically limited to classifying six basic facial expressions, namely anger, happiness, sadness, surprise, disgust, and fear. In reality, human emotional expressions are far more complex than these predefined categories.To address this challenge, Compound Expression Recognition (CER) has emerged as a part of affective computing. CER is an emerging task in intelligent human-computer interaction and multimodal user interfaces. It requires the automatic recognition of individuals' compound emotional states, which may include combinations of two or more basic emotions, such as fearfully surprised, happily surprised, sadly surprised. The utilization of multimodal features, including visual, audio, and text features, has been extensively employed in previous ABAW competitions <cit.>. We can improve the performance in affective behavior analysis tasks by extracting and analyzing these multimodal features. Specifically, our classification task models can typically be divided into two types: ResNet and Transformers. ResNet is one of the commonly used backbone networks in CNNs. It calculates high-level features of images by sliding convolutional kernels over them, focusing on local features. The innovation of ResNet lies in the introduction of residual connections, which make the training of deep networks more efficient and mitigate the vanishing gradient problem. On the other hand, the Vision Transformer is the first widely applied backbone in the Transformer family. It segments the image into patches and then flattens them into a sequence. By incorporating positional encoding, the Vision Transformer embeds the positional information of each patch into the sequence. 
Through the Transformer's encoder module, the Vision Transformer can model the positional relationships of all locations in the image simultaneously, capturing contextual information from different parts of the face and obtaining global information. <cit.> indicate that hybrid models combining CNN and Transformer architectures also demonstrate significant potential. These models can maintain efficient feature extraction while further enhancing the ability to capture global information, thereby achieving superior performance in tasks such as facial expression recognition. In this paper, we adopt a multi-model solution to address the problem of compound expression recognition. First, we use ResNet50 as the convolutional neural network model, which focuses on capturing the local features of facial expressions. Simultaneously, we employ the ViT to extract features from images, effectively capturing the global information of facial expressions through the self-attention mechanism. Subsequently, we use a multilayer perceptron (MLP) to fuse the features extracted by both models, leveraging the complementarity of local and global features. Finally, we train and validate our model on two commonly used facial expression recognition datasets, RAF-DB and C-EXPR-DB. Through this multi-model fusion approach, we aim to improve the accuracy and robustness of compound expression recognition. § RELATED WORK §.§ Facial Expression Recognition Facial Expression Recognition (FER) has made significant progress in its development from recognizing single expressions to compound expressions recognition (CER). CER has garnered widespread attention due to its ability to identify complex facial expressions that convey combinations of basic emotions, reflecting more nuanced human emotional states<cit.>. Research has shown that recognizing basic emotional expressions through deep learning methods paves the way for more advanced approaches capable of deciphering compound expressions<cit.>. Typical approaches focus on utilizing Convolutional Neural Networks (CNNs) for feature extraction and employing Recurrent Neural Networks (RNNs) or attention mechanisms to capture the subtle nuances and dynamics of facial expressions over time. Multi-task learning frameworks have also been widely explored to simultaneously recognize multiple basic expressions more accurately and robustly. Researchers have also developed the Real-world Affective Faces (RAF-CE) database, which contains a rich set of compound expression samples. The combination of Meta Multi-task Learning (MML) and Action Units (AU) recognition has significantly improved the performance of compound FER. This approach enhances the model's generalization ability by simultaneously learning multiple related tasks, thereby better understanding the subtle nuances in complex emotions. Additionally, the C-EXPR-DB dataset and the CEXPR-NET model have improved the performance of compound emotion recognition through a multi-task learning approach, utilizing cross-entropy and KL divergence to update the model. Deep Bi Manifold CNN (DBMCNN) introduces a novel manifold-based deep learning network for learning and recognizing compound expressions. Recent advancements include the application of transformer models in emotion recognition, such as Former-DFER, Spatio-Temporal Transformer (STT), and NR-DFERNet, which have improved the performance of dynamic facial expression recognition by capturing both spatial and temporal features. 
Despite these breakthroughs in addressing discrete-label dynamic facial expression recognition (DFER), interference from image backgrounds remains a challenge. To address this issue, researchers have incorporated ensemble learning into their methods to further enhance recognition accuracy and robustness. § FEATURE EXTRACTION We fuse features from different neural networks to obtain more reliable emotional features and utilize these fused features for downstream tasks. By combining information from various feature extraction models such as ResNet and POSTER, we achieve a more comprehensive and accurate representation of emotions. §.§ Resnet-18 ResNet <cit.> (He et al. 2016) is a deep convolutional neural network (CNN) architecture designed to address the common issues of vanishing and exploding gradients during the training of deep neural networks. Its core idea is the introduction of residual blocks, which incorporate skip connections, making the network easier to train. Instead of directly learning the mapping of each layer, the residual block learns the residual between its input and output. This structure effectively mitigates the vanishing gradient problem. A ResNet-18 model, first pretrained on MS-Celeb-1M <cit.>, is used as a feature extractor, transforming each image into a 512-dimensional visual feature vector; such high-dimensional features can also serve other machine learning tasks, such as image retrieval and image similarity computation. §.§ POSTER The two-stream Pyramid crOss-fuSion TransformER network (POSTER) <cit.> is a deep learning model designed for facial expression recognition. POSTER combines a pyramid structure with a two-stream architecture, leveraging cross-layer fusion and transformer networks to enhance recognition performance. Extensive experimental results demonstrate that POSTER outperforms SOTA methods, reaching 92.05% on RAF-DB, 67.31% on AffectNet <cit.> (7 cls), and 63.34% on AffectNet (8 cls), respectively. The dimension of the visual feature vectors is 768.
§.§.§ Residual Neural Network (ResNet) As a member of the residual network series, ResNet is widely used in the field of computer vision, including tasks such as image classification, object detection, and image segmentation. Due to its powerful feature extraction capabilities and relatively moderate computational complexity, ResNet50 has become one of the classic backbone networks. We use the weights trained on FER2013 as initialization parameters, generating a 2048-dimensional vector for each image. §.§ Ensemble We use batched data images x as input to the model, where X∈ℝ ^B× 3× H× W. Here, B denotes the batch size, 3 represents the RGB channels, and H and W are the height and width of the images, respectively. Therefore, the features after data augmentation can be represented as: feature_1=PosterV2(x)∈ℝ ^B× 768 feature_2=Resnet(x)∈ℝ ^B× 512 To improve model performance by merging multiple feature maps for more comprehensive feature representation, we utilize a late fusion strategy. Specifically, we concatenate the three aforementioned features along a designated dimension, then feed these feature maps into a multi-layer perceptron (MLP) and apply softmax to calculate the logits for seven compound expressions, as follows: feature= [ feature_1 ; feature_2 ] where [ ; ; ] denotes the concatenation operation. logit=softmax(MLP(feature)) § EXPERIMENTS §.§ Dataset C-EXPR-DB<cit.> is currently the largest and most diverse audiovisual database collected in the wild. It contains 400 videos, amounting to around 200,000 frames, meticulously annotated for 12 compound expressions and various affective states. Additionally, C-EXPR-DB provides annotations for continuous valence-arousal dimensions, speech detection, facial landmarks, bounding boxes, 17 action units, and facial attributes. In the Compound Expression Recognition Challenge, a total of 56 unlabeled videos were selected, covering 7 types of compound expressions. The extracted video tags include seven compound expressions: Fearfully Surprised, Happily Surprised, Sadly Surprised, Disgustedly Surprised, Angrily Surprised, Sadly Fearful, and Sadly Angry. We utilize the RAF-DB<cit.> dataset for pretraining our visual feature extractors. The RAF-DB is a large-scale database comprising approximately 30,000 facial images and 3954 compound expression from thousands of individuals. Each image has been independently annotated around 40 times and then filtered using the EM algorithm to remove unreliable annotations. About half of these images have been manually annotated for seven discrete facial expressions as well as the intensity of valence and arousal in the facial expressions. §.§ Implement Details §.§.§ Evaluation metric For the Compound Expression Recognition Challenge, the evaluation metric is the F1 Score for seven compound expressions. This metric measures the prediction accuracy of the model for each expression category by combining both precision and recall. It provides a more comprehensive assessment of the model’s overall performance. The metric can be defined as follows: F_1=∑_i=1^7F_1^i/7 where F_1^i corresponds to the i-th expression. §.§.§ Training setting All experiments in this paper are conducted using PyTorch and trained on a 64-bit Linux computer with a 64 AMD Ryzen Threadripper 3970X 32-core CPU (3.80GHz), 64 GB RAM, and a 24GB NVIDIA RTX 3090 GPU. The input image resolution is consistently set to 224 × 224 pixels. 
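A minimal PyTorch sketch of the late-fusion head defined by the equations above is given here (the remaining training settings continue below). The feature dimensions follow the paper's equations (768 for PosterV2, 512 for ResNet); the MLP hidden width is an illustrative choice, and the softmax is applied only when producing class probabilities, since PyTorch's cross-entropy loss expects the pre-softmax scores.

import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    # Concatenate backbone features and classify the 7 compound expressions
    def __init__(self, dim_poster=768, dim_resnet=512, hidden=256, n_classes=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim_poster + dim_resnet, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, feat_poster, feat_resnet):
        fused = torch.cat([feat_poster, feat_resnet], dim=1)  # (B, 768 + 512)
        return self.mlp(fused)                                # raw class scores

# Usage: train with nn.CrossEntropyLoss on the raw scores; at inference,
# probs = torch.softmax(head(feat_poster, feat_resnet), dim=1)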
The training process runs for 100 epochs, utilizing cross-entropy loss as the optimization objective and the Adam optimizer for parameter updates. To improve stability during training, a warm-up learning rate strategy is employed for pre-training the ViT on Unity. In the RAF-DB compound expression experiment, the learning rate is set to 5e-5, and the batch size is set to 128. §.§ Results §.§.§ Visual Models’ performance on RAF-DB <ref> presents a performance comparison of ViT, Poster, and ResNet in recognizing compound expressions on the RAF-DB validation set. For each compound expression, the table lists the recognition accuracy of the three models, along with their overall accuracy and F1 scores. In recognizing the Happily Surprised expression, ViT leads with an accuracy of 92.59%. For the Sadly Surprised expression, ResNet significantly outperforms both ViT and Poster with an accuracy of 55.56%, compared to 38.89% for the latter two. Overall, ViT achieves the highest accuracy at 78.09%, followed by ResNet at 75.06%, and Poster at 74.06%. Regarding F1 scores, ViT and ResNet perform similarly with scores of 70.25% and 68.19%, respectively, while Poster has the lowest performance at 63.57%. These results indicate that ViT demonstrates a well-balanced performance overall, though it may not be the best for certain individual expressions. This analysis highlights that different network architectures exhibit unique strengths and weaknesses in recognizing complex emotional expressions. Therefore, it suggests the potential benefit of combining multiple models to enhance the accuracy and robustness of expression recognition in practical applications. §.§.§ Visual Models’ performance on RAF-DB (CE) The results of the ensemble models with late fusion on the RAFDB dataset are shown in <ref>, and it can be seen that Compared with the single model, the integrated model is more accurate in predicting five compound expressions, namely,Angry Surprised, Disgustedly Surprised, Disgustedly Surprised, Sadly Fearful, and Sadly Surprised. In particular, the expression of Sadly Surprised, which is difficult to identify, is 22.22% higher than that of ViT. In addition, the accuracy rate and F1 score are both better, which is consistent with our idea of using different models to bridge the gap between each other. § CONCLUSION In this paper, we present our solution to the 7th Affective Behavior Analysis in the Wild Workshop and Competition (ABAW), focusing on the recognition of complex expressions. We train three models for expression classification: one based on convolutional networks, one on visual transformers, and one on multi-scale local attention networks. By employing model ensemble techniques with late fusion, we combine the outputs of these models to predict the final result. Extensive experiments demonstrate that our method significantly outperforms the baseline and achieves excellent results in the competition. splncs04
http://arxiv.org/abs/2407.12147v1
20240716200646
Optimal Distance Labeling for Permutation Graphs
[ "Paweł Gawrychowski", "Wojciech Janczewski" ]
cs.DS
[ "cs.DS" ]
§ ABSTRACT A permutation graph is the intersection graph of a set of segments between two parallel lines. In other words, such graphs are defined by a permutation π on n elements, such that u and v are adjacent if and only if u<v but π(u)>π(v). We consider the problem of computing the distances in such a graph in the setting of informative labeling schemes. The goal of such a scheme is to assign a short bitstring ℓ(u) to every vertex u, such that the distance between u and v can be computed using only ℓ(u) and ℓ(v), and no further knowledge about the whole graph (other than that it is a permutation graph). This elegantly captures the intuition that we would like our data structure to be distributed, and often leads to interesting combinatorial challenges while trying to obtain lower and upper bounds that match up to the lower-order terms. For distance labeling of permutation graphs on n vertices, Katz, Katz, and Peleg [STACS 2000] showed how to construct labels consisting of O(log^2 n) bits. Later, Bazzaro and Gavoille [Discret. Math. 309(11)] obtained an asymptotically optimal bound by showing how to construct labels consisting of 9log n+O(1) bits, and proving that 3log n-O(loglog n) bits are necessary. This, however, leaves quite a large gap between the known lower and upper bounds. We close this gap by showing how to construct labels consisting of 3log n+O(loglog n) bits. § INTRODUCTION A geometric intersection graph is a graph where each vertex corresponds to an object in the plane, and two such vertices are adjacent when their corresponding objects have non-empty intersection. Usually, one puts some restriction on the objects, for example that they should be unit disks. The motivation for such a setup is twofold. First, it allows for modelling many practical problems. Second, it leads to nice combinatorial questions. This is a large research area, and multiple books/surveys are available <cit.> (to name just a few). In this paper, we are interested in one of the most basic classes of geometric intersection graphs, namely permutation graphs. A permutation graph is the intersection graph of a set of segments between two parallel lines. An alternative (and more formal) definition is as follows. A graph G=(V,E), where V={1,2,…,n}, is a permutation graph if there exists a permutation π on n elements, such that u and v are adjacent exactly when u<v but π(u)>π(v). See Figure <ref> for a small example. Permutation graphs admit a few alternative definitions. For example, G is a permutation graph if and only if both G and its complement are comparability graphs <cit.>. Alternatively, they can be defined as comparability graphs of two-dimensional posets <cit.>. From the algorithmic point of view, the motivation for studying such graphs is that they can be recognised in linear time <cit.>, and multiple problems that are computationally difficult on general graphs admit efficient algorithms on permutation graphs <cit.>. In this paper, we consider constructing a distributed data structure capable of efficiently reporting the distance between two given vertices of a permutation graph. Informative labeling schemes. We work in the mathematically elegant model of informative labeling schemes, formally introduced by Peleg <cit.>.
Such a scheme is meant to represent graphs in an extremely distributed way. Instead of storing a single global data structure, a scheme assigns to each vertex v of a given graph a binary string ℓ(v), called a label. Later, given the labels of two vertices (and no additional information about the graph), we should be able to compute some fixed function on those two vertices. In the context of informative labeling schemes, the first function that one usually considers is adjacency, where we simply want to decide whether the two vertices in question are neighbours in the graph. As observed by Kannan, Naor, and Rudich <cit.>, this is equivalent to finding a so-called vertex-induced universal graph, and predates the more general notion of informative labeling schemes. Non-trivial adjacency labeling schemes have been constructed for many classes of graphs, for example undirected, directed, and bipartite graphs <cit.>, graphs of bounded degree <cit.>, trees <cit.>, planar graphs <cit.>, comparability graphs <cit.>, or general families of hereditary graphs <cit.>. In every case, the length of each individual label is much smaller than the size of a centralised structure, often by a factor close to Θ(n), i.e., we are able to evenly distribute the whole adjacency information. Other functions considered in the context of labeling schemes are ancestry in trees <cit.>, routing <cit.> or connectivity <cit.>. However, from the point of view of possible applications, the next most natural question is that of distance labelings, where given labels of two vertices we need to output the exact distance between them in a graph. This properly generalises adjacency and usually needs much longer labels. Distance labelings. The size of a labeling scheme is defined by the maximum length of any label assigned by the encoder. If not stated otherwise, all graphs are unweighted and undirected, and consist of n vertices. For general undirected graphs, Alstrup, Gavoille, Halvorsen, and Petersen <cit.> constructed distance labeling of size (log3)n/2 + o(n), while the known lower bound is ⌈ n/2 ⌉ bits. Alstrup, Dahlgaard, Knudsen, and Porat <cit.> describe a slightly sublinear o(n)-bits labeling for sparse graphs. In case of planar graphs, scheme of size (√(n)) bits is presented by Gawrychowski and Uznanski <cit.>, and the known lower bound is Ω(n^1/3) bits. Shur and Rubinchik <cit.> designed a scheme using n^1.5/√(6)+(n) distinct labels for families of cycles, against a lower bound of Ω(n^4/3) <cit.>. For trees, we do not need a polynomial number of bits, as they can be labeled for distances using only 1/4 log^2n+o(log^2n) bits as shown by Freedman, Gawrychowski, Nicholson, and Weimann <cit.>, which is optimal up to the second-order terms <cit.>. Of course, the interesting question is to find natural classes of graphs that admit small distance labeling schemes. Distance labeling for permutation graphs. Katz, Katz and Peleg <cit.> presented distance labeling scheme of size (log^2n) for interval and permutation graphs. This was improved by Gavoille and Paul to 5logn labeling for interval graphs <cit.>, with a lower bound of 3logn-(loglog n). Very recently, He and Wu <cit.> presented tight 3logn+(loglogn) distance labeling for interval graphs. For connected permutation graphs, Bazzaro and Gavoille in <cit.> showed a distance labeling scheme of size 9logn+(1) bits, and a lower bound of 3logn-(loglog n). 
As noted in their work, this is especially interesting as there are very few hereditary graph classes that admit distance labeling schemes of size o(log^2n). As our main result, we close the gap between the lower and upper bounds on the size of distance labeling for permutation graph, by showing the following theorem. There is a distance labeling scheme for permutation graphs with n vertices using labels of size 3logn+(loglogn) bits. The distance decoder has constant time complexity, and labels can be constructed in polynomial time. On constants. We stress that in the area of informative labeling scheme, it is often relatively easy to obtain asymptotically optimal bounds on the size of a scheme, and the real challenge is to determine the exact constant for the higher-order term. This has been successfully done for multiple classes, e.g. distance labeling for trees, where (log^2n) <cit.> was first improved to (1/2)log^2n <cit.> and then (1/4)(log^2n)+o(log^2n) <cit.>, optimal up to second-order term. Adjacency labeling for trees is a particularly good example, with the first scheme having a size of 6logn based on <cit.>, then 4logn <cit.>, (2+o(1))logn <cit.>, (4/3+o(1))logn <cit.>, finally logn+(√(lognloglogn)) <cit.> and logn+(√(logn)) <cit.> were presented, the last two being optimal up to the second order terms. For adjacency in bounded-degree graphs with odd Δ, initial (Δ/2+1/2)logn+(1) <cit.> was improved to (Δ/2+1/2-1/Δ)logn+(loglogn) <cit.> and then to optimal (Δ/2)logn+(1) <cit.>. In the case of adjacency labelings for general undirected graphs, starting with the classical result presenting labels of size n/2+(logn) <cit.>, n/2+(1) <cit.> and n/2+1 <cit.> labelings were constructed. Similar sharply optimal labelings are shown for directed graphs, tournaments, bipartite graphs, and oriented graphs. Finally, the first described ancestry labeling schemes for trees was of size 2logn <cit.>, and then (3/2)logn <cit.>, logn+(logn/loglogn) <cit.>, logn+(√(logn)) <cit.>, logn+4loglogn+(1) <cit.>, logn+2loglogn+(1) <cit.> schemes were provided, achieving optimality up to second-order terms. Related works. The challenge of designing labeling schemes with short labels is related to that of designing succinct data structures, where we want to store the whole information about the input (say, a graph) using very few bits, ideally at most the information theoretical minimum. This is a rather large research area, and we only briefly describe the recent results on succinct data structures for the interval and permutation graphs. Tsakalidis, Wild, and Zamaraev <cit.> described a structure using only nlogn+o(nlogn) bits (which is optimal) capable of answering many types of queries for permutation graphs. They also introduce the concept of semi-distributed representation, showing that for distances in permutation graphs it is possible to store global array of size (n) bits and labels on only 2logn bits, offering a mixed approach which can overcome 3logn lower bound for distance labeling. For interval graphs, a structure using nlogn+(n) bits (which is again optimal) is known (<cit.> and <cit.>). § OVERVIEW AND ORGANISATION In Section <ref>, we present basic definitions for labeling schemes and our approach to permutation graphs. Then, in Section <ref>, we build on the methods of Gavoille and Paul <cit.>, as well as Bazzaro and Gavoille <cit.> for creating distance labelings of interval and permutation graphs. 
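Before moving to the geometric view developed in the remainder of this section, the basic objects can be fixed in code. The short Python sketch below is illustrative only and is not part of the paper's construction: it builds a permutation graph directly from the definition in the introduction — u and v are adjacent iff u<v and π(u)>π(v) — and computes exact distances by breadth-first search, which is the ground truth that any distance labeling for this class must reproduce from two labels alone.

```python
from collections import deque
from itertools import combinations

def permutation_graph(pi):
    """pi maps 1..n -> 1..n; vertices u < v are adjacent iff pi[u] > pi[v]."""
    n = len(pi)
    adj = {v: set() for v in range(1, n + 1)}
    for u, v in combinations(range(1, n + 1), 2):   # u < v
        if pi[u] > pi[v]:                           # inversion => edge
            adj[u].add(v)
            adj[v].add(u)
    return adj

def bfs_distances(adj, source):
    """Exact distances from `source`; the reference a labeling must match."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

# Example with the (hypothetical) permutation 2, 4, 1, 3 given as a 1-indexed dict.
pi = {1: 2, 2: 4, 3: 1, 4: 3}
adj = permutation_graph(pi)
print(bfs_distances(adj, 1))   # distances from vertex 1
```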
We can represent a permutation graph as a set of points in the plane, where two points (two vertices) are adjacent when one is above and to the left of the other. The first thing we need to notice when considering distances is the presence of two boundaries in such representation. We say that the top boundary is formed by points with empty (containing no other points) top-left (north-west) quadrant, and bottom boundary by points with empty bottom-right (SE) quadrant. Points on the boundaries are especially important – it can be seen that for any pair of points, there is a shortest path between them with all internal points of the path being on boundaries. As a set of boundary points forms a bipartite graph, such shortest path strictly alternates between boundaries. We can also observe that for a point v not a boundary, there are four boundary points of special interest for it, see Figure <ref>. These are pairs of extreme points on both boundaries from the points adjacent to v. Any shortest path from v to u with d(v,u)>2 can have as a second point one of these special points for v, and as a penultimate point one of the special points for u. We need to handle distances of 1 and 2 separately, but otherwise, this means it is enough to be able to compute distances between boundary points. If we can build distance labeling for boundary points, and store labels of special points efficiently, we can obtain good distance labelings for permutation graphs. This is possible as boundaries are highly structured, in particular ordered. In <cit.> authors view two boundaries as two proper interval graphs and deal with them using methods from <cit.>. An interval graph is proper when no interval is completely contained by another one. Gavoille and Paul first partition vertices of a proper interval into distance layers, by distances to the vertex representing the leftmost interval. Let us denote layer number of vertex u by L(u). It can be seen that for any two vertices u,v in interval graph we have either d(u,v)=|L(u)-L(v)| or d(u,v)=|L(u)-L(v)|+1. Then the following lemma is used <cit.>: There exists a total ordering of vertices of proper interval graph such that given (v), (u) and layer numbers L(u) < L(v) for two vertices u,v, we have d(u,v)=L(v)-L(u) if and only if (u) > (v). In other words, we can assign to each vertex v just two numbers L(v), (v), and then still be able to determine all exact distances. Going back to permutation graphs, when we view two boundaries as proper interval graphs, it is possible to obtain straightforward distance labeling for permutation graphs using 20logn bits, where the big constant is due to storing many distance labels for interval graphs completely independently. Then authors are able to reduce the size of labels to 9logn bits, after eliminating many redundancies in the stored sub-labels. In this paper, we show that working with both boundaries at once can yield better results. To do this, we modify the methods of Bazzaro and Gavoille and then carefully remove even more redundancies. First, we partition points on both boundaries into layers, defined by distances from some initial point, see Figure <ref>. As we use distances from a single point to define layers, the distance between any two boundary points is a difference of their layer numbers, or this value increased by two. It can be shown that again some ordering can be used, and storing it takes around logn bits for each boundary point. 
As a single point is adjacent to at most three layers, layer numbers of four special points are easy to store, and we could achieve labeling of length (2+1+4)logn+(1)=7logn+(1) by storing for each point respectively its 2D coordinates, layer numbers of neigbours and four times values for extreme neighbours, in order to compute distances between boundary points. This can be reduced to 5logn by dealing with distances 1 and 2 more carefully, allowing us to not store point coordinates explicitly. All of the above is described in Section <ref>. After additional analysis and reductions laid out in Section <ref>, we can decrease the size to 3logn. This is since, roughly speaking, one can collapse information stored for two pairs of extreme boundary neighbours into just two numbers, due to useful graph and layers properties. More precisely, we can observe that we store excessive information about the set of four extreme neighbours. For vertex v, two extreme right points on both boundaries are used to reach points to the right of v, and extreme left are used to reach points to the left. But we do not need the exact distance between points to the left of v and right extreme points, thus we have some possibility to adjust the stored values. Particularly, the main case is when value of the right extreme point on the bottom boundary is smaller than value of the left extreme point on the top boundary; it turns out that these two values can be equalised and stored as some single value in between the original values. The second pair of extreme points can be dealt with in a similar manner, and then we need to ensure that all of this did not interfere with the correctness of deciding about distances 1 and 2, which are different cases than all distances larger than 2. § PRELIMINARIES Permutation graphs. Permutation graph G_π is a graph with vertices representing elements of permutation π, where there is an edge between two vertices if and only if the elements they represent form an inversion in the permutation. See Figure <ref>. In <cit.> McConnell and Spinrad show that it is possible to test in linear time whether a given graph is a permutation graph, and also construct the corresponding permutation. We will use a geometric (or 'grid') representation of permutation graph G_π on n vertices as a set of points with coordinates in [1,n], with point (i,π^-1(i)) for each i ∈ [1,n]. Considering a point p, we always denote its coordinates by p=(p_x,p_y). Top-left quadrant of point p, _p, is a subset of points {v: v_x < p_x v_y > p_y} from the graph. Similarly, we have _p (top-right), _p (bottom-left) and _p (bottom-right) quadrants. Two points are adjacent in the graph iff one is in or quadrant of the other. See Figure <ref>. We have transitivity in a sense that if w ∈_v and u ∈_w, then u ∈_v; similarly for other quadrants. By distance d(u,v) between two points we will mean distance in the graph. We will assume the given permutation graph is connected. There are standard ways to enhance labelings to disconnected graphs by adding at most (loglogn) bits to the labels, and we will describe how it can be done after the main theorem. We note that for connected graphs of size at least two, no point could be on both boundaries, as it would be isolated otherwise. Labeling schemes. Let 𝒢 be a family of graphs. A distance labeling scheme for 𝒢 consists of an encoder and a decoder. The encoder takes a graph G∈𝒢 and assigns a label (binary string) ℓ(u) to every vertex u∈ G. 
The decoder receives labels ℓ(u) and ℓ(w), such that u,w∈ G for some G∈𝒢 and u ≠ w, and should report the exact distance d(u,w) between u and w in G. The decoder is not aware of G and only knows that u and w come from the same graph belonging to 𝒢. We are interested in minimizing the maximum length of a label, that is, max_G∈𝒢max_u∈ G |ℓ(u)|. Organization of the labels. The final labels will consist of a constant number of parts. We can store at the beginning of each label a constant number of pointers to the beginning of each of those parts. As the total length of a label will be (logn), pointers add only (loglogn) bits to the labels. § SCHEME OF SIZE 5LOGN In this section, we describe how to use boundaries to design distance labeling of size 7logn+(1), and then how to refine it to reach 5logn+(1). §.§ Properties of Boundaries For a set of points S, we have its top boundary defined as a subset of points from S which top-left quadrants are empty, and bottom boundary as a subset of points which bottom-right quadrants are empty. See Figure <ref>. Observe that points on boundaries are ordered, that is, for u and v on the same boundary, either u_x>v_x and u_y>v_y, or u_x<v_x and u_y<v_y. We use < to note this relation on boundary points. Boundaries are particularly useful when considering distances between points: For any two points u,v at distance d, there is a path P=(u=q_0,q_1,q_2,…,q_d=v) of length d such that all points except possibly u,v are on alternating boundaries. Take any shortest path P' and any adjacent q_i and q_i+1 on P', assume without loss of generality that q_i+1∈_q_i. Suppose that q_i+1 is not on the top boundary. We either have q_i+2∈_q_i+1 or q_i+2∈_q_i+1. Note that by transitivity if q_i+2∈_q_i+1, then q_i+2∈_q_i and we could have a shorter path by removing q_i+1. Thus, assume q_i+2∈_q_i+1. If q_i+1 is not on the top boundary, then by definition there exists a boundary point q' ∈_q_i+1, and it must be that q_i+2∈_q'. This means that we could replace q_i+1 by q', increasing the number of points from the path lying on the boundary, and then repeat the argument. Therefore, we have that all points except the first and last ones can always lie on boundaries. See Figure <ref> for an illustration. These must be alternating boundaries, as by definition no two points on the same boundary are adjacent. We partition all points on the boundaries into layers, in the following way. The layer number 0 consists of a single left-most point p_0 in the whole set S. Note that p_0 is on the top boundary. Then, a boundary point is in a layer number i if its distance to p_0 is i. By L(u) we denote the layer number of u. See Figure <ref>, we will soon see that indeed layers are always nicely structured, as pictured. Observe that in even layers, we have only points from the top boundary, and in odd layers only from the bottom boundary, as points on a single boundary are non-adjacent. Thus, points in a single layer are ordered, by both coordinates. To determine the distance between boundary points, we use a method similar to the one from the paper of Gavoille and Paul <cit.>, precisely Theorem 3.8. This is also connected to what Bazzaro and Gavoille <cit.> do in their work, but not identical, as they use bottom and top boundaries separately, as two mostly independent interval graphs. There exists a total ordering of boundary points such that given (v), (u) and layer numbers L(u) < L(v) for two boundary points u,v, we have d(u,v)=L(v)-L(u) if and only if (u) > (v). 
As noted, points on both boundaries are ordered, and layers switch between boundaries, starting with layer number 0 containing just a single left-most point from the top layer. Say ordered points on the top layer are t_0,t_1,t_2,…. We prove that there exist strictly increasing numbers i_0=0,i_2,i_4,… such that layer number 0 consists of t_0, and then any layer 2k consists of consecutive points t_i_2k-2+1,…,t_i_2k. Similarly, for points b_0,b_1,… on bottom layer, there exists numbers i_1,i_3,… defining analogous ranges. Denote by (q) the last point (with largest coordinates) in layer q. We prove by induction, in a given order, some intuitive properties (considering without loss of generality odd layer): * All points from layer 2k+1 are to the right of all points from layer 2k (or all points from layer 2k are above layer 2k-1). * All points from layer 2k+1 are adjacent to (2k). * Layer 2k+1 is formed by consecutive points b_i_2k-1+1,…,b_i_2k+1. * Ordered points from layer 2k are adjacent to increasing prefixes of points b_i_2k-1+1,…,b_i_2k+1. This means that, firstly, any point t_j with L(t_j)=2k is adjacent exactly to points b_i_2k-1+1, …, b_q from layer 2k+1, for some q ≤ i_2k+1. Secondly, for t_j+1 with L(t_j+1)=2k, t_j+1 is adjacent to points b_i_2k-1+1, …, b_r with q ≤ r. The base of layer 0 is apparent, except for the third property, which can be done as in the induction step. Now consider layer 2k+1. As all points from layer 2k are adjacent to (2k-1), meaning they are in TL_(2k-1), all these points are to the left of points from layer 2k+1. Moreover, the last point in layer 2k has the largest y coordinate, thus if any point from layer 2k+1 is adjacent to some point from layer 2k, then it is also adjacent to (2k). This gives us the first two properties. Now, by definition, if points b_q, b_r with r>q are adjacent to some point v on the top boundary, then all points b_q, b_q+1, …, b_r are adjacent to v. Thus, if b_i_2k-1+1 is adjacent to (2k), we get the third property. But it must be adjacent, as otherwise it would be above (2k), and then layer 2k+1 would be empty. We can apply the above principles to any point from layer 2k - it either neighbours the first point in layer 2k+1 or no point from this layer. This means any point from layer 2k neighbours prefix of points from layer 2k+1 (possibly empty, possibly full layer). Moreover t_j+1 is adjacent to the same points from layer 2k+1 that t_j is, and possibly more, as t_j+1 is above t_j. See Figure <ref>. By the definition of layers, for points u,v, d(u,v) ≥ |L(u)-L(v)|. If d(u,v)=|L(u)-L(v)|, we say that there is a quick path between them. Going back to the statement of the lemma, it says there is such ordering (v) applied to boundary points, that there is a quick path between two points if relations between their values and layer numbers are opposite. We can create ordering in the following greedy way: starting from value of 1, always assign current value to the lowest point in the lowest layer such that it is adjacent to no points in the next layer without values already assigned, then increment current value. In other words, repeatedly choose the lowest (by layer) possible point having all neighbours from the next layer already assigned values. See Figure <ref> for an example. For correctness, observe that when we choose v from layer k, for all layers larger than k the last point with assigned value is adjacent to all points with assigned values in the next layer. 
This is by greedy procedure, as assume there is a layer j>k such that u, the last point with assigned value in layer j, is not adjacent to w, the last point with assigned value in layer j+1. It is only possible if (w)>(u). But by definition of greedy procedure, there is no reason to choose w at any point after choosing u and before choosing v, as no other point from layer j was chosen between these events and procedure always choose the lowest layer. Now, using the above, we can observe that at the moment we assign value for point v in layer k: * There are quick paths from v to all points with assigned values and in layers larger than k. This is true by the choice of v and transitivity. It is given that v is adjacent to all points in layer k+1 with assigned values, then the last of these points is adjacent to all points in layer k+2 with assigned values, and so on. * There are no quick paths from v to any point without assigned value in a layer larger than k. This is clear from the greedy procedure, previous point and the fourth inductive property - we do have quick paths to points in larger layers up to the last point with an assigned value, and these points cannot be adjacent to any more points, since they were chosen only when all their neighbours from the next layer got assigned values. By Lemma <ref>, we are able to detect when quick paths exist, and to complete knowledge about distances between boundary points we observe the following: For any boundary points u,v with L(v) ≥ L(u), d(u,v) is equal to either L(v)-L(u) or L(v)-L(u)+2. First let us argue that d(u,v) ≤ L(v)-L(u)+2. We observed (i) is adjacent to all points in layer i+1. Thus, (u,(L(u)-1),(L(u)),…,(L(v)-1),v) is always a correct path with length L(v)-L(u)+2. By definition of layers, d(u,v) cannot be less than L(v)-L(u). Finally, by the Property <ref> there is a shortest path that alternates between boundaries, so it cannot be of length L(v)-L(u)+1, as we cannot change parity. To simplify our proofs, we will add some points to the original set. For each point v on the bottom boundary and from the original input, we add point (v_x+, v_y-). It is easy to see that such a point lies on the bottom boundary, adding it does not change distances between any existing points, and it removes v from the bottom boundary. Then we change numeration to integer numbers again, increasing the range of numbers by some constant factor. Similarly, for any original point v on top boundary, we add (v_x-, v_y+). What we achieve is that after this change no point from the original input lies on the boundary, which reduces the number of cases one needs to consider when assigning labels (only to the original points). Now let us focus on any point v not on the boundary. Assume v is adjacent to some points from layers i and j, j>i. It cannot be that j>i+2, by definition of layers as distances from p_0. Thus, v is adjacent to points from at most three (consecutive) layers. We note that v is adjacent to a consecutive segment of ordered points from any layer i. Let us denote by (v) and (v) the first and last points on the bottom boundary adjacent to v and by (v) and (v) the first and last points on the top boundary adjacent to v. Consult Figure <ref>. We can make easy observation on points at distance two: For any two points u,v, d(u,v) ≤ 2 is equivalent to [(u),(u)] ∩ [(v),(v)] ≠∅ or [(u),(u)] ∩ [(v),(v)] ≠∅. This is since by Property <ref>, we must have a path between u,v at distance two going through a single point on the boundary. 
In the case of d(u,v)=1, assume without loss of generality that u ∈ TL_v, then [(u),(u)] ⊆ [(v),(v)], and ranges are never empty. Considering points at a distance of at least three, we have the following: For any two non-boundary points u,v with d(u,v)>2 and u_x<v_x, there is a shortest path from u to v with the second point being either (u) or (u) and the penultimate point being either (v) or (v). We will prove statement for the second point, as the penultimate point is symmetric. By Property <ref> there always exists a shortest path P with all but extreme points lying on alternating boundaries, with P=(u=q_0,q_1,q_2,…,w,q_d(u,v)=v), so we denote penultimate point by w. Consider layer number of w. As d(u,v)>2 and u_x<v_x, it must be that v,w ∈ TR_u and so L(w) ≥min(L((u)),L((u))). If L(w) ≥max(L((u)),L((u))), then by Property <ref> and Lemma <ref> we can replace q_1 with (u) or (u) (which have the largest values in their layers from neighbours of w), while keeping the length of P and w as penultimate point. We are left with L(w) = min(L((u)),L((u))). Assume L((u))>L((u)), so L(w)=L((u) and also w>(u). Then w is adjacent to (L(w)-1) ∈_u and thus also to (u), so we can have q_1=(u). In other case, we could similarly set q_1=(u). Thus, we can always change q_1 to be (u) or (u), without changing w. We established ways to determine the distance between any points using distances between specific boundary points. Additionally, observe that all conditions from Lemma <ref> and Property <ref> can be checked using just values and layer numbers. That is, for u,v on the same boundary we have u ≤ v iff (L(u),(u)) ≤_lex (L(v),(v)). At this stage, we could create labels of length 7logn+(1), by storing for each point v coordinates v_x,v_y, and for all points of interest (v),(v),(v),(v), their (·) and L(·) values. As a point is adjacent to at most three layers, all four layer numbers can be stored on just logn+(1) bits. Coordinates allow us to check for distance 1, distance two is checked by using Property <ref>, and larger distances by Property <ref>, Lemma <ref> and <ref>. §.§ Auxiliary points To better manage detecting points at distance 1 without explicitly storing coordinates, we will add, for each point in the set S, four additional artificial points, two on each boundary. Consider v from the initial set and its bottom-right quadrant _v. We add two points v_b=(v_x-ϵ, (v)_y-ϵ) and v_b'=((v)_x+ϵ,v_y+ϵ) to the set S of points. See Figure <ref>. Then, we change the numeration of coordinates so that we still use the permutation of natural numbers up to |S|. This is repeated for all the initial points. First, we check that this addition did not disturb the properties of the points too much: All added points are on the bottom boundary. Moreover, for any two points u,w ∈ S, d(u,w) remains the same. Considering the first property, we need to observe that when adding a point, its bottom-right quadrant is empty. For v_b' it holds as the point is between (v) and the next point on the bottom boundary on both axes. Thus it changes the status of no point on the bottom boundary and itself is on this boundary. We have a similar situation with v_b. For the second property, we notice that any point adjacent to v_b' is also adjacent to (v). Since v_b' and (v) lie on the bottom boundary, any adjacent point must be in their top-left quadrant. 
As v_b'=((v)_x+ϵ,v_y+ϵ) and there are no points with x-coordinate between (v)_x and (v)_x+ϵ, if some point is to the left of v_b', it is also to the left of (v), and by definition we have v_b'>v_y>(v)_y. Similarly, any point adjacent to v_b is also adjacent to (v). Therefore, these points cannot offer any shortcuts in existing shortest paths. Similarly, for each point in the initial set, we add two points on the upper boundary. That is, consider v and _v. We add two points v_t=((v)_x-ϵ,v_y-ϵ) and v_t'=(v_x+ϵ, (v)_y+ϵ) to the set S of points. Then, we again change the numeration of coordinates. This is symmetric and has the same properties. After adding four auxiliary points for all initial points, we have the desired property: For any two points v,w from the initial set, w ∈_v is equivalent to (v) < (w) ≤(w) < (v). Moreover, w ∈_v is equivalent to (v) < (w) ≤(w) < (v). The cases for both boundaries are symmetrical, so we focus on the bottom one. If w ∈_v, then by transitivity v is adjacent to (w), and then by definition of w_b, v is also adjacent to w_b. As w_b<(w), we get (v)<(w). Analogous facts hold for w_b', therefore implication in the right direction holds. We consider the left direction and use contraposition. Firstly, we want to show that if w ∉_v, then either (v)>(w) or (w)>(v). w ∉_v means w_x < v_x or w_y> v_y. Assume w_x < v_x and (v) ≤(w). This can be only if (v) = (w), since w is to the left of v and points on boundaries are ordered. As v_b is below (v) and between w_x and v_x, it must be that v_b∈_w. So we have v_b∈_w and v_b < (v)=(w), a contradiction, as then v_b should be (w). Similarly using v_b' we can show that assuming w_y > v_y and (w) ≤(v) leads to a contradiction. See Figure <ref>. The above lemma is useful when testing for adjacency of points – we do not need explicit coordinates to check it. Now, we are able to create labels of length 5logn+(1), as there is no longer a need to store coordinates, thus just four values, and additionally layer numbers using only logn+(1) bits. We can still improve on this, getting our final result in the next Section. § FINAL SCHEME OF SIZE 3LOGN In this Section, the final improvement to label sizes is achieved, by collapsing two pairs of values into just two values, with an additional constant number of bits. We can store for each input point v two integers v_x',v_y' with values in (n), bit values v_binf,v_tinf, and layer numbers L((v)),L((v)),L((v)),L((v)) smaller than n, such that distance queries for pairs of points can be answered using these values only. Let us consider any input point v, recall we made sure it does not lie on the boundary. As previously, we can store L((v)), L((v)), L((v)) and L((v)) on logn+(1) bits, as there are no more than n layers, and the differences between these four values are at most 2. We would like to store one number instead of ((v)) and ((v)), then also one number instead of ((v)) and ((v)). We need to consider four possible cases for the layout of layers, see Figure <ref>: * v is adjacent to two layers on the bottom boundary (and then necessarily one on the top). * v is adjacent to two layers on the top boundary. * v is adjacent to one layer on both boundaries, and the layer on the bottom is higher. * v is adjacent to one layer on both boundaries, and the layer on the bottom is lower. These will be referred to as possible layouts. First, consider the last case. We argue that it must be that (v) is the last point in its layer. 
Assume otherwise, so there is a point w on the bottom boundary, with L(w)=L((v)) and w>(v), so necessarily w ∈ TR_v. This means that there is a point u with L(u)=L(w)-1 and u_y>w_y>v_y, by definition of layers. It cannot be u ∈ TL_v, as we assumed the last case, where there are only points from layer L(u)+2 in TL_v. But if u ∈ TR_v, and thus u_x>v_x, then no point from layer L(w)+1 could be in TL_v, as all of them are to the right of u, and we reach a final contradiction. Therefore, (v) is in this case the last point in its layer. Notice that this information is easy to store, as it was proven that the last point in a layer has value larger than all of the points in larger layers, that is, there is always a quick path to such points. We can store a bit v_binf=1 indicating this case, effectively using value of infinity instead of exact ((v)), and then just store unchanged ((v)). We still need to argue that no point from lower layers must use (v) to reach v, and that checks for distances 1 and 2 works, which will be done later. Now, consider the three first cases. In all of them, we have L((v))>L((v)), and as these two points are adjacent ((v))<((v)) also holds. Denote by w a boundary point with the smallest (w) value among points with w_y>v_y and (w)>((v)). Note that as (v) can be chosen, such w always exists and it must be (w) ≤((v)). We will store in our label value of v_y'=(w)- (to be normalised later), and as we will see this one number can replace both values of ((v)),((v)). We say that value ((v)) is increased to v_y', and ((v)) is decreased. We can deal in a similar manner with (v),(v) values. If (v) is the last point in its layer (which is always true in the third case of the possible layouts), we store v_tinf=1 and an exact value of ((v)). Otherwise, denote by w' a boundary point with the largest (w') among points with w'_x>v_x and (w')>((v)). We will store value of v_x'=(w')-, and this one value can replace both ((v)),((v)). See Algorithm <ref> for the summary of the encoder work. We denote by d((l,i),u), for numbers l,i and boundary point u, a distance between u and point in layer l with value of equal to i, as would be returned by using Lemma <ref>. Decoding navigates all possible cases and makes some distance queries, using values of v_x',v_y'. See Algorithm <ref> for the description of the decoder, some details will become clear later. To prove correctness, first consider distances of at least three. For u,v with d(u,v) ≥ 3, our scheme will return d(u,v), or a value less than 3. The formulation is for technical reasons, we will exclude the possibility of returning value less than 3 later. For some vertex v, due to distance, we are interested only in paths from v to points in _v and _v. By Lemma <ref>, the first ones start in either (v) or (v), other ones in either (v) or (v). Let us informally go through what we need to check for any vertex v: * For any boundary point p ∈_v with d(v,p) ≥ 3, distances between p and (v),(v) are preserved by the encoding. * For any boundary point p ∈_v with d(v,p) ≥ 3, distances between p and (v),(v) as checked by the decoder are not smaller than in the graph (but are allowed to be larger). * For any boundary point p ∈_v with d(v,p) ≥ 3, distances between p and (v),(v) are preserved. * For any boundary point p ∈_v with d(v,p) ≥ 3, distances between p and (v),(v) as checked by the decoder are not smaller than in the graph. First, consider (v). We need that for any boundary point u in _v, d((v),u)=d((L((v)),v_y'), u). 
This holds by choice of v_y', no relation between values has changed. In the case of the last layout, using infinite value also preserves these relations. We also need to consider whether we could have reduced distance to some points in BL_v by increasing stored ((v)) value to v_y'. As for a quick path to exists relations between layer numbers and values need to be opposite, increasing ((v)) could introduce a false quick path only for points in layers larger than L((v)). For BL_v this is possible only in the last layout, when (v) is the last point in its layer and is adjacent to all points in layer L((v))+1 anyway, by Lemma <ref>. We had a (v), now consider (v), where the situation is more complicated due to the non-symmetric definition of v_y'. We assume v_binf=0, as otherwise the exact value of ((v)) is stored and the decoder does not err. We need that for any boundary point u in _v, d((v),u)=d((L((v)),v_y'), u). By examining possible layouts we see that any point u ∈ BL_v with L(u)>L((v)) must have L(u)=L((v)), thus (u)<((v)), so (u)<v_y'<((v)) and the distance query is still correct. If L(u)=L((v)), distance is always 2 no matter values. Now assume there is boundary point u ∈_v with L(u)<L((v)) and v_y'<(u)<((v)), which is the only remaining way to produce false result of distance query. Note that L((v))<L((v)), by v_binf=0. Recall w is a boundary point with the smallest (w) value larger than ((v)) among points with w_y>v_y thus we have ((v)) ≥(w) > ((v)). If L(w)=L((v)), then w=(v) and we have nothing to prove, so we assume L(w)>L((v)). See Figure <ref>. It holds that (u)>(w) and L(u) < L((v)) < L(w), thus d(u,w)=L(w)-L(u). This means that there is a quick path P from u to w containing exactly one point from each layer in range [L(u),L(w)]. Let r ∈ P and L(r)=L((v)). There is no quick path from u to (v), so it must be r<(v), which means r_y < v_y. This in turn means that r is not adjacent to any point in layer L((v)) with value larger than ((v). But then it cannot be that P contains a single point from layer L((v)), as such a point must be adjacent to r and also have value of at least (w), meaning larger than ((v). Therefore, by contradiction, there cannot be a point u ∈ BL_v with L(u)<L((v)) and v_y'<(u)<((v)), which means that for all u ∈_v we have d((v),u)=d((L((v)),v_y'), u). We also need to consider whether we could have reduced distance to some points in _v by using v_y' value instead of ((v)). But in _v there are only points in layers at least as large as L((v)), for which decreasing value of ((v)) cannot decrease output of distance query. Proof for v_x', (v), and (v) is symmetric. As a side note, let us observe two things that may help in understanding our methods. It might be d((L((v)),v_y'), u) = d((v), u)+2 for point u ∈ BL_v, but by Lemma <ref> we do not need to store this distance correctly, as there is a shortest path from u to v with the penultimate point not being (v). In other words, we do lose some unnecessary information. Secondly, we can store infinity values for boundary points that are last in their layers because of layers and values definition – all points in the following layers have smaller values already. We could not store 'zero' or 'minus infinity' values for points that are first in their layers, but there is no need for this. Now we turn to distances 1 and 2, with the former being easier. For u,v with d(u,v) = 1, our scheme will return 1. For u,v with d(u,v) ≥ 2, our scheme will never return 1. 
Once again, we have several cases of values being increased, decreased, or set to infinity, but can check that methods from Lemma <ref> and Property <ref> still holds for our new values. The encoder never changes layers of points, so any possible issue is connected only to values. Consider input points u,v with v_y > u_y and L((v))=L((u)). By v_y > u_y, we have (v) ≥(u). Then if v_binf=u_binf=0, we must have v_y'≥ u_y'. Indeed, when (v) ≠(u), then u_y' < ((v))<v_y'. If (v) = (u), again v_y'≥ u_y' as the smallest value of larger than ((v)) for points above u cannot be larger than analogous value for v by v_y > u_y. Finally, u_binf=1 implies v_binf=1. This means that for any u,v, we can derive whether ((v)) ≥((u)) given ℓ(v),ℓ(u) simply by comparing values from the labels. As (v) too is replaced by v_y', relation between (v),(u) is retained. Similarly, we can check that respective inequalities hold for v_x', u_x' values. Overall, this means that relations between points of the same kind (say (v),(u)) are retained, therefore by Lemma <ref> our labeling correctly outputs 1 exactly when d(u,v) = 1. Lastly, we are left with the case of distance two. For u,v with d(u,v)=2, our scheme will return 2. For u,v with d(u,v) ≥ 3, our scheme will never return 2. Using Property <ref>, whenever d(u,v)=2 the decoder reports this correctly, as values of (), () can only be increased, and (),() only decreased, and thus ranges only widen. Therefore, we need to just exclude the possibility of false intersections, that is, reporting 2 for d(u,v) > 2. First let us note that the case when ((u))<((v)) but u_y'=v_x' is impossible to achieve. This is because whenever value is increased or decreased, it is set to some new unique value, different from all values existing at the moment, and ((u)),((v)) are not changed simultaneously. Consider input points v,u with v_y > u_y, L((v))=L((u)), v_tinf=u_binf=0, and d(u,v)>2. From distance constraint, we get that also v_x > u_x. Assume (u)<(v), meaning ranges of neighbours of v,u on the bottom layer do not intersect. Recall that the encoder replaces ((u)) with u_y', and ((v)) with v_x', where v_x' is defined using ((v)). We will show that it is impossible that u_y'>v_x', and thus ranges of v,u still do not intersect. First, assume ((v))>((u)). Then, by definition u_y'<((v)) as (v) is above u, and v_x'>((v)), which means u_y'<v_x' as needed. So, we might assume ((v))<((u))<((v)). Now, for u_y'>v_x' to hold, we would need a boundary point w somewhere to the right of v, and with ((v))<(w)<((u)). By definitions there are no points simultaneously below u and to the right of (u), and by (u)<(v) and v_y > u_y, v is to the right of (u). Thus, w must be above u. We assumed v_tinf=0, excluding the third case of possible layouts (Figure <ref>) v, so L((v))-1=L((v))=L((u)). By (w)<((u)), it also must be that L(w) ≥ L((v)). Summing up, we get that L((u))=L((v))<L((v)) ≤ L(w). See Figure <ref>, which depicts the only relevant remaining arrangement of points. We examine (u) to show that this case is also impossible. Since L(w)>L((u)) and (w)<((u)), it holds that d((u),w)=L(w)-L((u)) and there is a quick path between these two points, having a single point in each of layers [L((u)),L(w)]. But as ((v))<(w) and L((v)) ≤ L(w), we get d((v),w)=L(w)-L((v))+2. As (u) is to the left of v, the largest adjacent point in layer L((v)) has value at most ((v)), so smaller than (w). But this is a contradiction, as then there cannot be a quick path from (u) to w having a single point in layer L((v)). 
This means that we can still use Property <ref> for new stored values. By Properties <ref>, <ref> and <ref>, we have proven the lemma. To conclude, we have that ℓ(v) consists of the following parts: * L((v)), L((v)), L((v)) and L((v)), all stored on total logn+(1) bits due to differences between these values being at most 2. * Bit v_binf and value v_y', on logn+(1) bits. * Bit v_tinf and value v_x', like above. We increased the number of vertices in the graph by a constant factor, so the final length is 3logn+(1) bits. Decoding can be done in constant time. §.§ Disconnected graphs Here we describe how to modify distance labeling for connected graphs into distance labeling for general graphs, by adding at most (loglogn) bits to the labels. This is a standard simple approach, present in some related works. We sort connected components of the graph by decreasing size, say we have C_1,C_2,…, then proceed with creating distance labeling for each individual connected component. The final label is the number of connected component of a vertex and then its label for the distance created in the component. If v ∈ C_i, we encode i on logi+(loglogn) bits. We have |C_i| ≤ n/i, so the final label size is at most log(n/i)+(loglogn)+logi=logn+(loglogn), as claimed. § CONCLUSION Improving upon the previous results, we have described a distance labeling scheme for permutation graphs matching existing lower bound up to an additive second-order (loglogn) term. This also improves constants in distance labeling for circular permutation graphs, as described in <cit.>. Namely, one can construct distance labeling of size 6logn+(loglogn) for such graphs. We leave as an open question determining the complexity of distance labeling for circular permutation graphs, and finding more interesting generalisations. plain
http://arxiv.org/abs/2407.12634v1
20240717150141
Stabilization of self-steepening optical solitons in a periodic PT-symmetric potential
[ "Eril Güray Çelik", "Nalan Antar" ]
nlin.PS
[ "nlin.PS", "math-ph", "math.MP", "physics.optics", "physics.plasm-ph" ]
Eril Güray Çelik (corresponding author, celik19@itu.edu.tr) and Nalan Antar — Department of Mathematics Engineering, Istanbul Technical University, Maslak, Istanbul 34469, Türkiye § ABSTRACT We numerically investigate the existence and stability dynamics of self-steepening optical solitons in a periodic 𝒫𝒯-symmetric potential. We show that self-steepening solitons of the modified nonlinear Schrödinger (MNLS) equation undergo a position shift and an amplitude increase during their evolution. The stabilization of solitons by an external potential is a challenging issue. This study demonstrates that the suppression of both the amplitude increase and the position shift of self-steepening solitons can be achieved by adding a periodic 𝒫𝒯-symmetric potential to the MNLS equation. Keywords: Self-steepening, Modified nonlinear Schrödinger equation, Soliton, Higher-order effects, External potential, PT-symmetry § INTRODUCTION Optical solitons are solitary waves that arise from a delicate balance between group velocity dispersion (GVD) and the nonlinear effects caused by the optical Kerr effect <cit.>. The generation and analysis of solitons in optics is an active research topic since optical solitons have a wide range of applications, including femtosecond lasers <cit.>, logic gates and filters in optical logic devices <cit.>, pulse compression and splitting in ultrafast optics <cit.>, and all-optical switching <cit.>. In particular, the propagation of optical solitons in fiber-optic communication systems is an area of great research interest because of their remarkable stability properties <cit.>. Optical solitons can propagate over long distances in fiber transmission systems without being affected by chromatic and polarization mode dispersion. Since their natural structure is preserved, they can be used as natural optical bits of information in fiber optic systems. In mono-mode optical fibers, the envelope of the electromagnetic field of a light signal carries the signal information and varies slowly compared with its high-frequency carrier wave. In addition, the refractive index experienced by the light signal in the fiber depends on the light intensity, a phenomenon known as the Kerr nonlinear effect, so the light signal acquires a weak nonlinearity. Hence, light pulse propagation in mono-mode optical fibers is governed by the nonlinear Schrödinger (NLS) equation <cit.>. However, the classical NLS equation may be inadequate to model the physical system for femtosecond pulses. This is because when the width of the light pulse is very small, i.e., the frequency is high, the higher-order NLS equation has to be taken into account, which includes higher-order effects such as third-order dispersion, self-steepening, and stimulated Raman scattering <cit.>. The impact of these higher-order effects on soliton propagation in nonlinear optical fibers has been the subject of several earlier studies <cit.>. Self-steepening is one of the most important of these higher-order effects; besides soliton propagation in fiber optics, it has significant applications such as the propagation of nonlinear Alfvén waves in plasmas <cit.> and light-matter interactions <cit.>. Self-steepening becomes crucial for the propagation of short pulses in long optical fibers or waveguides <cit.>.
The propagation of an optical soliton under the self-steepening effect can be described by the modified NLS (MNLS) equation i u_z+β/2 u_x x +|u|^2 u + is ∂/∂ x(|u|^2u)=0, where complex-valued function u(x,z) is the envelope of the light's electric field, x is the transverse coordinate, z is the distance along the direction of propagation, β is the GVD coefficient, and s is the self-steepening coefficient. The self-steepening effect occurs when the group velocity of an optical pulse depends on the density <cit.>. In such a case, since the group velocity is density-dependent, the peak of an optical pulse moves slower than its wings <cit.>. Therefore, the self-steepening causes the top of an optical pulse to become steeper towards the trailing edge and causes an optical pulse to become asymmetric <cit.>. In other words, it causes an optical shock at the trailing edge, especially in the absence of the GVD effect <cit.>. The GVD dampens the effects of the self-steepening to a remarkable amount <cit.>. However, despite the GVD, the self-steepening severely affects the stability of the soliton by causing a shift in the position of the pulse and dividing the pulse into sub-pulses <cit.>. These impacts of the self-steepening on an NLS soliton are well-known phenomena. In this paper, we examine the dynamic properties of self-steepening solitons of the MNLS equation and show that these self-steepening solitons undergo a position shift and amplitude increase during their evolution in the MNLS equation, leading to inherent instability. This finding highlights a critical challenge for practical applications, as unstable solitons can distort information and limit transmission distances in fiber-optic communication systems. We present a novel solution to address this challenge: incorporating a periodic 𝒫𝒯-symmetric potential into the MNLS equation. We demonstrate that the suppression of both the amplitude increase and the position shift of self-steepening solitons can be achieved by adding a periodic 𝒫𝒯-symmetric potential to the MNLS equation. 𝒫𝒯-symmetric quantum systems were discovered in 1998 by Bender and Boettcher, who proposed the idea that Hermitian Hamiltonians could be extended to non-Hermitian Hamiltonians <cit.>. After that, 𝒫𝒯-symmetric systems became the subject of research in many fields such as microwave cavities <cit.>, electronic circuits <cit.>, lasers <cit.>, chaos and noise <cit.> and optics <cit.>. In 2008, Musslimani et al. studied the existence, stability, and propagation dynamics of solitons in 𝒫𝒯-symmetric lattices <cit.>. Subsequently, many studies have been conducted on 𝒫𝒯 solitons as certain characteristics of solitons in optical lattices can be controlled by adjusting the lattice depth and period <cit.>. In this study, we use 𝒫𝒯-symmetric periodic lattices to stabilize self-steepening solitons. The structure of the paper is as follows: Section 2 presents the governing equation (model) describing the propagation of self-steepening solitons in a periodic 𝒫𝒯-symmetric potential. The governing equation cannot be solved analytically. Section 3 introduces the pseudospectral renormalization (PSR) method employed to obtain self-steepening solitons numerically. The existence region of self-steepening solitons is investigated in Section 4. The linear and nonlinear stability analysis of self-steepening solitons and numerical results are depicted in Sections 5 and 6. A summary of the numerical results is given in Section 7. 
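The position shift and pulse steepening discussed above are easy to observe numerically. The sketch below integrates Eq. (1) with a standard split-step Fourier scheme; this choice of integrator, the sech initial pulse, and all grid and step-size parameters are our own illustrative assumptions and are not the numerical scheme used later in this paper.

```python
import numpy as np

# Illustrative split-step propagation of Eq. (1):
#   i u_z + (beta/2) u_xx + |u|^2 u + i s d/dx(|u|^2 u) = 0.
beta, s = 1.0, 0.1
nx, lx = 2048, 40.0
x = np.linspace(-lx / 2, lx / 2, nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nx, d=lx / nx)          # angular wavenumbers
dz, nz = 1e-3, 5000

u = 1.0 / np.cosh(x) + 0j                              # sech pulse, illustrative
half_linear = np.exp(-0.5j * beta * k**2 * (dz / 2))   # dispersion over dz/2

for _ in range(nz):
    u = np.fft.ifft(half_linear * np.fft.fft(u))       # linear half step
    u = u * np.exp(1j * np.abs(u)**2 * dz)             # Kerr phase rotation
    nl = np.abs(u)**2 * u                               # self-steepening term,
    u = u - s * dz * np.fft.ifft(1j * k * np.fft.fft(nl))  # explicit Euler step
    u = np.fft.ifft(half_linear * np.fft.fft(u))       # linear half step

peak_shift = x[np.argmax(np.abs(u))]                   # shift of the pulse peak
print(f"peak position after z = {nz * dz:.1f}: {peak_shift:.3f}")
```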
§ THEORETICAL MODEL In fiber optic communication systems, the self-steepening effect significantly impacts optical solitons when pulse durations are shortened to the femtosecond regime to achieve high bandwidth. The propagation of self-steepening optical solitons in a periodic 𝒫𝒯-symmetric potential is governed by the following equation i u_z+β/2 u_x x +|u|^2 u +is ∂/∂ x(|u|^2u)+V_𝒫𝒯(x) u=0, where ∂/∂ x(|u|^2u) is the self-steepening term, and V_𝒫𝒯(x)=V(x)+iW(x) represents a periodic 𝒫𝒯-symmetric potential. The real and imaginary parts of a 𝒫𝒯-symmetric potential satisfy the conditions V(-x)=V(x) and W(-x)=-W(x). Physically, V(x) corresponds to the spatial distribution of the refractive index, and the W(x) corresponds to the balanced gain-loss relationship. We consider the following periodic 𝒫𝒯-symmetric potential V_𝒫𝒯(x)=V_0cos ^2(x)+i W_0sin (2 x). Here, V_0 and W_0 are the depths of the real and imaginary parts of the potential, respectively. Since Eq. (2) (with the potential given by Eq. (3)) cannot be solved analytically, we employ the PSR method <cit.>, a robust numerical technique for such nonlinear equations, to identify localized solutions. § PSEUDOSPECTRAL RENORMALIZATION METHOD The spectral renormalization method <cit.>, originally derived from the Petviashvili method <cit.>, is a powerful tool for numerically computing solitons in nonlinear waveguides. This method is based on transforming the governing equation into Fourier space and determining a convergence factor with a nonlinear nonlocal integral equation coupled to an algebraic equation. The spectral renormalization method is easy to implement and usually converges fairly quickly. On the other hand, if the system lacks any homogeneity, as in the case of saturable nonlinearity, the convergence factor cannot be found explicitly by the spectral renormalization method. To determine the convergence factor, a root-finding method should be employed. In <cit.>, the pseudospectral renormalization (PSR) method is derived from the spectral renormalization method. This derivation allows for the explicit calculation of the convergence factor, even in the absence of homogeneity <cit.>. We utilize a modification of the PSR method to find the localized solutions of Eq. (2). We seek a soliton solution of Eq. (<ref>) in the form u(x, z)=f(x) e^i μ z, where f(x) is a localized complex-valued function, and μ>0 is the propagation constant. Substituting this solution ansatz into Eq. (<ref>), we get the following nonlinear eigenequation for f and μ -μ f +β/2d^2 f/dx^2 +|f|^2 f + V_𝒫𝒯 f + i s d/d x( f |f|^2) =0, with the boundary condition f → 0 as |x| →+∞. Applying the Fourier transform and inverse Fourier transform to Eq. (<ref>), respectively, gives the following equation - μ f - β/2 F^ - 1{k^2 f̂} + |f|^2 f + V_ P T f + is F^ - 1{ik F{f|f|^2 }} = 0. Here ℱ and ℱ^-1 denote the Fourier and inverse Fourier transforms, respectively, and k is the Fourier variable. μ f can be written as μ f=ℱ^-1{μf̂}. Substituting Eq. (<ref>) into Eq. (<ref>), we get ℱ^-1{(μ +β/2 k^2) f̂} =|f|^2 f+ V_𝒫𝒯 f+ i s ℱ^-1{i k ℱ{f |f|^2}}. If we employ the fixed-point iteration method to find a localized solution of Eq. (<ref>), the solution either grows without bound or tends to zero under iteration. To avoid this situation, we should introduce a new field variable, f(x)=λ w(x), where λ is a real-valued constant to be determined and called the convergence factor. Substituting this new variable into Eq. 
(<ref>), the function w(x) satisfies

ℱ^-1{(μ + β/2 k^2) ŵ} = |λ|^2 |w|^2 w + V_𝒫𝒯 w + i s |λ|^2 ℱ^-1{i k ℱ{w |w|^2}}.

Multiplying Eq. (<ref>) by w^* and integrating over (-∞,∞), we obtain an algebraic equation for the convergence factor

|λ|^2 = S_1 / S_2,

where S_1 and S_2 are defined by

S_1 = ∫_-∞^∞ [ μ w w^* + β/2 w^* ℱ^-1{k^2 ŵ} - V_𝒫𝒯(x) w w^* ] dx,

S_2 = ∫_-∞^∞ w^* [ |w|^2 w + i s ℱ^-1{i k ℱ{w |w|^2}} ] dx.

The desired solution can be obtained by iterating Eq. (<ref>) in Fourier space

ŵ_n+1 = ℱ{ |λ_n|^2 |w_n|^2 w_n + V_𝒫𝒯 w_n + i s ℱ^-1{i k ℱ{|λ_n|^2 w_n |w_n|^2}} } / (μ + β k^2/2).

We start implementing Eq. (<ref>) by choosing an initial function as w_0 = e^-x^2, which yields λ_0 from Eq. (<ref>). Then, from Eqs. (<ref>) and (<ref>), we obtain ŵ_1 and λ_1, respectively. The iteration continues until two conditions are met:

e_n^(1) = ‖ f_n - f_n-1 ‖_L_∞ ≤ 10^-10,

e_n^(2) = ‖ -μ f_n - β/2 ℱ^-1{k^2 f̂_n} + |f_n|^2 f_n + V_𝒫𝒯 f_n + i s ℱ^-1{i k ℱ{f_n |f_n|^2}} ‖_L_∞ ≤ 10^-10,

where f_n = λ_n w_n. Figs. 1(a) and 1(b) show the real and imaginary parts of a self-steepening soliton solution of Eq. (2) (computed by the PSR method), respectively, while Fig. 1(c) displays the error diagrams (e_n^(1) and e_n^(2)) of the PSR method. We see that errors defined by e_n^(1) and e_n^(2) drop below 10^-10 after 37 PSR iterations. § EXISTENCE OF SELF-STEEPENING OPTICAL SOLITONS We numerically investigate the existence region of self-steepening solitons of Eq. (2) according to the equation parameters. Firstly, we study the effect of GVD and periodic 𝒫𝒯-symmetric potential (3) on the existence of self-steepening solitons. Fig. 2(a) shows the existence region (shown in green) of soliton solutions of Eq. (2) (with V_𝒫𝒯=0) according to the self-steepening coefficient (s) and GVD coefficient (β) for four different values of β. As can be seen from the figure, the GVD dampens the self-steepening effect. Namely, as the β coefficient increases, soliton solutions of Eq. (2) can be obtained for larger values of s. To scrutinize the effect of the periodic 𝒫𝒯-symmetric potential on the existence of self-steepening solitons, the relationship between s and β is reconsidered by taking the coefficients of the real and imaginary parts of the potential as 0.7 and 0.1, respectively, in Fig. 2(b). Our analysis of the figure demonstrates that the periodic 𝒫𝒯-symmetric potential significantly extends the existence region of self-steepening solitons. This implies that self-steepening solitons can be found for substantially larger values of the self-steepening coefficient compared to the case without the potential. Secondly, we investigate how the relationship between the coefficient of the real part of the periodic 𝒫𝒯-symmetric potential and the propagation constant (μ) affects the existence of self-steepening solitons. In Fig. <ref>, Eq. (2) (with β=1, s=W_0=0.1) has a soliton solution for the values of V_0 and μ corresponding to the green region, while it has no soliton solution for the values corresponding to the red region. This figure reveals that increasing the value of μ enables the existence of self-steepening solitons for a wider range of V_0 values. To ensure consistent analysis of the existence and stability of self-steepening solitons, we fix the propagation constant (μ) at 1 throughout the paper. Consequently, our investigation focuses on the V_0 coefficient within the range of 0 to 1.
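The PSR iteration described above translates almost directly into a pseudospectral code. The following Python sketch is our own minimal illustration (not the authors' implementation): the grid size, domain length, iteration cap, and the example depths V_0 = 0.7 and W_0 = 0.1 are arbitrary choices, and no attempt is made to reproduce the paper's figures exactly.

```python
import numpy as np

# Minimal pseudospectral renormalization (PSR) sketch for the stationary soliton equation.
# All numerical parameters below are illustrative, not tuned to the paper's figures.
N, L = 512, 16 * np.pi                        # Fourier modes and periodic domain length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # spectral wavenumbers
beta, s, mu = 1.0, 0.1, 1.0
V0, W0 = 0.7, 0.1
V = V0 * np.cos(x)**2 + 1j * W0 * np.sin(2 * x)   # periodic PT-symmetric potential

F, Fi = np.fft.fft, np.fft.ifft
w = np.exp(-x**2).astype(complex)             # initial guess w_0 = exp(-x^2)
f_old = w.copy()

for n in range(500):
    # Convergence factor |lambda_n|^2 = S_1 / S_2
    S1 = np.trapz(mu * w * w.conj() + 0.5 * beta * w.conj() * Fi(k**2 * F(w)) - V * w * w.conj(), x)
    S2 = np.trapz(w.conj() * (np.abs(w)**2 * w + 1j * s * Fi(1j * k * F(w * np.abs(w)**2))), x)
    lam2 = S1 / S2
    # One PSR step in Fourier space
    rhs = lam2 * np.abs(w)**2 * w + V * w + 1j * s * Fi(1j * k * F(lam2 * w * np.abs(w)**2))
    w = Fi(F(rhs) / (mu + 0.5 * beta * k**2))
    f = np.sqrt(lam2 + 0j) * w                # f_n = lambda_n * w_n
    # Residual of the stationary equation and successive-iterate error
    res = (-mu * f + 0.5 * beta * Fi(-k**2 * F(f)) + np.abs(f)**2 * f + V * f
           + 1j * s * Fi(1j * k * F(f * np.abs(f)**2)))
    e1, e2 = np.max(np.abs(f - f_old)), np.max(np.abs(res))
    f_old = f
    if max(e1, e2) < 1e-10:
        break

print(f"stopped after {n + 1} iterations, e1={e1:.2e}, e2={e2:.2e}, peak |f|={np.abs(f).max():.3f}")
```

The soliton profile returned by such a routine is what is fed into the stability and propagation experiments discussed in the following sections.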
This choice is motivated by the observation that values of V_0 exceeding 1 necessitate a corresponding increase in μ to maintain soliton solutions. § LINEAR STABILITY OF SELF-STEEPENING OPTICAL SOLITONS In this section, we investigate the linear stability of self-steepening solitons in Eq. (2) by analyzing their linear stability spectra. To obtain a linear stability eigenvalue problem, we consider a perturbed solution of the form ũ(x, z)=[f(x)+ ϵ (g(x) e^λ z+h^*(x) e^λ^* z ) ] e^i μ z, where g(x) and h(x) are perturbation eigenfunctions, λ is the eigenvalue, and the superscript "^*" represents complex conjugation. Substituting the perturbed solution into Eq. (<ref>) and linearizing, we obtain the following linear stability eigenvalue problem for the self-steepening soliton u(x, z)=f(x) e^i μ z:

i [ 1/2 d^2/dx^2 + G_0 d/dx + G_1    G_2 d/dx + G_3 ;  -(G_2^* d/dx + G_3^*)    -(1/2 d^2/dx^2 + G_0^* d/dx + G_1^*) ] [ g ; h ] = λ [ g ; h ],

where

G_0 = 2 i s |f|^2,  G_1 = -μ + V_𝒫𝒯 + 2|f|^2 + 4 i s |f| |f|_x,
G_2 = i s f^2,  G_3 = f^2 + 2 i s f f_x.

We compute the whole stability spectrum of the soliton by solving Eq. (<ref>) with the aid of the Fourier collocation method <cit.> (a detailed explanation of the Fourier collocation method can be found in <ref>). The linear stability spectrum (λ spectrum) gives essential information about the behavior of the soliton under small perturbations. If the real part of any eigenvalue in the linear spectrum is positive, the soliton is linearly unstable. Generally, as the positive real part of an eigenvalue increases, the perturbation growth rate of the soliton also increases. On the other hand, if the eigenvalues in the spectrum are purely imaginary, the perturbations only oscillate; in such a scenario, the soliton can be regarded as linearly stable. Taking s=0.1 and β=μ=1, the spectra of self-steepening solitons for different depths of the real and imaginary parts of the periodic 𝒫𝒯-symmetric potential (<ref>) are displayed in Fig. <ref>. Fig. <ref>(a) shows the spectrum in the potential-free state (V_0=W_0=0). Since the spectrum lacks any eigenvalues with a positive real part, it can be inferred that the self-steepening soliton is linearly stable. Additionally, we have observed that the spectrum contains no discrete eigenvalues other than a zero eigenvalue of multiplicity four. The absence of nonzero discrete eigenvalues in the linearization spectrum of solitons is a common characteristic of integrable equations <cit.>. Next, it is shown that when the real part of the potential is added to the system (V_0=0.7, W_0=0), the resulting soliton is linearly stable, too; see Fig. <ref>(b). However, in this instance, it is worth noting that the spectrum includes certain internal modes, a feature indicative of nonintegrable equations. Subsequently, Figs. <ref>(c) and <ref>(d) illustrate the spectra of self-steepening solitons in the periodic 𝒫𝒯-symmetric potential. In these figures, the system has gain (V_0=0.7, W_0=0.3) and loss profiles (V_0=0.7, W_0=-0.3), respectively. In both cases, self-steepening solitons are weakly unstable, with maximal linear instability growth rates that are very small (∼10^-3 and ∼3×10^-4, respectively). When the coefficient of self-steepening is increased to 0.3 in the absence of the potential, a pair of eigenvalues bifurcates out from the origin along the real axis, and the spectrum contains a positive real eigenvalue of multiplicity two; see Fig. <ref>(a).
However, since this positive real eigenvalue is very small (∼10^-6), the soliton is linearly stable. Furthermore, the linear spectra of self-steepening solitons (with s=0.3) obtained with the addition of the periodic 𝒫𝒯-symmetric potential are similar to those observed for the case of s = 0.1. (Figs. <ref>(b)-(d)). § NONLINEAR STABILITY OF SELF-STEEPENING OPTICAL SOLITONS The nonlinear stability of self-steepening solitons is investigated through evolution simulations of Eqs. (1) and (<ref>) for long distances. To carry out these numerical investigations, we use the split-step method <cit.>. We show how the self-steepening term and the varying depths of the real and imaginary parts of the periodic 𝒫𝒯-symmetric potential (<ref>) affect the nonlinear stability of solitons. During the nonlinear stability analysis, we investigate whether the shape, position, and peak amplitude of self-steepening solitons change as they propagate in the z direction. If the shape, position, and peak amplitude of the solitons do not change during the simulation, or if the changes are extremely small, they are considered to be nonlinearly stable. §.§ Self-steepening optical solitons without an external potential It is well known that the NLS equation (MNLS equation with s=0) has stable fundamental soliton solutions. However, including the self-steepening term in the NLS equation has some consequences on the stability of solitons. The MNLS equation admits solitons u(z,x)=r(z) sech r(z)[x-v(z)] e^-i δ(z)[x-v(z)]+i σ(z), where v=-∫_0^z δ d s+τ_0, σ=1/2∫_0^z(r^2+δ^2) d s+σ_0 . Here, r denotes the soliton's amplitude, v is the velocity, δ represents a frequency parameter, σ_0 is the soliton's initial phase, and τ_0 indicates the amount of the soliton's position shift. In <cit.>, the effect of self-steepening on an NLS soliton was analyzed by the perturbation theory and it's found that d r/d z=d δ/d z=d σ_0/d z=0, d τ_0/d z=s r^2. According to these results, the main effect of self-steepening on the soliton is a position shift. This position shift increases linearly with distance in the amount of τ_0=sr^2z. To compare this phenomenon with the results of the split-step method, we numerically examine the evolution of an NLS soliton under the self-steepening effect. Fig. <ref>(a) depicts the numerical evolution of the sech(x) soliton in Eq. (1) (with s=0.1). Fig. <ref>(b) then shows the amount of position shift (vs. distance) computed in two different ways, by perturbation analysis and the split-step method. The amount of position shift calculated by both methods shows good agreement, with very similar results. In the rest of this paper, we examine the evolution simulations of self-steepening solitons obtained from Eqs. (1) and (2), not the effects of self-steepening on NLS solitons. As part of our investigation into the nonlinear stability of self-steepening solitons, we obtain the soliton solution of Eq. (1) for s=0.1, and depict the nonlinear evolution of this soliton in Fig. <ref>. When the soliton moves from z=0 to z=120, the self-steepening effect shifts the position of the peak of the soliton from x=0 to x=-29.39 and increases the peak amplitude of the soliton from 1.41 to 1.66. Since the peak amplitude and position of the self-steepening soliton change it is considered to be nonlinearly unstable. To gain a deeper understanding of the dynamic properties of self-steepening solitons, we find the soliton solution of Eq. (1) for s = 0.3, and depict the nonlinear evolution of the soliton in Fig. <ref>. 
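The kind of evolution experiment described in this subsection can be reproduced with a few lines of pseudospectral code. The sketch below is our own illustration (not the authors' solver): it propagates the sech(x) pulse in Eq. (1) with s = 0.1 using Strang splitting, where the dispersive half-steps are applied exactly in Fourier space and the Kerr and self-steepening terms are advanced with an RK4 sub-step. The grid, step size, and propagation distance are arbitrary choices; the printed peak position can be compared with the perturbative estimate τ_0 = s r^2 z quoted above.

```python
import numpy as np

# Split-step propagation of the MNLS equation i u_z + (beta/2) u_xx + |u|^2 u + i s (|u|^2 u)_x = 0.
N, L = 1024, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
beta, s = 1.0, 0.1
dz, z_end = 2e-3, 20.0

u = 1 / np.cosh(x) + 0j                          # initial sech pulse (NLS soliton shape, r = 1)
lin_half = np.exp(-1j * beta * k**2 * dz / 4)    # exact half-step of the dispersive part

def nonlinear_rhs(u):
    w = np.abs(u)**2 * u
    return 1j * w - s * np.fft.ifft(1j * k * np.fft.fft(w))   # Kerr + self-steepening flow

for _ in range(int(z_end / dz)):
    u = np.fft.ifft(lin_half * np.fft.fft(u))    # dispersive half-step
    k1 = nonlinear_rhs(u)                        # RK4 sub-step for the nonlinear flow
    k2 = nonlinear_rhs(u + dz / 2 * k1)
    k3 = nonlinear_rhs(u + dz / 2 * k2)
    k4 = nonlinear_rhs(u + dz * k3)
    u = u + dz / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    u = np.fft.ifft(lin_half * np.fft.fft(u))    # dispersive half-step

peak_pos = x[np.argmax(np.abs(u))]
print(f"peak amplitude {np.abs(u).max():.3f}, peak position {peak_pos:.2f}")
# Perturbation theory predicts a position shift of magnitude about s*r^2*z = 0.1*1*20 = 2.
```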
Although the propagation distance is reduced to 20, the increase in the soliton's peak amplitude is significantly greater than in the case of s=0.1. In addition, there is a remarkable shift in the position of the soliton. Fig. <ref> shows the amount of amplitude increase and position shift of the peaks of the self-steepening solitons obtained for s=0.1 and s=0.3 during their propagation up to z=20. It is deduced from the figure that increasing the self-steepening coefficient increases the rate of change of the amplitude increase and the position shift. §.§ Self-steepening optical solitons in the real periodic potential In this section, we aim to suppress the instability of self-steepening solitons by including the real component of the periodic 𝒫𝒯-symmetric potential (3) in the MNLS equation. The effectiveness of this approach is investigated in Figs. <ref> and <ref>, where we analyze the nonlinear stability of self-steepening solitons in this real periodic potential. In Fig. <ref>, the self-steepening coefficient is taken as s=0.1, and soliton solutions are found for various values of V_0 between 0 and 1. Then, these solitons are advanced up to z=50 by the split-step method, and the amount of change in peak amplitude and position of these solitons between z = 0 and z = 50 is depicted. It is seen that as V_0 increases, the amount of amplitude increase of the solitons decreases. Furthermore, the change in the position of the solitons has completely disappeared for almost all V_0>0. In Fig. <ref>, the value of V_0 is fixed to 0.7, and the amounts of change in peak amplitude and position of self-steepening solitons are plotted as a function of s. While no position shift is observed even at a large value of s, such as s=0.3, the rate of increase in the amplitude of the solitons increases with increasing s values. [Figure: Nonlinear evolution of the soliton solution of Eq. (<ref>) (β=1, γ_1=1, s=0.3, V_0=1); (a) three-dimensional view, (b) view from the top, (c) optical pulses at z=0 and z=50, (d) maximum amplitude as a function of the propagation distance z.] In Figure <ref>, we illustrate the nonlinear evolution of the self-steepening soliton for s=0.3 in the real periodic potential with the depth of the potential V_0 = 0.7. This figure shows that the position shift is suppressed by including the real periodic potential in the MNLS equation. On the other hand, despite a substantial reduction in the amplitude increase compared to the potential-free case, the soliton still exhibits a notable amplitude change. §.§ Self-steepening optical solitons in the periodic 𝒫𝒯-symmetric potential In the previous section, we have significantly suppressed the instability of self-steepening solitons by including the real part of the periodic 𝒫𝒯-symmetric potential (3) in the MNLS equation. Adding the real periodic potential to the MNLS equation eliminates the position shift and significantly reduces the amplitude increase. In this section, we investigate the use of the periodic 𝒫𝒯-symmetric potential to further reduce the amplitude increase of self-steepening solitons during their propagation. Fig. <ref> shows the nonlinear evolution of the self-steepening soliton of Eq. (2) with s=0.1, V_0=0.7 (depth of the real part of the potential), and W_0=0.29135 (depth of the imaginary part of the potential).
The periodic 𝒫𝒯-symmetric potential effectively eliminates the position shift phenomenon and reduces the amount of change in peak amplitude of the soliton between z = 0 and z = 50 to the order of 10^-7. Nevertheless, when the value of s is increased to 0.3, stability cannot be achieved for positive values of W_0. As an example, Fig. <ref> illustrates the soliton propagation for two different positive values of W_0. The increase in peak amplitude of optical pulses cannot be prevented when s=0.3 and W_0 ≥ 0. In contrast, Fig. <ref> shows the soliton propagation for two different negative values of W_0 when s=0.3. As shown in the figure, a stable soliton can be achieved when the real and imaginary parts of the potential have depths of V_0=0.7 and W_0=-0.209, respectively. The two solitons in Figs. <ref>(c) and <ref>(b) are perturbed by 1% random-noise perturbations, and their nonlinear evolutions in Eq. (2) are displayed in Figs. <ref>(a) and <ref>(b), respectively. Figure <ref> shows that self-steepening solitons in Figs. <ref>(c) and <ref>(b) are stable against perturbations. Then, it can be concluded that stable self-steepening solitons can exist in the periodic 𝒫𝒯-symmetric potential (<ref>) even when the self-steepening coefficient s has a large value of 0.3. § CONCLUSION In this study, we have conducted a numerical study on the existence, linear stability, and propagation dynamics (nonlinear stability) of self-steepening optical solitons in a periodic 𝒫𝒯-symmetric potential (3). Firstly, we investigated the existence region of self-steepening solitons of Eq. (2) according to the equation parameters. Fig. 2(b) shows that the periodic 𝒫𝒯-symmetric potential significantly extends the existence region of self-steepening solitons. Namely, self-steepening soliton solutions can be obtained for substantially larger values of the self-steepening coefficient when the governing equation has the periodic 𝒫𝒯-symmetric potential. Secondly, we investigated the linear stability of self-steepening solitons of Eq (2) by analyzing their linear stability spectra. Figs. 4 and 5 show that self-steepening solitons of Eq. (2) without the potential are linearly stable even when the self-steepening coefficient (s) has a large value of 0.3. However, when the governing equation contains the periodic 𝒫𝒯-symmetric potential, self-steepening solitons of Eq. (2) are considered weakly unstable. Because linear stability spectra of these solitons include some small positive real eigenvalues that are less than 10^-3. Finally, we investigated the nonlinear evolution of self-steepening solitons. Figs. 7 and 8 show that self-steepening solitons of the MNLS equation (1) exhibit a position shift and an increase in peak amplitude during their evolution. By including the periodic 𝒫𝒯-symmetric potential (3) in the MNLS equation we achieved to suppress the nonlinear instability of self-steepening solitons. It has been revealed that the real part of the periodic 𝒫𝒯-symmetric potential eliminates the position shift and significantly reduces the amplitude increase (see Figs. 10-12). On the other hand, the imaginary part of the potential significantly contributes to the stability of self-steepening solitons by further reducing the amplitude increase (see Figs. 13 and 15). In conclusion, we have demonstrated that the periodic 𝒫𝒯-symmetric potential can stabilize the propagation of self-steepening optical solitons. 
This finding opens avenues for exploring how different 𝒫𝒯-symmetric potential configurations can manipulate soliton dynamics. Our ongoing research investigates these broader effects, including the confirmed stabilizing properties of certain configurations such as Wadati <cit.> and Scarf-II <cit.> potentials, as well as richer phenomena like symmetry breaking. Unveiling the diverse impacts of 𝒫𝒯-symmetric potentials on solitons experiencing higher-order effects, such as self-steepening, holds immense significance for future applications in controlling solitons' behavior across various fields. § FOURIER COLLOCATION METHOD Firstly, the infinite x-axis is truncated into a finite interval [-L/2, L/2], where L is the length of the interval. Then the eigenfunctions [g, h]^T and the functions G_0, G_1, G_2, G_3, G_0^* (G_4), G_1^* (G_5), G_2^* (G_6), G_3^* (G_7) in Eq. (15) are expanded into Fourier series:

g(x) = ∑_n a_n e^i n k_0 x,  h(x) = ∑_n b_n e^i n k_0 x,  G_j = ∑_n c_n^(j) e^i n k_0 x,  j = 0, 1, …, 7,

where k_0 = 2π/L. Substituting these expansions into the eigenvalue problem (15) and equating the coefficients of the same Fourier modes, the following eigenvalue system for the coefficients {a_j, b_j} will be obtained:

-1/2 (k_0 j)^2 a_j + ∑_n c_n^(0) i(j-n) k_0 a_j-n + ∑_n c_n^(1) a_j-n + ∑_n c_n^(2) i(j-n) k_0 b_j-n + ∑_n c_n^(3) b_j-n = -i λ a_j,

1/2 (k_0 j)^2 b_j - ∑_n c_n^(4) i(j-n) k_0 b_j-n - ∑_n c_n^(5) b_j-n - ∑_n c_n^(6) i(j-n) k_0 a_j-n - ∑_n c_n^(7) a_j-n = -i λ b_j,

where -∞ < j < ∞. By truncating the number of Fourier modes to -N ≤ j ≤ N, the infinite-dimensional eigenvalue problem becomes the following finite-dimensional one:

i [ 1/2 D_2 + C_0 D_1 + C_1    C_2 D_1 + C_3 ;  -(C_6 D_1 + C_7)    -(1/2 D_2 + C_4 D_1 + C_5) ] [ A ; B ] = λ [ A ; B ].

Here D_1 = i k_0 diag(-N, -N+1, …, N-1, N), D_2 = (i k_0)^2 diag(-N, -N+1, …, N-1, N)^2, and C_j, for j = 0, 1, …, 7, is the (2N+1) × (2N+1) Toeplitz matrix built from the Fourier coefficients of G_j: its entries are constant along each diagonal, its first row is (c_0^(j), c_-1^(j), …, c_-N^(j), 0, …, 0), and its first column is (c_0^(j), c_1^(j), …, c_N^(j), 0, …, 0)^T. Finally, A = (a_-N, a_-N+1, …, a_N)^T and B = (b_-N, b_-N+1, …, b_N)^T. To solve the matrix eigenvalue problem (A.4), either the QR algorithm or the Arnoldi algorithm can be used.
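As a complement to the Fourier-series construction above, the following sketch (our own illustration, not the authors' code) assembles the same linear-stability operator as Eq. (15) directly in physical space, using FFT-based spectral differentiation matrices in place of the Toeplitz matrices C_j, and passes it to a dense eigensolver; the two constructions agree up to truncation error. The placeholder profile f(x) = sech(x) with μ = 1/2, s = 0, and V_𝒫𝒯 = 0 (the integrable NLS limit) is used only so that the script runs stand-alone; in practice, the PSR-computed soliton and the corresponding parameters would be inserted.

```python
import numpy as np

# Assemble and solve the linear-stability eigenproblem of Eq. (15) on a periodic grid (beta = 1).
N, L = 256, 16 * np.pi
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# Placeholder soliton and parameters (NLS limit). Replace with the PSR output to reproduce
# the spectra discussed in Section 5.
f = 1 / np.cosh(x) + 0j
mu, s = 0.5, 0.0
V = np.zeros_like(x, dtype=complex)

I = np.eye(N)
D1 = np.fft.ifft(1j * k[:, None] * np.fft.fft(I, axis=0), axis=0)       # spectral d/dx
D2 = np.fft.ifft((-k**2)[:, None] * np.fft.fft(I, axis=0), axis=0)      # spectral d^2/dx^2

absf = np.abs(f)
f_x, absf_x = D1 @ f, D1 @ absf
G0 = 2j * s * absf**2
G1 = -mu + V + 2 * absf**2 + 4j * s * absf * absf_x
G2 = 1j * s * f**2
G3 = f**2 + 2j * s * f * f_x

d = np.diag   # pointwise multiplication by a function = diagonal matrix
L11 = 0.5 * D2 + d(G0) @ D1 + d(G1)
L12 = d(G2) @ D1 + d(G3)
L21 = -(d(G2.conj()) @ D1 + d(G3.conj()))
L22 = -(0.5 * D2 + d(G0.conj()) @ D1 + d(G1.conj()))
M = 1j * np.block([[L11, L12], [L21, L22]])

eigs = np.linalg.eigvals(M)
print(f"max Re(lambda) = {eigs.real.max():.2e}")   # close to zero for a linearly stable profile
```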
http://arxiv.org/abs/2407.13510v1
20240718134255
Asymptotically Optimal Closed-Form Phase Configuration of $1$-bit RISs via Sign Alignment
[ "Kyriakos Stylianopoulos", "Panagiotis Gavriilidis", "George C. Alexandropoulos" ]
eess.SP
[ "eess.SP" ]
Asymptotically Optimal Closed-Form Phase Configuration of 1-bit RISs via Sign Alignment Kyriakos Stylianopoulos^1, Panagiotis Gavriilidis^1, and George C. Alexandropoulos^1,2 ^1 Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Greece ^2 Department of Electrical and Computer Engineering, University of Illinois Chicago, IL, USA {kstylianop, pangavr, alexandg}@di.uoa.gr This work has been supported by the SNS JU project TERRAMETA under the EU's Horizon Europe research and innovation programme under Grant Agreement No 101097101, including top-up funding by UKRI under the UK government's Horizon Europe funding guarantee. July 22, 2024 ====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT While Reconfigurable Intelligent Surfaces (RISs) constitute one of the most prominent enablers for the upcoming sixth Generation (6G) of wireless networks, the design of efficient RIS phase profiles remains a notorious challenge when large numbers of phase-quantized unit cells are involved, typically of a single bit, as implemented by a vast majority of existing metasurface prototypes. In this paper, we focus on the RIS phase configuration problem for the exemplary case of the Signal-to-Noise Ratio (SNR) maximization for an RIS-enabled single-input single-output system where the metasurface tunable elements admit a phase difference of π radians. We present a novel closed-form configuration which serves as a lower bound guaranteeing at least half the SNR of the ideal continuous (upper bound) SNR gain, and whose mean performance is shown to be asymptotically optimal. The proposed sign alignment configuration can be further used as initialization to standard discrete optimization algorithms. A discussion on the reduced complexity hardware benefits via the presented configuration is also included. Our numerical results demonstrate the efficacy of the proposed RIS sign alignment scheme over iterative approaches as well as the commonplace continuous phase quantization treatment. Reconfigurable intelligent surfaces, discrete optimization, phase configuration, SNR maximization. § INTRODUCTION The technology of Reconfigurable Intelligent Surfaces (RISs) is posed as one of the key enablers for next sixth Generation (6G) of wireless networks due to its dynamic connectivity enhancements with minimal deployment and operation costs <cit.>. In fact, theoretical results have shown that an RIS of N unit cells can offer an N^2-fold increase in the end-to-end channel magnitude by optimally configuring the phase shifts of its comprising elements <cit.>. Such benefits are further amplified by the metasurfaces' near-passive energy consumption and minute manufacturing cost per unit element. In an effort to reduce hardware complexity, however, a vast majority of developed prototypes has been designed with elements admitting discrete phase shift values, often with a discretization of a single bit. 
Not only are the optimal gains of finite discrete RIS configurations subpar to the theoretical N^2 gain (e.g., <cit.>), but also this design principle casts the optimization problem of the surface's phase shifts into the area of discrete optimization <cit.>. In fact, most problem formulations associated with RIS phase configuration are NP-hard and the approximate optimization procedures employed often lead to degraded performance, while requiring impractical computational costs <cit.>, considering the fact that the number of elements of RISs is expected to be in the order of thousands, notwithstanding their lack of theoretical guarantees. Motivated by the above, this work specifically tackles the problem of maximization of the Signal-to-Noise Ratio (SNR) for an RIS-enabled Single Input Single Output (SISO) system, when each element of the RIS admits 1-bit quantized phase shifts with a π phase difference between them. By re-expressing the SNR maximization problem in this particular case, we derive a closed-form configuration vector for the RIS elements that is guaranteed to always achieve more than half of the achievable gain, while its expected gain is proved to be at least N^2/4 when Rayleigh channel fading or Line of Sight (LoS) conditions are encountered. We furthermore proceed to use this configuration as initial point to a standard discrete optimization approach offering both performance improvements and faster convergence. Our numerical evaluation demonstrates that the proposed closed-form approach outperforms the standard baseline of discretizing the optimal continuous RIS configuration, as commonly used. Notation: Matrices (vectors) are expressed in bold uppercase (bold lowercase) typeface. sign(·) returns the sign of its argument in {-1, +1}. Element-wise multiplication and expectation are denoted by ⊙ and 𝔼[·], respectively. The real (complex) normal distribution is expressed by 𝒩 (𝒞𝒩) and the central Chi-squared distribution with k degrees of freedom is denoted as χ^2_k. ℜ𝔢(·) and ℑ𝔪(·) return the real and imaginary parts of a complex quantity, ∠ denotes the phase of a complex number in Euler form, while denotes the imaginary unit. The transpose, conjugate, and Hermitian transpose of 𝐀, are denoted as 𝐀^ T, 𝐀^∗, and 𝐀^ H, respectively. Finally, lowercase subscripts denote elements of vectors or matrices. § SYSTEM MODEL AND DESIGN OBJECTIVE Consider a SISO system in the presence of an RIS with N unit cells, where the channel responses for the Transmitter (TX) → RIS and the RIS → Receiver (RX) links are denoted by h∈ℂ^N × 1 and g^ H∈ℂ^1 × N, respectively. For simplification of presentation, and to present theoretical results in line with the relevant literature, assume there is no direct TX → RX link. Let the diagonal matrix Φ∈ℂ^N × N include the RIS phase profile, so that each n-th unit cell (n=1,2,…,N) with configuration θ_n is modeled as Φ_n,n = exp(θ_n). By further denoting as ρ the value of the transmit SNR and as PL the multiplicative path loss of the two links, then the SISO SNR optimization objective can be formulated as follows <cit.>: 𝒪𝒫_1:  θmax{ρ PL|g^ HΦh|^2} = θmax{ρ PL|(h⊙g^∗)^ Tϕ|^2} s.t.  θ_n ∈ℱ, n=1,2,…,N, where ϕ is the N-element vectorized form of the main diagonal of Φ, and h and g are normalized such that 𝔼[h^ Hh] = 𝔼[g^ Hg] = N. In addition, ℱ represents the set of available quantized phase shift values for each RIS element. 
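As a quick sanity check of this vectorized re-writing (our own illustration, with arbitrary channel draws and the ρ·PL factor dropped), the following snippet confirms numerically that the matrix and vector forms of the received SNR coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # TX -> RIS channel
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # RIS -> RX channel
theta = rng.uniform(0, 2 * np.pi, size=N)                                  # arbitrary phase profile
phi = np.exp(1j * theta)

snr_matrix_form = np.abs(g.conj() @ np.diag(phi) @ h)**2    # |g^H Phi h|^2
snr_vector_form = np.abs((h * g.conj()) @ phi)**2           # |(h ⊙ g*)^T phi|^2
print(np.isclose(snr_matrix_form, snr_vector_form))         # True: the two forms coincide
```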
In the remainder of the paper, we will be assuming the availability of Channel State Information (CSI) and, for ease of notation, that ℱ = {0, π}, so that the phase shift for each RIS element aligns with the axis of the real numbers of the complex plane. To this end, ϕ^0≜exp( 0) = 1 and ϕ^π≜exp(π) = -1, therefore, ϕ∈{-1, +1}^N. It is noted that the derivations hold for any two phase shift values with a difference of π radians, through a rotation of axes. Based on the above and by denoting the concatenated channel vector as c≜ (h⊙g^∗) ∈ℂ^N × 1, 𝒪𝒫_1 can be re-formulated as follows: 𝒪𝒫_2:  ϕmax γ(ϕ) ≜ |c^ Tϕ |^2  =( ∑_i=1^Nϕ_iℜ𝔢(c_i) )^2_≜ A(ϕ)+ ( ∑_i=1^Nϕ_i ℑ𝔪(c_i) )^2_≜ B(ϕ) s.t.  ϕ∈{-1, +1}^N, where ρ and PL have been dropped without affecting the equivalence between the 𝒪𝒫_1 and 𝒪𝒫_2 problems. § CLOSED-FORM RIS PHASE CONFIGURATION We commence with the following remark elaborating on the optimization of the two terms appearing in 𝒪𝒫_2's objective. The maximum values of functions A(ϕ) and B(ϕ) in 𝒪𝒫_2 for the feasible values of ϕ appearing in its constraint are given respectively by A_max≜ (∑_i=1^N |ℜ𝔢(c_i)|)^2 and B_max≜ (∑_i=1^N |ℑ𝔪(c_i)|)^2, which would be respectively achieved via the RIS phase configurations ϕ^Re≜ sign(ℜ𝔢(c)) and ϕ^Im≜ sign(ℑ𝔪(c)). In Algorithm <ref>, we present a simple approach that selects between the two configurations ϕ^Re and ϕ^Im the one that maximizes the dual-summand objective of 𝒪𝒫_2, i.e., the received SNR. The computational time of this approach is Θ(N), which is optimal. In the sequel, we provide a theoretical analysis of this Sign Alignment (SA) scheme for RIS phase configuration along with guarantees on its performance. §.§ Theoretical Analysis When the RIS phase configuration is set to ϕ^SA via Algorithm <ref>, the achievable instantaneous SNR γ(ϕ^SA) is lower bounded by the quantity: f_ LB(ϕ^SA) ≜max{( ∑_i=1^N |ℜ𝔢(c_i)| )^2, ( ∑_i=1^N |ℑ𝔪(c_i)| )^2}. It follows directly from the selection criterion in Step 3 of Algorithm <ref> that either A(ϕ) or B(ϕ) are maximized, and the maximum values of A(ϕ) and B(ϕ) are obtained using the results in Remark <ref>. Let ϕ^⋆ be the RIS phase configuration obtained by optimally solving 𝒪𝒫_2, then γ(ϕ^SA) ≥ 0.5γ(ϕ^⋆). It follows from 𝒪𝒫_2's dual-summand objective, Theorem <ref>, and the fact that A(ϕ),B(ϕ)>0. When both the links TX → RIS and RIS → RX experience Rayleigh fading conditions, i.e., h_i,g_ii.i.d.∼𝒞𝒩(0, 1) ∀n=1,2,…,N, the expectation of the achievable SNR, 𝔼[γ(ϕ^SA)], is lower bounded by the quantity: f̅_ LB(ϕ^SA) ≜ 0.25N^2. Without loss of generality, let us assume that ( ∑_i=1^N |ℜ𝔢(c_i)| )^2 > ( ∑_i=1^N |ℑ𝔪(c_i)| )^2. Then, following Step 3 of Algorithm <ref> results in ϕ^SA = ϕ^Re, yielding: 𝔼[ γ(ϕ^SA) ] = 𝔼[ A(ϕ^Re) + B(ϕ^Re) ] ≥𝔼[ A(ϕ^Re) ] = 𝔼[ A_max] = 𝔼[ ( ∑_i=1^N |ℜ𝔢(c_i)| )^2 ] ≥𝔼^2 [ ∑_i=1^N |ℜ𝔢(c_i)| ] = ∑_i=1^N𝔼^2 [ |ℜ𝔢(c_i)| ] . We next express h_i = x_i + y_i and g_i^∗ = z_i + w_i with x_i, y_i, z_i,w_ii.i.d.∼𝒩(0,0.5), which leads to the formulation: 𝔼[ |ℜ𝔢(c_i)| ] = 𝔼[ |ℜ𝔢(h_i g^*_i)| ]= 𝔼[ |x_i z_i - y_i w_i| ]. It is well known that the product of the two normally distributed Random Variables (RVs) x_i and z_i follows the distribution of the difference of two chi-squared RVs, since x_iz_i=0.25((x_i+z_i)^2 - (x_i-z_i)^2). Furthermore, since both terms inside the parentheses have the same variance, it holds that they can be thought of as i.i.d. χ^2_1 RVs, i.e., their Pearson correlation coefficient in <cit.> is zero. 
By introducing the terms Q_i ≜ (x_i + z_i)^2 + (y_i - w_i)^2 and R_i ≜ (x_i - z_i)^2 + (y_i + w_i)^2, (<ref>) can be re-expressed as: 𝔼[ |x_i z_i - y_i w_i| ] = 0.25 𝔼[ |Q_i - R_i| ], where Q_i and R_i are i.i.d. χ^2_2 RVs. Then, capitalizing on the results in <cit.>, the probability density function of the RV S_i ≜ Q_i-R_i is obtained as follows: f_S_i(x) ≜ 0.25 exp( -|x| /2). Finally, returning to (<ref>) and exploiting (<ref>) and (<ref>), we can derive the lower bound for expected achievable SNR as: 𝔼[ |x_i z_i - y_i w_i| ] = 0.25 ∫_-∞^∞ |x|f_S(x) dx = 0.5. The proof is concluded by substituting (<ref>) into (<ref>). Assume pure LoS channel conditions for all wireless links and let h_i = exp(ϑ^h_i) and g_i^* = exp(-ϑ^g_i). The achievable deterministic SNR value γ(ϕ^SA) is again lower bounded by f̅_ LB(ϕ^SA), similar to Theorem <ref>. Let ϑ_i ≜∠c_i = ϑ^h_i - ϑ^g_i and assume, without loss of generality, that ∑_i=1^N |ℜ𝔢(c_i)| > ∑_i=1^N |ℑ𝔪(c_i)|, which is equivalently expressed as ∑_i=1^N |cos(ϑ_i)| > ∑_i=1^N |sin(ϑ_i)|. The following derivations hold: cos^2(ϑ_i) + sin^2(ϑ_i)=1 ⇒∑_i=1^Ncos^2(ϑ_i) + ∑_i=1^Nsin^2(ϑ_i) = N ⇒∑_i=1^N |cos(ϑ_i)| + ∑_i=1^N |sin(ϑ_i)| ≥ N ⇒∑_i=1^N |cos(ϑ_i)| ≥N/2 ⇒(∑_i=1^N |cos(ϑ_i)| )^2≥ 0.25N^2 (<ref>)⇒γ(ϕ^SA) ≥ 0.25N^2. The lower bound f̅_ LB(ϕ^SA) in Theorems <ref> and <ref> has an optimal asymptotic growth of Θ(N^2). This is due to the fact that the continuous case of 𝒪𝒫_1, i.e., when θ_n∈[0,2π] ∀n, can be optimally solved via Phase Alignment (PA) <cit.>, yielding the upper bound SNR value N^2 for normalized ρ PL. § ITERATIVE DISCRETE OPTIMIZATION SCHEMES As previously mentioned, for the perfect CSI availability case, the continuous version of 𝒪𝒫_1 can be optimally solved via PA <cit.>, i.e., by setting ϕ^ PA_n to -(∠h_n - ∠g_n) ∀n. The most common approach in the literature for treating the discretized case is to optimally solve the continuous problem via PA, and then quantize each element so that ϕ^QPA_n ≜ argmin_ϕ_n ∈ℱϕ_n - ϕ^ PA_n <cit.>; this will be referred to as Quantized Phase Alignment (QPA) in the sequel. It has been shown <cit.> that, under Rayleigh conditions, QPA achieves an asymptotic expected gain of about 0.4N^2. Other categories of discrete optimization algorithms include Branch and Bound (BnB) approaches that treat the quantization problem as mode selection <cit.>. Despite their effectiveness, BnB methods have exponential computational requirements which makes them less attractive for real-time RIS configuration with instantaneous CSI. Black-box optimization algorithms, such as Genetic Optimization (GO), have also been used in discrete RIS tuning problems, such as in <cit.>, with encouraging performance. However, their population-based search entails many times more SNR evaluations than N during a fixed CSI frame, which is infeasible for instantaneous RIS tuning. Those methods are in principal better tailored for statistical CSI setups <cit.> or codebook search approaches <cit.>. From the perspective of deep learning, reinforcement learning algorithms for discrete <cit.> and 1-bit <cit.> RIS phase configuration have been lately proposed, as well as a two-step supervised learning methodology that couples the problems of SNR prediction and optimization <cit.>. Despite the fact that such approaches have demonstrated promising results in real-time RIS control, their performance is dependent on collected data that are scenario specific, and often imply considerable measurement overheads. 
For this reason, these are not considered in the evaluation framework of this paper. §.§ Hill Climbing (HC) One of the typical baseline approaches to solve the discrete optimization problem 𝒪𝒫_2 is to: i) start with a random candidate solution; ii) flip each RIS element's phase shift from ϕ^0 to ϕ^π or vice-versa; iii) evaluate the objective function (i.e., SNR) after the flip is performed; and v) discard the change if no SNR improvement was obtained. This procedure is one of the standard generic discrete optimization approaches, commonly termed as “coordinate hill climbing” <cit.>, which is described in Algorithm <ref> specifically for the problem of the 1-bit RIS phase configuration selection. In this algorithm, we propose to use ϕ^SA from Algorithm <ref> as a starting RIS configuration. This has the two-fold benefit of starting the search from an already improved point, compared to random initialization, as well as stronger performance guarantees. Since the Hill Climbing (HC) algorithm may only improve on any given configuration, the performance bounds of Section <ref> hold for Algorithm <ref>, as well. We will be referring to the randomly initialized approach as HC, and as HC&SA when initialized by ϕ^SA. Notice that HC&SA will outperform SA by design, albeit at an iterative computational cost. § NUMERICAL EVALUATION In this section, we evaluate the performance of the proposed SA RIS configuration methodology and compare it against: i) QPA, as the main candidate closed-form solution available in the literature; ii) the HC iterative algorithm of Section <ref> that is comparably less computationally demanding than the relevant works discussed in Section <ref>; and iii) HC&SA, to assess the improvements brought by the initialization via SA. We have also evaluated the SNR obtained via the upper bound approach of the continuous case through PA, which has been used for normalizing the achievable SNRs of all previous methods. For the experimentation setup, we used the generic Ricean system model in <cit.>, while the RIS was modeled as a N^ vert× N^ hor Uniform Rectangular Array (URA) so that N = N^ vert N^ hor with |N^ vert - N^ hor| selected to be minimized. In the Cartesian (x,y,z) frame, the TX was positioned at (0,0,6) m, the RX at (0,20,1.5) m, and the RIS at (2,2,2) m, with the latter's orientation being perpendicular to the yz plane. The performance curves included in this section are reported as averages over 1000 random channel realizations, apart from Fig. <ref>, where realizations were averaged over 900 deterministic channels of different angles, and from Fig. <ref>, where 200 realizations were averaged to alleviate from the consuming computations of exhaustive search. In Fig. <ref>, the performance comparison of the considered methods is reported under different Ricean κ-factors, namely, from -30dB (rich scattering) to 25dB (almost pure LoS). As shown, SA achieves about 8% improvement on the normalized SNR over QPA under Rayleigh fading and around 17% improvements under LoS conditions. HC offers a mere 2% increase over SA under Rayleigh, while this difference is diminished for LoS, where SA performs equally well. HC&SA offers a small improvement over HC, however, it comes at a faster convergence cost, as will be discussed in the sequel. The results of Fig. <ref> could be specific to the considered setup parameters, hence, we have performed averages over Ricean fading and all possible elevation/azimuth angles in Fig. <ref>, where the performances are evaluated as N increases. 
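Before turning to the figures, the following self-contained Monte Carlo sketch (our own illustration, not the authors' simulator) contrasts QPA, SA, and HC&SA under i.i.d. Rayleigh fading, normalizing each realization by the continuous PA SNR; the numbers of trials and the RIS sizes are arbitrary. The QPA ratio should settle near the asymptotic value of about 0.4 mentioned earlier, with SA and HC&SA somewhat higher.

```python
import numpy as np

rng = np.random.default_rng(1)

def sa(c):
    """Sign alignment (Algorithm 1): the better of sign(Re c) and sign(Im c)."""
    cands = [np.sign(c.real), np.sign(c.imag)]
    return max(cands, key=lambda p: np.abs(c @ p)**2)

def qpa(c):
    """Quantized phase alignment: project the continuous optimum onto F = {0, pi}."""
    theta = -np.angle(c)
    return np.where(np.abs(np.exp(1j * theta) - 1) <= np.abs(np.exp(1j * theta) + 1), 1.0, -1.0)

def hill_climb(c, phi):
    """Coordinate hill climbing over single-element flips (HC refinement)."""
    phi, t = phi.copy(), c @ phi
    improved = True
    while improved:
        improved = False
        for n in range(len(phi)):
            t_new = t - 2 * c[n] * phi[n]          # effect of flipping element n
            if np.abs(t_new) > np.abs(t):
                phi[n], t, improved = -phi[n], t_new, True
    return phi

trials = 500
for N in (16, 64, 256):
    ratios = np.zeros(3)
    for _ in range(trials):
        h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        c = h * g.conj()
        pa = np.sum(np.abs(c))**2                  # continuous phase-alignment upper bound
        for j, phi in enumerate((qpa(c), sa(c), hill_climb(c, sa(c)))):
            ratios[j] += np.abs(c @ phi)**2 / pa / trials
    print(f"N={N:3d}  QPA {ratios[0]:.3f}  SA {ratios[1]:.3f}  HC&SA {ratios[2]:.3f}")
```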
Similar gains among all considered methods are observed in Fig. <ref> as in Fig. <ref>. However, the mean SNR values among all methods are more closely comparable under LoS (Fig. <ref>), when all possible angles are averaged out, indicating that the problem of selecting effective RIS configuration schemes is tightly coupled with the RIS positioning under LoS conditions. It is important to highlight that, in Fig. <ref>, QPA converges to its asymptotic normalized expected gain of 0.4 for larger N, as stated in Section <ref>, whereas SA greatly exceeds the lower bounds given in Section <ref>. The convergence iterations for Figs. <ref> and <ref> for the two proposed iterative approaches are given in Table <ref>, from which it can be inferred that, in RISs with N≥100, the SA initialization offers 15%-25% convergence speed improvements, on top of marginally improved performance. The performance evaluation presented so far has been carried out against the SNR achievable by an RIS with continuous phase shifts, which calls for an investigation of the performance of the considered methods with respect to the optimal 1-bit-quantized solution of 𝒪𝒫_1. For N up to 26, where evaluating all 2^N combinations is computationally tractable, this comparison is given in Fig. <ref> for Rayleigh fading conditions. Interestingly, in the small RIS regime, SA obtains around 95% of the optimal performance, despite being a closed-form approximate solution, which hints that the results for larger RIS sizes may not be too far from the optimal value. Evidently, the proposed iterative approaches obtain near-optimal performance in these scenarios; however, there is a maintained 13%-15% performance difference between SA and QPA, indicating that the proposed closed-form RIS configuration offers firm benefits with negligible computational overhead. § IMPLEMENTATION OF RIS SIGN ALIGNMENT Notwithstanding its performance improvements over conventional RIS configurations, SA can offer benefits over the related QPA closed-form approach also in terms of hardware requirements. To compute ϕ^Re and ϕ^Im from Algorithm <ref>, one needs only the signs of each channel gain c_i, which is equivalent to measuring the quadrant of ∠c_i. For each single-hop channel, those signs can be acquired through a simple Reception (RX) Radio-Frequency chain (RFC) comprising a pair of 1-bit Analog-to-Digital Converters (ADCs), tasked to measure the signs of each c_i's I/Q components <cit.>. Note that the power consumption of ADCs grows exponentially with their bit resolution, hence a 1-bit ADC offers an important improvement over traditional high-resolution ADCs. Secondly, since no amplitude information is required to compute the SA configurations, the detection process can be further simplified and will be less susceptible to errors, similarly to how Quadrature-Phase-Shift-Keying (QPSK) constellations are easier to detect compared to high-order Quadrature Amplitude Modulation (QAM) <cit.>. On a conceptual level, such simplified and low-power RX RFCs can be envisioned to be endowed onto the metasurface, toward autonomous RIS solutions <cit.>. In fact, the two output bits of the two ADCs could be directly connected to the RIS element switches, so that the surface's state may be set to ϕ^Re or ϕ^Im without the need for involved digital controllers. Since each element's configuration is independent of the other channel coefficients, the operation can take place at the unit-cell level.
Furthermore, the design is straightforwardly related to the output of the ADCs, thus no detection algorithm is needed at the RIS controller <cit.>. Notice that the SNR computations implied by Step 3 of Algorithm <ref> could be performed at the receiver, where high-resolution RX RFCs can be readily available, and the result of the comparison could be fed back to the RIS if a control channel is present. However, describing a detailed RX RFC and its integration with the RIS, which can facilitate SA on the RIS side with reduced hardware complexity, lies beyond the scope of this paper. It is finally noted that, from a communications system perspective, the considered SISO system may be extended to the general case of Multiple Input Multiple Output (MIMO) systems by leveraging a capacity analysis on the eigenvalues and eigenvectors of the covariance matrices of the wireless channels, as has been recently studied in <cit.>. We delegate such analysis to the journal version of this work. § CONCLUSION In this paper, we considered an RIS-enabled SISO communication system and presented a novel closed-form configuration for the RIS responses for the case of 1-bit-phase-quantized unit cells where the phase difference is π radians. Lower bounds of the proposed SA configuration under Rayleigh fading and LoS conditions, which are asymptotically optimal, were derived. Our numerical evaluation showcased that SA outperforms the state-of-the-art configuration scheme, which is obtained by quantizing the optimal continuous phase shifts. It was also demonstrated that, when SA is used as the initial point in the iterative HC algorithm, it can provide a further performance increase with faster convergence. We finally discussed how the proposed closed-form RIS configuration can be realized at the RIS side in a standalone operation fashion via low-power RX units, each comprising a pair of 1-bit ADCs.
http://arxiv.org/abs/2407.12633v1
20240717150133
Bayesian spatial functional data clustering: applications in disease surveillance
[ "Ruiman Zhong", "Erick A. Chacón-Montalván", "Paula Moraga" ]
stat.ME
[ "stat.ME" ]
Ruiman Zhong (ruiman.zhong@kaust.edu.sa), Erick A. Chacón-Montalván (erick.chaconmontalvan@kaust.edu.sa), and Paula Moraga (paula.moraga@kaust.edu.sa)
Computer, Electrical and Mathematical Science and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Makkah, Saudi Arabia
§ ABSTRACT The ability to accurately cluster contiguous regions with similar disease risk evolution is crucial for effective public health response and resource allocation. In this article, we propose a novel spatial functional clustering model designed for disease risk mapping, utilizing random spanning trees for partitioning and latent Gaussian models for capturing within-cluster structure. This approach enables the identification of spatially contiguous clusters with similar latent functions, representing diverse processes such as trends, seasonality, smooth patterns, and autoregressive behaviors. Our method extends the application of random spanning trees to cases where the response variable belongs to the exponential family, making it suitable for a wide range of real-world scenarios, including non-Gaussian likelihoods. The proposed model addresses the limitations of previous spatial clustering methods by allowing all within-cluster model parameters to be cluster-specific, thus offering greater flexibility. Additionally, we propose a Bayesian inference algorithm that overcomes the computational challenges associated with the reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithm by employing composition sampling and the integrated nested Laplace approximation (INLA) to compute the marginal distribution necessary for the acceptance probability. This enhancement improves the mixing and feasibility of Bayesian inference for complex models. We demonstrate the effectiveness of our approach through simulation studies and apply it to real-world disease mapping applications: COVID-19 in the United States of America, and dengue fever in the states of Minas Gerais and São Paulo, Brazil. Our results highlight the model's capability to uncover meaningful spatial patterns and temporal dynamics in disease outbreaks, providing valuable insights for public health decision-making and resource allocation. Bayesian modeling, Clustering, Disease mapping, Laplace approximation, Spatial functional data § INTRODUCTION Disease surveillance relies heavily on accurately interpreting patterns to understand disease stage, manage and prevent health crises. An important task is to detect neighboring regions with similar disease risk, aiding in identifying populations with high exposure, uncovering inequalities (e.g., in health services, food security), and allocating resources depending on the risk level <cit.>. This becomes even more critical when analyzing disease risk over a period of time; in such cases, a natural extension is to identify neighboring regions with similar functional disease risk within a defined time frame. Identifying such clusters implies detecting neighboring regions where the evolution of a disease or the health status associated with certain conditions is progressing similarly over time, allowing for similar decisions to be made to address the spread or prevalence of the disease of interest and prioritize regions more effectively.
While other definitions of clusters exist when working with the space and time domain, we specifically focus on the detection of spatially contiguous regions exhibiting similar functional risk over time, referring to these as spatial functional clusters. Traditional approaches that detect spatial discontinuities in the disease risk surface often consider constant or piece-wise constant risk over time <cit.>. Other approaches focus on detecting a small group of anomalous regions <cit.>. <cit.> extended the two-stage approach of <cit.> by incorporating cluster random effects instead of fixed effects only. <cit.> improved the cluster identification procedure by considering the uncertainty of the number of clusters using Markov chain Monte Carlo (MCMC). However, <cit.>'s two-stage approach detects spatial clustering from a list of candidates generated by pre-specified clustering algorithms such as K-means and hierarchical clustering. Whether the true cluster lies in the candidate space cannot be guaranteed. Furthermore, the fixed coefficients of the model are uniform across clusters, and the cluster-specific term follows a Gaussian Markov random field. This limits the flexibility of the model and requires fitting the full model at each iteration. Most of the mentioned approaches struggle to cope with the complexity and volume of data, particularly when it involves observations recorded continuously during a time interval at distinct locations, known as spatial functional data <cit.>. Classical functional data clustering techniques seek to uncover diverse morphological patterns within the continuous functions that underlie discrete measurements or observations <cit.>. Popular methods include projecting functional data onto a series of basis functions and then applying conventional clustering algorithms to the coefficient vectors. Typical non-parametric methods are based on a pre-specified distance or dissimilarity measure, such as hierarchical clustering and K-means clustering <cit.>. Model-based clustering algorithms are based on Gaussian mixtures or Gaussian mixture processes <cit.>. Although functional data models help capture the complexity of the data in the time domain, the spatial information, which is very important in disease mapping, is usually ignored. With the increasing demand for analyzing spatial functional data, clustering algorithms that consider spatial information have been proposed <cit.>. Based on the way spatial information is used, those methods can be broadly divided into two categories: non-parametric clustering based on a temporal dissimilarity measure with a spatial correlation penalty term <cit.>, and mixtures of distributions with Markov random field priors on the mixture parameters <cit.>. However, those methods do not guarantee the spatial contiguity of cluster memberships with flexible shape and size. More recently, model-based spanning tree partitioning methods have been proposed to perform spatial clustering <cit.>. <cit.> introduced a Bayesian random spanning tree (RST) partitioning method. Building on this, <cit.> combined the Bayesian random spanning tree with Gaussian-distributed spatial functional data. This algorithm is a model-based generative clustering procedure that iteratively samples the clustering structure using reversible jump Markov chain Monte Carlo (RJ-MCMC) to accommodate changes in the parameter space when the cluster structure is modified.
Computing the acceptance probability for the proposed cluster structure relies on the conditional marginal distribution given the cluster structure, which is computationally challenging and can hinder convergence for models with more complex terms. <cit.> addressed this issue using an additional Gibbs sampler for the hyper-parameters given the cluster structure and improved mixing using conjugate priors. However, this approach does not hold for more flexible priors and likelihoods, such as the Poisson distribution with random effects, which is widely used in various models for disease risk mapping. In this paper, we propose a spatial functional clustering model that extends the use of random spanning trees for spatial functional clustering to cases where the response variable comes from a distribution in the exponential family. This model utilizes within-cluster models belonging to a family of latent Gaussian models, which can represent a wide range of processes including seasonality, smooth terms, autoregressive behavior, random effects, and others. Given the complexity introduced by the lack of conjugacy in our model, the RJ-MCMC algorithm to sample the cluster structure becomes more intricate. Instead, we employ composition sampling and the Metropolis-Hastings algorithm to perform Bayesian inference. To compute the marginal distribution required for our acceptance probability, we use the integrated nested Laplace approximation (INLA). This approach improves mixing and makes inference feasible for spatial functional clustering with non-Gaussian likelihoods. We demonstrate the adequacy of this method through simulation studies and illustrate its utility in three real-world applications for disease risk mapping. The remainder of this paper is organized as follows. Section <ref> presents the basic concepts related to random spanning trees and introduces the main spatial functional clustering model proposed in this article. Section <ref> introduces the Bayesian inference framework. We evaluate the performance of our Bayesian inference algorithm in two simulation studies detailed in Section <ref>. Section <ref> analyzes three real-world applications for COVID-19 and dengue in the United States of America (USA) and Brazil. Finally, Section <ref> provides a discussion of our approach, results, and potential directions for future work. § METHODOLOGY §.§ Preliminary concepts Let 𝒟⊂ℛ^2 be the space of interest with n spatially contiguous regions. An undirected graph 𝒢 = (V,E) can be defined on 𝒟, where V is the set of nodes representing the regions, and E is the set of edges connecting every pair of nodes whose regions share a boundary. A path from node v_1 to v_s is defined by distinct connected nodes v_1, …, v_s and the connecting edges e_i = (v_i, v_i+1) for i = 1, …, s-1. If a path starts and ends at the same node, it is called a circuit. A graph is connected if there exists at least one path between any pair of nodes in V. A spanning tree 𝒯 with respect to a graph 𝒢 is a connected sub-graph that contains all nodes of 𝒢 and has no circuits. Finally, a minimum spanning tree (MST) is a spanning tree derived from an edge-weighted graph with the minimum possible total edge weight. We consider a spatial cluster with respect to the graph 𝒢 as any subset of nodes forming a connected sub-graph 𝒢_c, for c = 1, …, C ≤ n <cit.>. If a set of disjoint subgraphs M = {𝒢_c: c = 1, …, C} satisfies 𝒢 = ∪_c=1^C 𝒢_c, then the collection M is known as a partition with C clusters.
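To make these definitions concrete, the short Python sketch below (our own illustration, using the networkx package on an assumed 10 x 10 lattice of regions) draws random edge weights, extracts a minimum spanning tree, and deletes C-1 of its edges; the connected components of what remains form a partition into C spatially contiguous clusters, which is exactly the sampling mechanism exploited in the sequel.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)

# Adjacency graph of a 10 x 10 lattice of regions (a stand-in for a real neighbourhood graph G).
G = nx.grid_2d_graph(10, 10)
for u, v in G.edges:
    G.edges[u, v]["weight"] = rng.uniform()       # random edge weights w ~ U(0, 1)

# Minimum spanning tree T of G, and a partition with C clusters obtained by deleting C-1 edges.
T = nx.minimum_spanning_tree(G, weight="weight")
C = 5
tree_edges = list(T.edges)
drop_idx = rng.choice(len(tree_edges), size=C - 1, replace=False)
T_cut = T.copy()
T_cut.remove_edges_from([tree_edges[i] for i in drop_idx])

clusters = list(nx.connected_components(T_cut))
print(f"{len(clusters)} clusters with sizes {sorted(len(s) for s in clusters)}")
# Every cluster is spatially contiguous in G, since its nodes remain connected through tree edges.
```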
For a given graph 𝒢, any partition M can be derived from at least one minimum spanning tree 𝒯⊂𝒢 by removing a subset of edges of the MST 𝒯 <cit.>. Hence, the mechanism to sample different partitions of size C for a given graph 𝒢 using minimum spanning trees consists of sampling an MST 𝒯 and removing C-1 edges. A current partition M can also be modified by removing, adding, or removing and adding edges to obtain a new partition M^*, making this approach suitable for sampling partitions in Bayesian clustering models. §.§ Bayesian spatial functional clustering model A random spanning tree partition model for spatial functional analysis comprises two main components: a spanning tree partition model that defines the clusters and region memberships, and a model for estimating the functional structure within clusters. In this section, we present both the cluster model and the within-cluster model family. Consider a set of n regions with an associated graph G that characterizes the connections among them. For a given partition M with C clusters derived from an MST 𝒯⊂ G, we assume the existence of C latent functions {h_c(t): c ∈{1,2,…, C}} for t ∈ [0,T]. These functions are not directly observable but are related to the observations through the mean function. Hence, in the within-cluster model, an observed value y_it at time t for region i = 1, …, n, which belongs to cluster c_i, is assumed to come from an exponential family distribution with mean μ_it and additional hyper-parameters θ_c_i such that: Y_it | μ_it, θ_c_i, M, C, 𝒯 ∼ π(·|μ_it, θ_c_i) independently, g(μ_it) = η_it = h_c_i(t) + ϵ_it, where g(·) is a link function connecting the mean μ_it and the linear predictor η_it, and ϵ_it is an optional error term depending on the specific distribution assumed for π(·). We focus on latent Gaussian functions h_c(t) represented as: h_c(t) = α_c + β_c Z_t + ∑_k=1^n_f f^(k)_c(t), for c = 1, …, C, where α_c is the cluster-specific intercept, β_c are cluster-specific fixed effects with respect to covariates Z_t, and f^(k)_c(·) are zero-mean temporal Gaussian random effects such that, for a fixed set of times t and given θ_c, their joint distribution is Gaussian with precision matrix Q_f^(k). Common examples of f^(k)_c(·) include independent random effects, random walk processes, autoregressive processes, seasonal random effects, and others. Fast inference for this family of models can be performed using the integrated nested Laplace approximation (INLA) <cit.>. More concisely, if we define x as the collection of all parameters {α_c}, {β_c} and random effects {f^(k)_c(·)}, and θ = {θ_c} as the collection of hyper-parameters, then the within-cluster Bayesian model is completed by defining the priors for the Gaussian random vector x and the hyper-parameters θ, given the partition M, the number of clusters C, and the minimum spanning tree 𝒯: x | θ, M, C, 𝒯 ∼ MVN(0, Q^-1), π(θ | M, C, 𝒯). The prior for each hyper-parameter in θ is defined independently, with the only requirement being that it is a proper distribution on the support of the hyper-parameter.
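As a toy illustration of this within-cluster specification (our own sketch, not code from the paper), the snippet below simulates Poisson counts for the regions of a single cluster whose shared latent function combines an intercept, one covariate effect, and an AR(1) random effect, i.e., one particular choice of the f^(k) terms; all parameter values are arbitrary and the optional error term ϵ_it is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate one cluster of the within-cluster model with a log link:
# log mu_it = alpha_c + beta_c * Z_t + f_c(t), with f_c an AR(1) Gaussian random effect.
T, n_regions = 52, 8                        # weekly data for 8 regions in the cluster
alpha_c, beta_c = 1.0, 0.5
rho, sigma = 0.8, 0.3                       # AR(1) hyper-parameters playing the role of theta_c
Z = np.sin(2 * np.pi * np.arange(T) / 52)   # illustrative covariate (a seasonal signal)

f = np.zeros(T)
f[0] = rng.normal(0, sigma / np.sqrt(1 - rho**2))   # stationary initial value
for t in range(1, T):
    f[t] = rho * f[t - 1] + rng.normal(0, sigma)

eta = alpha_c + beta_c * Z + f              # shared latent function h_c(t)
mu = np.exp(eta)
y = rng.poisson(mu, size=(n_regions, T))    # counts for every region in the cluster
print(y.shape, y.mean(axis=0)[:5])
```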
In comparison to the model proposed by <cit.>, our approach encompasses a broader family of models with different distributions and temporal components. While <cit.> assume a Gaussian distribution for the response variable and use conjugate priors to derive the posterior distribution conditional on { M, C, 𝒯} as well as the marginal distribution, we employ the integrated nested Laplace approximation (INLA) to obtain both the marginal distribution and the posterior distribution, as explained in the following section. The cluster model involves specifying the model for the triple θ_𝒯 = { M, C, 𝒯}, which defines the cluster membership for the nodes (regions) in graph 𝒢. We use the Bayesian random spanning tree (BRST) model proposed by <cit.>, where 𝒯 is a minimum spanning tree constructed using weights 𝐰, which can be fixed or follow a uniform distribution w_jℓ i.i.d. ∼ U(0,1), and the prior of the number of clusters C is proportional to a geometric distribution: 𝒯 = MST(𝐰), π(C = c) ∝ (1-q)^c. The hyper-parameter q ∈ [0,1) controls the penalty for obtaining a large number of clusters; q=0 leads to a discrete uniform prior on C, while q → 1 imposes a large penalty on large values of C. Finally, a uniform conditional prior on the partition M is assumed, π( M | 𝒯, C) ∝ 1, such that, given the MST 𝒯 and the number of clusters C, there are equal probabilities of selecting C-1 out of the n-1 edges of 𝒯 to obtain a partition M. § BAYESIAN INFERENCE Bayesian inference for the spatial functional clustering model described in the previous section is achieved by obtaining samples from the joint posterior density of the within-cluster latent field x = {{α_c}, {β_c}, {f^(k)_c(·)}}, the within-cluster hyper-parameters θ = {θ_c}, and the cluster parameters θ_𝒯 = { M, C, 𝒯}. Common approaches use reversible jump Markov chain Monte Carlo (RJ-MCMC) due to the unknown number of clusters and the fact that the dimension of the parameter space changes with the number of clusters C <cit.>. In this approach, at each iteration, a modification of the clustering structure is proposed, θ_𝒯→θ_𝒯^*, such as splitting one cluster into two (birth), merging two adjacent clusters (death), splitting a cluster and merging a cluster simultaneously (change), or updating the minimum spanning tree 𝒯 and the hyper-parameters θ (hyper). Optionally, an update to the latent field x→x^* can be proposed if required by the current move. The proposed cluster parameters θ_𝒯^*, within-cluster latent field x^*, and hyper-parameters θ^* are then accepted with probability A, which depends on the within-cluster goodness of fit for the current and proposed partitions (see Algorithm <ref>).
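The effect of the penalty hyper-parameter q can be seen by normalizing this prior over c = 1, …, n; the small sketch below (our own illustration, assuming a graph with n = 100 regions) prints the resulting prior mass for a few cluster counts.

```python
import numpy as np

# Normalized prior pmf pi(C = c) ∝ (1 - q)^c over c = 1, ..., n, for a few penalty values q.
n = 100
for q in (0.0, 0.5, 0.9):
    w = (1 - q) ** np.arange(1, n + 1)
    pmf = w / w.sum()
    print(f"q={q}: P(C=1)={pmf[0]:.3f}, P(C=5)={pmf[4]:.4f}, P(C=10)={pmf[9]:.2e}")
```

For q = 0 the prior is uniform over the number of clusters, while larger values of q shift almost all the mass toward small cluster counts.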
This approach could be applied to the model presented in Section <ref>; however, the mixing of the chains becomes increasingly slow as the number of within-cluster model parameters grows, and the acceptance probabilities are low given that the proposed values come from the prior specification. Alternatively, if we propose a move θ_𝒯^* ∼π(θ_𝒯) and parameters θ^* ∼π(θ| y, θ_𝒯^*) and marginalize over x, then the acceptance probability is A = π( y |θ^*, θ_𝒯^*)/π( y |θ^r-1, θ_𝒯^r-1)×π(θ^*|θ_𝒯^*)/π(θ^r-1|θ_𝒯^r-1)×π( θ_𝒯^*)/π( θ_𝒯^r-1)×π(θ_𝒯^r-1|θ_𝒯^*)/π( θ_𝒯^*|θ_𝒯^r-1)×π(θ^r-1| y, θ_𝒯^r-1)/π(θ^*| y, θ_𝒯^*). This approach improves mixing because the hyper-parameters are proposed from the conditional posteriors and the latent field is marginalized out. Unfortunately, the conditional posterior π(θ| y, θ_𝒯) and the conditional likelihood π( y |θ, θ_𝒯) are not always available, which limits its use. For example, <cit.> use a similar approach for models where the response variable is assumed to be Gaussian and conjugate priors are used for the hyper-parameters. Given that we are interested in performing inference on a wider family of models with good mixing properties, we sample from the posterior π(θ_𝒯, θ, x|y) = π(θ_𝒯|y) π(θ, x|y, θ_𝒯) using compositional sampling. We first aim to obtain samples from the posterior of the cluster model parameters π(θ_𝒯|y) (see Algorithm <ref>), and later obtain samples from the conditional posterior π(θ, x|y, θ_𝒯). Let us consider the move θ_𝒯^r-1→θ_𝒯^*; then the acceptance probability is A = π( y |θ_𝒯^*)/π( y |θ_𝒯^r-1)×π( θ_𝒯^*)/π( θ_𝒯^r-1)×π(θ_𝒯^r-1|θ_𝒯^*)/π( θ_𝒯^*|θ_𝒯^r-1), where π( y |θ_𝒯) is the within-cluster marginal likelihood given the cluster structure θ_𝒯. This term is the most difficult one to obtain compared to the other terms, and it is not always analytically tractable for the models of our interest (e.g. Poisson likelihood). Hence, we use the integrated nested Laplace approximation (INLA) proposed in <cit.> to compute the marginal likelihood conditional on the cluster structure. We show how to compute the other two components, the ratio of cluster structure priors, π( θ_𝒯^*)/π( θ_𝒯^r-1), and the ratio of transition probabilities, π(θ_𝒯^r-1|θ_𝒯^*)/π( θ_𝒯^*|θ_𝒯^r-1), in Section 2 of the Supplementary Material. Let y_c denote the collection of observations for all regions belonging to cluster c, and x_c the random field associated with cluster c. Then the conditional posterior of the hyper-parameters factorizes as π(θ|y, θ_𝒯) = ∏_c=1^C π(θ_c |y_c, θ_𝒯), since the θ_c are conditionally independent given the cluster structure θ_𝒯. Under the INLA framework the posterior for the hyper-parameters θ_c given the cluster structure θ_𝒯 is approximated as π(θ_c|y_c, θ_𝒯) ∝π(θ_c, x_c, y_c |θ_𝒯) / π̃_G(x_c |θ_c, y_c, θ_𝒯)|_x_c=x_c_mode, where π̃_G(x_c |θ_c, y_c, θ_𝒯) is a Gaussian approximation to π(x_c |θ_c, y_c, θ_𝒯) and x_c_mode is the posterior mode of x_c for a given value of θ_c and θ_𝒯 <cit.>. As a consequence, a natural approximation for the marginal likelihood is π(y|θ_𝒯) = ∏_c=1^C π(y_c |θ_𝒯) ≈∏_c=1^C .∫π(θ_c, x_c, y_c |θ_𝒯)/π̃_G(x_c |θ_c, y_c, θ_𝒯)|_x_c=x_c_mode dθ_c. The term inside the integral is computed using numerical integration on the space of the hyper-parameters <cit.>.
Notice that the joint density π(θ_c, x_c, y_c|θ_𝒯)= π(y_c |x_c, θ_c, θ_𝒯) π(x_c |θ_c, θ_𝒯) π(θ_c |θ_𝒯) is decomposed as the product of the c-cluster likelihood π(y_c |x_c, θ_c, θ_𝒯) which is a product of densities from the exponential family, the prior of the c-cluster latent field π(x_c |θ_c, θ_𝒯) which is zero-mean Gaussian density, and the priors of the hyper-parameters π(θ_c|θ_𝒯). In addition, if we replace the prior for the latent field x_c, the following holds π(y_c |θ_𝒯) ≈ (2π)^-r/2|Q|^*^1/2.∫π(y_c |x_c, θ_c, θ_𝒯) exp(-1/2x_c^TQx_c) π(θ_c |θ_𝒯)/π̃_G(x_c |θ_c, y_c, θ_𝒯)|_x_c=x_c_modedθ_c, where r is the rank of Q and |·|^* denotes the generalized determinant. The term that can be factored out of the integral is sometimes neglected when computing the marginal likelihood. However, in our approach, this term is important for comparing the marginal likelihoods given different cluster structures as required in the acceptance probability in Equation (<ref>). We use the R-INLA package for our inference and perform the correction when required. Our approach to sample from the posterior π(θ_𝒯|y) is summarised in Algorithm <ref>, where the acceptance probability is computed according to equations (<ref>) and (<ref>). Note that for a particular move (birth, death or change), there is no need to compute the full marginals π( y |θ_𝒯^*) and π( y |θ_𝒯^r-1) because the ratio π( y |θ_𝒯^*) / π( y |θ_𝒯^r-1) cancels out all the contributions for the clusters that remain the same. Therefore, we only need to compute the marginals for the the clusters that are being modified. Finally, once the samples from π(θ_𝒯|y) are obtained, it is straightforward to obtain samples from π(θ, x|y, θ_𝒯) = ∏_c=1^C π(θ_c, x_c |y_c, θ_𝒯) using INLA or Markov chain Monte Carlo. § SIMULATION STUDY In this section, we evaluate the performance of our spatial functional clustering model and algorithm in two simulation studies. In the first, we test the model's adequacy when the latent function h_c(·) is either a polynomial or a flexible non-linear shape, using both Gaussian and Poisson data. We analyze Gaussian data to verify our model under a simple situation, and Poisson data because our goal is to apply spatial functional clustering for disease risk, where Poisson models are widely used. In the second, we examine the model's performance with imbalanced clusters and some neighboring clusters having similar functional shapes. Here, we assess the Poisson model and a Gaussian model applied to Poisson-transformed data to highlight the importance of our approach for non-Gaussian data under three scenarios. For both simulation studies, we use the same set of 100 spatially contiguous regions 𝒟 within a unit square, generated using Voronoi tessellation, and obtain 100 observations over time for each one. §.§ Simulation 1: Polynomial and non-linear shape for Gaussian and Poisson data In this simulation, we generate C = 10 spatial clusters by removing nine edges from a MST generated for the graph 𝒢 with respect to 𝒟. The generated clusters can be seen in Panel (A) of Figure <ref>. We create two scenarios: in the first, the latent functions {f_c(t): c = 1,…, 10} are polynomials, while in the second, they have flexible non-linear shapes. * Polynomial shape: The latent function h_c(t) = α_c + β_c1 t + β_c2 t^2 is characterized by the cluster parameters α_c, β_c1, and β_c_2. 
The parameters β_c = (β_c1, β_c2) are defined to produce five types of increasing and decreasing functions, and these are repeated between the first five and last five clusters such that {β_c}_c=1^5 = {β_c}_c=6^10 = {(1, 0), (-1, 0), (0,0), (-3, 3), (3, -3)}. The parameters α_c are defined so the mean of h_c(t), evaluated over the sequence of observed times, is zero. * Flexible non-linear shape: The latent function h_c(t) = α_c + ∑_p = 1^16 B_p(t) β_cp is defined using 16 basis splines with cluster intercepts α_c. The coefficients {β_c1, …, β_c,16} are simulated from an auto-regressive process of order 2 (AR2) with parameters (0.95, 0) for the first five clusters, and (0.5, 0.44) for the remaining clusters. The parameters α_c are defined in a similar way as in the polynomial case. For both scenarios, we generate Gaussian and Poisson data, leading to four sub-scenarios. For the Gaussian case, the data for region i at time t is simulated from Y_it∼N(h_c_i(t), τ_c_i^2) with cluster variances {τ_c} = { 0.01, 0.05, 0.02, 0.05, 0.02, 0.01, 0.05, 0.02, 0.05, 0.02}. For the Poisson case, the data is simulated from Y_it∼Poisson(λ_it) with mean λ_it = exp(log(N_i) + h_c_i(t) + ϵ_it) and random effect ϵ_it∼ N(0,τ_c_i^2), where the τ_c^2 values are defined as in the Gaussian case. The population N_i is simulated from a Poisson distribution Poi(λ_i), where log(λ_i) ∼ N(10, 0.3) for all regions. We executed Algorithm <ref> with 2000 iterations for Scenario 1 and 3000 iterations for Scenario 2. In both cases, we started with a randomly generated partition having c_0 = 15 clusters and a hyper-parameter q = 0.5 for the prior distribution of the number of clusters. In Scenario 1 (S1), we fit a within-cluster model where h_c(t) is represented using fixed effects and monomials. In Scenario 2 (S2), we fit h_c(t) using a random walk process of order 1. For the four sub-scenarios, we successfully recovered the true cluster partition. Once the sampling algorithm reached the true partition, it did not modify it further. Figures <ref> and <ref> present the results for the sub-scenario with a non-linear latent function and Poisson data. The results for the other sub-scenarios can be found in Section 3 of the Supplementary Material. Figure <ref> shows that the estimated partition matches the true cluster partition, with the actual labels of the clusters being irrelevant. Furthermore, Figure <ref> displays the estimated latent functions alongside the empirical relative risk for each region, demonstrating a common pattern among regions classified within the same cluster. §.§ Simulation 2: Imbalanced clusters with similar shape for neighboring clusters In this simulation, we partition the regions 𝒟 into C = 5 clusters with an imbalanced number of regions to evaluate our spatial clustering model in a more complex setting (see Figure <ref>). In this partition, Cluster 1 covers 64% of the regions, Cluster 2 covers around 10%, Cluster 3 around 6%, and Cluster 4 around 20%; the smallest clusters are Cluster 3 with 6 regions and Cluster 5 with only 2 regions. Notably, Cluster 3 is surrounded by Cluster 1, making the two prone to merging when their parameters and hyper-parameters are similar. We generate the observation for region i at time t from Y_it∼Poisson(λ_it) with mean λ_it = exp(log(N_i) + α_c_i + β_c_i1 t + β_c_i2 t^2 + ϵ_it) and random effect ϵ_it∼ N(0,τ_c_i^2). The population N_i is simulated from a Poisson distribution with mean μ_i, where log(μ_i) ∼ N(10, 0.3) for all regions.
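A minimal numpy sketch of this data-generating mechanism is given below; the cluster assignment and the parameter values are placeholders of ours, not the exact values of Table <ref>.

```python
import numpy as np

rng = np.random.default_rng(7)
n_regions, n_times = 100, 100
t = np.linspace(0.0, 1.0, n_times)

# Toy cluster assignment and cluster-level parameters (illustrative values only)
cluster = rng.integers(0, 5, size=n_regions)
alpha = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
beta1 = np.array([1.0, -1.0, 0.0, -3.0, 3.0])
beta2 = np.array([0.0, 0.0, 0.0, 3.0, -3.0])
tau = np.array([0.1, 0.2, 0.1, 0.2, 0.1])     # standard deviations of the random effect

# Populations: N_i ~ Poisson(mu_i) with log(mu_i) ~ N(10, 0.3)
N = rng.poisson(np.exp(rng.normal(10.0, 0.3, size=n_regions)))

# Y_it ~ Poisson(lambda_it), log lambda_it = log N_i + quadratic trend + noise
eta = (alpha[cluster][:, None]
       + beta1[cluster][:, None] * t
       + beta2[cluster][:, None] * t**2
       + rng.normal(0.0, tau[cluster][:, None], size=(n_regions, n_times)))
Y = rng.poisson(N[:, None] * np.exp(eta))
```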
The values of the cluster parameters {β_c1, β_c2, τ_c} are set to define three scenarios, as shown in Table <ref>. In Scenario 1 (S1), the small Cluster 3, surrounded by Cluster 1, has the same trend as Cluster 1 but with higher variability. In Scenario 2 (S2), the trends are the same, but Cluster 3's variability is relatively lower, and the mean trend of Cluster 4 becomes closer to Cluster 1 (see Figure <ref>). Additionally, the two connected clusters, Cluster 2 and Cluster 5, share the same trend, with Cluster 5 having lower variability. In Scenario 3 (S3), Cluster 1 and Cluster 3 have the same variance but slightly different mean trends. These scenarios are designed to make it more challenging to recover the true clusters. The performance of the models is evaluated using the Adjusted Rand Index (ARI) and the Normalized Information Distance (NID) suggested by <cit.>. The ARI measures the agreement between two partitions by considering all pairs of elements that are assigned to the same or different clusters <cit.>. The ARI is defined as: ARI = (RI - 𝔼[RI])/(max(RI) - 𝔼[RI]), where RI = 2(a + b)/(n(n-1)) is the Rand Index, which counts the proportion of pairs that are either in the same cluster (a) or in different clusters (b) in both partitions, out of the total number of pairs n(n-1)/2. The ARI ranges from 0, indicating no agreement between the partitions, to 1, indicating perfect agreement. On the other hand, the NID quantifies the amount of information shared between two partitions relative to the total information contained in the partitions <cit.>. This is defined as: NID = 1 - 2 I(U,V)/(H(U) + H(V)), where I(U,V) = ∑_u ∈ U, v ∈ V p(u,v) log p(u,v)/(p(u)p(v)) is the mutual information between the partitions U and V, and H(·) = -∑ p(·) log p(·) is the entropy. The NID ranges from 0, indicating identical partitions, to 1, indicating dissimilar partitions. We implemented Algorithm <ref> with two different likelihoods for all the scenarios: a Poisson likelihood with the original data and a Gaussian likelihood with log-transformed data, keeping the priors of the common parameters the same. We initialized the MST 𝒯 from uniformly distributed weights, generated the partition M by removing the 10 edges with the largest weights, and set the penalty hyper-parameter to q = 0.5. As before, h_c(t) is expressed as a polynomial function of order 2. For every model, we obtain the final results using Dahl's method after 5000 iterations <cit.>. The method minimizes the posterior expected loss using a randomized greedy search algorithm to find a point estimate for a random partition based on a loss function (in our case, the Binder loss) and posterior Monte Carlo samples. Table <ref> presents the outcomes of the three scenarios, while the cluster maps, posterior means by cluster, and the marginal distribution over time are shown in the Supplementary Material. The Poisson model outperforms the Gaussian model with logarithmic transformation in all the scenarios, achieving higher ARI and lower NID values. Both models, however, fail to detect all the true clusters due to the complexity of the scenarios; larger clusters tend to absorb smaller ones when the differences between the two are subtle. Nonetheless, the Gaussian model shows a reduced capability to capture imbalanced partitions. For instance, Cluster 4 completely disappeared when using the Gaussian model in Scenario 3, whereas the Poisson model successfully identifies it. This indicates that misspecification of the likelihood may diminish the quality of the clustering.
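Both validation scores can be computed directly from the label vectors; for instance with scikit-learn, where the normalized mutual information with arithmetic averaging equals 2 I(U,V)/(H(U)+H(V)), so that NID is simply one minus that quantity (a small illustrative sketch):

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_scores(labels_true, labels_pred):
    """Return (ARI, NID) for two partitions of the same set of regions."""
    ari = adjusted_rand_score(labels_true, labels_pred)
    # NMI with arithmetic averaging equals 2 I(U,V) / (H(U) + H(V))
    nmi = normalized_mutual_info_score(labels_true, labels_pred,
                                       average_method="arithmetic")
    return ari, 1.0 - nmi

print(clustering_scores([0, 0, 1, 1, 2], [0, 0, 1, 2, 2]))
```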
§ APPLICATION In this section, we apply our proposed spatial clustering framework to three real-world applications. First, we study the weekly incidence of COVID-19 per state in the United States (USA) to identify neighboring states with similar relative risk in 2020. Second, we analyze the weekly incidence of dengue in the state of Minas Gerais, Brazil, between July 2022 and July 2023, aiming to identify neighboring municipalities with similar incidence pattern evolution during this period. Finally, we examine the seasonal behavior of monthly dengue incidence in the state of São Paulo from April 2021 to April 2024. The COVID-19 USA data was obtained from <cit.>, while the dengue Brazil data was obtained from <cit.>. §.§ Weekly COVID-19 incidence in the USA in 2020 In this application, our goal is to identify neighboring states that had similar initial COVID-19 spread in 2020 from a total of 49 states. The COVID-19 pandemic rapidly spread to the United States by early 2020. Detecting spatial-functional data clusters of COVID-19 is crucial for understanding the virus's geographic spread and temporal trends <cit.>. By analyzing pre-vaccine COVID-19 data at the state level, we can identify regions with similar infection patterns. This clustering helps public health officials allocate resources more effectively, tailor interventions to specific areas, and anticipate future outbreaks. Such insights are essential for improving preparedness and response strategies, ensuring that measures are both targeted and efficient. Considering Y_it the number of new cases for state i at time t, we use the following within-cluster model: Y_it|μ_it, θ_𝒯 ind∼ Poisson(E_i×μ_it), log(μ_it) = η_it = α_c_i + f_c_i,t + ϵ_it,  where ϵ_it|θ_𝒯∼𝒩(0, 1/τ_c_i^2), f_c,1| v_c, ρ_c, θ_𝒯∼𝒩(0, v_c^-1(1-ρ_c^2)^-1),   for   c = 1, …, C, f_c, t = ρ_c f_c, t-1 + ε_c,t for  t = 2, …, T where ε_c,t|θ_𝒯∼𝒩(0, v_c^-1). Here, E_i is the expected number of cases for state i, and {f_c, t}_t=1^T in an auto-regressive process of order 1 with hyper-parameters v_c and ρ_c. The priors for these parameters are defined as follows: log(v_c(1-ρ_c^2)) |θ_𝒯∼LogGamma(1, 10^-5) and log(1+ρ_c/1-ρ_c) |θ_𝒯∼𝒩(0, 0.15). The priors for the precision hyper-parameters is τ_c^2 |θ_𝒯∼LogGamma(1, 5× 10^-4). Finally, the cluster model is as described in Section <ref> with priors for θ_𝒯 = {M, C, 𝒯} as follows: 𝒯 =MST(w), w_j ℓ i.i.d. ∼ U(0,1), p(C=c) ∝ 0.5^c, p( M |𝒯, C) ∝ 1. We executed the proposed MCMC sampling of Algorithm <ref> starting with c_0 = 10 clusters, running for 5000 iterations with 3000 iterations as burn-in. The convergence of the algorithm was assessed by analyzing the marginal likelihood (see Figure 25 in the Supplementary Material). Once the chains had converged, we reported the partition selected by the Dahl's algorithm and relabeled the clusters in descending order of size. The selected partition after convergence has a total of 8 clusters (Figure <ref>). The largest clusters are Cluster 1 (C1) to Cluster 4 (C4), located in the north with 16 regions, south with 9 regions, northeast with 9 regions, and west with 8 regions, respectively. Cluster 5 is located in the east with only 4 regions. The remaining clusters (C6-C8) each have only one region (Louisiana, Montana, and Washington, respectively), making them outliers with respect to their neighbors. For all clusters, the relative risk began to increase around March 16 (Figure <ref>). 
Cluster 3 and Cluster 5, neighboring regions in the east zone, experienced an outbreak in April with a higher relative risk than the other clusters. Clusters 2 and 4, located in the south and southwest zones, experienced an outbreak around July 6, with Cluster 2 exhibiting a higher relative risk. Finally, Cluster 1, in the north, experienced a major outbreak in November, while the other clusters began experiencing this outbreak with some delay. These estimated clusters provide insights into the spatial dependencies and variations in virus spread and impact, which is crucial for devising targeted public health strategies tailored to the characteristics of each cluster. §.§ Weekly dengue incidence in Minas Gerais, Brazil: July 2022 - July 2023 Currently, Brazil has experienced a significant rise in dengue cases <cit.>. The state of Minas Gerais in Brazil tops the country in cases in 2023. Clustering spatial functional data on dengue cases in Minas Gerais is crucial for understanding the spatial-temporal dynamics of the disease, identifying high-risk areas, and optimizing resource allocation for targeted public health responses. To identify neighboring municipalities with similar dengue relative risk evolution, we perform spatial functional clustering in the state of Minas Gerais using weekly dengue cases from 851 municipalities between July 2022 and July 2023. Considering Y_it as the number of new cases in region i at week t, the model we use is similar to Equation (<ref>). However, in this case, the latent effects f_c = (f_c,1, f_c,2, …, f_c,n)^T for cluster c are represented with a random walk process, imposing the conditions f_c,t - f_c,t-1∼𝒩(0, ν_c^-1) for t = 2, …, n such as: π(f_c|ν_c, θ_𝒯) ∝ν_c^(n-1)/2exp(-ν_c/2f_c^TS_ff_c), where S_f is the structure matrix obtained from the imposed conditions. The prior for the hyper-parameter ν_c is imposed as follows log(ν_c) ∼LogGamma(1, 10^-5). The priors for the cluster model is similar to Equation (<ref>), differing only in the prior for the number of cluster, π(C= c) ∝ 0.999999^c which imposes higher penalty on a large number of clusters. This means it will favour merge moves over split moves. For this model, Algorithm <ref> was executed with 100,000 iterations starting with c_0 = 100 clusters. The selected partition after convergence is shown in Figure <ref>, and the associated latent function for each cluster along with the empirical relative risk is shown in Figure <ref>. The selected partition after convergence has 63 clusters. Figure <ref> shows the 15 clusters comprising more than three municipalities. Cluster 1 and Cluster 2 are the largest and have very flexible shapes. Cluster 1 is located in the center from east to west, while Cluster 2 covers the center and extends to the west and south. The dynamics of dengue outbreaks between July 2022 to July 2023 for these clusters are shown in Figure <ref>, reveling that the dengue outbreak started in the northern part of the state, with relative risk increasing in mid-November, reaching a peak around January 8th, and returning to low levels in March for Clusters 14 and 12. The outbreak in Cluster 4, also in the north, starts in December and lasts until April, while clusters 2 (north) and 8 (west) see an increase in relative risk starting in January. Conversely, the risks in clusters 1, 3, 7, 9, 10, 11, and 13, located between the center and south of Minas Gerais, start to rise after February and return to low levels between mid-May and June. 
Overall, it is evident that the outbreak begins in the northern regions and moves southward. However, clusters 15 and 6, despite being located south of clusters 2 and 1, experienced an earlier outbreak, compared to regions with similar locations. §.§ Seasonality of monthly dengue incidence in São Paulo, Brazil: April 2021 - April 2024 Dengue is a seasonal disease that is strongly influenced by mosquito activity, which depends on environmental and climatic factors <cit.>. The seasonal pattern vary across regions due to differing environmental conditions and the presence of micro-regions. Understanding these seasonal variations is essential for making informed public health decisions <cit.>. Therefore, in this application, we aim to cluster 664 municipalities in the state of São Paulo based on the seasonal patterns observed in the monthly incidence of dengue between April 2021 and April 2024. Considering Y_it as the number of new cases in region i at month t, the model we use is similar to Equation (<ref>). However, in this case, the expected number of cases E_it are computed per year and the latent effects f_c = (f_c,1, f_c,2, …, f_c,n)^T for cluster c are represented with a seasonal random effect, imposing the conditions f_c,t + f_c,t+1 + ⋯ + f_c,t+m-1∼𝒩(0, ν_c^-1) for t = 1, …, n-m+1 such as: π(f_c|ν_c, θ_𝒯) ∝ν_c^(n-m+1)/2exp(-ν_c/2f_c^TS_ff_c), where S_f is the structure matrix obtained from the imposed seasonal conditions. The remaining priors are specified similar to the previous application (Section <ref>). Algorithm <ref> was executed with 60000 iterations starting with c_0 = 200 clusters. The selected partition after convergence comprises 79 clusters. The clusters with more than 5 municipalities, sorted by size, are shown in Figure <ref>, while the remaining clusters are presented in Section 4 of the Supplementary Material. Note that the largest clusters (C1-C6) are located in the center, south, southeast, center-east, northwest, and south of São Paulo, respectively. The estimated relative risk, standardized by year, is shown in Figure <ref>. Most outbreaks show a seasonal pattern with higher risk between December and June. Cluster 1 and Cluster 5, located in the center and west, respectively, exhibit outbreak peaks around March. Other clusters such as C2-C4 and C6, located in the north, east, and south, exhibit outbreak peaks around April. The risk in the southeast São Paulo (C3, C4, C6, and C7) reached its peak earlier in 2023 than in 2022, which is not the case for the other clusters. § CONCLUSION In this article, we propose a spatial functional clustering model for response variables belonging to the exponential family, utilizing random spanning trees for partitioning and latent Gaussian models for the within-cluster structure. Our approach ensures that the resulting clusters comprise neighboring regions with similar latent functions, which can be represented by different processes. We demonstrate the adequacy of our model through simulation studies and then apply it to three real-world applications for disease mapping. Compared to previous spatial clustering methods in disease mapping, our approach allows all parameters of the within-cluster model to be cluster-specific, resulting in a more flexible setting. Additionally, the number of potential clusters is unlimited, regardless of the cluster's shape and size. 
Unlike traditional functional clustering algorithms, our method enforces spatial contiguity, constrained by the initial full graph, and can handle non-Gaussian data with latent functions represented by different processes. This approach enhances our ability to understand, predict, and mitigate outbreak patterns, ultimately reducing morbidity and mortality. Inference in our spatial functional cluster model is feasible by marginalizing all the parameters associated with the within-cluster model and proposing an update to the cluster structure θ_𝒯 independently of the within-cluster parameters. In this case, the acceptance probability of our Metropolis-Hastings algorithm depends on the marginal distribution, which is computed using the integrated nested Laplace approximation (INLA), similar to the approach in <cit.>. Once the partitions are sampled, the within-cluster parameters can be easily sampled conditioned on these partitions. Our first simulation study shows that the algorithm detects the true clusters when the mean functions or hyper-parameters of the clusters differ. In the second simulation, we compared spatial functional cluster models with a Poisson likelihood and with a Gaussian likelihood after a log-transformation. The results show that the Poisson model performs better than the Gaussian one, highlighting the need for spatial functional cluster models that accommodate different likelihoods. The experiments also illustrate how the clustering performs in other challenging situations. Misclassification can happen when both the mean functions and the hyper-parameters of different clusters are very similar, especially when the cluster sizes are imbalanced. This is reasonable given that the distance between the clusters in parameter space is reduced. In our first application, we identified clusters of U.S. states with similar relative risk patterns during the COVID-19 outbreak in 2020. Specifically, we found two clusters in the northeast that experienced a major outbreak in April, followed by two clusters that experienced an outbreak in July, and finally, a main cluster in the west that peaked significantly earlier than the others in the November wave. In the second application, we observed that dengue outbreaks in Minas Gerais, Brazil, between 2022 and 2023 began in northern clusters and gradually moved southward. Finally, in our last application, we detected municipalities with similar seasonal patterns in São Paulo between 2021 and 2024. In this case, two main clusters, located in the center and west, had peaks around March, while other clusters peaked in April. We also noticed differences in the shape of the seasonal behavior, and in some clusters, the peaks in 2023 occurred earlier than in 2022. In comparison to classical algorithms, sampling clusters based on random spanning trees is computationally expensive and might require a large number of iterations when used in complex models with large sample sizes. A significant improvement can be achieved by using adaptive sampling, where the proposal of the cluster structure is modified based on recent samples to enhance the proposed partitions. Our future work will focus on improving the sampling using an adaptive algorithm that employs spanning trees on an auxiliary graph, as recommended by <cit.>. In conclusion, we proposed a flexible spatial functional clustering method and demonstrated its use in disease surveillance.
However, this method is also applicable to clustering in other settings where the response variable comes from a distribution in the exponential family and the within-cluster models belong to the family of latent Gaussian models.
http://arxiv.org/abs/2407.13447v1
20240718121558
Kaluza-Klein discreteness of the entropy: Symmetrical bath and CFT subsystem
[ "Harvendra Singh" ]
hep-th
[ "hep-th" ]
Kaluza-Klein discreteness of the entropy: Symmetrical bath and CFT subsystem Harvendra Singh Theory Division, Saha Institute of Nuclear Physics 1/AF Bidhannagar, Kolkata 700064, India Homi Bhabha National Institute (HBNI) Anushaktinagar, Mumbai 400094, India Abstract We explore the entanglement entropy of CFT systems in contact with a large bath system, such that the complete system lives on the boundary of AdS_d+1 spacetime. We are interested in finding the HEE of a bath (system-B) in contact with a central subsystem-A. We assume that the net size of systems A and B together remains fixed while allowing variation in the individual sizes. This assumption is simply guided by the conservation laws. It is found that for large bath size the island entropy terms are important. However, other subleading (icebergs) terms also contribute to the bath entropy. These contributions are generally not separable from each other, and all of them add together to give rise to a fixed quantity. Further, when accounted for properly, all such contributions form part of the higher-entropy branch for the bath. Nevertheless, the HEE of the bath system should be subjected to a minimality principle. The quantum minimality principle S_quantum[B]={S[A], S_total+S[A]}_min is local in nature and gives rise to the Page curve. It is shown that the changes in the bath entropy do capture Kaluza-Klein discreteness. The minimality principle would be applicable in finite temperature systems as well. § INTRODUCTION The AdS/CFT holographic duality <cit.> has provided deep insight into our understanding of entanglement in strongly coupled quantum theories. Our focus here is on simple cases of the entanglement between two similar types of quantum systems with common interfaces. Generally one believes that the sharing of quantum information between systems is guided by unitarity and locality. Under this principle, the understanding of the formation of gravitational black-hole-like states evolving from a loosely bound pure quantum matter state, and of the subsequent evaporation process (via Hawking radiation), still remains an unsolved puzzle. It is nevertheless believed that the whole evolution process would be unitary and that all information inside the black hole interior (behind the horizon) will be recovered once the black hole has fully evaporated. Related to this aspect there is a proposal that the entanglement entropy curve for the bath radiation should bend once half the Page time is crossed <cit.>. This is certainly often true when a pure quantum system is divided into two smaller subsystems. But for mixed states of the Hawking radiation, or for finite temperature CFTs dual to AdS black holes, it is not that straightforward to obtain the Page curve. However, important progress has been made recently in certain models by coupling a holographic CFT to an external radiation system (bath), and in some other examples by involving nonperturbative techniques such as wormholes, replicas and islands <cit.>. Some answers to the difficult questions have been attempted. [ Also see a review of the information paradox along different paradigms <cit.> and the list of related references therein; see also [<cit.>-<cit.>].] In particular, the AMM proposal for generalized entanglement entropy <cit.> involves the hypothesis of an island (I) contribution, and includes the gravitational entropy of the respective island boundary (∂ I).
According to this proposal the `quantum' entropy of a 2-dimensional radiation bath subsystem (B) can be expressed as S_quantum[B]= [ Area(∂ I)/(4G_N) + S[B U I] ]_min It is based on a hybrid `gravity plus gauge theory' holographic model, and it includes the contribution of the (gravitational) island entropy to the bath entanglement entropy. The islands are usually disconnected surfaces inside the bulk JT gravity. The JT (conformally or nearly AdS) gravity is a dual theory of a `dot'-like quantum system on the boundary. The same dot lives on the boundary of an infinite CFT_2 bath system. In the low energy description the JT gravity is treated as a system which is in contact with a 2-dim radiation bath over a flat Minkowski coordinate patch. So there are both field theoretic (S[B U I]) as well as gravitational contributions (Area(∂ I)/(4G_N)) present in the formula ficti1. Secondly, there is a need to pick the lowest contribution out of a set of many such possible extrema, which may include entropy contributions of islands and the radiation. [In other extensions of the hybrid models one also includes wormhole contributions, see <cit.>.] Although complicated looking, the expression in ficti1 seemingly reproduces a Page curve for the bath radiation entropy <cit.>. However, the AMM proposal ignores the contributions of several subleading terms, which we shall collectively call `icebergs' terms. Here it should be clear that the icebergs are also contributions of various disconnected elements to the bath entropy, similar to the island entropy. The important feature of the AMM proposal is that it highlights the appearance of islands inside the bulk gravity. The islands are normally situated outside the black hole horizon. These arise by means of a dynamical principle, and usually the islands are associated with the presence of a bath system in contact with the quantum dot. We highlight in the present work that the other subleading (disconnected) contributions of the `icebergs' entropies have to be properly accounted for in the bath entropy. Here we will be extending our 2-dimensional proposal <cit.> to the case of higher dimensional CFTs. We provide the picture that, while island contributions no doubt exist, we should not ignore the (subleading) icebergs contributions to the bath entropy. If these are ignored we only end up with an incomplete interpretation of the Page curve for the quantum entropy. We show explicitly that once the island, the icebergs and the leading (pure) bath entropy are added together as a series, they give rise to a system-dependent constant contribution to the entropy of the bath system. These contributions are naturally inseparable from each other, as they compensate each other perfectly well no matter how large or small their individual values might be. We have explicitly shown this phenomenon for the limiting case of a quantum-dot-like system located at the interface of a symmetrical CFT bath system <cit.>. Correspondingly, we find that there are gravitational contributions to the bath entropy arising from the island and the subleading icebergs, for a system-A attached to a large bath (subsystem-B). We must note that the entanglement entropy of two such systems together (A U B) can be expanded as S_total[A U B]=S_pure bath+S_island+S_icebergs≡ S_l This expansion can always be done whenever system-B is sufficiently large compared to system-A, e.g. see the drawing in the lower figure of fig22b. Note S_l≫ 0 is the entropy contained in the total system (A U B).
It is a constant quantity once the total system size is fixed and the system is conserved. Furthermore, it is proposed that the quantum entropy of entanglement of a large bath subsystem-B should be obtained from the local minimality principle S[B]_quantum={S[A],  S_l+S[A]}_min=S[A] where S[A] is the entropy of system-A. To make it further clear, the above minimality principle works because both S[A] and S[B], being entropies of systems in contact, involve identical local entropies, corresponding to the common RT surfaces they share. Note that two systems in contact will have common interfaces, see fig.fig23b. The complementary bath system (B^c), which is semi-infinite on both sides, does not play a direct role here. Non-contact type systems should be studied separately. In conclusion, the above two equations reproduce the Page curve for the entropy of a large bath system, including for finite temperature systems. The formula is definitely valid at least in static (equilibrium) cases. For time dependent processes involving black hole evaporation, or if there is a continuous change in the total system size (e.g. due to a change in mass, energy, or horizon size), the scenario is expected to be similar at any point in time; hence it might be applicable for slow enough processes. The rest of the article is organized as follows. In section-2 we explain the island and icebergs contributions and define the generalized entropy formulation for the pure AdS case. On the boundary we take a finite size system in contact with a symmetrical bath. In section-3 we discuss a limiting case where system-A becomes very small and appears point-like (where we use the Kaluza-Klein scenario) and is in contact with a large CFT bath. From the Kaluza-Klein perspective the situation becomes similar to that of 2n parallel quantum (strip) systems attached to some large bath. The results for the AdS black hole cases are covered in section-4. Section-5 contains a summary. § ISLANDS AND ICEBERGS AND BATH SUBSYSTEM ENTROPY Let us consider a system (A) in contact with a bath subsystem (B) in a fully symmetrical set-up, both having finite sizes and living on the boundary of the AdS_d+1 spacetime. It is assumed that both systems A and B are made of identical field species, i.e. described by the same field content, for simplicity of the problem. The pure AdS_d+1 spacetime geometry is described by the following line element ds^2=(L^2/z^2) (- dt^2 + dx_1^2 +⋯ + dx_d-1^2+ dz^2) where the constant L represents a large radius of curvature (in string length units) of the spacetime. The coordinate ranges are -∞≤ (t, x_i)≤∞ and 0≤ z ≤∞, where the coordinate z represents the holographic direction of the boundary theory.[ The Kaluza-Klein compactification over a circle (say x_d-1≃ x_d-1+ 2π R) produces a conformally anti-de Sitter solution in lower dimensional gravity. Particularly, for the d=2 case one obtains a Jackiw-Teitelboim type 2-dim dilatonic background <cit.>, ds^2_JT=(L^2/z^2) (- dt^2 + dz^2), e^-2(ϕ-ϕ_0)= √(g_xx)=L/z, where ϕ is the 2-dimensional dilaton field of the effective bulk gravity theory, all written in the standard convention (effective string coupling vanishing near the boundary). The two Newton's constants are related as 2π R/G_d+1≡ 1/G_d, with G_2 being dimensionless for AdS_2.] The CFT_d theory lives on the d-dimensional (t,x⃗) flat Minkowski spacetime describing the boundary dynamics of the AdS_d+1 bulk geometry.
Consider a set-up in which a finite size bath subsystem-B lives on the coordinate patches [-(b+a),-a] & [a, (b+a)] along the x_d-1 direction, whereas system-A, sandwiched in between the bath, lives over the patch [-a,a]. The sketches are provided in figures fig23b, fig22b for clarity. The entire set-up is arranged in a symmetrical way for convenience. The states of system-A and the bath subsystem-B are obviously entangled. The complementary bath system B^c lives over the coordinate patches [b+a,∞] and [-(b+a),-∞]; B^c is semi-infinite on either side. From the Ryu-Takayanagi holographic prescription, the entanglement entropy of a d-dimensional CFT strip-shaped subsystem of width l (l=2a+2b) is given by S_total[A U B]= (L^d-1V^(d-2)/((2d-4) G_d+1))( 1/ϵ^d-2-b_0^d-1/(a+b)^d-2)≡ S_l where ϵ≃ 0 is the UV cut-off of the CFT_d (we shall only consider examples with d>2). V^(d-2) is the volume of the (d-2) spatial directions perpendicular to the strip width direction x_1.[ Note b_0=(1/(2(d-1))) B(d/(2d-2),1/2) is a specific dimension dependent coefficient involving an explicit Beta-function, see more in the appendix of <cit.>.] We take l sufficiently large but fixed, so that S_l has a constant value. Our aim is to determine the entanglement entropy when the size b is varied from b≃ 0 to b≃ l/2, by hand. We simply assume local conservation laws, so that the net gain (loss) of system-A is compensated by an equal loss (gain) in the size of the bath subsystem-B and vice versa. Note that such a process will keep l fixed. Especially for explicit time dependent cases one may have some definite rate of change, while ȧ=- ḃ holds due to conservation of energy. The exact rate of loss or gain and the mechanism by which it may happen are not important here, and the actual details of the physical process are also not required. All we are assuming is that local conservation laws are at work for the complete system within the total box size l.[ Any explicit time dependent processes are not studied here.] Obviously we are assuming here that the system and the bath are made up of identical (CFT) field content. Let us consider two extreme cases below. Case-1: Independent entropies When b≪ a, i.e. when the bath subsystem-B is very small in size, the entanglement entropy of the larger system-A can be found from its extremal surface area as <cit.> S[A] = (L^d-1 V^(d-2)/((2d-4) G_d+1))( 1/ϵ^d-2 -b_0^d-1/a^d-2) while the extremal surfaces of the bath subsystem on the two sides become disconnected. The entropy of the small subsystem-B becomes, independently, S[B]= (L^d-1V^(d-2)/((d-2) G_d+1))( 1/ϵ^d-2-2^d-2 b_0^d-1/b^d-2) Eq.fin1 involves the area contributions from two disconnected but identical extremal surfaces, which contribute to the bath entropy, see the upper graph in fig.fig22b. Note that S[A] and S[B] have local parts which depend on the individual strip parameters a and b, respectively. Thus both entropies are completely independent even though the systems are in contact with each other. The entropy of the small bath system-B is given by equation fin1 until the crossover point is reached. After the crossover, new extremal (connected) surfaces emerge, as drawn in the lower graph of fig.fig22b. We will discuss this next. Case-2: Entropies with identical local components When b≫ a, in this regime of a large bath the entanglement entropy of system-B is given by the equation S[B]= (L^d-1V^(d-2)/((2d-4) G_d+1))( 1/ϵ^d-2 -2^d-2 b_0^d-1/(l-2b)^d-2)+S_l where S_l is the total entropy of the two systems together, which is a fixed quantity. The competition between the two candidate configurations can also be checked numerically, as in the short sketch below.
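The following rough numerical sketch (in Python, in illustrative units where L^d-1 V^(d-2)/G_d+1 = 1 and with an arbitrary UV cut-off; the function names are ours) evaluates the disconnected configuration of eq. fin1 and the connected configuration of eq. fin2 as b is varied at fixed l.

```python
import numpy as np
from scipy.special import beta as Beta

d, eps = 3, 1e-3                      # boundary dimension (d > 2) and illustrative UV cut-off
b0 = Beta(d / (2 * d - 2), 0.5) / (2 * (d - 1))

def s_strip(half_width):
    """RT entropy of one strip of width 2*half_width, in units L^{d-1} V^{(d-2)} / G_{d+1} = 1."""
    return (1.0 / (2 * d - 4)) * (1.0 / eps**(d - 2) - b0**(d - 1) / half_width**(d - 2))

l = 2.0                               # fixed total size l = 2a + 2b
for b in np.linspace(0.1, 0.9, 9):
    a = l / 2 - b
    disconnected = 2 * s_strip(b / 2)         # Case-1: one surface per bath component (eq. fin1)
    connected = s_strip(a) + s_strip(a + b)   # Case-2: local piece plus the fixed S_l (eq. fin2)
    print(f"b = {b:.1f}   Case-1: {disconnected:9.2f}   Case-2: {connected:9.2f}")
```

For small b the disconnected (Case-1) configuration is the smaller of the two, while for b approaching l/2 the connected (Case-2) configuration takes over; this is the crossover referred to above.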
S_l gets its contribution from the outer RT surface connecting the two far ends of the symmetrical bath on either side; follow the lower graph in fig.fig22b. Note S_l is independent of the individual sizes b and a, and it is a fixed quantity for given l. We might still vary the individual system sizes such that we keep l fixed. Now for the smaller system-A of strip width 2a the entropy is given by S[A]= (L^d-1V^(d-2)/((2d-4) G_d+1))( 1/ϵ^d-2 -2^d-2 b_0^d-1/(l-2b)^d-2) where we have simply used 2a=l-2b. From eqs. fin2 and fin2j, we observe that S[A] and S[B] differ only by an overall constant, S_l; otherwise they have the same type of local dependence on b. It also implies that under a small change in the system sizes ∂ S[A]/∂ b =∂ S[B]/∂ b, ∂ S[A]/∂ a =∂ S[B]/∂ a. Put in other words, S[A] and S[B] represent two independent extrema of the same observable and only differ up to a constant. Classically the areas of these extremal surfaces are such that S[B]>S[A], but the quantum entropy of the large bath would instead be governed by the minimization S^quantum_b≫ a[B] ={S[A], S[A]+S_l}_min=S[A] . This states that the quantum entropy of a large bath is the same as the entropy of the smaller system-A. Furthermore, we note that the quantum entropy of the bath decreases as b gets larger and larger, but at the same rate as that of system-A. So the Page curve for the entropy of a large bath system (b≫ a) follows from the principle of minimum entropy, if there exist multiple extrema separated by constants, like S_l here. The system-A and bath subsystem-B entropies otherwise have identical local dependences. This is the net conclusion of the proposal given in eq.fin2s. Although we might still wonder whether the bath entropy ought to have been taken simply as S[B] given in fin2, which is the net classical area of the bulk extremal surfaces, as per the quantum minimality proposal fin2s the bath entropy has to be given by the smaller quantity S[A], mainly because both have the same local dependence. The latter is in agreement with the Page curve expectation and the unitarity for quantum systems that follows from conservation laws. The quantum entropy proposal should be taken as the complete result for extremal CFTs (at zero temperature) in contact with a symmetrical bath. Island and Icebergs: full system entropy We did not explicitly encounter any island-like isolated (disconnected) surface area contributions to the entropy so far.[In higher dimensional bulk theories the `islands' are multidimensional (co-dim-2) surfaces. These are no longer point objects as in the JT gravity and radiation bath models.] So where are these contributions hidden in the above analysis? We have obtained the Page curve using the quantum minimality principle for a large bath system without knowing about the island (fragment) entropy contributions. To understand this we need to dissect the total entropy of system-A and system-B. It is vital to understand first what S[A U B] teaches us, even though it is only a system parameter and a fixed quantity. We explore this systematically for d=3, 4 and 5 dimensions. (For the d=2 CFT case, this analysis has been presented in <cit.>.) i) For d=3 CFT: The total system entropy S_l is given by (l=2a+2b) S_l=S[A U B] = (L^2 l_2/(2 G_4))( 1/ϵ-b_0^2/(a+b)) where l_2 is the length of the strip. Making an expansion on the r.h.s.
in the small ratio s=a/b ≪ 1, one can find S_l = (L^2 l_2/(2 G_4)) (1/ϵ- b_0^2/b) +(L^2 l_2/(2 G_4))(b_0^2/b)(s-s^2+s^3- ⋯ ) ≡ S^(0)_bath + S_Island + S_Icebergs where the expressions in the last line have been identified as: 1) the entropy of the pure bath: S^(0)_bath=(L^2 l_2/(2 G_4)) (1/ϵ-b_0^2/b) ,      2) the subleading gravitational entropy of the island-ic boundary: S_island= L^2 l_2 b_0^2 a/(2 G_4 b^2),     3) all remaining sub-subleading (icebergs) entropies together: S_icebergs=- (L^2 l_2 b_0^2/(2 G_4))( a^2/b^3- a^3/b^4 + ⋯). Note that S^(0)_bath in eq.dind3ab1 represents the HEE of a CFT_3 strip system having width 2b (the net size of the bath subsystem on both sides). It is the entropy of the pure bath system without the presence of system-A. In contrast, S_island, which genuinely represents the interactions between bath-B and system-A, is the `gravitational' entropy of an island-like boundary situated at z= b (e.g. corresponding to an island-like region lying between z=∞ and z=b inside the AdS_4). The codim-2 island boundary located at z=b has the geometrical area A_island=2L^2 l_2 a/b^2. It is proportional to the actual size 2 l_2 a of the boundary CFT system-A. Thus the geometric entropy of the island, eq.j7t, may be expressed as S_island= A_island/(4G̃_4). Here G̃_4=G_4/b_0^2 is treated as an effective Newton's constant. The other sub-subleading terms, which are altogether named `icebergs' entropies, include the contributions from the remaining terms in the small-s expansion. Thus the series expansion reveals that the various terms in the series find3ab1, although they may have distinct interpretations, are actually inseparable from each other. No matter what their individual values might be, all the terms are important because they add up nicely to constitute S_l, i.e. the total entropy. The total entropy, including island and icebergs, thus has a constant value for given l. (S_l depends only on the parameter l, which is a measure of the total system size. In this sense S_l is actually a global quantity.) One could still vary a and b individually while keeping l(=2a+2b) fixed. That means the bath size could grow at the cost of the size of system-A and vice versa, under mutual local exchanges or under processes which may lead to a shift of the mutual interface of systems A and B. (This appears akin to what might happen in black hole evaporation processes also, e.g. through Hawking radiation, where the Hawking radiation is treated as the bath.) Perhaps simple CFT models may teach us something about the black hole evaporation process! The relevant entropy graphs are plotted in figure figent3a with a discussion in the caption for the small and large bath cases. Unitarity and the locality of entropy The question still arises what would happen if we ignored the subleading (icebergs) contributions in eq.find3ab1, due to their smallness, being subleading at O(s^2) and beyond. Although one is free to do so, we immediately find that the r.h.s. of S_l will then start depending on a and b in independent ways! This will lead to varied conclusions regarding the Page curve, unitarity and the information content of the systems. Alternatively, we may decide that the S_icebergs terms should not be dropped from the leading terms in any situation. In other words, precise knowledge of the isolated contributions is vital for unitarity! Furthermore, the island and the icebergs would remain invisible, as these contribute to eq. fin2, which is a higher entropy extremum and hence unphysical as per the `quantum entropy' proposal for bath-B.
The latter reason arises solely from the quantum entropy principle that for a large bath the entropy should be taken as the smaller value amongst (S_l + S[A]) and S[A], where S_l≫ 0 is the total entropy. ii) d=4 case: For CFT_4 the total system entropy S[A U B] can be written as S_l = (L^3 l_3 l_2/(4 G_5))( 1/ϵ^2- b_0^3/(a+b)^2) where l_2 & l_3 are the sizes of the two transverse spatial coordinates of the CFT_4 on the boundary of the AdS_5 geometry. The systems are separated along the x_1 direction. By making an expansion on the r.h.s. of find4ab, for small s we get S_l = (L^3 l_3 l_2/(4 G_5))( 1/ϵ^2 -b_0^3/b^2 +(b_0^3/b^2) s (1+s)^-2) = (L^3 l_3 l_2/(4 G_5))( 1/ϵ^2 -b_0^3/b^2 +(b_0^3/b^2)(s-2s^2+O(s^3) ) ) ≡ S^(0)_B + S_Island + S_Icebergs where the break-up of the expressions in the last line is: the leading pure bath entropy, S^(0)_B=(L^3 l_3 l_2/(4 G_5)) (1/ϵ^2-b_0^3/b^2) ,      the island-ic (gravitational) entropy, S_island= L^3 l_3 l_2 b_0^3 a/(4 G_5 b^3)≡ A_island/(4 G̃_5) ,     and the other subleading icebergs entropies: S_icebergs=- (L^3 b_0^3 l_3 l_2/(2 G_5))[ a^2/b^4- a^3/(3b^5) + ⋯]. Note again that S_island is proportional to 2 L^3 l_3 l_2 a/b^3, which is the geometrical area of the extended 3-dim island boundary located at z=b inside the AdS_5 spacetime, with a redefined 5-dim Newton's constant G̃_5=G_5/b_0^3. The related entropies are plotted in figure figent4a. They exhibit similar properties as in the d=3 case for the small and large bath regimes. iii) For d=5 case: The above results clearly tell us that for CFT_5 the subsystem entanglement properties would also be similar to the d=3, 4 cases. We may convince ourselves that this will be a commonly occurring phenomenon whenever there are two systems (system and bath) in contact, i.e. separated by interfaces. The islands and icebergs typically arise as a result of interactions and information sharing between the degrees of freedom of the systems. § LOWER DIMENSIONAL KALUZA-KLEIN PERSPECTIVE: MULTIPLE THIN STRIPS It is important to discuss a special case of the systems described by eq.fin2. Consider a very small size for system-A, such that a≃ R, where R is the Kaluza-Klein scale of the theory (analogous to the JT-gravity and near-CFT_1 case). (We expect that for such a small size system, with a narrow width, system-A could effectively be treated as being compactified on S^1, with compactification radius R, but R≫ϵ.) In that case we can safely take a≃π n R, with n being a positive integer. [Note that it is our assumption that there is an intermediate Kaluza-Klein compactification on S^1 when the size a becomes approximately ≃ R at shorter scales. Then it becomes plausible to study a lower dimensional dual (gravitational) description for system-A (viewed as multiple strips of narrow width π R wrapped along the circle (KK direction) but fully extended along the other transverse directions).] For simplicity we shall consider only small n values, and also ϵ≪π R≪ b (if there is any difficulty one can simply take n=1). The system-A can essentially be treated as an assembly of 2n narrow (parallel) strips sandwiched between the symmetrical bath system-B on either side (of net size 2b). We have a situation as depicted in figure fig21b, where the transverse directions x_2 and x_3 are suppressed.
An expansion of the r.h.s of fin2, for small n ≪ b/(π R), gives the bath entanglement entropy (for d=3) S[B] =S_l +(L^2 l_2/(2 G_4))(1/ϵ-b_0^2/(n π R)) ≡ ( S^(0)_bath + S_island+ S_icebergs) +S_2n-strips[R] where the leading term S^(0)_bath is the same as described earlier, and the other expressions are S_island= L^2 l_2 n/(4 G_3 b^2),     S_icebergs=-(L^2 l_2/(4 G_3)) (n^2π R b_0^2/b^3) + O(n^3). Here G_3=G_4/(2 π R) is the 3-dimensional Newton's constant. The islandic contribution, especially for n=1, is similar to the gravitational entropy of the island boundary situated at z=b inside the bulk. The S_icebergs term includes all the remaining subleading contributions. Eq.fin2an involves an infinite perturbative series. Actually it would not be wise to separate the icebergs from the first two leading terms in fin2an at all! They all remain important, as the total sum of these terms (within the parenthesis) adds up to the l-dependent entropy S_l. This is quite clear from the starting line of the perturbative expansion in fin2an. Meanwhile, the entanglement entropy of the 2n parallel strips, assembled side by side and in contact with the bath, is simply (for d=3) S_2n-strips[A]= (L^2 l_2/(2 G_4))(1/ϵ-b_0^2/(n π R)) and the bath system entropy is S[B]=S_l + S_2n-strips[R]. The local R-dependent terms in eqs. fin2an2 and fin2an1 are identical, and since the two expressions differ only by an overall constant S_l, the actual `quantum entropy' of the bath would be given by the local entropy contained in system-A (the multiple narrow strips) only. So under the minimality selection rule the quantum entropy of the bath should simply be stated as S^quantum[B] ={S_2n-strips, S_l+S_2n-strips}_min= S_2n-strips where the entropy of all 2n parallel strips arranged together is given above. Note l_2 is the length of these strips, and the individual strip width is taken as π R. This is the net entanglement entropy of a large bath system, when b≫ R, i.e. towards the end of the Page curve. It gets its contribution entirely from the single RT surface homologous to the central 2n-strip system. Note the narrow strips have width π R(≪ b). It is important to note that the bath entropy is discretized, since the KK level n takes integral values (n∈ Z). Obviously we would trust these results for small n values only. For large n it would be better to use the noncompact continuum description in one higher dimension. It can be concluded that the entropy of a large bath will necessarily show discrete jumps as and when the KK level changes. This is an example of a strongly coupled system of 2n parallel sheets (system-A) placed in the middle of large symmetrical CFT baths on either side. The conclusions remain unchanged even in the infinite size limit (b→∞,  l→∞) and thus should be treated as universal. We conclude that we would not be able to see the island and the icebergs physically, as their net contribution to the entropy always results in a fixed constant. §.§ Entropy spectrum of strip systems We are interested in exchanging a small number of strips between the bath (B) and the multi-strip system (A). This will lead to changes in the systems' entanglement entropy. Note that a small number of strip exchanges between B and A will not change the total entropy S_l. Thus the net change in the entropy of system-A between KK-strip number n_2 and KK-strip number n_1 can be found to be (n_1<n_2) △ S_1→ 2 =S_strips[n_2]-S_strips[n_1] =(L^2 l_2/(2 G_4))(1/n_1-1/n_2) b_0^2/(π R) ≡ (1/T_E)△ E_1→ 2. In the last line the entanglement temperature of the bath can be taken as T_E ≃ 1/l <cit.>.
Empirically we may determine that the typical change in the energy density of the systems (energy per unit strip length) is △ E_1→ 2/l_2 = (L^2 b_0^2/(2π G_4 l R))(1/n_1-1/n_2) = (L^2 b_0^2/(4π^2 G_3 l R^2))(1/n_1-1/n_2). This energy spectrum is obviously discrete in nature! A fixed quantum of energy is required to be exchanged between bath-B and the multi-strip system-A during an exchange of the strips. Note we are discussing the CFT ground state (zero temperature) only. The spectrum appears analogous to an atomic spectrum, but system-A here is made up of a discrete number of strips, and a finite number of strips (quantum matter, not photons) are exchanged between the various KK levels. This exchange process entails KK-level `jumps'. Here n_1=1 may be treated as the lowest level (the smallest, single-strip system) while n_2>1 corresponds to higher levels (more than one strip). Note that a level jump in the strip number is necessarily associated with discrete (quantum) CFT matter exchanges between the system and its surrounding bath. The relevant physical scale is the compactification radius R. A plausible interpretation of the above energy spectrum may be given as follows. The discrete KK modes have typical momentum ∼ 1/R. (The strip length l_2 is some large fixed quantity, L≫ 1 is the AdS radius of curvature, and G_3≡ G_4/(2π R) is the 3-dim Newton's constant.) Thus △ E_1→ 2∝ 1/R^2 appears primarily due to the KK momentum modes in this case. The energy-matter exchanges between the central strip system-A and the bath are precise and discrete! We emphasize that with a more careful analysis of the subleading terms in the entropy, we might be able to see winding mode contributions! We hope to report on this in subsequent communications. A similar analysis for the 4-dimensional CFT_4 gives the energy density exchanged as △ E_1→ 2/(l_3 l_2) = (L^3 b_0^3/(8π^3 G_4 l R^3))(1/n_1^2-1/n_2^2) where (l_3 l_2) is the transverse size of the 3d strips and the individual width of the strips is ∼π R. We will show that the change in entanglement entropy under strip exchange (or KK level jumps) can be determined for finite temperature cases as well. There the |△ E| will involve thermal corrections. Perturbatively we shall estimate and find that the leading thermal correction grows proportionally to R, the width of the strips. We guess that these corrections involve string winding modes. § FINITE TEMPERATURE SYSTEMS The previous exercise can be extended to the case of a CFT at finite temperature as well. The whole process goes in parallel for any CFT system which is in a mixed state. The limitation is that for the d>2 case the HEE at finite temperature can only be estimated by using perturbative methods. The alternative option would be to resort to a numerical approach. Consider the asymptotically AdS_d+1 geometry that has a Schwarzschild black hole at the center, ds^2=(L^2/z^2)(-f(z) dt^2 +dz^2/f(z) + dx_1^2+⋯ +dx_d-1^2) where f(z)=(1-z^d/z_0^d), with z=z_0 being the location of the black hole horizon. There is a finite temperature in the field theory on the AdS boundary. Assume now that the strip shaped system-A with strip width 2a is taken in thermal equilibrium with the symmetrical bath system-B on either side, so that system-A is located in the middle of the bath (the net width of the bath system being 2b). The other transverse directions of the systems are infinitely extended. Here both systems A and B have the same temperature. We only discuss the case a≪ b, because the a ≫ b case is rather straightforward.
The entanglement entropy of the strip like bath system-B on the boundary of btz23 can be written as S[B]= S_l(l, z_0) +S(2a, z_0) =S_l(l, z_0) +S(l-2b, z_0) In our notation a functional S(x, z_0) represents the HEE for strip system having width x obtained from area of extremal surfaces in a black-brane AdS geometry <cit.>. The z_0 dependences indicate there is finite temperature effect (horizon dependence). Note the first term on the r.h.s. of ft1 involves those constants which treat system-A and bath-B together as single entity. It measures a global information involving A and B. Only second term has the local entropy information regarding central system-A like the size and location of interface boundaries between system-A and bath-B. Usually for small width x one can estimate S(x, z_0) using perturbative tools as discussed in <cit.>. It is the best method so long as we have x≪ z_0. Also this is what one would be requiring most when approaching the end of the Page curve involving two systems A & B. Typically the perturbative series looks like <cit.> S(x, z_0)= S_0(x) +S_1(x,z_0)+S_2(x,z_0)+⋯ where leading term S_0(x) is the entropy of AdS ground state, see eqs.fim1 and fin1, while the first order term is given by S_1(x, z_0)= L^d-1V^(d-2) 16 G_d+1(d-1)a_1 x^2 (d+1)b_0^2 z_0^d note it has explicit horizon (z_0) dependence, and same is true for second order term and so on. (Here a_1 and b_0 are specific dimension dependent coefficients involving explicit Beta-functions, see appendix of <cit.>). At the same time the entanglement entropy of central system-A, having width 2a, is given by S[A]= S(2a, z_0) = S(l-2b, z_0) In second equality we used l=2a+2b, as the bath has size 2b. Again one can compare that the expressions in equations ft1 and fin1a differ only by an over all constant (global) quantity S_l(l, z_0), that depends only on full size (system plus symmetrical bath) l. From the perspective of system-A S_l is some global fixed quantity. Therefore we conclude that these two equations represent two different extrema of the same entropy observable, primerily because both entropies contain identical local terms. The local terms arise from extremal RT surface that connects common interfaces of systems A and B. These local terms contain mutual entanglement information which two systems share between them. Further eqs. ft1 and fin1a are telling us that they contain identical entanglement entropy (through local exchanges along common interfaces) for system A and system B. Classically they differ only up to overall constant. We conclude that `quantum' entanglement entropy measure at finite temperature for large bath subsystem-B (i.e. b≫ a) ought to be taken as the smaller value between ft1 and fin1a. Therefore the quantum entropy of any large thermal bath system (in contact with smaller system-A) would be S_quantum[B] = {S[A], S_l+ S[A],⋯}_min=S[A] . The above result in fin17 is consistent with the expectations of the quantum entropy Page curve when bath is very large. Note this conclusion is independent of how large l might be, so long as b≫ a is obeyed! In infinite bath limit b→∞ ( l→∞, a=fixed), the constant S_l has a limit: S_l(l,z_0)→ S_BH(z_0) where S_BH is black hole entropy. So we will get lim_b→∞ S_quantum[B] = {S[A], S_BH+S[A],⋯}_min≡ S[A]=S(2a, z_0) . The right hand side of the eqs. fin17 and fin17a are identical. 
Hence for mixed states irrespective of the overall system size, the entropy of a large bath system is determined by the entropy of smaller system-A, under quantum minimality principle. S[A] again accounts for the smallest entanglement entropy between two contact systems A & B, even in thermal case! Perhaps an interesting stage (during blackhole evaporation) can arise when system-A size becomes very small such that it becomes comparable to intermediate Kaluza-Klein scale of the theory, i.e. a≃ nπ R, and b≃l 2-π n R, where R is radius of Kaluza-Klein circle. Then bath entropy may be expressed as S_quantum[B]= S(2π n R, z_0) which is the entropy of 2n independent strips (or sheets) put together. It is the smallest quantum entropy for a large bath, and is quantized by the virtue of the presence of KK scale in the (bulk) theory. However we should trust this result for small n∈ Z values only. It is clear that any change in large bath entropy tends to be discrete in nature, if there exists KK scale. Similarly, one can deduce that during quantum evolution of a black hole state after long times, i.e. towards the end of evaporation, when system-A will shrink to zero size, the bath entropy S_quantum[B]→ 0 . That means all the information which was kept within system-A has been transferred to system-B (a very large bath). No loss of information can be expected in this process even for thermal systems! §.§ Spectrum of narrow KK strips When the size of the system-A becomes very small such that it is closer to a Kaluza-Klein radius, R, we can set system size as 2a ≃ 2 n π R, for some integer KK-level n ≥ 1. (n=0 means there is no system-A.) We are now interested in exchang of a small number of KK strips between bath (B) and the strip system (A). We may call this process as `sheets of CFT matter' exchange between bath and system-A at their mutual interfaces. This will result in change of level n for the system-A. The resulting change in entanglement entropy from level n_2 to n_1 can be calculated (n_1<n_2) as △ S_1→2 =S_strips[n_2]-S_strips[n_1] =L^2 l_2 2 G_4(1 n_1-1 n_2) 1π R+ △ S_thermal ≡1 T_E△ E_1→ 2 Assuming entanglement temperature can be set as T_E= 1 l. So empirically it can be determined that change in energy of strips is discrete, △ E_1→ 2 = L^2 l_2 G_3 l (2π R)^2 (1 n_1-1 n_2) + △ E_1→ 2^thermal A conclusion is drawn from here that a discrete quantum of energy would have to be exchanged between outer bath system-B and central (strip-like) system-A for finite temperature CFT. The length of strips l_2 is some large value (fixed), L≫ 1 is AdS radius of curvature, and G_3 is the Newton's constant. The thermal part of entropy is difficult to estimate exactly. However in the regime of our interest, when a ≪ z_0, we can evaluate it perturbatively. From ther1 up to first order (say, for the CFT_3) △ E_1→ 2^thermal =L^2 l_2 32 G_3 la_1 π (n_2^2-n_1^2)R 2b_0^2 z_0^3 Thus schematically we get |△ E_1→ 2|∝1 n_1n_2 R^2+ # (n_2+n_1) R z_0^3 Indeed the first term appears primerily due to the KK momentum modes. While the second term (due to thermal correction) grows linearly with R, which we guess presumably is consequence of string winding (wrapping) modes. We conclude that these energy and matter exchanges are all discrete! The same analysis can also be done for other CFT_d. § SUMMARY We have proposed that quantum entropy of entanglement for a large bath system-B (CFT) when in contact with small subsystem-A follows quantum minimality principle S_quantum[B]= {S[A],S[A]+S_l, ⋯}_min =S[A]. 
where S_l is full system (AUB) entropy. Any small fluctuation in the size of system-A (due to the matter exchange with bath) would not alter this conclusion provided systems follow conservation laws. Thus the equation realizes the Page curve for the entropy of quantum matter in contact with sufficiently large bath system. This conclusion is based upon the observation that for small subsystem-A and relatively large bath-B the respective entanglement entropies differ only by an overall constant. The constant does depend on the total systems size (l), which is fixed. For this reason S_l is conserved and quantum information it contains is essentially global. We have explicitly shown that islands and subleading entropis (icebergs) contribute to the unphysical extremum of bath entropy. Actually all these contributions form various parts of S_l only. The (physical) quantum entropy of bath however does not get contribution from these fictitious parts! In this light the entropy expression ficti1 is very close to our proposal, but it is only approximate and may not cover full account of quantum entanglement between systems, mainly it ignores vital subleading contributions beyond the islands. On the contrary, we have shown that there will be an infinitum of such subleading contributions. It is shown to be true for all CFT_d systems in equillibrium. Furthermore we have analysed our results when subsystem-A size becomes `point-like', similar in size as Kaluza-Klein scale of the theory, if there exist such an scale. As the small system size approaches KK-scale, we find necessary discreteness in the entropy and the energy spectrum due to existence of low lying KK towers. Should there be no spontaneous compactification scale in the theory, the entropy of a large bath would vanish smoothly as and when the subsystem-A disappears. In summary it is indicated that the change in bath entropy does capture Kaluza-Klein discreteness. 1cm Acknowledgments: It is pleasure to thank Stefan Theisen for several insightful discussions on this subject. I am thankful to MPI Golm for the kind hospitality where part of this work was carried out. The financial support from the Alexander-von-Humboldt foundation is also highly acknowledged. .5cm § AN EFFECTIVE CONSTRUCTION OF THE HYBRID GRAVITY AND CFT SYSTEMS For small size central CFT subsystem-A, such that its size 2a≈ 2π R, i.e. when the system size can approximately fit within the Kaluza-Klein radius, the system-A may be treated as being point like. The symmetrical bath subsystem-B on either side is being comparatively very large so it can continue to be described by respective (noncompact) CFT. However, for all practical purposes, with out any loss of physical picture, the system-A can also be replaced by dual `near AdS' geometry in one lower spacetime dimensions. The Newton's constant for near-AdS bulk geometry would become G_d=G_d+1/(2π R). We have tried to draw these situations in the figure figeent23Pen for systems A, B and the compliment system B^c. The mixed gravity and CFT systems set up has been a favourable arrangement for an island proposal <cit.>. .5cm 99malda J. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998) 9711200; S. Gubser, I. Klebanov and A.M. Polyakov, Phys. lett. B 428, 105 (1998) 9802109; E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998) 9802150. pati A.K. Pati and S.L. Braunstein, NATURE 404, 164 (2000); S.L. Braunstein and A.K. Pati, Phys. Rev. Lett. 98, 080502 (2007). pati1 J. R. Samal, A. K. Pati, and Anil Kumar, Phys. Rev. Lett. 
106, 080401 (2011). almheri A. Almheiri, R. Mahajan and J. Maldacena, “Islands outside the horizon”, 1910.11077. replica19 G. Penington, S. H. Shenker, D. Stanford and Z. Yang, “Replica wormholes and the black hole interior”, 1911.11977; A. Almheiri, T. Hartman, J. Maldacena, E. Shaghoulian and A. Tajdini, “Replica Wormholes and the Entropy of Hawking Radiation”, JHEP05 (2020) 013, arXiv:1911.12333. page D. N. Page, “Average entropy of a subsystem”, Phys. Rev. Lett. 71 (1993) 1291–1294, [gr-qc/9305007]. raju S. Raju, “Lessons from the Information Paradox”, arXiv:2012.05770. susski L. Susskind, 1810.11563; Fortschr. Phys. 64 (2016) 24; D. Stanford and L. Susskind, Phys. Rev. D90 (2014) 126007. CA2 A.R. Brown , D.A. Robert, L. Susskind, B. Swingle, and Y. Zhao, Phys. Rev. Lett 116 (2016) 191301; A.R. Brown , D.A. Robert, L. Susskind, B. Swingle, and Y. Zhao, Phys. Rev. Lett 93 (2016) 086006; law1 A. Bernamonti et.al, Phys. Rev. Lett. 123 (2019) 081601. hashi K. Hashimoto, N. Iizuka, and S. Sugishita, Phys. Rev D96 (2017) 126001; K. Hashimoto, N. Iizuka, and S. Sugishita, 1805.04226JT R. Jackiw, "Lower Dimensional Gravity", Nucl. Phys. B252, 343 (1985). JT1 C. Teitelboim, " Gravitational and Hamiltonian Structure in Two Space-Time Dimensions", Phys. Lett. 126B, 41 (1983). RT S. Ryu and T. Takayanagi, 862006181602, 0603001; S. Ryu and T. Takayanagi, "Aspects of Holographic Entanglement Entropy", JHEP 0608 (2006) 045, 0605073. HRT V. Hubeny, M. Rangamani and T. Takayanagi, JHEP 0707 (2007) 062, arXiv: 0705.0016 [hep-th]. jyoti Jyotirmoy Bhattacharya, Masahiro Nozaki, Tadashi Takayanagi, and Tomonori Ugajin Phys. Rev. Lett. 110, 091602. hs2015 R. Mishra and H. Singh, JHEP 10 (2015) 129, e-Print:1507.03836 [hep-th]; S. Maulik and H. Singh, JHEP 04 (2021) 065, e-Print: 2012.09530 [hep-th] hs2022 H. Singh, "Islands and Icebergs may contribute nothing to the Page curve", e-Print: 2210.13970 [hep-th] ]
http://arxiv.org/abs/2407.11935v1
20240716172634
Learning Multi-view Anomaly Detection
[ "Haoyang He", "Jiangning Zhang", "Guanzhong Tian", "Chengjie Wang", "Lei Xie" ]
cs.CV
[ "cs.CV" ]
SUBMITTED TO IEEE TRANSACTIONS ON Multimedia Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals Learning Multi-view Anomaly Detection Haoyang He, Jiangning Zhang, Guanzhong Tian, Chengjie Wang, Lei Xie Haoyang He and Lei Xie are with the State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China (e-mail: haoyanghe@zju.edu.cn;leix@iipc.zju.edu.cn). Guanzhong Tian is with the Ningbo Innovation Center, Zhejiang University, Hangzhou 310027, China (e-mail: gztian@zju.edu.cn). Jiangning Zhang and Chengjie Wang are with the YouTu Lab, Tencent, Shanghai 200233, China (e-mail: 186368@zju.edu.cn; jasoncjwang@tencent.com). Received —; accepted — ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT This study explores the recently proposed challenging multi-view Anomaly Detection (AD) task. Single-view tasks would encounter blind spots from other perspectives, resulting in inaccuracies in sample-level prediction. Therefore, we introduce the Multi-View Anomaly Detection (MVAD) framework, which learns and integrates features from multi-views. Specifically, we proposed a Multi-View Adaptive Selection (MVAS) algorithm for feature learning and fusion across multiple views. The feature maps are divided into neighbourhood attention windows to calculate a semantic correlation matrix between single-view windows and all other views, which is a conducted attention mechanism for each single-view window and the top-K most correlated multi-view windows. Adjusting the window sizes and top-K can minimise the computational complexity to linear. Extensive experiments on the Real-IAD dataset for cross-setting (multi/single-class) validate the effectiveness of our approach, achieving state-of-the-art performance among sample 4.1%↑/ image 5.6%↑/pixel 6.7%↑ levels with a total of ten metrics with only 18M parameters and fewer GPU memory and training time. Multi-view Learning, Anomaly Detection, Attention Mechanism. § INTRODUCTION Anomaly detection (AD) is a critical application within computer vision, focusing on identifying anomalies to ensure quality and mitigate potential risks <cit.>. This task is widely applicable in industrial <cit.>, medical <cit.>, and video surveillance <cit.> anomaly detection. Diverse anomaly detection datasets have been curated to cater to various scenarios, encompassing 2D <cit.>, 2D with depth maps <cit.>, and 3D datasets <cit.>. Recently, the Real-IAD <cit.> dataset was introduced for multi-view anomaly detection. In contrast to traditional single-view 2D images or 3D point cloud data, the multi-view images in this dataset offer multiple perspectives of each object, where anomalies may manifest in one view while appearing normal in others due to interrelations among different views. This work addresses the intricate task of multi-view anomaly detection. In traditional single-view tasks, as depicted in Fig. 
<ref>-(a), the current single-view is isolated from other views, leading to predictions of normal in the current view despite the actual anomaly present in the sample. Therefore, the concept of multi-view anomaly detection is proposed, as illustrated in Fig. <ref>-(b) with a detailed definition in Sec. <ref>. Anomaly scores are computed for each view, and the maximum score across all views is selected as the final anomaly score for this sample. Real-IAD <cit.> endeavours to employ existing AD methods to address the task of multi-view anomaly detection. Although  <cit.> constructs a multi-view anomaly detection dataset, it only conducts experiments with existing methods in the multi-view setting of the dataset, without proposing algorithm designs specifically for multi-view anomaly detection tasks. Current 2D AD methods can be broadly classified into three categories. 1) Data augmentation-based methods <cit.> enhance anomaly localization performance by introducing synthetic anomaly data during training. 2) Reconstruction-based methods, utilizing an encoder-decoder structure, learn the distribution of normal samples during training, and reconstruct abnormal regions to normal regions during testing,e.g., GANs <cit.> and Diffusion models <cit.> 3) Embedding-based methods <cit.> map features of normal samples to compact feature space and compare them at the feature level. Prior research <cit.> has investigated four benchmark frameworks encompassing fusion, alignment, tailored, and self-supervision methodologies. Nevertheless, no individual approach has consistently exhibited effectiveness and efficiency across diverse datasets. While, multi-view learning methods are commonly categorized into two groups: CNN-based fusion techniques <cit.> and attention-based fusion approaches <cit.>. However, existing AD methods fail to integrate information across multiple views and cannot address the issue of aligning positions across different perspectives. Furthermore, the majority of existing multi-view fusion methods are only suitable for integrating two perspectives, which presents significant limitations. To address this limitation and learn the correlations between different views as well as feature fusion across views, we propose the Multi-View Adaptive Selection (MVAS) attention mechanism, as shown in Fig. <ref>-(c). The proposed MVAS divides the input image features into neighbourhood attention windows. Then, the multi-view windows adaptive selection algorithm is implemented to compute the semantic correlation matrix between each single-view window and the concatenated multi-view windows. Multi-view neighbourhood windows with darker colours indicate more substantial semantic relevance to the corresponding single-view window. The top-K number of window indexes is obtained with the correlation matrix, which equals four windows, as shown in this figure. The top-K most correlated multi-view windows are selected as keys and values, enabling neighbourhood correlative cross-attention between the single-view window query and the corresponding keys and values. By focusing only on the most correlated windows for attention mechanisms, the computational complexity is significantly reduced, which is a minimum linear complexity, by altering the window size and number of top-K. Based on the MVAS algorithm, we propose a multi-view anomaly detection (MVAD) framework, as illustrated in Fig. <ref>. This framework comprises a pre-trained encoder for extracting features at different scales. 
Subsequently, three MVAS blocks of varying scales act on these encoder features to facilitate multi-view feature fusion, resulting in enhanced features for each view. The strengthened multi-view features then pass through an FPN-like framework, where information of different scales is fused through convolutional downsampling. The fused features are learned and restored by a decoder with a structure and dimensions number equivalent to the encoder. Finally, the features corresponding to different scales from the encoder and decoder are used to calculate MSE losses, which are summed to form the ultimate training loss. Our contributions are summarized as follows: * We propose a novel framework MVAD for multi-view anomaly detection, which firstly tackles multi-view learning in anomaly detection. * We introduce the MVAS algorithm, which adaptively selects the most semantic correlated neighbouring windows in multi-view for each window in a single-view through attention operations, enhancing detection performance with minimum linear computational complexity. * We conducted experiments on the multi-view Real-IAD dataset in multi-class, single-class and cross settings. Abundant experiments demonstrate the superiority of MVAD over SoTA methods on a total of 10 metrics at the sample 4.1%↑/image 5.6%↑/pixel 6.7%↑ levels for cross-setting. § RELATED WORK §.§ Anomaly Detection Recently, AD has included the following mainstream settings: zero-/few-shot <cit.>, noisy learning <cit.>, and multi-class AD <cit.>. The unsupervised anomaly detection method mainly includes three methodologies: 1) Data augmentation-based methods have shown the potential to enhance the precision of anomaly localization by incorporating synthetic anomalies during the training phase. DRAEM <cit.> generates anomaly samples by utilizing Perlin noise. DeSTSeg <cit.> adopts a comparable approach to DRAEM for synthesizing anomaly samples but introduces a multi-level fusion of student-teacher networks to reinforce the constraints on anomaly data. Additionally, SimpleNet <cit.> generates anomaly features by introducing basic Gaussian noise to normal samples. Despite these advancements, the inability to anticipate and replicate all potential anomaly types and categories prevalent in real-world scenarios limits comprehensive anomaly synthesis. 2) Reconstruction-based methods learn the distribution of all normal samples during training and reconstruct anomaly regions as normal during testing. OCR-GAN <cit.> decouples image features into various frequencies and employs a GAN network as a reconstruction model. The remarkable generative capacities demonstrated by recent diffusion models have prompted certain researchers to engage these models in anomaly detection tasks. DiffAD <cit.> employs synthetic anomalies with the diffusion model as a reconstruction model alongside an additional Discriminative model. DiAD <cit.> introduces a semantically guided network to ensure semantic consistency between the reconstructed and input images. Nonetheless, reconstruction-based techniques encounter challenges in effectively reconstructing extensive anomaly areas and demonstrating precision in anomaly localization. 3) Embedding-based methods can be further classified into three categories: memory bank <cit.>, knowledge distillation <cit.>, and normalizing flow <cit.>. PatchCore <cit.> constructs a memory bank by approximating a set of features that describe normal sample characteristics through the collection of a coreset. 
During testing, anomaly scores are calculated using the nearest neighbour method. RD4AD <cit.> proposes a teacher-student model of reverse knowledge distillation paradigm, effectively addressing the issue of non-distinguishing filters in traditional knowledge distillation frameworks. §.§ Multi-view Learning Multi-view feature fusion techniques are currently being applied in diverse scenarios. MVCNN <cit.> introduces a novel CNN framework for efficiently compressing multi-view information. MV3D <cit.> integrates region-wise features from each view through deep fusion. ZoomNet <cit.> combines features from different scale views and integrates them through a sequence of convolution operations. CAVER <cit.> merges RGB and depth view features using cross-attention. MVDREAM <cit.> proposes 3D Self-Attention for fusing multi-view feature information. AIDE <cit.> introduces two multi-feature fusion methods: cross-attention and adaptive fusion modules. PPT <cit.> first extracts feature tokens from each view, concatenates them, and then utilises Self-Attention for feature fusion. MVSalNet <cit.> employs element-wise multiplication and addition to merge multi-view features. MVSTER <cit.> suggests Epipolar Transformer guided aggregation to effectively capture 2D semantic and 3D spatial correlations. FLEX <cit.> integrates multi-view convolution layers to capture features from multiple perspectives. PatchmatchNet <cit.> utilizes Group-wise Correlation to compute matching costs between each view and other views. Although many excellent anomaly detection algorithms and multi-view fusion methods are available now, an effective multi-view feature fusion algorithm needs to be designed explicitly for AD tasks. Therefore, we propose a novel framework MVAD for multi-view anomaly detection tasks, significantly improving both effectiveness and efficiency. § METHOD §.§ Preliminaries Task Definition of multi-view AD. Compared to the traditional anomaly detection input of n∈ℕ batch-size images, the input for multi-view tasks is based on the number of samples. Each input consists of p∈ℕ samples, where each sample contains v∈ℕ images from different views. Therefore, the actual input batch size is p× v∈ℕ. During training, the features of each view X_s ∈ℝ^p× c× h× w need to be fused with the features of the other views X_m ∈ℝ^p× (v-1)c× h× w to obtain the enhanced features Y_s^o ∈ℝ^p× c× h× w of the current view. During testing, the anomaly map S_px∈ℝ^pv× H× W obtained serves as pixel-level anomaly scores. Taking the maximum value of the entire anomaly map as the image-level anomaly score S_im∈ℝ^pv. Finally, the maximum image-level anomaly score S_im of the five views in each sample is taken as the S_sa∈ℝ^p sample-level anomaly score. For sample-level GroundTruth G_sa∈ℝ^p, if any view within a sample contains an anomaly, then the sample is considered anomalous. Conversely, the sample is considered normal if no abnormal regions occur in any view of it. Attention. For input queries Q ∈ℝ^N_q × C, key K ∈ℝ^N_k × C, and value V ∈ℝ^N_v × C, the weights between the queries and keys are calculated by scaled dot-product attention. The weighted sum of the value is calculated by: Attention(𝐐, 𝐊, 𝐕)=softmax(𝐐𝐊^T/√(d_k)) 𝐕, where √(d_k) is used to avoid too large of the result of softmax and gradient vanishing. Cross-attention involves queries from one input sequence and keys and values from another, making it conducive to multi-view feature integration. 
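As a concrete illustration of the score hierarchy defined above, the following sketch aggregates pixel-level maps into image-level and sample-level scores. It assumes a PyTorch tensor layout in which the v views of each sample are contiguous along the batch dimension; the helper names are ours and are not part of any released implementation.

# Pixel-level maps -> image-level scores (spatial max) -> sample-level scores
# (max over the v views of each sample), following the task definition above.
import torch

def aggregate_scores(s_px: torch.Tensor, p: int, v: int):
    """s_px: pixel-level anomaly maps of shape (p*v, H, W)."""
    s_im = s_px.flatten(start_dim=1).max(dim=1).values   # (p*v,) image-level
    s_sa = s_im.view(p, v).max(dim=1).values              # (p,)   sample-level
    return s_im, s_sa

def sample_ground_truth(g_im: torch.Tensor, p: int, v: int) -> torch.Tensor:
    """A sample is anomalous if any of its v views contains an anomaly."""
    return (g_im.view(p, v) > 0).any(dim=1).long()         # (p,)

# toy usage: p = 4 samples, v = 5 views, 256 x 256 anomaly maps
p, v = 4, 5
s_px = torch.rand(p * v, 256, 256)
s_im, s_sa = aggregate_scores(s_px, p, v)
print(s_im.shape, s_sa.shape)   # torch.Size([20]) torch.Size([4])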
§.§ Multi-View Adaptive Selection To facilitate multi-view anomaly detection and fusion while minimizing computational complexity and memory usage, we propose a Multi-View Adaptive Selection (MVAS) attention mechanism. Detailed explanations will be provided in the following sections, and the entire algorithm is presented in the form of PyTorch-like pseudo-code in Algorithm <ref>. Neighborhood Attention Window. Given an input multi-view feature map 𝐗_i ∈ℝ^v× h× w× c, where v denotes the number of views, the feature map is initially partitioned into neighbourhood attention windows of size a × a referred to as 𝐗_a ∈ℝ^v× a^2 ×hw/a^2× c, each window encompassing hw/a^2 feature vectors. A linear mapping is applied to feature map 𝐗_s ∈ℝ^ a^2 ×hw/a^2× c from a specific view to generate Query, while linear mappings are conducted on features 𝐗_m ∈ℝ^(v-1) × a^2 ×hw/a^2× c from other views to produce Key and Value. 𝐐_𝐬=𝐗_s 𝐖^q, 𝐊_m=𝐗_m 𝐖^k, 𝐕_m=𝐗_m 𝐖^v, where 𝐖^q, 𝐖^k, 𝐖^v ∈ℝ^c × c are linear mapping weights. Multi-View Windows Adaptive Selection. Subsequently, the objective is to identify the most correlated windows between the current view feature window and the multi-view feature windows to obtain a correlation matrix. Specifically, based on 𝐗_s ∈ℝ^ a^2 ×hw/a^2× c, the partitioned window features for a single view 𝐀_s ∈ℝ^ a^2 × c are further derived, along with multi-view window features 𝐀_m ∈ℝ^ (v-1)a^2 × c from 𝐗_m ∈ℝ^(v-1) × a^2 ×hw/a^2× c. Then, the correlation matrix 𝐀_c ∈ℝ^a^2 × (v-1)a^2 can be computed using the following formula: 𝐀_c = 𝐀_s (𝐀_m)^K The correlation matrix 𝐀_c unveils the semantic correlation between single-view window features and multi-view window features. Once the correlation matrix is obtained, the aim is to calculate the top-K windows in which each window feature of the single view has the closest semantic correlation with the features of the multi-view windows. The ultimate objective is to derive the index matrix 𝐈_m^K ∈ℝ^a^2 × k of the highest correlation of the multi-view windows: 𝐈_m^K = TopK_Index (𝐀_c). The features of each window in the current view are adaptively computed for semantic similarity with all windows from other views, selecting the top-K most correlative windows to be the focus of subsequent attention computation. Neighbourhood Correlative Cross-Attention. To compute the cross attention of the feature map of a single view toward the most correlated top-K windows of feature maps from other views, it is necessary to obtain the top-K most correlated neighbourhood windows Key and Value 𝐊_m^K, 𝐕_m^K ∈ℝ^a^2 ×khw/a^2× c by applying the most correlative index matrix 𝐈_m^K to the multi-view feature tensors 𝐊_m and 𝐕_m. The cross-attention is applied to the input Query to get the enhanced multi-view feature fusion output 𝐗_s^o: 𝐗_s^o=Attention(𝐐_s, 𝐊_m^K, 𝐕_m^K). Finally, the enhanced single view feature 𝐗_s^o should be transformed to the original input shape to get the un-patched output single view feature 𝐘_s^o ∈ℝ^h× w× c. Iterate v times, where v represents the number of views in multi-view scenarios, and concatenate the results to obtain the final output 𝐘_o ∈ℝ^v × h× w× c. 
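A compact PyTorch-style sketch of the MVAS computation for a single view is given below, following the shapes used in this subsection: window partition, window-level correlation matrix, top-K index selection, and neighbourhood correlative cross-attention. Mean pooling for the per-window descriptors and single-head attention are simplifying assumptions made here for readability; the sketch is not claimed to reproduce Algorithm <ref> line by line.

# MVAS for one view: the h x w map is split into a*a windows of hw/a^2 tokens
# each, per-window descriptors are compared across the other views, and
# cross-attention is restricted to the top-K most correlated multi-view windows.
import torch
import torch.nn.functional as F

def window_partition(x, a):
    """x: (v, h, w, c) -> (v, a*a, hw/a^2, c)."""
    v, h, w, c = x.shape
    x = x.view(v, a, h // a, a, w // a, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(v, a * a, (h // a) * (w // a), c)

def mvas_single_view(x_s, x_m, Wq, Wk, Wv, k):
    """x_s: (a^2, m, c) windows of the current view;
       x_m: (v-1, a^2, m, c) windows of all other views."""
    q   = x_s @ Wq                                  # (a^2, m, c)
    key = x_m @ Wk                                  # (v-1, a^2, m, c)
    val = x_m @ Wv
    # per-window descriptors (mean over the m tokens of each window; assumption)
    a_s = x_s.mean(dim=1)                           # (a^2, c)
    a_m = x_m.mean(dim=2).flatten(0, 1)             # ((v-1)*a^2, c)
    corr = a_s @ a_m.t()                            # correlation matrix A_c
    idx = corr.topk(k, dim=-1).indices              # (a^2, k) index matrix I_m^K
    # gather the top-K most correlated multi-view windows as keys / values
    key = key.flatten(0, 1)[idx].flatten(1, 2)      # (a^2, k*m, c)
    val = val.flatten(0, 1)[idx].flatten(1, 2)
    attn = F.softmax(q @ key.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ val                               # (a^2, m, c) enhanced view

# toy usage: v = 5 views, 64 x 64 feature maps, c = 256 channels, a = 8, K = 16
x  = torch.randn(5, 64, 64, 256)
xw = window_partition(x, a=8)                       # (5, 64, 64, 256)
W  = [torch.randn(256, 256) * 0.02 for _ in range(3)]
y0 = mvas_single_view(xw[0], xw[1:], *W, k=16)
print(y0.shape)                                     # torch.Size([64, 64, 256])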
§.§ Computation Complexity Analysis of MVAS The complexity Ω of cross-attention for single-view as the query Q ∈ℝ^(hw)× c input and multi-view as the key/value K,V ∈ℝ^(vhw)× c input is as follows: Ω(Cross-Attention) = 2v(hw)^2c + v(hw)^2                         = (2c+1)v(hw)^2≈ O((hw)^2), where v is the number of other multi-views except the single-view and c is the dimension of the features. Cross-attention has a high computational complexity of O((hw)^2) and occupies a vast number of GPU memory. Therefore, MVAS is proposed with less computational complexity, which consists of two parts: the multi-view windows adaptive selection and neighborhood correlative cross-attention. The total computational complexity is: Ω(MVAS) = 2v(a^2)^2c + v(a^2)^2c + 2hw/a^2khw/a^2c              = 2c(va^4 + k(hw)^2/a^4) ≥ 4c(va^4 ·k(hw)^2/a^4)^1/2              = 4c(vk)^1/2(hw) ≈ O((hw)), where a is the neighbourhood attention window size and k is the number of the most correlated neighbourhood windows. The inequality between arithmetic and geometric means is applied here. The equality in Eq. <ref> holds if and only if: va^4 = k(hw)^2/a^4 ⇒ a = (k/v)^1/8(hw)^1/4 . Therefore, MVAS could achieve approximately linear computational complexity O((hw)) by changing the neighbourhood attention window size. §.§ Overall Architecture of MVAD We propose a multi-view anomaly detection (MVAD) framework comprising a teacher-student model with a reverse knowledge distillation structure <cit.> and an intermediate multi-view fusion module at three different scales as shown in Fig. <ref>. The intermediate layer performs multi-view feature fusion on features extracted from the pre-trained teacher model at different scales. Specifically, for each j-th scale, there are several corresponding MVAS^j blocks with the same number of dimensions. Each single-view feature, after incorporating positional encoding ℙ, is fed into the MVAS block, which is defined as: X_o^j=LN^j(MVAS^j(X_i^j + ℙ))+X_i^j, X_o^j=LN^j(MLP^j(X_o^j))+X_o^j, where X_o^j is the enhanced multi-view features for j-th scale. Then, the enhanced features at each scale are integrated using an FPN-like structure and fed into the decoder model. Finally, the loss function is computed by summing the MSE loss at each scale of the encoder f_ℰ^j and decoder f_𝒟^j: ℒ=∑_j ∈ J{1/H× Wf_ℰ^j - f_𝒟^j_2^2}, where J is the number of feature stages used in experiments. During the testing phase, we utilize cosine similarity to calculate pixel-level anomaly scores for the encoders and decoders across three different stages. The maximum value of the anomaly map was directly taken as the image-level anomaly score. The maximum anomaly score among images of the same sample from different views is taken as the sample-level anomaly score. § EXPERIMENTS §.§ Datasets and Implementations Details Real-IAD Dataset. The Real-IAD <cit.> dataset is a large-scale multi-view anomaly detection dataset collected from the real world. Real-IAD contains a total of 30 different classes of actual industrial components, each of which has five viewing angles of images. Among them, there are 99,721 normal samples and 51,329 anomaly samples, with a total of 150K images for training and testing. In addition, the dataset is acquired by a professional camera with an ultra-high resolution of 2K∼5K. Pixel-level segmentation masks and multi-view image labels are provided for the evaluation of multi-view anomaly detection. Task setting. 
We conducted experiments on both single-class and multi-class settings in the Real-IAD dataset. Each category is associated with a unique model in the single-class setting, whereas in the multi-class setting, all categories are trained by a single model. To further assess the robustness of the methods across different settings, we averaged them to obtain cross-setting results. Evaluation Metric. The Area Under the Receiver Operating Characteristic Curve (AUROC), Average Precision (AP), and F1-score-max (F1-max) are simultaneously used for the pixel-level, image-level, and sample-level evaluations. Per-Region-Overlap (PRO) is used for anomaly localization at the pixel-level. The Average metrics (cf. Tab. <ref>) demonstrate the average values of all indicators for samples, images, and pixels in various settings, including single-class and multi-class scenarios, showing the robustness of the methods. Implementation Details. All input images are resized to 256×256 without additional data augmentation. For multi-view tasks, training is conducted using all available view images v=5, while the input samples are 4, resulting in a batch size of 20. A pre-trained ResNet34 model serves as the teacher feature extractor, with a corresponding reverse distillation Re-ResNet34 model employed as the student model for training. AdamW optimizer is utilized with a decay rate of 1e-4 and an initial learning rate of 0.005. The model is trained for 100 epochs for both single-class and multi-class settings on a single NVIDIA RTX3090 24GB. For a balance between effectiveness and efficiency, we set a_1,2,3=8, k_1,2,3=16,32,64, and N_1,2,3=1,2,4 for the size of the neighbourhood attention window, the top-K selection, and the number of MVAS blocks for each stage. The selection of a and k is further discussed in the ablation study (cf. Sec.<ref>). §.§ Comparison with SoTAs on multi-view Real-IAD Because there is currently no algorithm specifically designed for multi-view AD, we selected the UniAD <cit.> and SimpleNet <cit.> algorithms, which achieve SoTA performance on both multi-class and single-class tasks, as our comparative methods. Our method and the SoTA methods both use all view images for training and testing. Quantitative Results. Due to space constraints, we present the quantitative results of UniAD and SimpleNet in the main text, while the results of other methods will be fully displayed in the appendix. In Tab. <ref>, we present the results of cross-setting, averaging all metrics for sample/image/pixel levels, and further averaging the results obtained from single-class and multi-class settings. This comprehensive approach integrates all metrics and both settings, emphasizing the model's effectiveness and generalization. The table indicates that we have achieved the State-of-the-Art (SoTA) performance with sample 4.1%↑/image 5.6%↑/pixel 6.7%↑ in the cross-setting analysis. In Tab. <ref>, we compare AUROC/ AP/F1-max evaluation metrics at the image level, with multi-class on the left and single-class multi-view anomaly detection on the right. The results indicate that our method outperforms UniAD by 2.2%↑/2.3%↑/1.7%↑ in multi-class anomaly detection and achieves an improvement of 1.3%↑/0.8%↑/0.6%↑ over SimpleNet in the single-class setting. Furthermore, we evaluate the AUROC/AP/F1-max metrics at the sample level in Tab. <ref>. Our method surpasses UniAD by 1.9%↑/1.7%↑/ 0.7%↑ in the multi-class task and is competitive with SimpleNet in the single-class settings which requires four times the training time (cf. 
<ref>). Finally, at the pixel level in Tab. <ref>, we assess the metrics AUROC/AP/F1-max/PRO, showing that our method achieves an average improvement of 0.4%↑/ 5.9%↑/5.0%↑/2.9%↑ over UniAD in the multi-class settings and 2.1%↑/13.8%↑/ 11.7%↑/9.0%↑ over SimpleNet in the single-class task. The results show that UniAD performs better in multi-class settings but decreases in single-class settings. SimpleNet excels in single-class tasks but performs relatively poorly in multi-class settings. Our method demonstrates strong performance in both multi-class and single-class settings (cross-setting). More results will be presented in the appendix. Qualitative Results. To further visually demonstrate the effectiveness of our approach, we conducted qualitative experiments to showcase the results of anomaly localization compared with SimpleNet and UniAD. The comparison focused on two classes of complex industrial components from five perspectives of the same sample, as illustrated in Fig. <ref>. SimpleNet fails to recognize tiny anomalies, which are depicted in red on the anomaly maps, in the audiojack category. UniAD contains more false positives in the PCB category. Our approach demonstrates high accuracy and low false alarm rate, showcasing strong multi-view anomaly localization capabilities. Contrasts between all other category samples and methods will be presented in the appendix. §.§ Ablation Study Efficiency Comparison. Tab. <ref> shows that our approach not only has the fewest parameters with only 18.7M and the shortest training time of 8 hours but also requires minimal FLOPs and GPU utilization, while achieving the best performance (cf. Sec.<ref>) compare with other SoTA methods. UniAD boasts minimal computational complexity, but its effectiveness falls far short of ours. Although SimpleNet performs on par with our method in certain metrics, it requires nearly 4 times the training time. RD4AD and DeSTSeg perform well on some metrics, but their FLOPs are three times higher. The Effectiveness of Difference Window Size and Top-K. In this section, we investigate the impact of the size of the neighbouring attention window and the number of top-K most relevant windows in MVAS. As shown in Tab. <ref>, we first analyze the effects of different window sizes and top-K values on evaluation metrics (e.g., S-AUROC/I-AUROC/P-AUPRO), FLOPs, training time, and training GPU memory usage. Considering that the pre-trained encoder extracts features of different scales, it is intuitive to use different sizes of neighbouring attention windows for different scale feature maps. Therefore, based on feature maps of sizes 64, 32, and 16 extracted by the pre-trained encoder from input images of size 256×256, we determined two sets of window sizes: a_1,2,3=16,8,4 and a_1,2,3=8,4,2. With these window sizes, we vary the top-K values, simply setting it as the square of each window size to select the top-K most relevant windows from all other views for the current scale. We also increase the number of top-K values to assess their impact, as 173 and 176. Additionally, we introduce additional linear layers before the windows' adaptive selection as 174 and 177. Results from 172 and 175 indicate that using smaller window size combinations and fewer top-K values can reduce training time while maintaining consistency in evaluation metrics, FLOPs, and GPU memory. Increasing the number of K as in 173 and 176 leads to higher FLOPs, training time, and memory usage without significant performance improvement. 
The additional linear layers in 174 and 177 not only consume computational resources but also result in performance degradation. Considering training time, FLOPs, and memory usage, we adopt the approach with a top-K value of 178, which yields the best overall performance. The Efficiency of Difference Window Size and Top-K. In Fig. <ref>, we fix K=4 to investigate the impact of window size on FLOPs and training memory. It reveals that computational complexity and training memory decrease as the window size increases. Subsequently, in Fig. <ref>, with a fixed window size of 8, an increase in top-k values leads to a significant rise in computational complexity and training memory. The Effectiveness of Difference Pre-trained Backbone. We investigate the impact of different pre-trained backbones, ResNet18, ResNet34, ResNet50, and WideResNet50, with the results presented in Tab. <ref>. ResNet50 and WideResNet50, due to their high dimensions at each scale, resulted in increased complexity for attention computation and convergence challenges. Conversely, ResNet18's low dimensions led to poor anomaly localization performance during testing. Considering these factors, we adopt ResNet34 as the backbone. § CONCLUSION The paper introduces the MVAD framework, the first to solve the challenging task of multi-view anomaly detection. To address the fusion and learning of multi-view features, we propose the MVAS algorithm. Specifically, the feature maps are divided into neighbourhood attention windows, and then the semantic correlated matrix between each window within single-view and all windows across multi-views is calculated. Cross-attention operations are conducted between each window in a single-view and the top-K most correlated windows in the multi-view context. The MVAS can be minimized to linear computational complexity with proper window size. The entire MVAD framework employs an encoder-decoder architecture, utilizing MVAS blocks with varying dimensions at each feature scale and an FPN-like architecture for fusion. Extensive experiments on the Real-IAD dataset for multi/single-class settings demonstrate the effectiveness of our approach in achieving SoTA performance. Broader Impact. We conducted a systematic investigation into the task of multi-view anomaly detection, providing a comprehensive definition for multi-view tasks and proposing an attention-based research methodology for future studies. The diffusion model demonstrates robust generative and learning capabilities, and we intend to explore its application in multi-view anomaly detection tasks in future research. IEEEtran
http://arxiv.org/abs/2407.13552v1
20240718142746
Number of bound states of the Hamiltonian of a lattice two-boson system with interactions up to the next neighbouring sites
[ "Saidakhmat N. Lakaev", "Shakhobiddin I. Khamidov", "Mukhayyo O. Akhmadova" ]
math-ph
[ "math-ph", "math.MP" ]
http://arxiv.org/abs/2407.12320v1
20240717053313
Flexible String Model of Unsaturated Lipid Bilayer
[ "Boris Kheyfets", "Sergei Mukhin" ]
cond-mat.soft
[ "cond-mat.soft" ]
APS/123-QED kheyfboris@misis.ru Theoretical Physics and Quantum Technologies Department, National University of Science and Technology MISIS § ABSTRACT An analytically solvable model of unsaturated lipid bilayer is derived by introducing finite bending angle of the unsaturated bond relative to straight part of the lipid chain considered previously in our model of semi-flexible strings. It is found that lateral pressure profile of unsaturated lipids has distinct maximum in the unsaturated bond region due to enhanced excluded volume effect caused by the bent bond, leading to an increase of entropic repulsion between the lipid chains. Simultaneously, just away from the unsaturated bond some parts of the neighbouring lipid chains have less probability to collide by geometrical reasons, causing depletion of entropic repulsion relative to saturated lipid chains case and resulting in the local minima of the lateral pressure profile surrounding the maximum at the bent unsaturated bond. Lipid chain order parameter, lateral pressure profile, and area per lipid are computed for POPC and compared with those for DPPC lipid bilayer. Flexible String Model of Unsaturated Lipid Bilayer Sergei Mukhin July 22, 2024 ================================================== § INTRODUCTION Consider lipid bilayers that differ only by lipid chain unsaturation: one bilayer consists of saturated lipids, and the other one of unsaturated lipids. Compared to saturated lipid bilayer, unsaturated one would have larger area per lipid (at the same temperature) <cit.>, lower gel phase transition temperature <cit.>, a drop in the chains order parameter <cit.>, and a different lateral pressure profile <cit.>. In this paper we reproduce all these characteristic features by making minimal, but pivotal changes to our previously developed analytically solvable flexible string model <cit.> of saturated lipid bilayer. The changes are two. The principal one is made in the self-consistent calculation procedure: lipid unsaturation is introduced by allowing only bent chain conformations at the double bond position, while the others are filtered out from the summation over lipid chain conformations in the partition function of the lipid bilayer. The second change lies in a slight (≤ 10%) tuning of the flexible-string model parameter for incompressible chain's cross-section area of unsaturated lipid relative to saturated one. The latter was done to match equilibrium area per lipid for the unsaturated bilayer found in <cit.>. For reference we used DPPC (16:0) lipids for saturated bilayer, and POPC (18:1/16:0) lipids for unsaturated bilayer. These two cases were chosen as ones of the most well-characterized lipids. POPC is monounsaturated lipid; poly-unsaturated lipids can also be modelled within our model approach, however here we have concentrated solely on DPPC-POPC bilayer differences. We start with a short overview of the flexible string model for saturated lipid bilayer, including liquid-gel phase transition visible in a free energy plot. These calculations mostly reproduce our previous work <cit.> with minor simplifications. We then introduce lipid chains unsaturation via specially developed conformational filtering procedure in partition function of the lipids system, which is followed by derivations of analytical expressions for the order parameter and lateral pressure profile. Finally, we apply these approaches to DPPC and POPC models and discuss the results. 
§.§ Flexible String Model of Saturated Lipid In dynamic mean-field approach, lipid bilayer can be modelled as a single lipid exposed to the self-consistently derived potential, see Fig. <ref>. Mean-field parameters being derived from the chosen bilayer composition and environment parameters. For a concrete approach we use a model of a single lipid in the mean-field potential that arises from entropic repulsion between fluctuating neighbouring lipid chains due to excluded volume effect <cit.>. The minimal model consists of a flexible string in a parabolic mean-field potential, see Fig. <ref>. The energy functional of the string consists of kinetic energy, bending energy, and energy arising from the confining mean-field potential: E_t = ∫_0^L [ ρ𝐑̇^2 /2 + K_f (𝐑”)^2/2 + B 𝐑^2/2 ] d z , here R(z)={R_x(z),R_y(z)} is a vector in the plane of the membrane giving deviation of a string from the z axis (see Fig. <ref>), ρ is a string linear density, K_f is a string bending rigidity, and B is self-consistently derived parameter of entropic repulsion strength, that characterises the parabolic mean-field potential as function of a deviation of the string centres from z-axis, |𝐑(z)|, L is a hydrophobic thickness of the monolayer. Let's find expression for the free energy with the energy functional E_t, Eq. <ref>. For this, we impose boundary conditions on the 𝐑, this in turn allows us to represent E_t in the operator form, and finally express free energy F_t in terms of eigenvalues of that operator. Boundary conditions for a model flexible string take into account the following physical assumptions (same is assumed also for component R_y(z)): R_x'(0) = 0 - chain director is parallel to the monolayer normal at membrane surface; R_x”'(0) = 0 - net force at chain's head is zero; R_x”(L) = 0 - net torque at chain free end is zero; R_x”'(L) = 0 - net force at chain free end is zero. The first boundary condition reflects the orientational asymmetry of the monolayer due to the water-lipid interface which is clearly seen from data on the molecules orientational order parameter <cit.>: lipid tails are more ordered in the vicinity of head-groups constrained by the hydrophobic tension. Yet, the tilt transition when director forms finite angle with the monolayer normal is not considered here <cit.>. The other boundary conditions reflect the freely moving lipid head-group and hydrocarbon tail end: zero force acting on the head-group and zero torque and force acting on the lipid tail end. Under the boundary conditions Eq. (<ref>) potential energy part of the functional Eq. (<ref>) can be equivalently rewritten in terms of linear Hermitian operator Ĥ = B + K_f ∂/∂ z^4 in the form: E_t(pot) = ∑_α=x,yE_α E_α ≡1/2∫^L_0[ R_α(z)ĤR_α(z) ] d z Then, an arbitrary conformation of the chain can be expressed as the deviation of the centers of the string, R_x,y(z), from the straight vertical line (see Fig. <ref>), and is parameterized by a set of coefficients C_n of the linear decomposition of the function R_x,y(z) over the eigenfunctions R_n(z) of the operator Ĥ: R_α=x,y(z) = ∑_n C_n,αR_n(z) ĤR_n(z) = E_nR_n(z) Number of C_n coefficients is related to the number of CH_2 groups N_max in the single chain of the lipid. Substituting Eq. (<ref>) into Eq. (<ref>) and using the standard orthogonality property of the eigenfunctions of operator Ĥ, enables a simple decomposition of the energy functional into the series: E_t=∑_n1/2{ρĊ_n^2+E_nC_n^2 } Hence, Eq. 
(<ref>) reduces the problem of calculating the free energy of the spring to a sum of free energies of non-interecting 1-dimensional harmonic oscillators with "spring rigidities" E_n and "coordinates" C_n. The corresponding eigenvalues E_n and orthonormalized eigenfunctions R_n(z) of the operator Ĥ are <cit.>: n = 0 ⇒ { E_0 = B R_0(z) = √(1/L). n ∈ [1..N_max] ⇒ { E_n = B + c_n^4 K_f/L^4; c_n=π n-π/4; R_n(z) = √(2/L) [ cos (c_n z/L ) + cos(c_n)/cosh(c_n) cosh (c_n z/L ) ] . Arbitrary chain conformation is thus : R_x(z) = ∑_n=0^N_max R_n(z) C_n. Due to normalization, R_n(z) dimensionality is ∼ L^-1/2, and hence dimensionality of C_n ∼ L^3/2. Assuming membrane to be locally isotropic in the lateral plane, we split partition function into a product of two equal components, Z=Z_xZ_y=Z_x^2, and thus free energy of the lateral oscillations of the chain equals to <cit.>: F_t = - 2 k_B T log Z_x; Z_x=∏_n^N_maxk_B T/ħω_n; ω_n=√(E_n /ρ) . In order to see how F_t depends on the area per lipid, A (see Fig. <ref>), we note that expression ∂ F_t/∂ B can be computed in two ways - directly via Eqs.(<ref>,<ref>): ∂ F_t/∂ B = k_B T ∑_n=0^N_max1/B + c_n^4 K_f/L^4 , and also by representing partition function, Z_x as path integral over all chain conformations, Z_x = ∫exp(-E_tx(R_x) / k_B T)DR_x: ∂ F_t/∂ B = - 2 k_B T ∂ Z_x/∂ BZ_x^-1= =∫ [ ∫_0^L R_x^2(z) z ] ^-E_tx{ R_x(z) }/k_B T D R_x /∫^-E_tx{ R_x(z)}/k_B T D R_x = = ⟨∫_0^L R_x^2(z) z ⟩ = L ⟨ R^2 ⟩ , here D denotes path integration. Hence, Eq.(<ref>) shows that ∂ F_t/∂ B is proportional to the mean area swept by the centers of the chain. The latter area is also related to the mean area swept by the chain, A, and incompressible area of the chain, A_n, as: (√(A/π) - √(A_n/π))^2. Hence we conclude that: ∂ F_t/∂ B = L ( √(A) - √(A_n) )^2/π Comparing Eq.(<ref>) with Eq.(<ref>) one finds: k_B T ∑_n=0^N_max1/B + c_n^4 K_f/L^4 = L ( √(A) - √(A_n) )^2/π We conclude that B is a monotonically decreasing function of A, and hence F_t is also monotonically decreasing function of A (see Eqs. (<ref>,<ref>)). Eq.(<ref>) constitutes self-consistency equation as it relates mean-field parameter, B, with the area per lipid, A. Another contribution to the string's free energy is surface energy, F_s = γ A, where γ is a hydrophobic tension of the lipid membrane, and A is mean area swept by the string. Thus, F_s is monotonically increasing function of mean area swept by the chain. F_t + F_s has a single minimum as a function of A, while two minima are needed in order to reproduce liquid to gel phase transition. Hence, in order to get liquid-gel first-order phase transition we need to add one more contribution to the free energy. That contribution is van der Waals interaction between the neighbouring lipids. Analytical expression for attractive van der Waals interaction between two hydrocarbon chains has been calculated in <cit.> on the basis of quantum mechanics: F_VdW = -U 3π/8LN_CH_2^2/D^5 here U is an interaction constant, and D is a lipid's cross-section diameter. In the equilibritum free energy of the chain, F_T = F_t + F_s + F_VdW, is minimal, hence we require that ∂ F_T/∂ A = 0 ⇒ k_B T ∂ B/∂ A∑_n=0^N_max1/E_n + γ + 5/2Ũ N_CH_2^2/L A^7/2 = 0 Here we used renormalized van der Waals coefficient Ũ = 9 π^7/2 U / 2^8 which arises from using area per lipid instead of lipid diameter, and additional factor of 3 accounts for the fact that every lipid is surrounded by other lipids, assuming a tightly packed lattice, which has a coordination number 6. 
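The two coupled conditions above lend themselves to a simple numerical treatment: for a trial area A one solves the self-consistency equation for B, and the equilibrium area is then located by minimising F_T = F_t + F_s + F_VdW over A. The sketch below illustrates this scheme with SciPy; the numerical values are only indicative, F_t is evaluated up to A-independent constants, and coordination and unit-conversion factors are absorbed into the assumed van der Waals constant, so the output is not tuned to reproduce the quoted areas.

# Self-consistent scheme for the saturated chain: solve the self-consistency
# equation for B at fixed A, then minimise F_t + gamma*A + F_VdW over A.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

kT    = 4.1e-14          # k_B T in erg near room temperature (illustrative)
L     = 15e-8            # monolayer hydrophobic thickness, cm
A_n   = 20e-16           # incompressible chain area, cm^2
gamma = 19.37            # hydrophobic tension, erg/cm^2
K_f   = kT * L / 2.0     # string bending rigidity, as in the text
U_t   = 1.0e-60          # renormalised van der Waals constant (assumption)
N_CH2 = 16
N_max = 10

c_n = np.pi * np.arange(1, N_max + 1) - np.pi / 4.0
def E_n(B):              # eigenvalues: E_0 = B, E_n = B + c_n^4 K_f / L^4
    return np.concatenate(([B], B + c_n**4 * K_f / L**4))

def self_consistency(B, A):
    lhs = kT * np.sum(1.0 / E_n(B))
    rhs = L * (np.sqrt(A) - np.sqrt(A_n))**2 / np.pi
    return lhs - rhs

def B_of_A(A):           # B is a monotonically decreasing function of A
    return brentq(self_consistency, 1e-12, 1e16, args=(A,))

def F_total(A):          # up to additive constants that do not depend on A
    F_t  = kT * np.sum(np.log(E_n(B_of_A(A))))
    F_s  = gamma * A
    F_vw = -U_t * N_CH2**2 / (L * A**2.5)   # coordination factor absorbed in U_t
    return F_t + F_s + F_vw

res = minimize_scalar(F_total, bounds=(1.05 * A_n, 5 * A_n), method="bounded")
print("equilibrium area per lipid (cm^2):", res.x)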
Solving minimum energy condition, Eq.(<ref>), together with self-consistency equation, Eq.(<ref>), for A and B allows to find equilibrium area per lipid at the free energy minima, see Figs. <ref> and <ref>. We used the following input parameters: incompressible area A_n = 20 Å^2, L = 15 Å, γ = 19.37 erg/cm^2, string bending rigidity K_f = k_B T L/2, U = 761 kcal Å^6/mol, N_CH_2 = 16 number of CH_2 groups per chain, N_max = 10 as a maximal number of oscillation modes of the string. § CALCULATION §.§ Flexible String Model of Unsaturated Lipid Let us introduce unsaturated bond in the lipid chain by allowing only bent chain conformations in the partition function of the string. Dropping conformation independent kinetic energy contributions, one finds: Z_x ≡ Z^1_x+Z^2_x= 1/2∫_-∞^+∞∏_n^N_max^ - C_n^2 E_n/2 k_B T·δ ( R_x'(z_0) - C )d C_n+ 1/2∫_-∞^+∞∏_n^N_max^- C_n^2 E_n/2 k_B T·δ (R_x'(z_0) + C ) d C_n Here Dirac delta function δ (R_x'(z_0) ± C ) filters out string conformations that do not have a particular bending angle at the depth coordinate z_0 inside the bilayer , i.e. at the position of the double bond. R_x'(z_0) = tan(α[z_0]) = ∑_n=0^N_max C_n R_n'(z_0) = ± C, and α[z_0] is tangent angle of the chain at the double bond, 1/2 takes into account that the chain is bent in either of two sides, i.e. tan(α[z_0])=± C correspondently, with the same probability. For self-consistency equation, Eq.(<ref>), we can concentrate on the parts of the Z_x that include B. Z^1_x= 1/2∫_-∞^+∞^- ∑_n=0^N_maxC_n^2 E_n/2 k_B T·δ ( ∑_n=0^N_max C_n R_n'(z_0) - C )× ∏_n=0^N_maxdC_n/L^3/2 where L^3/2 is introduced to keep the corresponding Z_x part dimensionless. One has to use the following trick to calculate the above path-integral. Namely, the Dirac delta function is expressed using the well known identity: δ(x)=∫_-∞^+∞^ixkdk/2π Hence, substituting the Dirac delta function in Eq.(<ref>) by identity from Eq.(<ref>) and integrating first over all the ∏_n=0^N_maxdC_n and then over dk, one finds: Z^1_x =1/2 [ ∏_n 1/√(E_n L^3) ] ·exp{ - C^2/2 k_B T∑_n [R_n'(z_0) ]^2/E_n}× 1/√(∑_n [R_n'(z_0) ]^2/E_n)=Z^2_x; Z_x=2Z^1_x where the last equality in Eq. (<ref>) follows from the fact that Z_x^1 depends only on C^2, and hence Z^1_x=Z^2_x=1/2 Z_x. Substituting this into Eq.(<ref>) one finds: F_t = k_B T ln ( [ ∏_n E_n L^3 ] ∑_n [R_n'(z_0) ]^2/ E_n )+ +C^2/∑_n [R_n'(z_0) ]^2/E_n Self-consistency equation for unsaturated lipids then reads: k_B T ( ∑_n 1/B + c_n^4 K_f/L^4 )=L (√(A) - √(A_n))^2/π+ + ∑_n [R_n'(z_0)]^2/E_n^2/∑_n [R_n'(z_0)]^2/E_n [k_B T - C^2/∑_n [R_n'(z_0)]^2/E_n ] (compare with Eq.(<ref>)). The equilibrium condition is: ∂ F_T/∂ A = 0 ⇒ k_B T ∂ B/∂ A{ ( ∑_n 1/E_n ) - ∑_n [R_n'(z_0)]^2/E_n^2/∑_n [R_n'(z_0)]^2/E_n [1- C^2 K_f/k_B T L ∑_n [R_n'(z_0)]^2/E_n ] } + γ + 5/2Ũ N^2/L A^7/2 = 0 (compare with Eq.(<ref>)). Solving self-consistency condition, Eq.(<ref>), together with equilibrium condition, Eq.(<ref>), for A and B allows to find equilibrium area per lipid, see Fig.<ref>. All the distinct differences with the saturated lipids case in Figs. <ref> and <ref> are solely due to inclusion of Dirac delta function filter into saturated lipids partition function, Eq.(<ref>), where we have used C = 0.5 corresponding to double bond bending angle α = 26.4. Slight decrease of incompressible string area for unsaturated lipids: A_n =18 Å^2 instead of saturated lipid case: A_n = 20 Å^2 was necessary to keep equilibrium area per lipid in the range found earlier <cit.>. Computed equilibrium areas per lipid for POPC shown on Fig. 
<ref> are slightly above the values reported in the literature <cit.>. This means that one should change some other parameters of the calculation for the closer match with experimental data on area per lipid. However, in this study we focus on a single most important feature of the unsaturated lipids, namely, the double bond, which is introduced by imposing a filter in the partition function, Eq.(<ref>). In order to see the effect of this particular change, we had chosen to make as little changes as possible to the other calculation parameters. §.§ Lateral Pressure Profile Lateral pressure profile is a function Π(z) which is equal to hydrophobic tension, γ, when integrated over the thickness of the monolayer: ∫_0^L Π(z) d z = γ Comparing this with equilibrium condition, Eq.(<ref>) or Eq.(<ref>), we see that it has two components – one is due to entropic repulsion, F_t, and the other is due to van der Waals attraction, F_VdW: ∫_0^L Π(z) d z = - ∂ F_t/∂ A - 5/2Ũ N_CH_2^2/L A^7/2 For saturated lipids, lateral pressure profile due to entropic repulsion is ∫_0^L Π_t(z) d z = - ∂ F_t/∂ A = - k_B T ∑_n=01/E_n∂ E_n/∂ A By definition, E_n and R_n(z) are eigenvalues and eigenfunctions of operator Ĥ in Eq. <ref>: Ĥ R_n(z) = E_n R_n(z). Hence we write ∫_0^L R_n Ĥ R_n d z = ∫_0^L R_n E_n R_n d z≡ E_n ∫_0^L R_n^2 d z ⇒ ∂ E_n/∂ A = ∂ E_n/∂ A∫_0^L R_n^2 d z Comparing this with Eq.(<ref>) we conclude that Π_t(z) = - k_B T ∑_n=01/E_n∂ E_n/∂ A R_n^2(z) ≡ - k_B T ∂ B/∂ A∑_n=0R_n^2(z)/E_n∂ E_n/∂ B = - k_B T ∂ B/∂ A∑_n=0R_n^2(z)/E_n Similarly, for unsaturated lipids, one arrives at the following expression for lateral pressure profile: Π_t(z) = - k_B T ∂ B/∂ A∑_n=0 R_n^2(z) [ 1/E_n - [R_n'(z_0)]^2/E_n^2/∑_m [R_m'(z_0)]^2/E_m{ 1 - C^2 K_f / k_B T L ∑_m [R_m'(z_0)]^2/E_m} ] Lateral pressure profile plots for saturated and unsaturated lipids are plotted in Fig. <ref>. We see that lateral pressure peak at the midplane area (z/L = 1) becomes smaller with unsaturation. Lateral pressure profile was studied in <cit.> by means of molecular dynamics: in Fig. 7 of their paper <cit.> the authors plot lateral pressure profile of lipids with increasing unsaturation and also find that the peak at the midplane area diminishes with unsaturation. §.§ Order Parameter By definition, molecular order parameter is S_mol(z) = 1/2 ( 3 ⟨cos^2 θ(z) ⟩ - 1 ) In flexible string model we have an approximation for tangent: tanθ(z) ≈ R_x' = ∑ C_n R_n'(z) hence we express cosθ via tanθ as: cos^2 θ = 1/1 + tan^2θ Also note that 1/1 + a(z)^2 = 1/2∫_0^∞^-x [ ^ a(z) x + ^- a(z) x ] d x ⇒⟨1/1 + a(z)^2⟩ = 1/2∫_0^∞^-x [ ⟨^ a(z) x⟩ + ⟨^- a(z) x⟩ ] d x Thus, intermediate expression for the order parameter is S_mol(z) = 1/2 [ 3 ∫_0^∞^-x [ ⟨^ x ∑ C_n R_n'(z)⟩ + ⟨^- x ∑ C_n R_n'(z)⟩ ] d x - 1 ] Starting with saturated lipids, we write: ⟨^·∑ C_n R_n'(z) · x⟩ = ∫^·∑ C_n R_n'(z) · x·^-∑ C_n^2 E_n/2 k_B T∏ d C_n/∫^-∑ C_n^2 E_n/2 k_B T∏ d C_n The numerator integral here is: ∫^ - ∑_n E_n/2 k_B T [ C_n^2 - 2 C_n x R_n' k_B T/E_n + ( x R_n' k_B T/E_n )^2 - ( x R_n' k_B T/E_n )^2 ] ∏ d C_n = ^ - x^2 k_B T/2∑_n (R_n')^2/E_n∏√(π· 2 k_B T/E_n) The denominator integral is the same, but with x=0. 
Hence, one obtains: ⟨^·∑ C_n R_n'(z) · x⟩ = ^ - x^2 k_B T/2∑_n (R_n')^2/E_n Thus, we see that expression in Eq.(<ref>) is real, hence it coincides with its complex conjugate, so we conclude that Eq.(<ref>) for saturated lipids reads: ⟨1/1 + a(z)^2⟩ = 1/2∫_0^∞^-x^-x^2 k_B T/2∑_n (R_n')^2/E_n d x = ^1/4y(z)√(π) erfc ( 1/2 √(y(z)) )/2 √(y(z)) where: y(z)=k_B T/2∑_n (R_n')^2/E_n and order parameter for saturated lipids is S_mol(z) = 1/2 ( 3 √(π)^1/4 y(z)erfc ( 1/2 √(y(z)) ) /2 √(y(z)) - 1 ) Similarly, for unsaturated lipids we start with ⟨^ a(z) x⟩: ⟨^·∑ C_n R_n'(z) · x⟩ = ∫^·∑ C_n R_n'(z) · x·^- ∑ C_n^2 E_n / 2 k_B T ·δ [ ∑ C_n R_n'(z_0) - C ] ∏ d C_n/∫^- ∑ C_n^2 E_n / 2 k_B T ·δ [ ∑ C_n R_n'(z_0) - C ] ∏ d C_n Consider the numerator integral first. Using Dirac delta function representation in Eq.(<ref>), completing the square for C_n, integrating over C_n and then again completing the square for k and integrating over k afterwards one finds: ∫^·∑ C_n R_n'(z) · x·^- ∑ C_n^2 E_n / 2 k_B T ·δ [ ∑ C_n R_n'(z_0) - C ] ∏ d C_n = 1/2 π ( ∏√(π· 2 k_B T/E_n) ) ·√(2 π/ k_B T ∑[R_n'(z_0)]^2/E_n) ·^ - C^2 / 2 k_B T ∑[R_n'(z_0)]^2/E_n·^k_B T/2 x^2 [ ∑R_n'(z_0) R_n'(z)/E_n ]^2 + 2 x C/k_B T∑R_n'(z_0) R_n'(z)/E_n/∑[R_n'(z_0)]^2/E_n·^ - k_B T/2 x^2 ∑[R_n'(z)]^2/E_n The denominator integral in Eq.(<ref>) will be equal to the one in Eq.(<ref>) with x=0. Thus: ⟨^ a(z) x⟩ = ^k_B T/2 x^2 [ ∑R_n'(z_0) R_n'(z)/E_n ]^2 + 2 x C/k_B T∑R_n'(z_0) R_n'(z)/E_n/∑[R_n'(z_0)]^2/E_n·^ - k_B T/2 x^ 2∑[R_n'(z)]^2/E_n = ^ -x^2 k_B T / 2 {∑[R_n'(z)]^2/E_n - ( ∑R_n'(z_0) R_n'(z)/E_n )^2 /∑[R_n'(z_0)]^2/E_n} + Cx ∑R_n'(z_0) R_n'(z)/E_n/∑[R_n'(z_0)]^2/E_n Mean for complex conjugate, ⟨^- a(z) x⟩, will be the same, but with changing x to -x in Eq.(<ref>). Sum of these two is real: ^ -y(z) x^2 + w(z) x + ^ -y(x) x^2 - w(z) x = =2 ^ -y(z) x^2 cos w(z) x where now: y(z)= k_B T / 2 [ ∑[R_n'(z)]^2/E_n - ( ∑R_n'(z_0) R_n'(z)/E_n )^2 /∑[R_n'(z_0)]^2/E_n] w(z)= C ∑R_n'(z_0) R_n'(z)/E_n/∑[R_n'(z_0)]^2/E_n Then, Eq.(<ref>) for unsaturated lipids is: ∫_0^∞^ -y(z) x^2 - x cos w(z) x d x Calculating this integral we finally arrive at the expression for order parameter of unsaturated lipids: S_mol(z) = 1/2 ( 3 √(π)^- (w(z) + )^2/4 y(z) [ 1 + ^ w(z)/y(z)erfc ( 1 + w(z)/2√(y(z)) ) + erfi ( + w(z)/2 √(y(z)) ) ] / 4 √(y(z)) - 1 ) Order parameter plots for saturated Eq.(<ref>) and unsaturated Eq.(<ref>) lipids are plotted in Fig. <ref>. The order parameter curve Eq.(<ref>), corresponding to e.g. the case of unsaturated POPC, has a drop in the middle of the monolayer, where double bond is located at z_0=0.5 in our model. This result coincides with the molecular dynamics study, <cit.> (see Fig. 10 therein) and <cit.> (see Fig. 2 therein). In passing, we note that order parameter for DPPC and POPC has also been reported in <cit.> (Fig. 13 therein), but the authors had plotted order parameter only for saturated POPC chain, which closely matches order parameter for DPPC. § DISCUSSION To summarise, we have modelled unsaturated POPC lipid in our flexible strings model by allowing only strings conformations that possess fixed nonzero tangent angle of the double bond with respect to preceding it saturated bonds at some definite position z_0 inside the bilayer, e.g. tan[α(z_0=0.5 L)] = 0.4, 0.6, ... etc. This results in larger area per lipid compared to saturated lipids, as e.g. in DPPC lipid, and produces lateral pressure profiles and order parameter distributions that match qualitatively the ones reported in <cit.>. 
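Before turning to the physical interpretation, a numerical check of the saturated-chain order parameter expression derived above may be useful. The sketch below evaluates S_mol(z) from the eigenfunction derivatives R_n'(z) using scipy's scaled complementary error function erfcx, which avoids the overflow of the product of the exponential and erfc at small y(z). The value of B is purely illustrative; in the full calculation it follows from the self-consistency equation at the equilibrium area per lipid.

import numpy as np
from scipy.special import erfcx

kBT, L, N_max = 1.38e-16 * 300.0, 15.0, 10   # erg; Angstrom; T = 300 K assumed
K_f = kBT * L / 2.0
B   = 1.0e-15                                # erg/A^3, illustrative value only
c_n = np.pi * np.arange(1, N_max + 1) - np.pi / 4.0
E_n = B + c_n**4 * K_f / L**4

def dRn_dz(z):
    # derivatives of the eigenfunctions for n >= 1 (the n = 0 mode is constant, so R_0' = 0)
    x = c_n * z / L
    return np.sqrt(2.0 / L) * (c_n / L) * (-np.sin(x) + (np.cos(c_n) / np.cosh(c_n)) * np.sinh(x))

def S_mol(z):
    y = 0.5 * kBT * np.sum(dRn_dz(z)**2 / E_n)
    u = 0.5 / np.sqrt(y)
    # erfcx(u) = exp(u^2) erfc(u), so the bracketed term equals 3 sqrt(pi) u erfcx(u)
    return 0.5 * (3.0 * np.sqrt(np.pi) * u * erfcx(u) - 1.0)

for frac in (0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
    print("z/L = %.1f   S_mol = %+.3f" % (frac, S_mol(frac * L)))

As expected, the expression tends to full order (S_mol near 1) close to the headgroup, where y(z) is small, and decreases towards the bilayer midplane.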
The most important physics conclusion that follows from our results is obvious from Figs. <ref>, <ref>. Namely, the lateral pressure profiles of unsaturated lipids near the unsaturated bonds go beyond saturated lipids profile, as would be expected from considerations of excluded volume effect on the entropic repulsion between the lipid chains. The unsaturated bond has finite bending angle with respect to straight line of the lipid chain and, therefore, increases locally the lipid occupied area, thus causing stronger entropic repulsion between the lipid chains. Simultaneously, away from the unsaturated (bent) bond the neighbouring lipid chains have less probability to collide just by geometrical reasons, and therefore, the entropic repulsion between them is depleted relative to saturated lipid chains case, thus causing relative decrease of the lateral pressure profile with respect to saturated lipids case in these same regions away from the bent unsaturated bond, as is seen in Figs. <ref>, <ref>. In support of the above reasoning we also have checked predictions of our model for the cases of unsaturated double bond with different values of the bending angle and with different positions of the unsaturated bond along the lipid chains. Let's start with different values of the double bond angle. Presented here Fig. <ref> shows lateral pressure profiles for various values of the fixed double bond tangent angle calculated in our model, see Eq. (<ref>). We see that with decreasing double bond tangent angle the lateral pressure profile approaches the one of the saturated lipid, calculated using Eq. (<ref>). Next, we consider effect of double bond tangent angle on the behaviour of the order parameter for various values of the angle. Results of calculations using Eq. (<ref>)are presented in Fig. <ref>. We see that for angles less then tan(α[z_0=0.5 L]) ≈ 0.45 at z_0=0.5 L the order parameter exceeds the one of the saturated lipid (S_mol(z)≈ 0.8), see Fig. <ref>. This means, that average tangent value of an angle of saturated lipid at z=0.5 L is about 0.45. Farther, we investigated an influence of the position of the double bond on the membrane characteristics. Lateral pressure profiles calculated for various double bond positions using Eq. (<ref>) are exhibited in Fig. <ref>. Comparison of the latter with the lateral pressure profile of the saturated lipid in Fig. <ref> supports proposed above relation between position of unsaturated bond and local excluded volume effect. Also, we see that in our model the closer double bond is to the headgroup of the lipid, the less is its impact on the shape of the lateral pressure profile. This is clearly due to the fixed angle at the headgroup, R'(0)=0, see Eq.(<ref>). Computed order parameter distributions for various values of the double bond position using Eq.(<ref>) are shown in Fig. <ref>. Comparing the latter with the order parameter distribution of the saturated lipid, Fig. <ref>, we see that the double bond position perturbs order parameter locally, so that before and after the double bond the order parameter curve gradually converges to the one of the saturated lipid. § ACKNOWLEDGEMENTS Authors acknowledge support by the Federal Academic Leadership Program Priority 2030 (NUST MISIS Grant No. K2-2022-025).
http://arxiv.org/abs/2407.12931v1
20240717180436
Catalog-level blinding on the bispectrum for DESI-like galaxy surveys
[ "S. Novell-Masot", "H. Gil-Marín", "L. Verde J. Aguilar", "S. Ahlen", "S. Brieden", "D. Brooks", "T. Claybaugh", "A. de la Macorra", "J. E. Forero-Romero", "E. Gaztañaga", "S. Gontcho A Gontcho", "G. Gutierrez", "K. Honscheid", "C. Howlett", "R. Kehoe", "T. Kisne", "A. Lamber", "M. E. Levi", "M. Manera", "A. Meisner", "R. Miquel", "G. Niz", "F. Prada", "G. Rossi", "E. Sanchez", "M. Schubnell", "H. Seo", "D. Sprayberry", "G. Tarlé", "B. A. Weaver" ]
astro-ph.CO
[ "astro-ph.CO" ]
Catalog-level blinding on the bispectrum for DESI-like galaxy surveys ====================================================================== § INTRODUCTION In recent decades, cosmology has undergone remarkable progress, entering the era of precision cosmology, with the ΛCDM model as a cornerstone, bridging theory and observations throughout all observable epochs of the Universe. However, there are still several open questions at the heart of our understanding of cosmology. We still do not know the fundamental physics of dark matter and dark energy, which amount to roughly 95% of the Universe, and some tensions arise from the ΛCDM model when applied to both early and late-time physics. Current and forthcoming experiments—both early-time, such as the South Pole Telescope (SPT) <cit.>, the Atacama Cosmology Telescope (ACT) <cit.>, the Simons Observatory <cit.>, as well as late-time, such as the Dark Energy Spectroscopic Instrument (DESI) <cit.>, Euclid <cit.>, Vera Rubin <cit.>—will play a pivotal role in these unresolved issues, yielding complementary cosmological information with an unprecedented level of precision. In order to ensure the reliability of results and mitigate confirmation bias, blind analyses, whereby the true result is hidden to the experimenter until the full analysis is done and “frozen”, have been increasingly adopted in cosmology. Blind analyses, already standard practice in fields like experimental particle physics <cit.>, have been extended to cosmology in recent years <cit.>. However, there is no one-size-fits-all standard for making an analysis blind; hence there is a need to develop and use a suitable method for each context. Hereafter, we refer to the action of making an analysis blind as simply “blinding”, while we denote as “unblinding” the procedure of recovering the original, non-blind results. One can organize the different approaches to blinding relative to the stage of the analysis in which they are implemented. Generally, in large-scale structure surveys, the earlier it is implemented the more difficult it is to accidentally unblind. For example, while adding a blinding scheme just at the final analysis stages, e.g. shifting the recovered cosmological parameters by an unknown amount, is certainly convenient, it is nevertheless relatively easy to accidentally unblind. An intermediate approach involves adding a random shift in the summary statistics or their covariance <cit.>, e.g. shifting the wave-vectors in the power spectrum. The strongest methods are generally the ones that are implemented already at the phase of catalog production, as done for example in <cit.>. In the framework of large-scale structure analyses, focusing on the power spectrum baryon acoustic oscillations (BAO) feature and redshift-space distortions, the method developed by <cit.> operates at catalog-level, which is currently part of the official DESI pipeline <cit.>. While this method has been extensively validated in the context of a power spectrum template-based approach <cit.>, it still has not been tested on higher-order statistics such as the bispectrum, which is what we set out to do. The purpose of this work is to validate the blinding strategy of <cit.> for an analysis featuring the combination of the power spectrum and bispectrum multipoles in DESI-like periodic boxes and cutsky mocks. In Section <ref> we review the main aspects of this blinding procedure.
In Section <ref> we describe the simulations and the main ingredients of our analysis. In Section <ref> we present our results and we conclude in Section <ref>. § BLINDING THE DATA AT THE CATALOG LEVEL The full blinding procedure consists of two separate steps, which we refer to as respectively AP and RSD parts of the blinding, following the nomenclature from <cit.>. It was shown in <cit.> that this combination results in a modular effect in the cosmological parameters (with the AP and RSD parts affecting different sets of cosmological parameters) while being very difficult to accidentally unblind. In particular, the RSD blinding modifies the linear growth rate parameter f, while the AP part of the blinding results in shifts in the Alcock-Paczyński parameters {α_∥,α_}, which are respectively the parallel and perpendicular to the line of sight dilation scales with respect to the fiducial cosmology. At any redshift z, given the Hubble parameter H(z), the sound horizon scale at baryon drag redshift r_s(z_d) and the angular distance parameter D_A(z), the Alcock-Paczyński parameters are defined as <cit.> α_∥(z)=H^fid(z)r_s^fid(z_d)/H(z)r_s(z_d); α_(z)=D_A(z)r_s^fid(z_d)/D_A^fid(z)r_s(z_d), with the “fid” superscript referring to the parameter corresponding to the fiducial cosmology. In this section, we provide an updated summary of the procedure of <cit.>. In a nutshell, the AP component of the blinding acts only along the line of sight by hiding (blinding) the reference cosmology used to convert the redshifts (z) into comoving distances. It does so by converting the original redshifts (indicated by z) of the catalog to distances using a hidden (blind or shifted in <cit.> nomenclature) cosmology and then transforming back the distances into redshifts using a reference (known) cosmology, thus producing an effectively blind redshift catalog. Blind redshifts are indicated by z'. It is easy to see that this transformation only affects the dilation parameters α_∥, α_. Let us refer to the vector of cosmological parameters that completely specify a given cosmological model by Ω. For example, in a standard ΛCDM model, Ω includes parameters such as the matter density parameter Ω_m, the baryon density parameter Ω_b, the primordial power spectrum spectral slope n_s, the Hubble constant H_0, the rms dark matter fluctuations filtered on 8 Mpc/h scales σ_8. The blinding scheme that the DESI collaboration uses shifts a given reference cosmology Ω^ ref by a (hidden) shift ΔΩ such that a blind cosmology is obtained: Ω'=Ω^ ref+ΔΩ≡Ω^ ref+(Δ f,Δ w_0,Δ w_a) where the linear growth rate is taken to be f=Ω_m^γ,[Here, the exponent γ is chosen following the General Relativity prescription, i.e. γ≈ 6/11.<cit.>] and w_0,w_a describe dynamical dark energy with dark energy equation of state parameter being a function of the scale factor: w(a) = w_0+w_a(1-a). When the analysis on the blind data is performed, a fiducial cosmology Ω^ fid is used (hence transforming equation <ref> such that Ω^ ref→Ω^ fid). Since in this specific application we do not use real data or an unknown true cosmology, the prime quantities are the blinded ones, while the non prime quantities are the reference (and true) ones. 
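As an illustration of the AP step just described, the sketch below remaps catalog redshifts by converting them to comoving distances with a (hidden) shifted w0-wa cosmology and converting back with the reference cosmology, here using astropy. The cosmological parameters match those quoted later for the mocks, while the particular (w0, wa) offset is a placeholder, since in a real blind analysis it is hidden from the analysts.

from astropy.cosmology import Flatw0waCDM, z_at_value

ref   = Flatw0waCDM(H0=67.36, Om0=0.3138, w0=-1.00, wa=0.0)    # reference cosmology of the mocks
blind = Flatw0waCDM(H0=67.36, Om0=0.3138, w0=-0.95, wa=-0.2)   # hidden shifted cosmology (illustrative)

def blind_redshift(z):
    # z -> comoving distance with the hidden cosmology -> z' with the reference cosmology
    d = blind.comoving_distance(z)
    return z_at_value(ref.comoving_distance, d)

print(blind_redshift(0.8))

Applied to every tracer in the catalog, this remapping produces the effectively blind redshifts z' used in all subsequent clustering measurements.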
The resulting shifts from Equation <ref> in the Alcock-Paczyński parameters are: α_∥'(z')/α_∥(z)=H'(z)/H(z); α_'(z')/α_(z)=D_M(z)/D'_M(z), where α with no superscript refer to the reference, fiducial depending on the specific application, with H and D_M being respectively the Hubble expansion parameter and the angular comoving distance at redshift z, H(z) =H_0√(Ω_m(1+z)^3+(1-Ω_m)(1+z)^3(1+w_0+w_a(1-a(z)))) D_M(z) =∫_0^zdz'c/H(z'). In other words, the dilation parameters expected to be measured from the blinded catalog α^ blind_∥,⊥ are related to the pre-blinding ones α_∥,⊥ by the same ratio as the α''s to the α^ refs. Of course, in tests applied to simulations the reference quantities coincide with the fiducial and true quantities, and the prime coincides with the blind, i.e. Ω^ ref=Ω^ fid=Ω^ true and Ω'=Ω^ blind. The RSD part of the blinding operates in a different way. Built on the RSD part of the reconstruction algorithm <cit.>, this step of the blinding procedure smooths the density field with a Gaussian filter and then uses the reconstructed field to shift the redshift space positions r according to, 𝐫'=𝐫-f^ref(Ψ·𝐫̂)𝐫̂+ f'(Ψ·𝐫̂)𝐫̂. The quantity Ψ·𝐫̂, where Ψ is the displacement field, can be inferred from the reconstructed real-space positions, such that the comoving distance is changed by d'=d^rec+f'/f^ref(d^ref-d^rec), with d^rec,d^ref being respectively the reconstructed and reference comoving distances, and f',f^ref the shifted and reference values of the growth factor f. In this way, the redshifts of the tracers are modified such that the catalog has effectively different RSD signal of amplitude given by f^ blind=f^ true+Δ f. Both parts of the blinding procedure have been tested to be robust with a template-based power spectrum analysis <cit.>, and we set to quantify their performance (both separately and in combination) for analyses involving the bispectrum. § TEST DESIGN AND ANALYSIS SET-UP §.§ Test design and current limitations In this paper we perform the following sequence of tests for a joint power spectrum and bispectrum analysis which employs cubic (periodic) boxes and more realistic mock surveys which include partial sky coverage (cutsky mocks):[While cubic mocks simulate the cosmic structure within a periodic cubic volume, cutsky mocks apply observational features to these simulations to mimic real sky survey data, accounting for survey geometry and selection effects.] * Performance of the RSD part with cubic boxes for the combination of the power spectrum and bispectrum redshift space multipoles. * Performance of the AP part with cutsky mocks for the combination of redshift space power spectrum multipoles and bispectrum monopole. * Performance of the full AP+RSD blinding with cutsky mocks for the combination of redshift space power spectrum multipoles and bispectrum monopole. The reason why we first test the RSD part of the blinding algorithm in cubic boxes is that the RSD part of the algorithm could be more sensitive than the AP part to the specific analysis settings. Implementing the AP part of the blinding (which was designed with cutsky data in mind) in a cubic box is challenging both conceptually and practically. First of all, AP blinding cannot be done correctly in plane-parallel approximation (unless in the limit of large distances and small survey areas, which is not the case here). Secondly, the advantage of using cubic boxes is their periodicity, but periodicity is lost when applying the AP blinding. For more details see Section <ref>. 
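The two transformations above are simple enough to prototype directly. The sketch below (i) evaluates the expected dilation-parameter ratios H'(z)/H(z) and D_M(z)/D'_M(z) for a given w0-wa offset, and (ii) applies the RSD shift along the line of sight in the plane-parallel, periodic-box limit used later for the cubic-mock test, taking the reconstructed line-of-sight displacement as given. All numerical offsets are placeholders rather than the values adopted in the analysis.

import numpy as np
from astropy.cosmology import Flatw0waCDM

z = 0.8
ref   = Flatw0waCDM(H0=67.36, Om0=0.3138, w0=-1.00, wa=0.0)
shift = Flatw0waCDM(H0=67.36, Om0=0.3138, w0=-0.95, wa=-0.2)   # example hidden offset

alpha_par_ratio  = float(shift.H(z) / ref.H(z))                              # alpha_par'/alpha_par
alpha_perp_ratio = float(ref.comoving_transverse_distance(z) /
                         shift.comoving_transverse_distance(z))              # alpha_perp'/alpha_perp
print(alpha_par_ratio, alpha_perp_ratio)

def rsd_blind_los(x3, psi_los, delta_f=-0.068, L_box=2000.0):
    # shift the line-of-sight coordinate by (f' - f_ref) * Psi_los and wrap into the box (Mpc/h);
    # psi_los is the reconstructed line-of-sight displacement, assumed available
    return np.mod(x3 + delta_f * psi_los, L_box)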
Given this, we separately test the AP part of the blinding in step 2. To perform step 1, we modify the blinding code to be used by DESI in order to apply the RSD part of the blinding to cubic boxes.[<https://github.com/desihub/LSS/>] For this test, we use the global plane-parallel approximation by obtaining the redshifts of the galaxies from the third direction of the box, x̂_3. Then, the first two coordinates (x_1,x_2) are left equal, while the third coordinate x_3 is transformed according to Equation <ref>[If the transformed value for x_3, which we can denote as x_3, falls outside of the box, it is reintroduced assuming periodic boundary conditions as x_3 modulus L_box]. In this work, for steps 2 and 3 we limit ourselves to adding only the bispectrum monopole to the power spectrum multipoles. The currently available treatment of the bispectrum window function <cit.> is only applicable to the bispectrum monopole B_0. We recognize that this bispectrum window function treatment is known to suffer from some degree of inaccuracy, especially in the squeezed configurations <cit.>; as we will show, these do not interfere with the performance of the blinding (and unblinding) procedure. We leave to future work to update steps 2 and 3 to include also the bispectrum quadrupoles, subject to the availability of an improved window function implementation suitable for the bispectrum multipoles. §.§ Simulations and blinding parameters We utilize 25 AbacusSummit LRG simulations at redshift z=0.8, which we may refer to as simply as the AbacusSummit LRG mocks <cit.>. LRG stands for Luminous Red Galaxies, which constitutes the highest signal-to-noise DESI sample at z≲ 0.8 <cit.>. For the first test in this work (RSD blinding, as presented in Section <ref>) we use the 25 available periodic boxes, while for the second and third tests, we use the corresponding 25 cutsky mocks. The sky area and completeness of the window that we use in cutsky mocks coincide with the DESI Year 1 footprint, and we measure the power spectra and bispectra using random catalogs (randoms) with 20 times the density of the data. The cosmology of all the mocks as well as our adopted reference and fiducial cosmology are the same and compatible with the Planck best fit ΛCDM model <cit.>: Ω_m=0.3138,Ω_b=0.0493, n_s=0.9649,σ_8=0.8114,h=0.6736,w=-1. The cubic boxes have a size of L_box=2000 Mpc h^-1, resulting in a total physical volume for the 25 boxes of V_25=200 (Gpc/h)^3. The effective volume of a single cubic box—computed as in <cit.>—is ∼6.4 (Gpc h^-1)^3, for a total effective volume of V^ eff_25∼ 160 (Gpc h^-1)^3. The cutsky mocks are generated by the Generate Survey Mocks code[<https://github.com/Andrei-EPFL/generate_survey_mocks>] <cit.> and matched the DESI Year 1 survey footprint with the mkfast-Y1 code[<https://github.com/desihub/LSS/blob/main/scripts/mock_tools/mkfast_Y1.py>]. The effective volume of each cutsky mock is estimated to be ∼ 1.6 (Gpc h^-1)^3 resulting in the 25 cutsky mocks having a cumulative effective volume[The effective volume of each cutsky mock is calculated by multiplying the effective volume of its correspondent cubic mock (∼6.4 (Gpc h^-1)^3 <cit.>) by the ratio between the number of particles in the cutsky mock and the cubic box.] of V^ eff_ 25cutsky∼40 (Gpc h^-1)^3, of the order of the expected DESI final volume (5 years of survey) for LRGs <cit.>. 
From the original AbacusSummit suite, we produce a total of 75 blinded mocks, obtained by applying to each original AbacusSummit LRG simulation the corresponding blinding as per Section <ref> (25 blinded mocks for each of the 3 tests). We blind the cubic boxes by a different amount than the cutsky mocks. The blinding is such that the expected shift in the growth rate f is Δ f=f'-f^ref=-0.068 for the cubic boxes (where the value of f^ref is set to 0.8), while for the cutsky mocks we change the sign of the expected shift so that Δ f=f'-f^ref=0.060. The AP part of the blinding in the cutsky mocks has expected shifts of Δα_∥=Δα_=-0.013.[Note that in general the α_∥ and α_ parameters are not necessarily shifted equally. The fact that in our case the shift is the same is just a coincidence.] The values of the shifts were chosen in order to be close to the limit of the DESI blinding pipeline, which enforces |Δ f|≤ 0.08 |Δα_∥|≤ 0.03, |Δα_|≤ 0.03 <cit.>. Compared to the expected standard deviation of the parameters recovered from the total volume of the 25 mocks (V_25), these offsets represent a blinding shift in f for the cubic boxes of ∼3σ, and for the cutsky mocks of ∼1–2σ for all the parameters f,α_∥,α_. We choose not to go beyond 3σ of the expected errorbars in order to not produce a degradation in the recovered constraints <cit.>. It should now be clearer why we do not implement the AP blinding on cubic boxes. The size of the cubic boxes (L_box=2000 Mpc h^-1) is of comparable magnitude to the comoving distance from z=0 to z=0.8 where the tracers are located. Hence, to introduce a significant AP shift at z=0.8 would require arbitrarily setting the observer at a relatively short distance from the box. This would break periodicity and the plane parallel approximation for RSD would not apply. This limitation does not apply to the cutsky mocks. For estimating the covariance matrices, we measure the power spectra and bispectra from 1000 EZmocks, which generate the density field using the effective Zel'dovich approximation <cit.>. Both the underlying cosmology and the tracers' clustering properties are designed to match the AbacusSummit LRG mocks. Throughout, we measure the power spectrum and bispectrum of both cubic and cutsky mocks using the publicly available Rustico code[<https://github.com/hectorgil/Rustico>] <cit.>. §.§ Modeling Our theoretical modelling is the one adopted in our previous works <cit.>, where the interested reader can find an in-depth description. The modelling of the power spectrum follows the renormalized perturbation theory (RPT) prescription <cit.>. For the bispectrum we use the phenomenological GEO-FPT model <cit.>, as provided in the publicly available GEO-FPT code[<https://github.com/serginovell/Geo-FPT>]. In particular for this application, the phenomenological parameters that model the bispectrum shape and scale dependence, {f_1,...,f_5}, are obtained at z=0.8 by interpolation from the tabulated values provided by GEO-FPT. In all cases use the power spectrum monopole and quadrupole data-vector, P_02={P_0,P_2}, while we consider different parts of the bispectrum multipole expansion according to the analysis settings: for cubic mocks (Section <ref>) we use the bispectrum monopole together with the first two quadrupoles,[We found in <cit.> that it is redundant to use all three quadrupoles B_200,B_020,B_002. We follow the bispectrum multipole expansion convention by <cit.>.] 
and refer to our full bispectrum data-vector as B_02={B_0,B_200,B_020}; for cutsky mocks (Section <ref>) we only consider the bispectrum monopole B_0. As in our previous works <cit.>, we perform the cosmological parameter inference via a Markov chain Monte Carlo sampling (MCMC), specifying broad uniform priors in all parameters. We use the full shape of the power spectrum and bispectrum, under the assumption of local Lagrangian bias <cit.>. The parameters of interest in this work are {f,σ_8,α_∥,α_}, and we marginalize over the nuisance parameters {b_1,b_2,A_P,A_B,σ_P,σ_B}, respectively: the linear and quadratic bias parameters, the shot noise amplitude correction for the power spectrum and bispectrum, and the power spectrum and bispectrum peculiar velocities rms. The main difference with the analysis presented in previous works is that we limit the bispectrum k-range to 0.02 h Mpc^-1<k<0.11 h Mpc^-1 (the bispectrum model was calibrated for k_max=0.12 h Mpc^-1). The reason for this choice is that the size of the data-vector is limited by the available number of simulations (1000 EZmocks). In order to estimate the full covariance matrix (as recommended in <cit.>) the number of simulations should be much bigger than the size of the data-vector <cit.>. By adopting this k_max and the relatively large binning size of Δ k=0.01 h Mpc^-1≈ 3.2k_f,[The fundamental wave-vector, k_f, is defined as k_f=2π/L_box.] the full P_02+B_02 data-vector size is of 342 elements, while the P_02+B_0 data-vector used in the cutsky mocks has 152 elements. We further account for the errors due to the finite number of simulations in the covariance matrix estimation by employing the Sellentin-Heavens likelihood function <cit.>. Finally, for the cases where we use cutsky mocks, we model the survey window for the power spectrum and bispectrum exactly as in <cit.>. In short, the galaxy power spectrum model P_gal is convolved with the window function W_2 (obtained by performing pair counts in the random catalog, as defined in <cit.>) to obtain the windowed power spectra P^W as P^W(k)=∫d^3k'/(2π)^3P_gal(k')|W_2(k-k')|^2+P_noise(k). where P_ noise denotes the shot noise contribution to the power spectrum. Due to the computational challenge of operating with the analogous expression of Equation <ref> in the case of the bispectrum, in this paper we use the approximation of <cit.>, which ignores the effect of the window on the bispectum kernel. In fact, we can factorize the model of the galaxy bispectrum as B_gal(k_1,k_2,k_3)=P_NL(k_1)P_NL(k_2)𝒵(k_1,k_2,k_3)+cyc., where P_NL is the non-linear matter power spectrum, and the function 𝒵(k_1,k_2,k_3) encompasses the perturbation theory kernels, the geometrical correction and the bias expansion. In our adopted approximation, the 𝒵 function is unaffected by the window, resulting in a windowed bispectrum monopole theory vector B^W that reads B^W_0(k_1,k_2,k_3)=∫_-1^1dμ∫_0^2πdϕ𝒵(k_1,k_2,k_3)P_NL^W(k_1)P_NL^W(k_2)+cyc. In the above expression, μ_i is the cosine of the angle of the i-th k-vector with respect to the line of sight, and ϕ is such that μ_2=μ_1cosθ_12-√((1-μ_1^2)(1-cosθ_12^2))cosϕ, with θ_12 being the angle between k_1 and k_2. 
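Two pieces of the setup above are easy to prototype. The first is the Sellentin-Heavens likelihood used to propagate the noise of a covariance matrix estimated from a finite number of mocks; the second is one cyclic term of the angular average defining the windowed bispectrum monopole, with the kernel Z and the windowed power spectrum supplied as placeholder callables. Overall normalisation conventions and the remaining cyclic permutations follow the expressions above; nothing here is taken from the actual analysis code.

import numpy as np
from scipy.integrate import simpson

def lnlike_sellentin_heavens(data, model, cov_inv, n_s=1000):
    # ln L = const - (n_s / 2) ln(1 + chi^2 / (n_s - 1)), with the constant dropped
    r = np.asarray(data) - np.asarray(model)
    chi2 = r @ cov_inv @ r
    return -0.5 * n_s * np.log(1.0 + chi2 / (n_s - 1.0))

def windowed_B0_term(k1, k2, k3, P_W, Z, n_mu=201, n_phi=201):
    # One cyclic term of the angular average: integral over mu1 in [-1,1] and phi in [0,2pi]
    # of Z(k1,k2,k3,mu1,mu2) P_W(k1) P_W(k2).  Any overall factor (e.g. 1/4pi) and the
    # remaining cyclic permutations should follow the convention of the main text.
    cos_t12 = (k3**2 - k1**2 - k2**2) / (2.0 * k1 * k2)
    mu1 = np.linspace(-1.0, 1.0, n_mu)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    MU1, PHI = mu1[:, None], phi[None, :]
    MU2 = MU1 * cos_t12 - np.sqrt((1.0 - MU1**2) * (1.0 - cos_t12**2)) * np.cos(PHI)
    integrand = Z(k1, k2, k3, MU1, MU2) * P_W(k1) * P_W(k2)
    return simpson(simpson(integrand, x=phi, axis=1), x=mu1)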
Since this approximation cannot be easily generalized to the bispectrum quadrupoles, and to date there is no suitable window modelling for the bispectrum quadrupoles in this particular multipole expansion, in this work for cutsky tests we only use the bispectrum monopole.[As a consequence, in the cutsky cases, where the bispectrum quadrupoles are not used, we do not consider variations from Gaussian shot-noise in the bispectrum, as we do with the full joint power spectrum and bispectrum multipoles analysis via the A_P and A_B parameters respectively. We find that letting A_B vary when only using the bispectrum monopole B_0 opens up an unnecessarily large and unconstrained degeneracy direction where cosmological parameters take unphysical values, e.g. f>1.] For the purpose of testing the blinding, we find the current approximation of Equation <ref> to be sufficient. To show that the adopted model for the data vector does not display obvious signature of systematics, Appendix <ref> shows the best fitting theoretical model for the data vector from the blind and not blind catalogs, along with relevant χ^2 values. § RESULTS In order to test the validity of the RSD part of the blinding scheme for the addition of the bispectrum in the data-vector, we perform similar tests as in <cit.>, while using the set-up and analysis previously done in <cit.> and adapted as mentioned in Section <ref>. §.§ Cubic mocks: RSD blinding We start by running the cosmological pipeline using only the power spectrum monopole and quadrupole P_02 on the 25 original and RSD-blinded AbacusSummit mocks. Additionally, we repeat the procedure for the mean of the 25 simulations, taking advantage of the fact that we have blinded all realizations with the same shift Δ f. The results are displayed in Figure <ref>, where we plot the scatter among the 25 boxes of the maximum likelihood of the cosmological parameters {f,σ_8,α_∥,α_} before and after blinding. In this figure, each of the filled blue circles corresponds to one of the 25 boxes, and the coordinates represent the maximum likelihood parameter values before (x axis) and after (y axis) blinding. The green stars correspond to the maximum likelihood values for the mean signal from the 25 boxes. The expectation is that symbols should scatter around the line of equation y=x+ΔΩ, where ΔΩ is the (blinding) shift in the given parameter.[Note that ΔΩ=0 for parameters not expected to be affected by blinding. In this test for RSD blinding in boxes, the parameters with ΔΩ=0 are {σ_8,α_∥,α_}.] This corresponds to the red dashed line. The true underlying (and expected) values of the parameters before (and after) blinding are indicated by the dashed vertical and horizontal blue lines. This preliminary step shows that we can replicate the results obtained in <cit.>. As expected, if only P_02 is used, the f–σ_8 degeneracy is only weakly broken by the non-linear terms of the power spectrum monopole and quadrupole. This degeneracy is not fully disentangled unless the bispectrum monopole and quadrupoles are included in the data-vector <cit.>. The corresponding results of the P_02+B_02 joint analysis are shown in Figure <ref> using the same conventions. As with the case of the power spectrum, we find excellent agreement between original and blinded results, given the expected shift in f. Note the different scales in the figure: the errorbars on the parameters are significantly reduced by including the bispectrum monopole and quadrupoles in the analysis. 
The cosmological constraints obtained with the mean signal of the 25 simulations and the corresponding covariance (i.e., for a physical volume of V_25=25×2^3 (Gpc h^-1)^3) are shown in Figure <ref>. The original (blue), blinded (orange) and unblinded (green) posteriors for all the possible pairs of the {f,σ_8,α_∥,α_} parameters are shown. By “unblinded” we refer to the posteriors of the blinded case numerically shifted back by the designed shift i.e. by -Δ f=0.068. As we mentioned in Section <ref>, the shift in f produced by the blinding procedure is equivalent to ∼3σ of the parameter error in the P_02+B_02 analysis in the joint volume covered by the 25 simulations. We can see that the original analysis, including the bispectrum multipoles, recovers the underlying cosmological parameters within 1σ, even for a volume (and thus statistical precision) that far exceeds the total DESI volume. The overlap of the original (blue) and unblinded (green) posterior shows that the RSD blinding component shifts the recovered posteriors exactly as predicted. §.§ Cutsky mocks The AP component of the blinding scheme, by acting simply on the fiducial cosmology adopted in the redshift-distance relation, by design shifts the dilation parameters exactly as modeled in all the established BAO analyses, e.g. <cit.>. Nevertheless, for the sake of systematically validating the blinding pipeline, we first test the behaviour of the AP blinding in the presence of a survey window, in isolation of the RSD part, which corresponds to our second step as mentioned in Section <ref>. In Figure <ref> we show the posterior distribution for the main cosmological parameters of our analysis, {fσ_8,α_∥,α_} for both the unblinded and AP blinded cutsky mocks, together with the shifted (“unblinded”, in green) posteriors. The statistical errors correspond to the cosmological volume spanned by the 25 individual mocks, which is ∼ 40 (Gpc h^-1)^3 as explained in Section <ref>. In this application to cutsky mocks, we report constraints on the product fσ_8 instead of the separate parameters f and σ_8 since the bispectrum monopole cannot fully resolve the well-known f–σ_8 degeneracy <cit.>. The inclusion of the bispectrum quadrupoles (once a suitable modeling of the window becomes available) is expected to resolve this. However, at this time, this is beyond the scope of this work, centered on validating the blinding scheme. The figure shows that the posterior distributions for the parameters of interest are all within 1σ of the true values, which is a validation of both the power spectrum and bispectrum models and bias expansions, together with the window approximation we use for B_0. The close similarity of the blue (before blinding) and green (after blinding, but numerically shifted back) posterior contours indicates that for the adopted magnitude of the blinding shift of α_∥=Δα_=-0.013 (corresponding to 1–2σ of our errorbars for the total of 25 mocks), the blinding procedure performs as expected. Finally, Figure <ref> shows the results of the test of the full blinding pipeline—AP and RSD—applied to the cutsky mocks, with shifts of Δ f=0.06, Δα_∥=Δα_=-0.013. The α_∥,α_ parameters are (correctly) shifted as in the case without the RSD part of the blinding: even with the addition of the bispectrum, and in the presence of a sky cut, the two parts of the blinding do not interfere significantly. 
As for the redshift-space distortions parametrization fσ_8, the RSD part of the blinding we applied is designed to shift f while not affecting σ_8, and the recovered posterior distribution for fσ_8 is exactly as predicted. The original and shifted (unblinded) posteriors in Figures <ref> and <ref> indicates that estimated blind parameter constraints are slightly tighter than the original ones. This is not unexpected, as for this application we do not transform the covariance matrix under the blinding operation. The clustering properties of the EZmocks used to compute the covariance are designed to match those of the (original) before blinding catalogs. Blinding changes slightly the clustering properties making the covariance matrix slightly mis-calibrated. While this effect can be corrected—albeit in a somewhat time-consuming way, by applying the blinding transformation to the 1000 realizations of EZmocks—for this application it is small enough that can be left uncorrected. § CONCLUSIONS In the era of large stage IV galaxy surveys, blind analyses have become essential to safeguard against confirmation bias in cosmology, where researchers may unconsciously influence their analyses to conform to their prior beliefs. By masking certain aspects of the data or results until the analysis is complete, blinding adds to the integrity of findings from present and upcoming galaxy surveys <cit.>. The blinding technique of <cit.> has been validated for two-point statistics and currently adopted by and implemented in the official DESI pipeline. This blinding scheme modifies the distance-redshift relation and thus the tracer's redshifts in two complementary parts. The first part, named AP blinding in this work, is equivalent to modifying the cosmology used in transforming redshifts into distances, by changing the default ΛCDM values of the w_0,w_a parameters in the dark energy equation of state parameterization given by w(z)=w_0+(1-a(z))w_a. This generates a (theoretically) predictable shift in the recovered values of α_∥,α_. The second part of blinding, the RSD blinding, combines the observed density field with a suitably reconstructed density field in order to modify the effective value of the linear growth rate parameter f. In this paper we have explored the effectiveness and performance of the catalog-level blinding technique proposed by <cit.> within DESI-like galaxy spectroscopic surveys, for a data-vector which combines the power spectrum and bispectrum summary statistics. We have confirmed that such a blinding scheme performs as expected when including the bispectrum data-vector, in the three sequential tests below: * RSD-only blinding on cubic boxes. For a blinding shift of Δ f=-0.068 (which corresponds to ∼3σ of the recovered errorbars[In all cases, both cubic and cutsky mocks, the total volume of both the signal and covariance corresponds to the sum of the 25 available mocks. This yields an effective volume of ∼ 160 (Gpc h^-1)^3 in the case of cubic mocks and of ∼ 40 (Gpc h^-1)^3 in the cutsky mocks.] in f) and with a data-vector consisting of power spectrum and bispectrum monopole and quadrupoles {P_0,P_2,B_0,B_200,B_020}, we found perfect agreement between the predicted blinded and recovered blinded parameters. * AP-only blinding on cutsky mocks, with a data-vector composed of power spectrum multipoles and bispectrum monopole {P_0,P_2,B_0}. The modeling of the window function on the bispectrum monopole follows the approximation proposed by <cit.>. 
As expected, given that this component is directly modifying the cosmology—which the α_∥,α_ parameters naturally quantify—the parameters recovered in the blinded mocks closely follow the predictions. In this case, we shifted the w_0,w_a parameters such that the expected shifts are Δα_∥=Δα_=-0.013, about 1–2σ of errorbars obtained in α_∥,α_. * Full blinding pipeline, including both AP and RSD parts on cutsky mocks for {P_0,P_2,B_0} data vector. The AP part is implemented as above, and the RSD part yields a theoretical shift of Δ f=0.060. The AP part behaves exactly as expected. For this data vector, the RSD part can only be reliably tested by considering the parameter combination fσ_8: the addition of the bispectrum monopole is not sufficient to break the well-known degeneracy between these two parameters (see <cit.> for a recent discussion). We conclude that these results provide sufficient validation to the blinding pipeline for present-day analyses involving the galaxy bispectrum. To date, there is no sufficiently precise modeling of the survey window function on the bispectrum multipoles. Should one become available, steps 2 and 3 above could be extended to the full data vector including bispectrum multipoles. We leave this to future work. The work presented in this paper represents the first step to validate our pipeline for the joint power spectrum and bispectrum analysis of the DESI survey. Forthcoming work will include improvements of the survey window approximation for the bispectrum and the treatment of the imaging and systematic weights on the bispectrum. § DATA AVAILABILITY All data from the tables figures are available in machine-readable format at https://zenodo.org/uploads/11984896?token=eyJhbGciOiJIUzUxMiJ9.eyJpZCI6IjhkYmUyODdkLWIxZTQtNGUxNS05YzExLWJhNzk4MWQxMjllOCIsImRhdGEiOnt9LCJyYW5kb20iOiJhYjQxOGRjYzc2NTIwZGRlOTlkMDAxNTVkNTU5ZDc1YyJ9.FTTFsCqQ6v_hDLvHZ7EVyXhLmgJZcrNEZYMPFlnWE1gFRkXIG1KJX-9ak67RIfpXWJFg6byDC13qc9yVnPjL7A10.5281/zenodo.11984896 in compliance with the DESI data management plan. § ACKNOWLEDGEMENTS SNM, HGM and LV thank Santiago Ávila, Mike Shengbo Wang and Benjamin Weaver for the helpful discussion and suggestions. SNM acknowledges funding from the official doctoral program of the University of Barcelona for the development of a research project under the PREDOCS-UB grant. HGM acknowledges support through the program Ramón y Cajal (RYC-2021-034104) of the Spanish Ministry of Science and Innovation. LV and HGM acknowledge the support of the European Union’s Horizon 2020 research and innovation program ERC (BePreSySe, grant agreement 725327). Funding for this work was partially provided by the Spanish MINECO under project PGC2018-098866-B-I00MCIN/AEI/10.13039/501100011033 y FEDER “Una manera de hacer Europa”, and the “Center of Excellence Maria de Maeztu 2020-2023” award to the ICCUB (CEX2019-000918-M funded by MCIN/AEI/10.13039/501100011033). This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of High-Energy Physics, under Contract No. DE–AC02–05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract. Additional support for DESI was provided by the U.S. National Science Foundation (NSF), Division of Astronomical Sciences under Contract No. 
AST-0950945 to the NSF’s National Optical-Infrared Astronomy Research Laboratory; the Science and Technology Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Humanities, Science and Technology of Mexico (CONAHCYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: <https://www.desi.lbl.gov/collaborating-institutions>. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U. S. National Science Foundation, the U. S. Department of Energy, or any of the listed funding agencies. The authors are honored to be permitted to conduct scientific research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation. This work has made use of the following publicly available codes: https://github.com/serginovell/geo-fptGEO-FPT <cit.>, https://github.com/hectorgil/RusticoRustico <cit.>, https://github.com/hectorgil/BrassBrass <cit.>, https://emcee.readthedocs.io/en/stable/index.htmlEmcee <cit.>, https://www.gnu.org/software/gsl/GSL <cit.>, https://scipy.org/SciPy <cit.>, https://numpy.org/NumPy <cit.>, https://getdist.readthedocs.io/en/latest/GetDist <cit.>, https://www.astropy.orgAstropy <cit.>, https://matplotlib.orgMatplotlib <cit.>. We are grateful to the developers who made these codes public. § ADDITIONAL BLINDING CONSIDERATIONS In Figure <ref> we offer an alternative visualization of the parameter constraints for both the fully blinded (AP+RSD) and original mocks: in this case, differently than in Figure <ref>, we only show the fσ_8 results. We show in Figure <ref> the best fitting theoretical model for the joint data-vector {P_0,P_2,B_0} of the cutsky mocks in the original and blinded cases (both AP-only and AP+RSD). There is no significant difference in the χ_H^2 values (where the H subscript stands for Hartlap-corrected covariance matrix) when the data-vector is blinded. Furthermore, the values for χ^2_H are close to the number of degrees of freedom, which is equal to the number of elements of the full data-vector (141) minus the number of free parameters (9). Even in this case, where the errorbars correspond to an effective volume of ∼40 (Gpc h^-1)^3, the values for χ_H^2 are all within ∼10% of the number of degrees of freedom. This (along with the results shown in the main text) is an indication that the adopted models for the power spectrum and bispectrum do not show evidence for large systematic errors given the expected statistical error of present and upcoming surveys. ieeetr
http://arxiv.org/abs/2407.13506v1
20240718133950
$B_{s}^0\to K^0\overline{K}{}^0$ beyond the Standard Model
[ "Yuval Grossman", "Yosef Nir", "Matthias Neubert", "Yogev Shpilman", "Yehonatan Viernik" ]
hep-ph
[ "hep-ph" ]
http://arxiv.org/abs/2407.13324v1
20240718092407
A millisecond pulsar position determined to 0.2 milliarcsecond precision with VLBI
[ "Hao Ding", "Adam T. Deller", "Paulo C. C. Freire", "Leonid Petrov" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.HE" ]
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka-cho, Mizusawa, Oshu, Iwate 023-0861, Japan hdingastro@hotmail.com Centre for Astrophysics & Supercomputing, Swinburne University of Technology, PO Box 218, Hawthorn, Victoria 3122, Australia Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany NASA Goddard Space Flight Center Code 61A, 8800 Greenbelt Rd, Greenbelt, 20771 MD, USA Precise millisecond pulsar (MSP) positions determined with very long baseline interferometry (VLBI) hold the key to building the connection between the kinematic and dynamic reference frames respectively used by VLBI and pulsar timing. The frame connection would provide an important pathway to examining the planetary ephemerides used in pulsar timing, and potentially enhancing the sensitivities of pulsar timing arrays used to detect stochastic gravitational-wave background at nano-Hz regime. We aim at significantly improving the VLBI-based MSP position from its current ≳1 mas precision level by reducing the two dominant components in the positional uncertainty — the propagation-related uncertainty and the uncertainty resulting from the frequency-dependent core shifts of the reference sources. We introduce a new differential astrometry strategy of using multiple calibrators observed at several widely separated frequencies, which we call PINPT (Phase-screen Interpolation plus frequeNcy-dePendent core shifT correction; read as "pinpoint") for brevity. The strategy allows determination of the core-shift and mitigates the impact of residual delay in the atmosphere. We implemented the strategy on , an MSP well constrained astrometrically with VLBI and pulsar timing. Using the PINPT strategy, we determined core shifts for 4 AGNs around , and derived a VLBI-based pulsar position with uncertainty of 0.17 mas and 0.32 mas in right ascension and declination, respectively, approaching the uncertainty level of the best-determined timing-based MSP positions. Additionally, incorporating the new observations into historical ones, we refined the pulsar proper motion and the parallax-based distance to the ≲10 level and the sub-pc level, respectively. The realization of the PINPT strategy promises a factor-of-5 positional precision enhancement (over conventional VLBI astrometry) for all kinds of compact radio sources observed at ≲2 GHz, including most fast radio bursts. A millisecond pulsar position determined to 0.2 mas precision with VLBI Hao Ding 1EACOA Fellow Adam T. Deller2 Paulo C. C. Freire3 Leonid Petrov4 ================================================================================================================================ § INTRODUCTION Millisecond pulsars (MSPs) refer to recycled fast-spinning neutron stars, which exhibit unparalleled spin stability compared to other pulsars <cit.>. Using the pulsar timing technique <cit.> that time and model the pulse arrival times, astronomers have delivered the most stringent tests of gravitational theories with MSPs <cit.>. Collectively, an array of MSPs scattered across the sky, as known as a pulsar timing array (PTA), can be used to directly probe the stochastic gravitational-wave background (GWB) at the nHz regime <cit.>. Recent years have seen the major PTA consortia closing in on achieving high-significance detections of a homogeneous GWB <cit.>. Despite the breakthrough, to deepen our understanding of the sources of the GWB still requires continuous improvement of the PTA sensitivities. 
The optimal strategy to sustain PTA sensitivity enhancement is to regularly add new MSPs to the PTAs <cit.>, as has been adopted by the MPTA <cit.>. However, it is generally difficult to quantify the red timing “noises” (in which the GWB signal resides) for a shortly timed (≲3000 days) MSP; one way to overcome this difficulty is to incorporate independent astrometric measurements (i.e., sky position, proper motion and parallax) into the inference of timing parameters <cit.>. The very long baseline interferometry (VLBI) technique can provide precise, robust and model-independent astrometric measurements for MSPs, and takes much shorter time to achieve a certain astrometric precision, as compared to the astrometric determination made with pulsar timing <cit.>. Therefore, incorporating precise VLBI astrometric measurements into timing analysis of MSPs plays an essential role in testing gravitational theories <cit.>, and may substantially enhance the PTA sensitivities <cit.>. However, the incorporation is technically challenging. Firstly, incorporating precise VLBI proper motion and parallax into timing analysis can be limited by potential temporal structure evolution of the reference sources used in VLBI astrometry. Secondly, incorporating a VLBI pulsar position into timing analysis hinges on a good understanding of the transformation between the two distinct kinds of reference systems used by VLBI astrometry and pulsar timing. Though both VLBI astrometry and pulsar timing are usually presented with respect to the barycenter of the solar system, VLBI astrometry is conducted in the kinematic reference frame established with remote AGNs quasi-static on the sky (hence being robust to inaccurate planetary ephemerides), while pulsar timing studies are carried out in the dynamic reference frame that requires reliable planetary ephemerides to convert Earth-based pulse arrival times to the barycenter of the solar system. The dynamic coordinate is anchored to a kinematic coordinate system through observations of common objects, for instance, differential observations of asteroids with respect to stars. VLBI observations of MSPs with respect to AGNs will allow us to determine a rotation of the dynamic coordinate system defined by planetary ephemerides with respect to the inertial coordinate system (based on VLBI observations of AGNs) with an accuracy of 0.1 mas <cit.>. The residuals in pulsar positions from VLBI and timing observations after a subtraction of the rotation will allow us to provide an independent assessment of pulsar timing errors and validate the PTA error model. We consider that question as a matter of great importance because a claim that a PTA has detected GWB is based upon a model of pulsar timing errors. So far, planetary-ephemeris-dependent frame rotation remains poorly constrained, mainly limited by relatively large (≳1 mas) VLBI position uncertainties of MSPs <cit.>, as compared to the ≲0.2 mas timing position uncertainties for the best-timed MSPs <cit.>. Therefore, to reduce the VLBI position uncertainties of MSPs holds the key to building the planetary-ephemeris-dependent frame tie, examining the quality of planetary ephemerides, and hence facilitating the quantification of the timing noises resulting from inaccurate planetary ephemerides. In this paper, we introduced and tested a novel method to significantly improve the precision of pulsar VLBI positions. 
Throughout the paper, uncertainties are provided at 68% confidence, unless otherwise stated; mathematical expressions (including the subscripts and superscripts) defined anywhere are universally valid. § A NOVEL OBSERVING STRATEGY & TEST OBSERVATIONS §.§ The PINPT strategy As radio pulsars are generally faint (sim1 mJy at 1.4 GHz, and with steep spectra), the standard approach for directly determining positions in the quasi-inertial VLBI frame involving the measurement of group delays is not feasible. Instead, pulsar absolute positions have been determined using differential astrometry with respect to relatively bright reference sources, which are normally AGNs that are not point-like. Since the position of a suitable nearby AGN can be determined using standard absolute astrometry techniques, such relative position measurements allow a connection of the pulsar position into the quasi-inertial frame. However, standard absolute astrometry techniques do not account for any structure in the AGN, and so the position extracted for these sources is effectively that of the peak brightness in the image. The brightest spot in the 2D brightness distribution (or simply image) of an AGN also serves as the reference point of differential astrometry, which is usually the optically thick jet core (as long as the AGN is not flaring). As the AGN images normally vary with observing frequency ν, the reference point (or the jet core) evolves with ν as well; additionally, frequency-dependent image models of the reference sources are required for pulsar astrometry. Three main error sources contribute to the error budget of the absolute pulsar position (see Section 3.2 of ): i) the uncertainty in the absolute position of the primary phase calibrator (derived and registered in the Radio Fundamental Catalogue[<http://astrogeo.org/>], or the ICRF3 catalogue, ), ii) the unknown frequency-dependent core shifts (hereafter simply referred to as core shifts) <cit.> of the reference sources (or more generally, the frequency evolution of reference source structures), and iii) differences in the line of sight propagation delay between the direction of the calibrator source (where it has been solved) and the direction of the target. At L band, core shifts amounting to ∼1.2 mas <cit.> usually dominate the error budget. Second to that, the propagation-related systematic error of the absolute pulsar position is also prominent, given the relatively large (≳1 deg) separation between the pulsar and its primary phase calibrator. To suppress the aforementioned positional uncertainties, we designed a special observing strategy of differential astrometry, which extends the core-shift-determining method pioneered by <cit.>, and combines the method with the Multi-View (referring specifically to 2D interpolation throughout this paper) strategy <cit.>. The proposed observing strategy, referred to as the PINPT (Phase-screen Interpolation plus frequeNcy-dePendent core shifT correction; read as "pinpoint") strategy, requires a group of ≲6 observations for absolute position determination: ≲3 L-band Multi-View observations of the pulsar (pulsar sessions), and ≲3 core-shift-determining observations on nearby AGNs (that include the 3 calibrators used in the Multi-View session), each at different observing frequency ν. Where suitable L-band in-beam calibrators are identified, the pulsar sessions can also be used as the L-band core-shift-determining session, hence reducing the required observing time. 
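To give a sense of how the multi-frequency sessions translate into a core-shift correction, the sketch below fits a power-law core-position law r(nu) = r_inf + A nu^(-1/k_r) to core positions measured at several bands. The nu^(-1) scaling (k_r = 1, the conical synchrotron self-absorbed jet case) is a commonly adopted assumption rather than a result of these observations, and the positions listed are placeholder values, not measurements from this campaign.

import numpy as np
from scipy.optimize import curve_fit

nu_ghz = np.array([1.44, 1.76, 2.3, 8.4, 15.0])      # approximate centres of the sampled bands
r_mas  = np.array([1.05, 0.90, 0.72, 0.22, 0.13])    # core positions along the jet (placeholders)

def core_position(nu, r_inf, amp, k_r):
    return r_inf + amp * nu**(-1.0 / k_r)

popt, pcov = curve_fit(core_position, nu_ghz, r_mas, p0=[0.0, 1.5, 1.0])
r_inf, amp, k_r = popt
print("asymptotic core position %.3f mas, amplitude %.3f mas, k_r = %.2f" % (r_inf, amp, k_r))

The fitted law then provides the frequency-dependent reference-point correction that is applied when tying the L-band pulsar position to the calibrator positions measured at higher frequencies.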
Our observing strategy of multi-frequency observations with multiple phase calibrators has three advantages. Firstly, with the multi-frequency observations of AGNs, the core shifts of the AGNs would be well determined, which would significantly reduce the core-shift-related errors of the absolute pulsar position. Secondly, the use of Multi-View strategy would remove propagation-related systematic errors to at least the first order <cit.>. Finally, with three phase calibrators of the Multi-View setup, the pulsar position uncertainties due to uncertain reference source positions would drop by a factor of ≤√(3) (see Section 4.4 of ), compared to using only one phase calibrator. §.§ Observations To test the PINPT strategy, we ran four observing sessions in December 2021 and January 2022 using the Very Long Baseline Array (VLBA) on , a millisecond pulsar well determined astrometrically by previous campaigns (, hereafter referred to as D13 and G21). The observations, carried out under the project code BD244, include two 2-hr L-band Multi-View observations and two other 2-hr core-shift-determining observations at S/X band and Ku (∼15 GHz) band, respectively. The first L-band observation was scheduled within 2 days of the S/X- and Ku-band observations (in order to minimize structure evolution of the phase calibrators), and served as both pulsar session and core-shift-determining session, whereas the second L-band observation was solely a pulsar session. More specifically, in the L-band observations, a “target pointing” covers and two in-beam (at L band) calibrators identified by D13, i.e., (J2221) and (J2222); scans on this field were interleaved with scans on three brighter but off-beam AGNs (hereafter referred to as off-beam calibrators), i.e., (J2218), (J2219), and (J2226), with a cycle time of 5 minutes. In the S/X- and Ku-band observations, J2221 and the three off-beam calibrators (hereafter referred to as the core-shift-probing AGNs) were observed alternately (by 5-min and 2-min cycles, respectively). All of the 4 core-shift-probing AGNs are selected to have displayed resolved jet-core radio features in VLBI data, which eases the core shift determination (see Section <ref>). Being fainter at higher observing frequencies, was skipped in the S/X- and Ku-band observations. The calibrator plans of the observations are displayed in Figure <ref>. In addition to the phase calibrators relatively close to on the sky, the bright blazar , further away from the sky region of interest, was used as the fringe finder to correct instrumental delays and filter bandpass. The unresolved flux densities and the angular separations of the aforementioned sources can be found in Table <ref>. To precisely determine the core shifts, a wide variety of observing frequencies are required. The S/X-band observation simultaneously covers two frequency bands, i.e., at around ∼2.3 GHz and ∼8.4 GHz. Additionally, the L-band observations were designed to cover two separate frequency ranges centered around 1.44 GHz and 1.76 GHz, respectively. Altogether, we sampled 5 distinct frequency ranges, for the purpose of core shift determination. The VLBA data prior to data reduction can be accessed with the project code BD244 at <https://data.nrao.edu/portal>. § DATA REDUCTION & DIRECT RESULTS All data reduction was performed with the psrvlbireduce[<https://github.com/dingswin/psrvlbireduce>] pipeline, which runs functions of AIPS <cit.> through ParselTongue <cit.>, and images sources with DIFMAP <cit.>. 
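The Multi-View (2D interpolation) step at the heart of these observations can be illustrated with a short sketch: with three calibrators, the direction-dependent phase error is approximated by a linear screen across the sky and evaluated at the target position. The numbers below are made up, and the snippet is a conceptual illustration rather than the psrvlbireduce implementation:

import numpy as np

# Sky offsets of three off-beam calibrators from the target (degrees),
# roughly forming a triangle around the target; the values are made up.
cal_offsets = np.array([[-1.2,  0.4],    # calibrator 1: (d_RA, d_Dec)
                        [ 0.9,  1.1],    # calibrator 2
                        [ 0.5, -1.3]])   # calibrator 3

# Phases measured towards each calibrator at one time/antenna (degrees);
# made-up numbers standing in for the direction-dependent phase screen.
cal_phases = np.array([35.0, -12.0, 8.0])

# Fit phi(l, m) = a0 + a1*l + a2*m through the three calibrators ...
A = np.column_stack([np.ones(3), cal_offsets])
a0, a1, a2 = np.linalg.solve(A, cal_phases)

# ... and evaluate the screen at the target position (the origin here),
# which is the phase correction applied to the target.
print('interpolated phase at the target: %.2f deg' % a0)

# The same solution is a weighted sum of the calibrator phases; the weights
# play the role of the c_q coefficients discussed in the appendices.
weights = np.linalg.solve(A.T, np.array([1.0, 0.0, 0.0]))
print('equivalent calibrator weights c_q:', weights, ' (sum = %.3f)' % weights.sum())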
Despite the total observing time of only 8 hours, the data reduction of this work is sophisticated. There are two different procedures of data reduction, which depend on the purpose of the session, i.e., whether it is a target session or a core-shift-determining session. As noted in Section <ref>, the first L-band observation serves as both target session and core-shift-determining session, therefore being reduced twice in two distinct procedures. §.§ Target sessions For the data reduction of target sessions, we did not split the data by observing frequency into two (one at around 1.44 GHz and the other at around 1.76 GHz). By doing so, the average central frequency 1.6 GHz agrees with previous astrometric campaigns of (D13, G21), and the acquired image S/N (and hence the positional precision) is not lowered. As mentioned in Section <ref>, the two L-band observations involve 5 phase calibrators, including three off-beam calibrators and two in-beam calibrators (also see Figure <ref> and Table <ref>). We implemented the phase calibration of in two different ways, depending on the astrometric goal. §.§.§ In-beam astrometry The previous astrometric campaign of was carried out between October 2010 and June 2012, spanning 1.7 yr (D13, G21). The two new target sessions extend the astrometric time baseline by a factor of 6.6 to 11.3 yr, promising higher astrometric precision (especially for proper motion). To capitalize on the long time baseline, we phase-referenced to the same reference source (i.e., J2222) used in the previous campaign, following the data reduction procedure of D13 (including using the same image model of J2222). The updated results of in-beam astrometry are reported in Section <ref>. §.§.§ Multi-View (2D interpolation) As one form of the Multi-View strategy, 2D interpolation uses ≥3 reference sources to derive the phase solution at the sky position of the target <cit.>, which can at least remove propagation-related systematic errors to the first order. In order to determine a precise absolute position of , we applied Multi-View (as part of the PINPT strategy) with the off-beam calibrators, all of which have well determined absolute positions<ref>. We reiterate that realizing the PINPT strategy requires a combination of Multi-View session(s) and core-shift-determining session(s); the two kinds of sessions need to be arranged close to each other to minimize the effects of structure evolution. The second L-band observation is ≈40 days apart from the core-shift-determining sessions. Therefore, Multi-View was applied to only the first L-band observation, but not the second one. In practice, we realized the Multi-View in two different approaches described in Appendix <ref>. Both approaches consistently render the position 22^h22^m05.99997^s±0.1 mas, -01°37'15.7825''±0.2 mas for , and 22^h22^m01.373131^s±0.03 mas, -01°32'36.97654''±0.06 mas for J2222. For further examination, we also applied the two approaches of Multi-View to the X-band data (where the chance of phase wrap errors is much smaller than at L band), and confirmed with J2221 that the two approaches give almost identical positions. The availability of the J2222 position, as well as the J2221 position 22^h21^m12.680887^s±0.01 mas, -01°28'06.30985''±0.02 mas, offers a distinct pathway to the absolute position of (see Section <ref>). We note that the three positions presented above can only be considered relative positions with respect to the off-beam calibrators; and the positional uncertainties only include the statistical component.
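For orientation, the angular separations implied by the positions just quoted can be checked directly; the snippet below uses astropy, ignores the quoted uncertainties, and is illustrative only:

from astropy import units as u
from astropy.coordinates import SkyCoord

# Multi-View positions quoted above (statistical uncertainties omitted here).
psr   = SkyCoord('22h22m05.99997s', '-01d37m15.7825s', frame='icrs')
j2222 = SkyCoord('22h22m01.373131s', '-01d32m36.97654s', frame='icrs')
j2221 = SkyCoord('22h21m12.680887s', '-01d28m06.30985s', frame='icrs')

# Angular separations of the two in-beam calibrators from the pulsar, i.e.
# the throws over which any residual phase error is applied.
for name, cal in [('J2222', j2222), ('J2221', j2221)]:
    print('%s - pulsar separation: %.2f arcmin'
          % (name, psr.separation(cal).to(u.arcmin).value))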
§.§ Core-shift-determining sessions The determination of AGN core shifts requires a wide coverage of observing frequencies. As mentioned in Section <ref>, the core-shift-determining sessions cover 5 frequency ranges. We first split the S/X-band data into S-band data and X-band data, and likewise split the wide-L-band data into two datasets — one at ∼1.44 GHz and the other at ∼1.76 GHz. Thereby, we acquired altogether 5 datasets taken at 5 different observing wavelengths λ — 2 cm, 4 cm, 13 cm, 17 cm and 21 cm. From each of the 5 datasets, we measured the J2221 position with respect to each off-beam calibrator by 1) phase-referencing J2221 to each off-beam calibrator, 2) dividing the phase-referenced J2221 data by its final image model obtained at λ, and 3) fitting the J2221 position. In phase referencing, the final image model of an off-beam calibrator at λ was applied to the phase calibration of the off-beam calibrator; subsequently, the acquired phase solution was applied to both the off-beam calibrator and J2221. To make the final image models of the off-beam calibrators and J2221, we first applied 13 cm models to the datasets of the respective sources at all λ during phase calibration (or self-calibration for J2221). In this way, we aligned the reference points of each core-shift-probing AGN across λ. Thereafter, we split out the aligned datasets, and remade the models of J2221 and the off-beam calibrators at each λ. The final image models of in-beam and off-beam calibrators at all λ were made publicly available[<https://github.com/dingswin/calibrator_models_for_astrometry>], to facilitate the reproduction of our results. § ASTROMETRIC PARAMETERS & CORE SHIFTS The data reduction described in Section <ref> produced a) two pulsar positions measured with respect to J2222, b) Multi-View positions of the pulsar, J2221, and J2222, and c) 5×3=15 J2221 positions from the core-shift-determining sessions. The products a) and c) can provide stringent constraints on astrometric parameters and core shifts, respectively. §.§ Astrometric inference We added the two new pulsar positions measured with respect to J2222 to the previous pulsar positions (measured with respect to J2222), and repeated the Bayesian inference described in G21. The resultant astrometric parameters are provided in Table <ref>. For comparison, the previous VLBI results reported by G21 are reproduced in Table <ref>. Unsurprisingly, the proper motion precision improves substantially, by a factor of ∼7, while the parallax is also improved by ≈14%, corresponding to a refined trigonometric distance of 268.6^+1.0_-0.9 pc for . On the other hand, the 3 σ discrepancy in μ_δ reveals an under-estimated systematic uncertainty in declination in previous works, likely due to vertical structure evolution of J2222, which has a much larger impact on a short (≲2 yr) timespan. §.§ Core shift determination Following the pioneering work by <cit.>, we developed new packages to infer the core shifts of the 4 core-shift-probing AGNs from the 15 J2221 positions (with respect to three off-beam calibrators at five λ) in a Bayesian style. The core shift inference requires three ingredients — (1) a prescription of the systematic errors of the 15 J2221 positions, (2) a mathematical description of the core shifts, and (3) an underlying mathematical relation between core shift and λ (or observing frequency ν). The three ingredients are addressed as follows.
§.§.§ Systematic errors on measured J2221 positions Along with the 15 J2221 positions, their random errors due to noise in the J2221 images were also obtained with data reduction (described in Section <ref>). Additionally, atmospheric propagation effects would introduce systematic errors, which change with ν and the angular separation between J2221 and the reference source. We approached the systematic errors by σ^𝒮_ijk/1 mas=η_EFAC·η_0 ·s_i/1 deg·l̂(ϵ) ·σ_0(ν_j)/1 mas·Θ̂_k , where σ^𝒮_ijk denote systematic errors; the subscript i=J2218, J2219, J2226 specifies the reference source; the second subscript j=1,2,3,4,5 corresponds to one of the five observing frequencies (see Section <ref>); the last subscript k=α, δ refers to right ascension (RA) or declination; η_EFAC is a scaling factor (for the estimated systematic uncertainty) to be determined with Bayesian analysis; η_0 is the initial scaling factor that brings σ^𝒮_ijk to a reasonable value (∼1 mas) before inferring η_EFAC, which eases the inference of η_EFAC at a later stage; s_i represents the angular separation between J2221 and the reference source; Θ̂_k denotes the fractional synthesized beam size (projected to RA or declination); l̂ and σ_0 stand for the two terms changing with antenna elevations ϵ and ν, respectively. In this work, we set η_0≡1; the adopted l̂(ϵ) and σ_0(ν) are described in Appendix <ref>. §.§.§ The directions of core shifts Regarding the ingredient (2), we describe the core shift of the i'-th (i'=J2218,J2219,J2221,J2226) AGN with two parameters — the core shift magnitude r_i', and the direction of core shift θ_i'. As noted in <cit.>, it is difficult to infer both r_i' and θ_i' for each of the 4 core-shift-probing AGNs from the 15 J2221 positions without prior constraints, which has been taken into account in design of the observations. As mentioned in Section <ref>, all of the core-shift-probing AGNs are selected to have displayed clear jet-core radio feature on VLBI scales. Following <cit.>, we assumed core shift directions are aligned with respective AGN jet directions, thereby gaining prior knowledge of the core shift directions with analysis of VLBI images of the AGNs. The determination of AGN jet directions is detailed in Appendix <ref>. §.§.§ The relation between core shift and observing frequency The relation between core shift and observing frequency is given by <cit.> as r ∝ν^-1/k_r, where the index k_r equals to 1 when synchrotron self-absorption (that leads to the jet core radio emissions) is in equipartition <cit.>. For the i'-th AGN, we initially adopted an equivalent formalism r_i' = r_0i'(ν/ν_0)^-β_i' as the relation between r_i' and ν, where β_i' is a power index to be determined in Bayesian analysis; v_0 and r_0i' refer to the reference frequency and the core shift magnitude at the reference frequency, respectively. Using the Bayesian inference described in Appendix <ref>, we derive β_i' along with other model parameters. The results are provided in Table <ref>. When β_i' are included in the inference, the reduced chi-square χ^2_ν of inference is 1.9. Although β_i' is significantly (≳3 σ) determined for 2 of the 4 AGNs, it is consistent with 1 in all cases. In comparison, when performing the inference with all β_i' fixed to 1, we acquired consistent results with generally higher precision. Moreover, the χ^2_ν decreases to 1.2, suggesting β_i'≡1 as an appropriate assumption for this work. We adopt the results derived assuming β_i'≡1 for further analysis. 
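Since the adopted relation is a plain power law with β_i'≡1, it is straightforward to tabulate. The sketch below evaluates it at the five observed bands and at the 1.6 GHz pulsar frequency; the normalisation r_0 and the 1.4 GHz reference frequency are placeholders for illustration, not fitted values from this work:

import numpy as np

def core_shift(nu_ghz, r0_mas, nu0_ghz=1.4, beta=1.0):
    """Core shift magnitude r(nu) = r0 * (nu/nu0)**(-beta), in mas."""
    return r0_mas * (np.asarray(nu_ghz, dtype=float) / nu0_ghz) ** (-beta)

# The five observed frequency ranges (GHz) sampled by the BD244 sessions.
nu = np.array([1.44, 1.76, 2.3, 8.4, 15.0])

# Placeholder core-shift normalisation for one AGN (NOT a fitted value).
r0 = 1.0   # mas at the assumed 1.4 GHz reference frequency

for f, r in zip(nu, core_shift(nu, r0)):
    print('nu = %5.2f GHz  ->  r = %.3f mas' % (f, r))

# The quantity entering the pulsar position correction is the core shift at
# the observing frequency of the pulsar sessions (1.6 GHz):
print('r(1.6 GHz) = %.3f mas' % core_shift(1.6, r0))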
The adopted core shift model is illustrated in Figure <ref>. § AN ULTRA-PRECISE MSP POSITION OBTAINED WITH VLBI With r_0i' and θ_i' determined for the three off-beam calibrators and J2221, we calculated the core shifts of the 4 AGNs at 1.6 GHz, and proceeded to derive the absolute position of . The derivation was carried out in three approaches. In the first approach, the pulsar position obtained with Multi-View (presented in Section <ref>) was corrected for the core-shift refinements in the assumed positions of the off-beam calibrators (as described by Equation <ref> in Appendix <ref>). We obtained the pulsar position α^𝒢_1=22^h22^m05.99993^s±0.17 mas, δ^𝒢_1=-01°37'15.7841''±0.32 mas in the geocentric reference frame at the reference epoch t_ref of MJD 59554.0, which corresponds to α^ℬ_1 = 22^h22^m06.00015^s±0.17 mas, δ^ℬ_1 = -01°37'15.7827''±0.32 mas with respect to the barycenter of the solar system after removing the parallax effects. Here, the positional uncertainty is the addition-in-quadrature of the statistical uncertainty (provided in Section <ref>) and the Δx⃗_target uncertainty described in Appendix <ref>. In the second approach, we first acquired the J2221 position x_J2221 at 1.6 GHz by correcting the X-band J2221 position (obtained with Multi-View, see Section <ref>) using the same process employed in the previous approach for the target source . Subsequently, we reprocessed all VLBA data (including the two new pulsar sessions and the historical ones) of , phase-referencing to J2221. With the position series obtained from the data reduction, we inferred the reference pulsar position x_psr^ J2221 (measured with respect to J2221) and its uncertainty at t_ref=MJD 59554.0 (along with other astrometric parameters) using the Bayesian analysis described in <cit.>. Combining x_J2221, the image model position x_J2221^ model of J2221, and x_psr^ J2221, the absolute position of was determined with Equation <ref> to be α^ℬ_2 = 22^h22^m06.00015^s±0.19 mas, δ^ℬ_2 = -01°37'15.7825''±0.42 mas, where the error budget of the position is detailed in Appendix <ref>. The last approach is essentially the same as the second one, except that J2222 is used instead of J2221. Accordingly, the absolute pulsar position was calculated with the J2222 position x_J2222 at 1.6 GHz, the image model position x_J2222^ model of J2222, and the reference pulsar position x_psr^ J2222 measured with respect to J2222. We obtained α^ℬ_3 = 22^h22^m06.00015^s±0.14 mas, δ^ℬ_3 = -01°37'15.7828''±0.27 mas. The pulsar positions obtained with the three approaches are summarized in the upper part of Table <ref>. Though all three approaches render highly consistent absolute pulsar positions, the first approach offers a snapshot (on the timescale of AGN structure evolution) localization (of ) independent of the previous astrometric observations of , as opposed to the two other approaches (which essentially use the multi-frequency multi-source observations to perfect the position and structure of one of the in-beam calibrators, rather than the pulsar, at that snapshot in time, and then proceed to perform standard differential astrometry using that frozen model of the astrometric reference source). Therefore, we report (α^ℬ_1,δ^ℬ_1) at t_ref=MJD 59554.0 as the primary absolute position of . §.§ Comparison to timing positions A priori, we expect the VLBI PINPT position and the position measured by pulsar timing to be consistent.
With a sample size of 1, it is difficult to ascribe any discrepancy to one or the other of the measurement techniques, or to a systematic difference between dynamic and kinematic frames - but we can use the level of agreement to set a probabilistic upper limit on the error contribution from any of these three potential sources. The most precise published timing-based position of is reported in G21. In Table 3 of G21, three positions are provided for t_ref=MJD 55743, which include one position derived assuming a proper motion determined with VLBI, and two positions derived without using any VLBI prior (hereafter referred to as timing-only positions). To test the PINPT strategy with pulsar timing, we re-derived (α^ℬ_2,δ^ℬ_2) and (α^ℬ_3,δ^ℬ_3) at t_ref=MJD 55743 (same as that of G21) following the method described earlier in Section <ref>, under the assumption that the structures of J2221 and J2222 do not evolve with time. The results are displayed in Table <ref>. To minimize the correlation between the timing and VLBI positions, we only compare the VLBI positions to the two timing-only positions (of G21), which are listed in Table <ref> as the “DMX” and “non-DMX” positions. Here, “DMX”, named by G21, refers to the pulsar timing model that describes DM with a piecewise constant function (; G21), while "non-DMX" refers to the timing model that approximates DM variations with a cubic function (G21). The DMX and non-DMX timing positions are marginally consistent to within 2 σ (here and hereafter, σ refers to the addition-of-quadrature of the uncertainties of the two compared sides), with the uncertainty of the DMX position being more conservative than the non-DMX one. On the other side of the comparison, δ^ℬ_2 and δ^ℬ_3 are consistent with each other. However, α^ℬ_2 is smaller than α^ℬ_3 at 3 σ significance, which is associated with a ∼3 σ discrepancy between the two μ_α measured with respect to J2221 (44.755^+0.017_-0.014 ) and J2222 (44.707±0.005 , as reported in Table <ref>), respectively. The discrepancy between α^ℬ_2 and α^ℬ_3 likely indicates the violation of the assumption that the structures of J2221 and J2222 do not evolve with time. The jet direction of J2221 is almost aligned with the RA direction (see Figure <ref>). As noted in D13, the relatively severe structure evolution of J2221 is believed to cause 1) less precise parallax derived with respect to J2221 (as the parallax magnitude in the RA direction is ≈2.4 times larger than in the declination direction), and 2) biased μ_α determination. Specifically, the discrepancy between α^ℬ_2 and α^ℬ_3 (or the μ_α discrepancy) can be explained by 0.56 mas larger (or fractionally 18% higher according to Table <ref>) r_J2221(1.6 GHz) at MJD 55743 compared to at MJD 59554. On the other hand, thanks to the horizontal jet of J2221, δ^ℬ_2 is almost free from the impact of structure evolution (of J2221), hence being more favorable than δ^ℬ_3, let alone the indication of vertical structure evolution of J2222 mentioned in Section <ref>. Therefore, we adopt (α^ℬ_3,δ^ℬ_2) as the absolute pulsar position at MJD 55743. A more sophisticated treatment in the future could incorporate positions in both axes from both sources, weighting them by some measure of expected reliability and taking into account the covariance between the two measurements. 
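The consistency levels quoted here and below follow this addition-in-quadrature convention; a minimal helper makes it explicit (the numbers are placeholders rather than values from the tables):

import numpy as np

def n_sigma(value_a, err_a, value_b, err_b):
    """Discrepancy in units of the combined (quadrature) uncertainty."""
    return abs(value_a - value_b) / np.hypot(err_a, err_b)

# Placeholder declination offsets (mas, relative to an arbitrary zero point),
# only meant to illustrate the convention used in the comparisons.
vlbi_val, vlbi_err = 0.00, 0.27
timing_val, timing_err = 0.45, 0.35

print('discrepancy: %.1f sigma'
      % n_sigma(vlbi_val, vlbi_err, timing_val, timing_err))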
When comparing (α^ℬ_3,δ^ℬ_2) to the two timing-only positions, we first find good consistency between (α^ℬ_3,δ^ℬ_2) and the DMX position, with α^ℬ_3 and the DMX RA consistent to within 0.3 σ, and δ^ℬ_2 and the DMX declination consistent to within 1.1 σ. On the other hand, despite the 1.5 σ consistency between δ^ℬ_2 and the non-DMX declination, α^ℬ_3 is smaller than the non-DMX RA at >3 σ significance. It is not impossible that the >3 σ RA discrepancy is caused by the use of different kinds of reference systems. Therefore, we cannot yet conclude that the new PINPT results favour the DMX timing model over the non-DMX one with just one MSP. Moreover, we reiterate that α^ℬ_3 is subject to additional errors induced by potential horizontal structure evolution of J2222. A more robust test of the PINPT strategy with pulsar timing can be achieved by comparing (α^ℬ_1,δ^ℬ_1) to timing positions (based on different timing models) derived at MJD 59554, which will be part of an upcoming timing paper. Additionally, applying the PINPT strategy to just a few well timed MSPs will allow a much stronger test of the new strategy. § SUMMARY Using the PINPT strategy, we determine an MSP position to the ∼0.2 mas precision with VLBI, which improves on the previous precision level by a factor of ∼5 <cit.>, and is comparable with the precision level of the timing positions of the best-timed MSPs <cit.>. According to <cit.>, applying the PINPT strategy to 50 MSPs promises ≲0.1 mas precision for the connection between the kinematic and dynamic reference frames, >3 times more precise than the previous expectation. Considering that systematic errors of calibrator source positions are in a range of 0.05–0.2 mas<ref> and unlikely to be improved within several decades, our strategy provides the position accuracy that approaches to a practical limit. In general, the PINPT strategy can be used in a broader context: it can sharpen the VLBI localisation of any steep-spectrum compact radio source, which could facilitate the studies of, for example, fast radio bursts <cit.> or mergers of two neutron stars <cit.>. Furthermore, it is important to stress that the PINPT strategy is not only meant to enhance the precision of absolute positions, but would have fundamental impact on differential astrometry as well. Due to the variability of AGN core shifts <cit.>, proper motions and parallaxes measured with respect to AGNs could be potentially biased (e.g. D13, also see Section <ref>), which is believed to have driven the occasional inconsistencies between VLBI and timing proper motions for the longest timed MSPs <cit.>. However, traditional differential astrometry with one calibrator is incapable of quantifying the variabilities in AGN core shifts, hence being subject to extra systematic errors. In the scenario of precise in-beam astrometry, variability in the core shifts of the reference sources is a leading source of systematic errors. Multi-epoch PINPT observations, on the other hand, can constrain the core shift variabilities, thus minimizing their corruption on astrometric results. As pulsar astrometry is considered, joining the core-shift-determining sessions to in-beam or Multi-View astrometry can provide accurate pulsar proper motions and parallaxes not biased by core shift variabilities, which can then be safely incorporated into pulsar timing analysis. HD acknowledges the EACOA Fellowship awarded by the East Asia Core Observatories Association. HD thanks Wei Zhao for the discussion about jet direction determination. 
This work is mainly based on observations with the Very Long Baseline Array (VLBA), which is operated by the National Radio Astronomy Observatory (NRAO). The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Pulse ephemerides of were made for the purpose of pulsar gating, using data from the Effelsberg 100-meter telescope of the Max-Planck-Institut für Radioastronomie. This work made use of the Swinburne University of Technology software correlator, developed as part of the Australian Major National Research Facilities Programme and operated under license <cit.>. aa § SUPPORTING MATERIALS FOR SECTION <REF> § TWO APPROACHES OF MULTI-VIEW (2D INTERPOLATION) As mentioned in Section <ref>, Multi-View was realized in two approaches. In both approaches, J2219 — the closest to among the off-beam calibrators, was used as the main phase calibrator. In the first approach, we made a phase calibration with J2219, then interpolated the phase solution to the approximate sky position of by performing two 1D interpolation operations <cit.> one after another. Specifically, the first 1D interpolation extrapolated the phase solution to the intersection of two straight lines — one connecting J2219 and J2226, and the other connecting J2218 and J2226. Then the second 1D interpolation derived the phase solution at the position of . In the second approach, the phase solution acquired with J2219 was passed to J2218 and J2226. Subsequently, self-calibration was performed with both J2218 and J2226. The acquired incremental phase solutions ϕ_J2218 and ϕ_J2226 were then linearly added together as c_J2218·ϕ_J2218+ c_J2226·ϕ_J2226 (here, c_J2218 and c_J2226 are constants described in Appendix <ref>), which essentially moves the virtual calibrator (explained in ) to the approximate sky position of the target (i.e., , J2221, or J2222). § L̂(Ε) AND Σ_0(Ν) In Equation <ref>, there are two functions — l̂(ϵ) and σ_0(ν). The former describes the fractional atmospheric path length as a function of the antenna elevation ϵ, while the latter characterizes the evolution of the systematic error with respect to the observing frequency ν. We define l̂(ϵ) as l/R_E, where l and R_E are, respectively, the atmospheric path length and the radius of the Earth (not including the atmosphere). It is easy to calculate that l̂(ϵ)=√(sin^2ϵ+2 η_H+η_H^2)-sinϵ , where the overline commands averaging over the observation; η_H=h_atmo/R_E is the atmosphere thickness divided by R_E. We adopted η_H=0.15 calculated with the upper height of the ionosphere (≈965 km) and the average Earth radius of 6371 km. Using simulations, the relation between propagation-related positional error and ν has been studied by <cit.>, and is provided in Figure 6 of <cit.>. As the simulations of <cit.> assume a 5 angular separation between the target and the reference source, we divided the results of <cit.> by a factor of 5 (to reach the systematics per degree separation), and adopted the divided results as σ_0(ν). In Equation <ref>, we assume σ^𝒮_ijk∝l̂(ϵ)·σ_0(ν_j), while having little knowledge about the coefficient. Therefore, the nuisance parameter η_EFAC (to be determined in Bayesian inference) is essential for completing Equation <ref>, and recovering the true magnitude of σ^𝒮_ijk. § THE DETERMINATION OF AGN JET DIRECTIONS AGN jet directions ϕ_jet and their uncertainties have been directly estimated from their VLBI images <cit.>. 
Likewise, we determined ϕ_jet of the 4 core-shift-probing AGNs from the final image models<ref> of the 4 sources using the newly developed package arcfits[Available at <https://github.com/dingswin/arcfits>. The writing of the package has benefited from the discussion with Dr. Wei Zhao, who already has a preliminary script to derive directions along the jet ridge line using circular slicing (see <https://github.com/AXXE251/AGN-JET-RL>).]. The obtained ϕ_jet results are illustrated alongside the AGN images in Figure <ref>, and provided in Table <ref>. arcfits derives ϕ_jet and their uncertainties in an automatic way described as follows. §.§ The noise level of the residual map After reading a VLBI image, a 2D array of flux densities can be obtained. From this 2D map, the noise level rms of the residual map (i.e., the image after removing all detected source components) can be derived, which is a preparation for measuring ϕ_jet. We established the residual map and estimated the rms in the following way. We first marginalized the 2D array of flux densities into a 1D array, and calculated the standard deviation of the flux densities. All flux densities higher than 7 times the standard deviation were considered “detected”, and were removed from the 1D array. We repeated this standard deviation calculation and removal of detected points until we reached the residual map, in which no more detection can be identified. The standard deviation of this residual map was adopted as the residual map noise level rms. §.§ Elliptical cuts on the inner regions of VLBI images The hitherto most advanced method of measuring the position angles of radio features outside the compact radio core is by 1) converting the positions in a VLBI image from Cartesian coordinate to a polar coordinate centered around the brightest spot of the image, and 2) applying circular cuts to the VLBI image <cit.>. As the core shift study is concerned, a ϕ_jet determined close to the compact radio core is expected to better approximate the core shift direction, as compared to a ϕ_jet determined afar <cit.>. Due to intrinsic structure and/or scatter broadening, the compact radio core is usually larger than the synthesized beam, while normally resembling the beam (e.g., Figure <ref>). In other words, there is no prior information about the size of the compact radio core. As a result, when the synthesized beam is non-circular, circular cuts applied in close proximity to the compact radio core might lead to the misidentification of the compact radio core (stretching along the major axis) as radio features outside the compact core. Although this anomaly can be corrected by human intervention, other systematic errors might be introduced during the human intervention. As a novel method that is i) largely free of human intervention, and ii) dedicated to ϕ_jet determination in close proximity to the compact radio core, we cut an image model with a number of increasingly large ellipses that a) have the same axis ratio and position angle as the synthetic beam, and b) are centered at the image pixel of the highest flux density. Given that the position angle and axis ratio are both constant, the n-th (n=1,2,3,...) ellipse can be characterized by its semi-major axis a_n; a position on this ellipse can be defined with one additional parameter — the position angle (east of north) ϕ. In this work, we universally adopted a_n=a_beam+(n-1)·(a_beam/2) for the ellipses, where a_beam refers to the semi-major axis of the synthesized beam. 
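The two steps just described translate into a short script. The following sketch is run on a toy image; it follows the text rather than the actual arcfits code, and the pixel-coordinate conventions are schematic. It estimates the residual-map rms by iterative 7σ clipping and samples the image along a beam-shaped ellipse; the selection criteria applied to such ellipses are given next.

import numpy as np

def residual_rms(image, clip=7.0):
    """Iteratively drop pixels above clip*std until none remain; return the std."""
    flux = image.ravel().astype(float)
    while True:
        std = flux.std()
        keep = flux <= clip * std
        if keep.all():
            return std
        flux = flux[keep]

def ellipse_cut(image, x0, y0, a, axis_ratio, pa, n=720):
    """Sample the image on an ellipse with semi-major axis a (pixels), matched
    in axis ratio and orientation to the beam (pa in radians, schematic)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    e_maj = np.array([np.sin(pa), np.cos(pa)])
    e_min = np.array([np.cos(pa), -np.sin(pa)])
    dx, dy = (np.outer(e_maj, a * np.cos(t))
              + np.outer(e_min, a * axis_ratio * np.sin(t)))
    ix = np.clip(np.round(x0 + dx).astype(int), 0, image.shape[1] - 1)
    iy = np.clip(np.round(y0 + dy).astype(int), 0, image.shape[0] - 1)
    return dx, dy, image[iy, ix]

# Toy image: a Gaussian "core", a fainter extension, and unit noise.
yy, xx = np.mgrid[0:256, 0:256]
img = 50.0 * np.exp(-((xx - 128.0) ** 2 + (yy - 128.0) ** 2) / (2 * 4.0 ** 2))
img += 8.0 * np.exp(-((xx - 140.0) ** 2 + (yy - 128.0) ** 2) / (2 * 4.0 ** 2))
img += np.random.default_rng(0).normal(0.0, 1.0, img.shape)

rms = residual_rms(img)
dx, dy, flux = ellipse_cut(img, 128, 128, a=12.0, axis_ratio=0.6, pa=0.4)
k = np.argmax(flux)
print('rms = %.2f; brightest point on the cut: PA = %.0f deg, %.1f x rms'
      % (rms, np.degrees(np.arctan2(dx[k], dy[k])), flux[k] / rms))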
Among the ellipses ascertained by Equation <ref>, the innermost ellipse outside the compact radio core was used to derive ϕ_jet; the position angle corresponding to the maximum flux density on the ellipse was adopted as the ϕ_jet. An ellipse is considered outside the compact radio core, when it meets the following criteria: * the maximum flux density on the ellipse is >7 rms (see Appendix <ref> for the meaning and calculation of rms), and * the median flux density on the ellipse is <3 rms, and * ϕ_jet uncertainty can be calculated with the method detailed in Appendix <ref>. If no ellipse meets the criteria, the AGN is considered a compact source without extended radio features. §.§ The uncertainty on the AGN jet direction In Appendix <ref>, we adopted the position angle of the maximum flux density 𝒮_max on the innermost elliptical cut outside the compact radio core as the ϕ_jet. Random noises in the image might distort the flux density distribution, and deviate the position of the maximum flux density on the ellipse. The degree of flux density change due to random noises is limited: the chance of large flux density changes due to random noises is lower than smaller flux density changes. Given a flux density drop Δ𝒮 induced by random noises, one can estimate the chance of the drop by counting flux densities <𝒮̅_R-Δ𝒮 in the residual map (see Appendix <ref> for explanation of the residual map), where 𝒮̅_R stands for the mean flux density of the residual map. Reversely, provided a confidence level where the flux densities in the residual map is ≥𝒮̅_R-Δ𝒮, we can derive the Δ𝒮 corresponding to the confidence level. In this way, we estimated Δ𝒮 corresponding to 68% confidence level from the residual map. At 68% confidence, we expect S'_max>S_max-Δ𝒮, where S'_max is the maximum flux density changed by random noises. On the ellipse (where ϕ_jet is determined), we identify all position angles ϕ where the flux densities equal to S_max-Δ𝒮. When exactly two ϕ are acquired, the two ϕ are adopted as the 1 σ uncertainty interval of ϕ_jet. Otherwise (which is rare), we consider the ellipse likely still intersects the compact radio core, and move onto the next ellipse further afield. § THE BAYESIAN INFERENCE OF CORE SHIFTS We derived the core shift model with quartet[planned to be released alongside a future catalogue paper], a newly developed package dedicated to inferring core-shift-related parameters in a Bayesian manner, which is explained as follows. §.§ Mathematical formalism Apart from the aforementioned parameters r_i, θ_i, β_i and η_EFAC, the reference positions x^*_ik are also required in the model of core shifts. For the Bayesian inference, the likelihood function is P_CS∝(∏_i∏_j ∏_kσ_ijk)^-1exp[-1/2∑_i ∑_j∑_k(x_ijk-x̃_ijk/σ_ijk)^2] , where x_ijk and x̃_ijk refer to, respectively, the observed and the modeled J2221 positions with respect to the i-th reference source at observing frequency ν_j; the total positional uncertainty σ_ijk is the addition-in-quadrature of random and systematic errors. In Equation <ref>, the modeled J2221 positions x̃_ijk follow the relation x̃_ijα = [r_J2221(ν_j) cosθ_J2221 - r_i(ν_j) cosθ_i] + x^*_iα x̃_ijδ = [r_J2221(ν_j) sinθ_J2221 - r_i(ν_j) sinθ_i] + x^*_iδ , where the formalism of r_i'(ν_j) is given by Equation <ref>. §.§ Priors For the Bayesian inference, we adopted the following prior information: * When the inference of β_i' is requested, the prior constraints of β_i' follow a uniform distribution between 0.3 and 3, which is denoted as β_i'∼𝒰(0.3,3). 
* r_0i'∼𝒰(0,5), where the unit is mas. * x^*_ik∼𝒰(x_ij|_k-3σ_x_ij|_k, x_ij|_k+3σ_x_ij|_k), where x_ij|_k and σ_x_ij|_k stand for the average and standard deviation of x_ijk at a given k; one lower limit and one upper limit are universally used for all reference sources. * η_EFAC∼𝒰(0,20). * The prior constraints on θ_i' follow Gaussian distributions characterized by the AGN jet directions ϕ_jet in Table <ref>. Namely, θ_i'∼𝒢(ϕ_jet|_i', σ_ϕ_jet|_i'). In the case of an asymmetric ϕ_jet uncertainty, the larger side of the uncertainty is used as the σ_ϕ_jet|_i'. § THE CALCULATION OF AN ABSOLUTE PULSAR POSITION AND ITS UNCERTAINTY The determination of the absolute position of through the PINPT strategy relies on well determined absolute positions of the off-beam calibrators. This work is based on the off-beam calibrator positions x⃗^ RFC_i reported in the 2024A release of the Radio Fundamental Catalogue<ref> (RFC). Generally speaking, for Multi-View carried out with 3 off-beam calibrators, the relation between the target position x_target and the off-beam calibrator positions x_q (q=1,2,3) is x_target = ∑_q^3 c_q x_q , where x_target is known to a precision good enough to guide the Multi-View; the c_q are coefficients that can be solved for with the additional condition ∑_q^3 c_q = 1 . Therefore, in Multi-View, offsets in the off-beam calibrator positions would lead to an offset in the target position (refined with Multi-View), as Δx_target = ∑_q^3 c_q ·Δx_q , where the Δx_q result from 1) core shifts of the off-beam calibrators, and 2) inaccurate positions of the off-beam calibrator image models, and are calculated by Δx_q = (x⃗^ model_q - x⃗^ RFC_q ) + [r_q(ν_psr) - r⃗_q^ RFC] . Here, x⃗^ model_q refers to the image model position of the q-th off-beam calibrator; r_q(ν_psr) denotes the core shift of the q-th off-beam calibrator at the observing frequency of interest (which is 1.6 GHz, the observing frequency of the pulsar sessions, for this work); r⃗_q^ RFC is the residual core shift of the RFC position. Specific to this work, q=i=J2218, J2219, J2226. The c_i for various targets are provided in Table <ref>. According to <cit.>, r⃗_i^ RFC=0, when 1) x⃗^ RFC_i is derived with group-delay astrometry (that removes the ionosphere-induced group delay with dual-band or multi-band geodetic observations), and 2) β_i=1. Both conditions are met for this work: the x⃗^ RFC_i are estimated with group-delay astrometry; and we already assume that β_i≡1 in the Bayesian analysis (see Section <ref>). Therefore, we adopt r⃗_i^ RFC=0 in this work. The uncertainty on Δx_target is calculated as the addition-in-quadrature of the uncertainties on the components c_i Δx⃗_i, where the uncertainty of Δx⃗_i is in turn derived as the addition-in-quadrature of the uncertainties on x⃗^ RFC_i and r_i. §.§ Deriving an absolute pulsar position via a nearby AGN When the target itself is an AGN (e.g. J2221, J2222), Equation <ref> needs to be generalized to Δx_target = ∑_q^3 c_q ·Δx_q + Δr_target , where Δr_target is the core shift difference between ν_psr and the observing frequency ν_IBC of the AGN target. Accordingly, the uncertainty of Δr_target is further added in quadrature to the uncertainty on Δx_target. The availability of a nearby AGN (hereafter referred to as IBC) around a pulsar (or other targets of interest) provides an alternative pathway to the absolute position of the pulsar. After applying the position correction Δx_target calculated with Equation <ref>, the absolute position x⃗_IBC of the IBC at ν_psr can be derived.
Provided i) the image model position x_IBC^ model of the IBC, and ii) the pulsar position x_psr^ PR determined with phase referencing with respect to the IBC, the absolute pulsar position can be calculated as x_psr = x_psr^ PR + (x⃗_IBC - x_IBC^ model) . This pathway of deriving the absolute position may become the only option in cases where Multi-View of the target of interest cannot be arranged (e.g., when the target is an unpredictable radio transient). The uncertainty of x_psr derived via the IBC is the addition-in-quadrature of the Δx_target uncertainty, the x_psr^ PR uncertainty (estimated with Bayesian inference in this work), and the statistical uncertainty of the IBC position obtained with Multi-View at ν_IBC.
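The chain of relations in this appendix can be condensed into a few lines. The sketch below works in a local tangent-plane frame with made-up numbers (calibrator offsets in degrees, position offsets in mas; none of them are values from this work, and the sign convention of the applied correction is schematic): it solves for the c_q, propagates assumed calibrator offsets into Δx_target, and applies the final relation x_psr = x_psr^PR + (x_IBC - x_IBC^model) with addition in quadrature of the uncertainty terms.

import numpy as np

# --- Multi-View coefficients: sum_q c_q x_q = x_target with sum_q c_q = 1 ---
cal_pos = np.array([[-1.2,  0.4],      # off-beam calibrator offsets from the
                    [ 0.9,  1.1],      # target in degrees (made-up numbers)
                    [ 0.5, -1.3]])
target_pos = np.array([0.0, 0.0])

A = np.vstack([cal_pos.T, np.ones(3)])
c = np.linalg.solve(A, np.append(target_pos, 1.0))

# --- Delta x_q = (x_model - x_RFC) + r_q(nu_psr), with r_RFC taken as 0 -----
dx_model = np.array([[ 0.05, -0.02],   # model-minus-RFC position offsets (mas)
                     [-0.03,  0.04],
                     [ 0.02,  0.01]])
core_shift = np.array([[ 0.30, -0.10], # r_q at 1.6 GHz projected on (RA, Dec),
                       [ 0.20,  0.25], # in mas (placeholders, not fitted values)
                       [-0.15,  0.30]])
dx_cal = dx_model + core_shift
dx_cal_err = np.full((3, 2), 0.10)     # assumed 1-sigma errors per component

dx_target = c @ dx_cal
dx_target_err = np.sqrt(((c[:, None] * dx_cal_err) ** 2).sum(axis=0))

# --- Absolute pulsar position via an in-beam calibrator (IBC) ---------------
x_ibc = np.array([1.40, 0.60]) - dx_target  # corrected IBC position (schematic sign)
x_ibc_model = np.array([1.00, 0.30])        # IBC image-model position (made up)
x_psr_pr = np.array([10.0, -5.0])           # pulsar position w.r.t. the IBC (made up)
x_psr_pr_err = np.array([0.10, 0.20])

x_psr = x_psr_pr + (x_ibc - x_ibc_model)
# Quadrature sum: reference-position term, Delta x_target term, and an assumed
# 0.05 mas statistical uncertainty of the IBC Multi-View position.
x_psr_err = np.sqrt(x_psr_pr_err ** 2 + dx_target_err ** 2 + 0.05 ** 2)

print('c_q =', np.round(c, 3), ' (sum = %.3f)' % c.sum())
print('Delta x_target = %s +/- %s mas' % (np.round(dx_target, 3),
                                          np.round(dx_target_err, 3)))
print('pulsar offset  = %s +/- %s mas' % (np.round(x_psr, 3),
                                          np.round(x_psr_err, 3)))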
http://arxiv.org/abs/2407.12657v1
20240717154135
Appearances are deceptive: Can graviton have a mass?
[ "Leihua Liu", "Tomislav Prokopec" ]
hep-th
[ "hep-th", "astro-ph.CO", "gr-qc" ]
§ INTRODUCTION The question of understanding the evolution of linearized gravitational perturbations (and beyond) on general cosmological backgrounds has gained importance after the onset of cosmic inflation <cit.>, and in particular after the discovery of gravitational waves <cit.>. Namely, inflation amplifies quantum matter density perturbations <cit.> and stretches them to vast cosmological scales, such that they perturb the homogeneous cosmic microwave background radiation <cit.> and provide seeds to the Universe's large scale structure <cit.>. In this work we point out that insufficient attention has been given to a proper understanding of the dynamics of linearized gravitational perturbations on quantum matter backgrounds. [Textbooks <cit.> and reviews <cit.> perform the analysis by assuming a fixed classical background, in which case there is no problem.] In particular, we point out the difficulties that arise when the naïve approach is taken, namely when gravitational and matter actions are expanded to the second order in gravitational perturbations around general (spatially flat) cosmological backgrounds, whose expansion is driven by quantized matter fields. We find that such a naïve approach suggests that the dynamical graviton has a mass, which is inconsistent with the common belief. In sections <ref> and <ref> we slowly work towards the resolution of this disturbing observation, and show that the dynamical graviton, as expected, possesses no mass. However, our analysis indicates that the constraint sector of the theory couples to the non-vacuum part of the matter energy-momentum tensor. Rather than drawing any premature conclusions regarding what that signifies, we leave the precise interpretation of that coupling for a future investigation. § PRELIMINARIES §.§ Cosmological background In this work we consider homogeneous, isotropic and spatially flat cosmological spaces, whose geometry in general D dimensions is characterised by the conformally flat Friedmann-Lemaître (FL) metric, g^ (0)_μν= a^2(η) diag (-1 , 1 , … , 1) , where a is the scale factor that encodes the dynamics of the expansion, and the superscript (0) denotes 0-th order in the gravitational perturbations. The rate of expansion is conveniently captured by the conformal Hubble rate, ℋ = 1/a d a/ dη , which is related to the physical Hubble rate as H≡ȧ/a =ℋ/a, with ȧ = d a/ dt and dt = a dη. From the definition of the Riemann tensor, R^α_μβν=∂_βΓ^α_μν-∂_νΓ^α_μβ+Γ^ρ_μνΓ^α_βρ-Γ^ρ_μβΓ^α_νρ , and the Christoffel connection, Γ^α_μν=1/2 g^αβ ( ∂_μ g_νβ+∂_ν g_μβ-∂_β g_μν ) , one can calculate the curvature tensors in the homogeneous and isotropic FL spacetime (<ref>) to be, R^ (0)_μνρσ = a^2[ 2 ℋ^2 η_μ[ρη_σ]ν -4(ℋ'-ℋ^2) ( a^2 δ^0_[μ g_ν] [ σδ^0_ρ]) ] , R^ (0)_μν≡ g_ (0)^αβ R^ (0)_αμβν = [(D-2)ℋ^2+ℋ' ] η_μν - (D-2)(ℋ'-ℋ^2) (δ_μ^0 δ_ν^0 ) , R^ (0)≡ g_ (0)^μν R^ (0)_μν = a^-2 (D-1)[(D-2)ℋ^2+ 2ℋ'] . G_μν^ (0)≡ R^ (0)_μν-1/2g^ (0)_μν R^ (0)= -D-2/2[(D-3)ℋ^2+2ℋ' ] η_μν- (D-2)(ℋ'-ℋ^2) (δ_μ^0 δ_ν^0 ) , where the superscript (0) stands for the zeroth order in gravitational perturbations.
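The background curvature expressions above are easy to verify with a computer algebra system. The following sympy sketch (written for D = 4 to keep the run short; it is an illustration, not part of the derivation of this paper) recomputes the Ricci scalar of the conformally flat FL metric and compares it with the closed form quoted above.

import sympy as sp

eta = sp.symbols('eta')
x1, x2, x3 = sp.symbols('x1 x2 x3')
coords = (eta, x1, x2, x3)
D = 4                                     # spacetime dimension used for the check

a = sp.Function('a')(eta)
g = sp.diag(-a**2, a**2, a**2, a**2)      # conformally flat FL metric
ginv = g.inv()

def christoffel(al, mu, nu):
    # Gamma^al_{mu nu} = (1/2) g^{al be} (d_mu g_{nu be} + d_nu g_{mu be} - d_be g_{mu nu})
    return sp.Rational(1, 2) * sum(
        ginv[al, be] * (sp.diff(g[nu, be], coords[mu])
                        + sp.diff(g[mu, be], coords[nu])
                        - sp.diff(g[mu, nu], coords[be]))
        for be in range(D))

Gamma = [[[sp.simplify(christoffel(al, mu, nu)) for nu in range(D)]
          for mu in range(D)] for al in range(D)]

def ricci(mu, nu):
    # R_{mu nu} = d_al Gamma^al_{mu nu} - d_nu Gamma^al_{mu al} + Gamma*Gamma terms
    r = 0
    for al in range(D):
        r += (sp.diff(Gamma[al][mu][nu], coords[al])
              - sp.diff(Gamma[al][mu][al], coords[nu]))
        for be in range(D):
            r += (Gamma[al][al][be] * Gamma[be][mu][nu]
                  - Gamma[al][nu][be] * Gamma[be][mu][al])
    return sp.simplify(r)

# The inverse metric is diagonal, so only diagonal Ricci components contribute.
R = sp.simplify(sum(ginv[mu, mu] * ricci(mu, mu) for mu in range(D)))

# Closed form quoted above, written with the conformal Hubble rate H = a'/a.
H = sp.diff(a, eta) / a
R_closed = (D - 1) * ((D - 2) * H**2 + 2 * sp.diff(H, eta)) / a**2

print(sp.simplify(R - R_closed))          # prints 0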
The energy-momentum tensor of the matter field that drives the expansion has a perfect fluid form, T^ (0)_μν = (𝒫^ (0)+ρ^(0))u^ (0)_μu^ (0)_ν + g^ (0)_μν𝒫^(0) = a^2[ (𝒫^ (0)+ρ^ (0))δ^0_μδ^0_ν + η_μν𝒫^ (0)] where 𝒫^ (0) and ρ^ (0) are the energy density and pressure of the cosmological fluid and u^ (0)_μ=-a δ_μ^0 is its velocity (in the fluid rest frame), such that g^μν_ (0) u^ (0)_μu^ (0)_ν = -1. §.§ Action We are interested in the dynamics of the gravitational metric field g_μν, defined by the Hilbert-Einstein action in D dimensions, S_ HE[g_μν] = 1/16π G∫ d^Dx √(-g)[R-(D-2)Λ_0] , where, as usually, R=g^μνR_μν is the Ricci curvature scalar, R_μν the Ricci tensor, g_μν the metric tensor, g^μν its inverse, g= det[g_μν] and G and Λ_0 denote the (bare) Newton and cosmological constant, respectively. We consider an expanding universe, whose expansion is driven by a massive fermionic matter field ψ (ψ=ψ^†γ^0), defined by the Dirac action, S_ D[ψ,ψ,g_μν] = ∫ d^Dx√(-g)[i/2(ψe^μ_ bγ^b ∇_μψ - (∇_μψ)e^μ_ bγ^b ψ) - ψM ψ] , where g = det[g_μν], M = m_R + i γ^5 m_I , (m_R, m_I ∈ℝ) , where m_R and m_I are fermionic mass and pseudo-mass parameters. We allow separate scalar and pseudo-scalar masses, to more realistically model the standard model, in which left- and right handed fermions have different dynamics. [The left-handed fermions of the standard model are weakly charged and thus couple to the weak gauge bosons W^± and Z^0, while the right-handed fermions do not.] The fermionic action on curved spacetimes (<ref>) is defined in terms of the tetrad vector field e^μ_ b, which connects space-time vectors with those on tangent space, γ^μ(x) = e^μ_ b(x) γ^b , where γ^μ(x) is the Dirac matrix on spacetime, which obeys the usual anticommutation relation, {γ^μ(x), γ^ν(x)} = - 2 g^μν(x) , and γ^b is the tangent space Dirac matrix, which obeys, the tangent-space anticommutation relation, {γ^b, γ^c} = - 2 η^bc , in which there is no spacetime dependence. In this work, we use Greek letters μ,ν, ⋯ to denote spacetime indices on tensors, and Latin letters b,c,⋯ to denote tangent space indices (a is reserved for the scale factor). Next, ∇_μ in (<ref>) denotes the covariant derivative. It acts on the fermion field (and its conjugate) as (see e.g. <cit.>), ∇_μψ = (∂_μ - Γ_μ)ψ , ∇_μψ = (∂_μ + Γ_μ)ψ , where Γ_μ is the spin(or) connection given by, Γ_μ = -i/2ω_μ cdσ^cd , ω_μ cd = e^ν_c(∂_μ e_ν d - Γ^ρ_μνe_ρ d) , σ^cd= i/4[γ^c,γ^d] . In what follows we expand the gravitonal and fermionic action to the second order in gravitational perturbations. Weyl transformations. Since (massless) fermions couple conformally to gravity, it is convenient to employ Weyl transformations, g_μν = Ω^2(x) g̃_μν , g = Ω^2D(x) g̃ , ψ(z) = Ω^-D-1/2χ(x) . where g̃_μν = η_μν + κ h_μν(x) , g̃ = 1 + κ/2 h + κ^2/8[h^2 - 2 h^μνh_μν] + O(κ^3) , g̃^μν = η^μν - κ h^μν(x) + κ^2h^μ_ρ (x)h^ρν(x) + O(κ^3) , with h^μν=η^μρη^νσh_ρσ, h = η^μνh_μν and κ^2 = 16π G is the loop-counting parameter of quantum gravity. Notice that for cosmology, Ω(x) → a(η) reduces to the scale factor, and that – after Weyl rescaling – the spatially flat background cosmological metric becomes (conformal) Minkowski metric η_μν. From g_μν = e_μ^b η_bce^c_ν, and g^μν = e^μ_b η^bce_c^ν it is a simple matter to show that tetrads transform as, e_μ^b = Ω(x) ẽ_μ^b , e^μ_b = [Ω(x)]^-1ẽ^μ_b , where ẽ_μ^b = δ_μ^b + κ/2 h_μ^b - κ^2/8 h_μ^ρ h_ρ^b + O(κ^3) , ẽ^μ_b = δ_μ^b -κ/2 h^μ_b + 3κ^2/8 h^μ_ρ h^ρ_ b + O(κ^3) . Under Weyl transformations (<ref>) the Ricci tensor and scalar transform as (cf. 
(<ref>)), R_μν = R̃_μν + 1/Ω^2[2(D-2)(∂_μΩ)(∂_νΩ) - (D-3)g̃^αβ(∂_αΩ)(∂_βΩ) g̃_μν] 1cm - 1/Ω[(D-2)(∇_μ∂_νΩ) + g̃_μνΩ] R = 1/Ω^2[ R̃ - 1/Ω^2 (D-1) (D-4)g̃^μν(∂_μΩ) (∂_νΩ) -2(D-1)1/ΩΩ] . Gravitational action. Upon inserting these into (<ref>), one obtains the gravitational action (up to κ^2), S_ HE[h_μν] = S^ (0)_ HE[h_μν] + S^ (1)_ HE[h_μν] + S^ (2)_ HE [h_μν] + O(κ^3) , where S^ (0)_ HE = 1/κ^2∫ d^D x a^D-2{-(D-1)(D-2)ℋ^2 - (D-2)a^2Λ_0} , S^ (1)_ HE = 1/κ∫ d^D x a^D-2{h D-2/2[(D-3)ℋ^2 +2ℋ'-a^2Λ_0] 2.85cm + h^μνδ^0_μδ^0_ν(D-2)(ℋ'-ℋ^2)} , S^ (2)_ HE = ∫ d^D x a^D-2{1/2(∂^ρ h^μν) (∂_μ h_ρν) -1/2(∂^μ h_μν) (∂^ν h) +1/4(∂^μ h) (∂_μ h) 2.45cm - 1/4(∂^ρ h^μν) (∂_ρ h_μν) -(D-2)/2ℋδ_μ^0 h^μν(∂_ν h) 3.5cm + (h^2-2 h^μνh_μν)D-2/8[(D-3)ℋ^2+2ℋ'-a^2Λ_0] 4.5cm + (hh^μν-2 h^μρ h^ν_ρ)δ^0_μδ^0_νD-2/2(ℋ'-ℋ^2)} . When D→ 4 this action reduces to the one derived in Rick Vinke's thesis <cit.>, and agrees with <cit.>, up to no-derivative terms quadratic in h_μν, which were omitted in <cit.> on the account of the 0th order Einstein's equation in de Sitter, (D-1)ℋ^2 - a^2 Λ_0 =0. Notice that the quadratic terms in the last two lines of (<ref>) drop out in the de Sitter limit, in which ℋ'-ℋ^2=0, as they should. Furthermore, in the de Sitter limit also S^(1)_ HE→ 0. Fermion action. Massless fermions couple conformally to gravity in arbitrary dimension, which means that upon a Weyl transformation, the Dirac action (<ref>) reduces to, S_ D[χ,χ,h_μν] = ∫ d^Dx√(-g̃)[i/2(χẽ^μ_ bγ^b ∇_μχ - (∇_μχ)ẽ^μ_ bγ^b χ) - aχM χ] , where ∇_μχ denotes the covariant derivative with respect to g̃_μν. It is now relatively simple to expand this action in powers of h_μν. The result is S_ D[χ,χ,h_μν] = S^ (0)_ D[χ,χ] + S^ (1)_ D[χ,χ,h_μν]+ S^ (2)_ D [χ,χ,h_μν] + O(κ^3) , where S^ (0)_ D = ∫ d^Dx [i/2(χγ^μ∂_μχ - (∂_μχ)γ^μχ) - aχM χ] , S^ (1)_ D = κ/2∫ d^Dx [h ℒ - h^μν𝒦_μν] , S^ (2)_ D = κ^2/8∫ d^Dx [(h^2-2h^μνh_μν) ℒ+(3h^μ_ρ h^ρν-2hh^μν)𝒦_μν 2.1cm + 1/2 h^σ_μ∂_ν h_σρχ{γ^ν,σ^μρ}χ] , where for notational convenience we introduced the kinetic Dirac operator and Lagrangian, 𝒦_μν ≡ i/2(χγ_ν∂_μχ - (∂_μχ) γ_νχ) , ℒ ≡ η^μν𝒦_μν - a χMχ , where we assumed that {γ^b,σ^cd} is antisymmetric in all three indices (which is rigorously true in D=4, where {γ^b,σ^cd} = -ϵ^ebcdγ_eγ^5). In (<ref>)–(<ref>) and from now on we use condensed notation for the Dirac matrices on tangent space, γ^μ≡δ^μ_b γ^b. The action (<ref>)–(<ref>) agrees with <cit.>, provided one replaces, {γ^ν,σ^μρ}→ - ϵ^ανμργ_αγ^5, which is legitimate in D=4. The 2PI fermionic action. In this work we are primarily interested in fermionic fluids that drive the expansion of the Universe, and for that purpose the dynamics of fermions is better captured by the action defined in terms of the fermionic two-point functions, whose definitions are recalled in what follows. The positive and negative frequency fermionic Wightman two-point functions are defined as, iS^-+(x;x') = Tr[ρ̂_ inψ̂(x)ψ̂(x')] , iS^+-(x;x') = - Tr[ρ̂_ inψ̂(x')ψ̂(x)] , where ρ̂_ in denotes the (initial) density operator (in Heisenberg picture) and we adapted the Keldysh notation (the minus sign in the negative frequency Wightman function is due to the anticommuting nature of fermions). The Feynman (time ordered) propagator is then, iS^++(x;x') = Θ(t-t') iS^-+(x;x')+ Θ(t'-t) iS^+-(x;x') , where Θ(x) denotes the Heaviside theta function, which acts as projector and divides spacetime into the future and the past sections. The Dyson (anti-time ordered) propagator is obtained by exchanging the Wightman functions in (<ref>). 
The Wightman functions obey homogeneous Dirac equations, while the propagator satisfies, [The propagator equation of motion (<ref>) can be derived from the canonical anticommutation relation, {ψ̂(t,x⃗),Π̂_ψ(t,x⃗^ ')} = iħδ^D-1 (x⃗-x⃗^ ') , where Π̂_ψ(t,x⃗') is the canonical momentum operator defined by, Π_ψ(x)=δ S_ D/δ∂_0ψ(x) = √(-g)ψ^† ie^0_b(x)γ^0γ^b, and keeping in mind that the Wightman functions obey homogeneous equations of motion. ] √(-g)(ie^μ_b γ^b∇_μ- M)iS^++(x;x') = i ħδ^D(x-x') . The propagator (as well as the Wightman functions) obeys a second equation on the x' leg, which can be replaced by the symmetry requirement on the propagator, iS^++(x;x') =iS^++(x';x). The Dyson propagator obeys the same equation, but with an opposite sign in front of the delta function. For the purposes of this work, it is more convenient to use the Weyl rescaled fields χ and χ (<ref>), and define the two-point functions in terms of these, iS̃^-+(x;x') = Tr[ρ̂_ inχ̂(x)χ̂(x')] , iS̃^+-(x;x') = - Tr[ρ̂_ inχ̂(x')χ̂(x)] , in terms of which we can define the corresponding Feynman and Dyson propagators, iS̃^++(x;x') = Θ(t-t') iS̃^-+(x;x')+ Θ(t'-t) iS̃^+-(x;x') iS̃^–(x;x') = Θ(t-t') iS̃^+-(x;x')+ Θ(t'-t) iS̃^-+(x;x') . We are now ready write the fermionic two-particle irreducible (2PI) action, which can be organized similarly as the classical action (<ref>)–(<ref>), Γ_ D[iS^ bc,h^ b_μν] = S^(0)_ D[iS^ bc] + S^ (1)_ D[iS^ bc,h^ b_μν] + S^ (2)_ D [iS^ bc,h^ b_μν] + Γ^ (1)_ D[iS^ bc] + O(κ^3) , where S^ (0)_ D = ∑_ b,c=± b∫ d^Dx d^Dx' [-i/2γ^μ(∂_μ-∂_μ^') + aM ]δ^D(x-x')δ^ bci S^ bc(x';x) , ≡ ∑_ b,c=± b δ^ bc∫ d^Dx [ℒ^ bc(x;x')]_x'→ x S^ (1)_ D = κ/2∑_ b,c=± b δ^ bc∫ d^Dx {h^ b[ℒ^ bc(x;x')]_x'→ x - h_ b^μν[𝒦^ bc_μν(x;x')]_x'→ x} , S^ (2)_ D = κ^2/8∑_ b,c=± b δ^ bc∫ d^Dx {(h_ b^2-2h_ b^μνh^ b_μν) [ℒ^ bc(x;x')]_x'→ x 3.7cm + (3(h^ b)^μ_ρ h_ b^ρν-2h^ bh_ b^μν) [𝒦^ bc_μν(x;x')]_x'→ x 3.7cm - 1/2 (h^ b)^ σ_μ(∂_ν h^ b_σρ){γ^ν,σ^μρ}[i S^ bc(x';x)]_x'→ x} , and Γ^ (1)_ D denotes the one-loop contribution, Γ_ D^ (1) = ∑_ b,c=±∫ d^Dx d^Dx^'{ iħ Tr[ln(S^ bc(x';x))]} . We have used typewritter Latin letters for Keldysh indices, and we introduced a shorthand notation, 𝒦_μν^ bc(x;x') ≡ -i/2γ_ν(∂_μ-∂_μ^')i S^ bc(x';x) , ℒ^ bc(x;x') ≡ η^μν𝒦^ bc_μν(x;x') + a Mi S^ bc(x';x) , which are the nonlocal generalizations of (<ref>) suitable for the 2PI formalism. Notice that a trace over the spinor indices is implied in (<ref>)–(<ref>). The gravitational action (<ref>)–(<ref>) can be in analogous fashion adapted to the Schwinger-Keldysh notation. As here we are not interested in the graviton loops, this is a trivial procedure, as it entails promoting the field h_μν→ h^ b_μν and adding a factor b in front of the action and summing over b, so we will not explicitly write it. Instead, we will focus on the structure of the gravitational and fermionic actions. Notice first that the first order action (<ref>) can be recast as, S^ (1)_ HE = 1/κ∑_ b=± b∫ d^D x a^D-2{-h_ b^μν(G_μν^ (0)+D-2/2a^2η_μνΛ_0) } , . where G_μν^ (0) is the background Einstein tensor (<ref>). The fermionic first order action S^ (1)_ D = κ/2∑_ b=± b∫ d^Dx {h_ b^μν( η_μν[ℒ^ bb(x;x')]_x'→ x - [𝒦^ bb_μν(x;x'))]_x'→ x} ≡ 0cm κ/2∑_ b=± b∫ d^Dx {h_ b^μνT^ (0) b_μν} , where T^ (0) b_μν is the background energy-momentum tensor for fermions. Varying the first order action with respect to h^μν_ b gives the background equation of motion, G_μν^ (0)+D-2/2a^2Λ_0=κ^2/2 T_μν^ (0) =κ^2/2 a^2-D( η_μνℒ(x;x) - 𝒦_μν(x;x)) , . 
where we used that on-shell T_ b^(0)_μν→T^(0)_μν, is independent of b, and so are ℒ(x;x) and 𝒦_μν(x;x) evaluated at spacetime coincidence. (This is of course true provided ℒ(x;x)→ℒ^ (0)(x;x) and 𝒦_μν(x;x)→𝒦^ (0)(x;x) are solved to the leading order in κ.) The same is true of the graviton field: on-shell h^μν_ b→ h^μν. This also means that, when the Einstein equation is enforced in the first order action, it vanishes. In other words, when Einstein equation is enforced the linear graviton action does not induce any tadpole source for the gravitons, and that the solution to the Einstein equation is the stationary point of the gravitational action. Let us now consider the second order action. The gravitational part of interest are the zero-derivative terms in the last two lines of (<ref>), which in the Keldysh rendering are, S^(2)_ HE ⊃ ∑_ b=± b∫ d^D x a^D-2{ (h_ b^2-2 h_ b^μνh^ b_μν)D-2/8[(D-3)ℋ^2+2ℋ'-a^2Λ_0] 4.5cm + (h^ bh_ b^μν-2 h_ b^μρ (h_ b)^ν_ρ)δ^0_μδ^0_νD-2/2(ℋ'-ℋ^2)} . The term in the first line contains the η_μν part of G_μν^ (0) minus the cosmological constant constribution, while the term in the second line contains the δ_μ^0δ_ν^0 part of G_μν^ (0) in (<ref>). Consider now the fermionic second order action (<ref>). Eqs. (<ref>) and (<ref>) imply that, at zeroth order in κ, -𝒦_μν(x;x) can be split as, -𝒦^ (0)_μν(x;x) = 𝒬^ (0)η_μν + δ_μ^0δ_ν^0 𝒮^ (0) , such that 𝒫^ (0) = a^-D(ℒ^ (0) + 𝒬^ (0)) , 𝒫^ (0)+ρ^ (0) = a^-D𝒮^ (0) , denote the pressure and entropy density 𝒫^ (0) + ρ^ (0) (up to a factor of temperature) <cit.> (recall that the entropy density of a thermal fluid is s^ (0) = (𝒫^ (0) + ρ^ (0))/T). The form (<ref>) is also dictated by the symmetries of the FL spacetime. Namely, if one evaluates the coincident fermionic two-point function in a state obeying the cosmological symmetries, the split in (<ref>) is the most general one consistent with translational and rotational symmetries of cosmological spaces. Upon inserting Eq. (<ref>) into (<ref>) one obtains, S^ (2)_ D ⊃ κ^2/8∑_ b,c=± b δ^ bc∫ d^Dx {(h_ b^2-2h_ b^μνh^ b_μν) ℒ^ bc(x;x) +(2h^2_ b-3h_ b^μν h^ b_μν) 𝒬^ (0)_ b 3.7cm + (2h^ bh_ b^μν-3(h_ b)_ρ^μ (h_ b)^ρν) (δ_μ^0δ_ν^0 𝒮^ (0)_ b) } , where we restored Keldysh indices on 𝒬^ (0) and 𝒮^ (0) and we dropped the last line, as it contains derivatives acting on h_μν (the tensorial structure of (<ref>) should be maintained off shell for coincident 𝒦^ (0)_μν(x;x)). Now comparing (<ref>) with (<ref>) and in light of (<ref>), one concludes that the zero-derivative terms do not cancel when the background equation of motion (<ref>) is used. This then means that off-shell the graviton appears to have a mass. Indeed, summing the two contributions yields, S^(2)_ HE ⊃ ∑_ b=± b∫ d^D x a^D-2{ h_ b^2-2 h_ b^μνh^ b_μν/4[D-2/2[(D-3)ℋ^2+2ℋ'-a^2Λ_0] 4.5cm + κ^2/2( ℒ^ bc(x;x)+𝒬^ (0))]+κ^2/8(h^2_ b-h_ b^μν h^ b_μν) 𝒬^ (0) -0.6cm + h^ bh_ b^μν-2 h_ b^μρ (h_ b)^ν_ρ/2δ^0_μδ^0_ν[ (D-2) (ℋ'-ℋ^2) +κ^2/2𝒮^ (0)] +κ^2/8(h_ b)_ρ^μ (h_ b)^ρν) (δ_μ^0δ_ν^0 𝒮^ (0)) } -0.cm ⟶∑_ b=± b∫ d^D x a^D-2{ κ^2/8(h^2_ b-h_ b^μν h^ b_μν) 𝒬^ (0)+κ^2/8(h_ b)_ρ^μ (h_ b)^ρν) (δ_μ^0δ_ν^0 𝒮^ (0)) } . The fact that these terms do not drop out means that the graviton have an off-shell mass, which was first noted in <cit.>, supervised by one of the authors. In the following section we ask the question whether this mass survives on-shell, which would imply a physical mass. The expectation is that somehow the would-be graviton mass terms in (<ref>) cancel on-shell. 
§ ON-SHELL ANALYSIS In this section we analyse the equations of motion for fermions and gravitons [At the linear level, the equations of motion for classical gravitational waves and potentials is identical to its quantum counterpart.] with the goal to resolve the question of the graviton mass. As the reader will see, the results are somewhat surprising. §.§ Equations of motion Varying the fermionic action (<ref>)–(<ref>) with respect to iS^ cb(x^';x), and multiplying from the right by it, gives the equation of motion for the fermionic two-point functions, -0.3cm {(iγ^μ∂_μ- aM) +κ/2[h(iγ^μ∂_μ- aM) -h^ν_μ iγ^μ∂_ν+1/2(∂_μ h-∂_ν h^ν_μ)iγ^μ] } iS̃^ bc(x;x') 9.9cm = iħ (σ^3)^ bcδ^D(x-x') , where σ^3 = diag(1,-1) is the Pauli matrix. By multiplying from the left, one obtains that the fermions also satisfy the equation on the second leg, -0.3cm iS̃^ bc(x;x') {(-iγ^μ∂^'_μ- a^' M) +κ/2[(-∂^'_μ iγ^μ- a^' M)h(x') 2.9cm + iγ^μ∂^'_ν h^ν_μ(x') -1/2(∂^'_μ h-∂^'_ν h^ν_μ(x'))iγ^μ] } = iħ (σ^3)^ bcδ^D(x-x') , The second equation is obtained by applying hermitian conjugation on the first one (<ref>). Assuming we know the solution of the leading order equation, (iγ^μ∂_μ- aM) iS̃_ bc^ (0)(x;x') = iħ (σ^3)^ bcδ^D(x-x') , one can write the general solution of (<ref>) as, [The naîve solution of (<ref>) is iS̃^ bc(x;x') ≡ iS̃^ (0)_ bc(x;x')+κ iS̃^ (1)_ bc(x;x') = (1-κ/2h(x^')) iS̃_ bc^ (0)(x;x') + 0.cm κ/2∫ d^Dy G_ R(x;y)iγ^μ[h^ν_μ(y) ∂^y_ν-1/2(∂_μ h(y)-∂_ν h^ν_μ(y))] iS̃_ bc^ (0)(y;x') , where G_ R(x;y) is the retarded Green's function obeying, (iγ^μ∂_μ- aM) G_ R(x;x^') = δ^D(x-x^') . The retarded Green's function can be expressed in terms of the two-point functions as, G_ R(x;x') =iS̃^ ++(x;x') - iS̃^ -+(x;x') = Θ(η-η^')[iS̃^ +-(x;x') - iS̃^ -+(x;x')] . Solution (<ref>) is not entirely correct however, as it does not satisfy the Dirac equation (<ref>) on the x^' leg. This is so because the Feynman and Dyson propagators are not causal Green's functions and receive contributions both when x' is in the future and in the past of x. ] iS̃^ bc(x;x') ≡ iS̃^ (0)_ bc(x;x')+κ iS̃^ (1)_ bc(x;x') =iS̃_ bc^ (0) -1.8cm - κ/2iħ∑_ de∫ d^Dy iS̃_ bd^ (0)(x;y)[ h(y) (iγ^μ∂^y_μ- aM)+1/2iγ^μ(∂^y_μ h(y))](σ^3)^ de iS̃_ ec^ (0)(y;x') -0.9cm + κ/2iħ∑_ de∫ d^Dy iS̃_ bd^ (0)(x;y) iγ^μ[h^ν_μ(y) ∂^y_ν+1/2(∂_ν h^ν_μ(y))] (σ^3)^ de iS̃_ ec^ (0)(y;x') , where iS̃^ (0)_ bc(x;x') and iS̃^ (1)_ bc(x;x') denote the 0th and 1st order in κ fermionic two-point functions, respectively. By integrating by parts the y-derivatives in (<ref>) acting on the fermionic two-point functions, the solution can be equivalently written as, iS̃^ bc(x;x') ≡ iS̃^ (0)_ bc(x;x')+κ iS̃^ (1)_ bc(x;x') =iS̃_ bc^ (0) -1.9cm - κ/2iħ∑_ de∫ d^Dy iS̃_ bd^ (0)(x;y)[ (-iγ^μ∂^y_μ- aM) h(y)-1/2iγ^μ(∂^y_μ h(y))](σ^3)^ de iS̃_ ec^ (0)(y;x') -1.3cm + κ/2iħ∑_ de∫ d^Dy iS̃_ bd^ (0)(x;y) iγ^μ[-∂^y_ν h^ν_μ(y) -1/2(∂_ν h^ν_μ(y))] (σ^3)^ de iS̃_ ec^ (0)(y;x') , which solves the equation of motion on the x' leg (<ref>), showing that both equations of motion are satisfied by the solution (<ref>). The naîve solution (<ref>) does not have that property. 
On cosmological spaces, the general spinor structure of the 0th order two-point functions is, iS̃^(0)_ bc(x;x') = (iγ^μ∂_μ+ aM^†) [P^φ_+iΔ̃_ (+)^ bc(x;x') +P^φ_-iΔ̃_ (-)^ bc(x;x') ] , where P^φ_±≡(1∓ e^iφγ^5γ^0)/2 (tan(φ)=m_R/m_I) are the (rotated) projectors on particle and antiparticle states (positive and negative frequency shells) <cit.>, and the scalar two-point functions iΔ̃_ (±)^ bc(x;x') obey, (∂^2 - a^2m^2 ∓ i aℋm)iΔ̃_ (±)^ bc(x;x') = iħ (σ^3)^ bcδ^D(x-x') , where m^2 = m_R^2 + m_I^2 is the fermionic mass squared. These equations cannot be solved for general cosmological backgrounds a=a(η). Nevertheless, they can be solved in some simple cases, for example in radiation era (ultra-relativistic fluids), in which the equation of state parameter is w=𝒫^ (0)/ρ^ (0) = 1/3, and in which a=A_0 η (A_0 = const.), as well as in general backgrounds when m=0. [The propagators are known on some fixed cosmological backgrounds, such as de Sitter <cit.>, <cit.>; and on spaces with constant acceleration, in which ϵ = -Ḣ/H^2 = 1-ℋ'/ℋ^2 = const., in which the mass parameter m^2 is proportional to the Ricci curvature scalar <cit.>.] But more on that later. Next, varying (<ref>)–(<ref>) and (<ref>)–(<ref>) with respect to the gravitational field yields the equation of motion for the metric perturbations, G_μν^ (0)+κℒ_μν^ ρσh_ρσ+D-2/2a^2Λ_0 η_μν+κD-2/2a^2Λ_0h_μν = κ^2/2T_μν -10.2cm ≡κ^2/2 a^2-D{η_μνℒ(x;x)-𝒦_μν(x;x) -10.2cm + κ[ h_μνℒ-1/2h_(μ^β𝒦_ν)β-1/2η_μν h^αβ𝒦_αβ+1/4ϵ^τσβδ(∂_σ h_β(μ) η_ν)δγ_τγ^5 iS^ bb(x;x) ] } , where we dropped Keldysh indices, as the dependence on them drops out from one-point functions and coincident two-point functions, representing expectation values of composite operators (this is because two-point functions do not contain imaginary parts when evaluated at coincidence). The operator ℒ^μνρσ in (<ref>) is known as the Lichnerowicz operator, which on cosmological spaces acts as <cit.>, ℒ_μν^ ρσh_ρσ = ∂_ρ∂_(μh^ρ_ν)-1/2∂^2h_μν-1/2∂_μ∂_νh -1/2η_μν∂^ρ∂^σ h_ρσ+1/2η_μν∂^2h - 0.cm (D-2)ℋ(∂_(μh_ν)0-1/2∂_0h_μν) +(D-2)ℋ(∂^ρ h_ρ 0-1/2∂_0h)η_μν - 0.cmD-2/2((D-3)ℋ^2 +2ℋ^') (h_μν+h_00η_μν) . Equations (<ref>)–(<ref>) can be used to establish whether there is an on-shell graviton mass. First, note that at the leading order in κ (not counting the classical coupling κ^2/2 = 8π G), the left- and right-hand-sides nicely combine into the 0th order Einstein equation (<ref>), and therefore – to the order κ^0 – it is all fine. Next to consider are the terms linear in κ. For clarity, we write them again explicitly, ℒ^μνρσh_ρσ-D-2/4a^2Λ_0 (hη_μν-2h_μν) = κ^2/2T^ (1)_μν -5.2cm ≡κ^2/2 a^2-D{η_μνℒ^ (1)(x;x)-𝒦^ (1)_μν(x;x) -5.2cm + [ h_μνℒ^ (0)-1/2h_(μ^β𝒦^ (0)_ν)β-1/2η_μν h^αβ𝒦^ (0)_αβ+1/4ϵ^τσβδ(∂_σ h_β(μ) η_ν)δγ_τγ^5 iS^ (0)_ bb(x;x) ] } , where iS̃^ (1)_ bb(x;x') is defined in (<ref>). Now collecting the terms in (<ref>), (<ref>) with no derivatives acting on h_μν results in the following left- and right-hand sides, (LHS)_ 0 der. = -D-2/2((D-3)ℋ^2 +2ℋ^') (h_μν+h_00η_μν) 0.2cm + D-2/2a^2Λ_0 (h_μν-1/2hη_μν) (RHS)_ 0 der. 
= κ^2/2 a^2-D{η_μνℒ^ (1) bb(x;x) -𝒦^ (1) bb_μν(x;x) 0.cm + [h_μνℒ^ (0)+1/2(𝒬^ (0)h_μν-h_0(μδ_ν)^0𝒮^ (0)) +1/2η_μν(𝒬^ (0)h+h_00𝒮^ (0)) } , where 𝒦^ bb (1)_μν(x;x) = -i/2[γ_(μ(∂_ν)-∂_ν)^') iS̃^ (1)_ bb(x;x')]_x'→ x , ℒ_ bb^ (1)(x;x) = η^μν𝒦^ bb (1)_μν(x;x) + aM iS^ (1)_ bb(x;x) , and we used (<ref>) and iS̃^(1)_ bc(x;x') -1.8cm = - 1/2iħ∑_ de∫ d^Dy iS̃_ bd^ (0)(x;y)[ h(y) (iγ^μ∂^y_μ- aM)+1/2iγ^μ(∂^y_μ h(y))](σ^3)^ de iS̃_ ec^ (0)(y;x') -0.9cm + 1/2iħ∑_ de∫ d^Dy iS̃_ bd^ (0)(x;y) iγ^μ[h^ν_μ(y) ∂^y_ν+1/2(∂_ν h^ν_μ(y))] (σ^3)^ de iS̃_ ec^ (0)(y;x') . Let us first have a look at the dynamical gravitons, that is the terms containing h_μν in Eqs. (<ref>)–(<ref>). The left-hand-side (<ref>) is just h_μν times the Lorentz covariant part of the Einstein equation (i.e. the part multiplying η_μν), the right-hand-side (<ref>) is, (RHS)_ 0 der.⊃κ^2/2 a^2-D h_μν[ℒ^ (0)+1/2𝒬^ (0)] , plus the nonlocal terms containing 𝒦^(1) bb _μν and ℒ_ bb^ (1). The 0th order Einstein equation (<ref>), (<ref>) contains a factor 1 in the last term, instead of 1/2 in (<ref>), so unless one can extract another factor 1/2 from the nonlocal terms, one would have to conclude that the graviton is massive on-shell. The situation in the scalar sector (the terms containing h_00 and h) is even more complex: not even the left-hand-side agrees with the leading order Einstein equation. Since we do not see a simple way to localize h_μν(y) in the nonlocal terms, it still appears that the dynamical graviton is massive on-shell. To summarize, we have found that the non-local form of the graviton equation prevents us from removing the mass term from both the dynamical and scalar part of the graviton equation of motion (<ref>). § CONSERVATION LAWS COME TO THE RESCUE Apart from the equations of motion, Einstein's gravity provides us with three conservation laws, the contracted Bianchi identity, metric compatibility and the energy-momentum conservation, ∇^μ G_μν = 0 , ∇^μ g_μν = 0 , ∇^μ T_μν = 0 . These act as consistency conditions and could provide extra information that may be useful. So let us check that. Energy-momentum tensor. Upon writing T_μν = a^2-DT_μν and acting the covariant derivative on the right-hand-side of (<ref>) yields (to the first order in κ), -0.4cm a^-D{[∂^μT_μν-ℋδ_ν^0 η^μνT_μν] -κ[∂_μ(h^μρT_ρν) +1/2(∂_ν h^ρμ)T_ρμ-1/2(∂^μ h)T_μν] 9.8cm + κℋδ_ν^0h^μρT_μρ} = 0 . Inserting T_μν=T^(0)_μν+T^(1)_μν from (<ref>) into this equation yields at the leading order in κ, ∂^μT^(0)_μν-ℋδ_ν^0 η^μνT^(0)_μν = 0 ⟹ ∂_νℒ^(0)-∂^μ𝒦^(0)_μν-ℋδ_ν^0 (Dℒ^(0)-𝒦^(0)) = 0 , where 𝒦^(0)=η^μν𝒦^(0)_μν. Inserting the decomposition (<ref>) into Eq. (<ref>) one gets, ∂_ν(ℒ^(0)+𝒬^(0)) -δ_ν^0∂_0𝒮^(0)-ℋδ_ν^0 [D(ℒ^(0)+𝒬^(0))-𝒮^(0)] = 0 . Upon recalling that ∂_ν→δ_ν^0∂_0 and in light of (<ref>), this can be rewritten as, ∂_ν(a^Dρ^(0)) -ℋδ_ν^0 [(D-1)𝒫^(0)-ρ^(0)] = 0 , which is equivalent to the leading order conservation law in the standard form, ∂_0ρ^(0)+(D-1)ℋ(ρ^(0)+𝒫^(0)) = 0 . Next we consider the first order equation, which is easily obtained from (<ref>), ∂^μT^ (1)_μν-ℋδ_ν^0 η^μνT^ (1)_μν = κ[∂_μ(h^μρT^ (0)_ρν) +1/2(∂_ν h^ρμ)T^ (0)_ρμ-1/2(∂^μ h)T^ (0)_μν-ℋδ_ν^0h^μρT^ (0)_μρ] . 
Taking account of (<ref>), according to which, T_μν^ (1) = η_μνℒ^ (1)-𝒦^ (1)_μν -.4cm + κ[ h_μνℒ^ (0)-1/2h_(μ^β𝒦^ (0)_ν)β-1/2η_μν h^αβ𝒦^ (0)_αβ+1/4ϵ^τσβδ(∂_σ h_β(μ) η_ν)δγ_τγ^5 iS^ (0)_ bb(x;x) ] , the conservation law (<ref>) can be rewritten as, ∂_νℒ^ (1)-∂^μ𝒦^ (1)_μν-ℋδ_ν^0(Dℒ^ (1)-η^μν𝒦^ (1)_μν) = -κ[3/4∂^μ(h_μ^ρ𝒦^ (0)_ρν) -1/4∂^μ(h^ρ_ν𝒦^ (0)_ρμ) -0.9cm - 1/2h^μρ(∂_ν𝒦^ (0)_μρ) -1/2(∂^μ h)𝒦^ (0)_μν+δ_ν^0D-1/2ℋh^μρ𝒦^ (0)_μρ -1.5cm + 1/4∂^μ((∂_σ h_α(μ)ϵ^ρσα_ ν)γ_ργ^5 iS_ bb^(0)(x;x)) ] . Remarkably, any dependence on ℒ^ (0) has dropped out from the right-hand side of this equation. To solve (<ref>), notice first that the terms in the first and third line can be easily absorbed into 𝒦^ (1)_μν. Indeed, upon writing 𝒦^ (1)_μν= κ[3/4h_μ^ρ𝒦^ (0)_ρν-1/4h^ρ_ν𝒦^ (0)_ρμ+1/4(∂_σ h_α(μ)ϵ^ρσα_ ν)γ_ργ^5 iS_ bb^(0)(x;x) ] + k^ (1)_μν , Eq. (<ref>) simplifies to, ∂_νℒ^ (1)-∂^μ k^ (1)_μν-ℋδ_ν^0(Dℒ^ (1)-η^μνk^ (1)_μν) -0cm -2.4cm =-κ[-1/2h^μρ(∂_ν𝒦^ (0)_μρ) -1/2(∂^μ h)𝒦^ (0)_μν+δ_ν^0D/2ℋh^μρ𝒦^ (0)_μρ] , where the parity violating term in (<ref>) drops out, as its trace vanishes. One can make further progress towards solving (<ref>) by making use of the decomposition (<ref>), upon which the right-hand-side simplifies to, (RHS) =-κ[1/2∂_ν(h 𝒬^ (0)) +1/2h_00(∂_ν𝒮^ (0)) -1/2δ_ν^0(∂_0h)𝒮^ (0)-δ_ν^0D/2ℋ(h_00𝒮^ (0)+h𝒬^ (0)) ] , -0.2cm Now, the first term can be clearly absorbed ℒ^(1). Indeed, writing ℒ^ (1)= -κ/2h𝒬^ (0)+ℓ^ (1) , Eq. (<ref>) further simplifies to, ∂_νℓ^ (1)-∂^μ k^ (1)_μν-ℋδ_ν^0(Dℓ^ (1)-η^μνk^ (1)_μν) =-κδ_ν^0[ 1/2h_00(∂_0𝒮^ (0)) -1/2(∂_0h)𝒮^ (0)-D/2ℋh_00𝒮^ (0)] , -0.4cm where we took account of the fact that 𝒮^ (0) does not depend on spatial coordinates (due to the translational symmetry of the background). No local solution in the graviton fields to (<ref>) exists. Indeed, making the Ansatz, k^ (1)_μν = - δ_μ^0δ_ν^0 s^ (1) , ℓ^ (1) = 0 , reduces (<ref>) to, a∂_0s^ (1)/a =κ/2[ h_00(∂_0𝒮^ (0)) -(∂_0h)𝒮^ (0)-Dℋh_00𝒮^ (0)] , whose general solution can be written as, s^ (1)(η,x⃗) =κ a(η) ∫_η_0^η dη^'/2a(η')[h_00(∂_0𝒮^ (0)) -(∂_0h)𝒮^ (0)-Dℋh_00𝒮^ (0)](η^',x⃗) , which can be added to (<ref>). To summarize, we have found out that the perturbed energy-momentum tensor (<ref>) acquires through η_μνℒ^ (1)-𝒦^ (1)_μν additional local and non-local contributions which, when incorporated into T_μν^ (1), results in, T_μν^ (1) = κ[ h_μνℒ^ (0)-1/2η_μνh𝒬^ (0) - h_μ^ρ𝒦^ (0)_νρ-1/2η_μν h^αβ𝒦^ (0)_αβ+δ_μ^0δ_ν^0 s^ (1)] . which can be also written as, T_μν^ (1) = κ[h_μν(ℒ^ (0)+𝒬^ (0)) -δ_(μ^0 h_ν)0𝒮^ (0)+1/2η_μν h_00𝒮^ (0)+δ_μ^0δ_ν^0 s^ (1)] . Notice that the parity violating term dropped out completely from T_μν^ (1). More importantly, the dynamical graviton contributes in (<ref>) as, κ^2/2a^2-D h_μν(ℒ^ (0)+𝒬^ (0)) =κ^2/2a^2 h_μν𝒫^ (0) , which is precisely what is needed to cancel the would-be dynamical graviton mass term in (<ref>), resolving the dynamical graviton mass problem in the linear equation of motion for the gravitational perturbations on general cosmological backgrounds in general D. However, when (<ref>) is inserted into (<ref>), one sees that the gravitational potential (h and h_00) and vector (h_0i) perturbations do not cancel out from the no-derivative terms in (<ref>)–(<ref>), so the question of scalar (and vector) perturbations persists. Einstein tensor conservation. Note first that the term proportional to Λ_0 in (<ref>) multiplies g_μν = a^2(η_μν + κ h_μν), from which it immediately follows that ∇^μ g_μν = 0, and therefore that term is covariantly conserved. 
Next, we have also checked by explicit calculation that ∇^μ G_μν = 0, both at the zeroth and first order in κ, making a nontrivial check of the correctness of G_μν^ (0) and G_μν^ (1) =κℒ_μν^ ρσh_ρσ given in Eqs. (<ref>) and (<ref>), respectively. In other words, the graviton equation of motion (<ref>) is (covariantly) transverse, as it should be. Since these equations constitute a contracted Bianchi identity, which is satisfied for arbitrary metric perturbations, they provide no further information for the problem at hand. § SIMPLE MODEL OF FERMIONIC BACKREACTION As a simple example, in this section we consider the problem of the fermion backreaction in the simple case in which fermions are in thermal equilibrium, and Universe's expansion can be considered as adiabatic. This means that the temperature scales with the scale factor as, T∝ 1/[a g^1/3_* s(a)], where g_* s(a) is the number of relativistic degrees of freedom in the plasma, which only changes when some species become nonrelativistic. Neglecting this dependence, we have Ṫ/T ≃ H≪ T, so that T can be considered as an adiabatic function of time, and Ṫ can be neglected. With this in mind, we can write the leading order thermal fermionic propagator governed by the equation of motion (<ref>) as (cf. Ref. <cit.>), iS^(0)(x;x') = (iγ^μ∂_μ + a M^†) iΔ_F(x;x') , where iΔ_F(x;x') = (am)^D-2/(2π)^D/2K_D-2/2(am√(Δ x^2_++) )/(am√(Δ x^2_++) )^D-2/2 -∫ d^3 k/(2π)^3 e^i k· (x - x')cos[ω(t-t')]/ω( e^βω+1) , where ω = √(k^2+(am)^2), Δ x^2_++ = -(|η-η'|-iϵ)^2 +x -x'^2, and we set D = 4 limit in the thermal part of the propagator, as it is finite in D=4. This solution neglects the difference between particles and antiparticles in Eqs. (<ref>)–(<ref>), which is justified when H≪ m, which is what we assume here. The scalar part of the propagator (<ref>) evaluates at space-time coincidence to, iΔ_F(x;x) = (am)^D-2/(4π)^D/2Γ(1-D/2) +1/2π^2β^3(am)[∂_z J_F(4,z)]_z= β am , where J_F(4,z) denotes the fermionic thermal integral, J_F(n,z) = ∫_0^∞ d x x^n-2ln[ 1+exp( -√(x^2+z^2) )] , whose argument, z = am/T can be reintepreted as a temperature that scales inversely with the scale factor, i.e. T(t) = T/a(t), where T denotes the initial temperature (we work in units in which the Boltzmann constant, k_B=1). Next, we evaluate 𝒦^ (0)_μν defined in Eq. (<ref>), 𝒦_μν^(0) bb(x;x) = -i/2 Tr{γ_ν[(∂_μ-∂_μ^') i S^ bb(x';x) ]_x'→ x } = -2^D/2-1[(∂_μ-∂_μ^')∂_ν iΔ_F(x;x')]_x'→ x , where we made use of, Tr(γ_νγ^α) = - Tr[𝕀]δ_ν^α = -2^D/2δ_ν^α. Making use of, K_ν(z)/z^ν = Γ(-ν)/2^ν+1∑_n=0^∞(z/2)^2n/(ν+1)_nn! +Γ(ν)/2^ν+1∑_n=0^∞(z/2)^2n-2ν/(-ν+1)_nn! , (ν = D-2/2 ) , and of the fact, that the D-dependent series does not contribute at and near coincidence, one finds that (<ref>) evaluates to, 𝒦_μν^(0) bb(x;x) = (am)^D/2(2π)^D/2Γ(-D/2)η_μν +2η_μν/3π^2β^4[1/z∂_z J_F(4,z)]_z= β am + 2δ_μ^0δ_ν^0/3π^2β^4[4/z∂_z J_F(6,z) +3z∂_z J_F(4,z)]_z= β am , where, when evaluating the momentum integral, we used k_i k_j→δ_ijk^2/3. Noting that, 1/z∂_z J_F(n,z) = -(n-3)J_F(n-2,z) , (n> 3) , Eq. (<ref>) simplifies to, 𝒦_μν^(0) bb(x;x) = (am)^D/2(2π)^D/2Γ(-D/2)η_μν -2η_μν/π^2β^4[ J_F(4,z)]_z= β am - 2δ_μ^0δ_ν^0/π^2β^4[4 J_F(4,z) +z^2J_F(2,z)]_z= β am . Inserting this and (<ref>) into (<ref>) gives, ℒ^(0) bb(x;x) = η^μν𝒦^(0) bb_μν(x;x) + a Tr[Mi S^(0) bb(x;x) ] = 0 , which was to be expected as ℒ^(0) bb(x;x) represents the leading order on-shell contribution to the Lagrangian. We are now ready to calculate the energy-momentum tensor (<ref>). 
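Before turning to the energy-momentum tensor, a quick numerical sanity check of the thermal integral J_F(n,z) defined above may be useful. The short script below is only a sketch (the helper name J_F and the sample values of z are ours); it evaluates the integral by quadrature, confirms the exact massless value J_F(4,0)=7π^4/360, and illustrates the leading low-temperature behaviour √(π z^3/2) e^{-z} quoted in a footnote below.

```python
# Numerical sanity check of the fermionic thermal integral
#   J_F(n, z) = \int_0^\infty dx x^{n-2} ln[1 + exp(-sqrt(x^2 + z^2))]
# used in the simple model of fermionic backreaction above.
import numpy as np
from scipy.integrate import quad

def J_F(n, z):
    """Fermionic thermal integral J_F(n, z), evaluated by direct quadrature."""
    integrand = lambda x: x**(n - 2) * np.log1p(np.exp(-np.sqrt(x**2 + z**2)))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

# Massless (high-temperature) limit: J_F(4, 0) = 7*pi^4/360 exactly.
exact = 7 * np.pi**4 / 360
print(J_F(4, 0.0), exact)            # both ~ 1.8941

# Low-temperature (large z) behaviour: J_F(4, z) ~ sqrt(pi z^3 / 2) e^{-z}.
for z in (5.0, 10.0, 15.0):
    approx = np.sqrt(np.pi * z**3 / 2) * np.exp(-z)
    print(z, J_F(4, z) / approx)     # ratio slowly decreases toward 1 as z grows
```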
The leading order contribution is, T_μν^ (0) b = a^2-D(η_μνℒ^ (0) bb(x;x) -𝒦^ (0) bb_μν(x;x) ) = -0.cm [-m^D/2(2π)^D/2Γ(-D/2) +2/π^2(β a)^4[ J_F(4,z)]_z= β am](a^2η_μν) 0.1cm + 2a^2δ_μ^0δ_ν^0/π^2(β a)^4[4 J_F(4,z) +z^2J_F(2,z)]_z= β am . The divergence ∝ 1/(D-4) in (<ref>) can be renormalized by the cosmological constant counterterm action in (<ref>), which contributes to the energy-momentum tensor as, T_μν^ ct = -2/√(-g^ (0))δ S_ HE/δ g^μν = T_μν^ (0) ct + T_μν^ (1) ct +𝒪(κ^2) = -2/κ^2[G_μν+D-2/2Λ_0g_μν] , where g_μν = a^2(η_μν+κ h_μν). Recalling that, Γ(-D/2) = -1/D-4[1+D-4/2(γ_ E-3/2 )] +𝒪(D-4) , where γ_ E= -ψ(1)=0.57… is the Euler constant, we see that adding T_μν^ (0) ct to (<ref>) with a cosmological constant, ΔΛ≡Λ_0 -Λ, which – in the minimal subtraction scheme – amounts to, -D-2/κ^2ΔΛ = -m^4/8π^2×μ^D-4/D-4 , renormalizes the action (<ref>). The renormalized energy-momentum tensor is obtained by summing the two contributions, T_μν^ (0) ren = T_μν^ (0) b+T_μν^ (0) ct = [m^4/16π^2( ln(m^2/2πμ^2) +γ _ E-3/2) +2/π^2(β a)^4[ J_F(4,z)]_z= β am]g^(0)_μν + 0.cm [2/π^2(β a)^4[4 J_F(4,z) +z^2J_F(2,z)]_z= β am] (a^2δ_μ^0δ_ν^0) . The first three terms in (<ref>) are the usual one-loop vacuum contributions to the vacuum energy, and (if one wants) they can be be subtracted by choosing the finite cosmological constant to be, Λ/κ^2= m^4/16π^2( ln(m^2/2πμ^2) +γ _ E-3/2) . The remaining terms in (<ref>)–(<ref>) are the termal one-loop contributions, and have the form of a perfect cosmological fluid (<ref>) with pressure and energy density given by, 𝒫^(0) = 2/π^2(β a)^4[ J_F(4,z)]_z= β am 𝒫^ (0)+ρ^(0) = 2/π^2(β a)^4[4 J_F(4,z) +z^2J_F(2,z)]_z= β am . Consider now the first order contributions in (<ref>). Neglecting for the moment ℒ^ (1)(x;x) and 𝒦^ (1)_μν(x;x), one obtains, T^ (1)_μν ⊃ a^2-Dκ{1/2[ h_μν+η_μν h ][ -(am)^D/2(2π)^D/2Γ(-D/2) +2/π^2β^4[ J_F(4,z)]_z= β am] 0cm + [ h_(μ^0 δ_ν)^0 +η_μν h_00] [1/π^2β^4[4 J_F(4,z) +z^2J_F(2,z)]_z= β am] } , where we took account of (<ref>), (<ref>) and (<ref>). Notice that, in the adiabatic approximation we are working in, the parity violating term in (<ref>) dropped out. [The parity violating term in (<ref>) could in principle contribute a subleading contribution. However, based on Eqs. (<ref>) and (<ref>) we see that, upon taking a trace, its contribution vanishes, a^2-D/4κϵ^τσβδ(∂_σ h_β(μ) η_ν)δ Tr[γ_τγ^5 iS^ (0)_ bb(x;x)] 2cm = a^2-D/4κϵ^τσβδ(∂_σ h_β(μ) η_ν)δ× 2^D/2-1δ_τ^0 am_Rm_I/m∑_±(± i∓ i) iΔ_(±)^ bb(x;x) = 0 . Moreover, the analysis in section <ref> suggests that, when the first order contributions from ℒ^ (1)(x;x) and 𝒦^ (1)_μν(x;x) are added, the parity violating term drops out on-shell. ] Comparing the order κ counterterm in (<ref>) with the structure of the divergences in the leading order (<ref>) and in the first order term (<ref>) show that the counterterm (<ref>) cannot be used to renormalize both divergences (because the coefficient of the divergent contribution in (<ref>) is by a factor 1/2 too small). The expectation is that, including the remaining contributions to the energy-momentum tensor from ℒ^ (1)(x;x) and 𝒦^ (1)_μν(x;x), would render the problem renormalizable. The analysis of section <ref> shows that the total first order energy-momentum tensor can be written as (<ref>), (<ref>). 
Inserting  (<ref>) into (<ref>) gives, T_μν^ (1) = κ(a^2 h_μν)[-m^D/2(2π)^D/2Γ(-D/2) +2/π^2(aβ)^4[J_F(4,z)]_z=β(am) ] + κ a^2-D[-δ_(μ^0 h_ν)0𝒮^ (0)+1/2η_μν h_00𝒮^ (0)+δ_μ^0δ_ν^0 s^ (1)] , where 𝒮^(0) = 2/π^2β^4[4J_F(4,z)+z^2J_F(2,z)]_z=β(am) , and s^ (1) is the thermal contribution in Eq. (<ref>). By comparing (<ref>) with (<ref>) one sees that the cosmological constant counterterm (<ref>), with the choice (<ref>), renormalizes the energy-momentum tensor to the linear order in κ. The renormalized energy-momentum tensor can be written as, T_μν^ ren = T_μν^ (0) b+T_μν^ (1) b+T_μν^ ct = [m^4/16π^2( ln(m^2/2πμ^2) +γ _ E-3/2) +2/π^2(β a)^4[ J_F(4,z)]_z= β am]g_μν 0.cm + κ[2/π^2(β a)^4[4 J_F(4,z) +z^2J_F(2,z)]_z= β am] (a^2δ_μ^0δ_ν^0) + κ a^2 [-δ_(μ^0 h_ν)0+1/2η_μν h_00 ] 2/π^2(β a)^4[4J_F(4,z)+z^2J_F(2,z)]_z=β(am) + κ/a^2δ_μ^0δ_ν^0 s^ (1) . This can be then expanded in the high and low temperature limits. [Useful high temperature expansions (z≪ 1) are <cit.>, J_F(4,z) ≃7π^4/360-π^2/24z^2 -z^4/32ln(z^2/a_F) , 4J_F(4,z)+z^2J_F(2,z) ≃7π^4/90-π^2/12z^2+z^4/16 , with a_F = π^2exp(32-γ_E), ln(a_F) ≃2.6351. In the low temperature limit (z≫ 1) we have, J_F(4,z) ≃√(π z^3/2) e^-z , 4J_F(4,z)+z^2J_F(2,z) ≃√(π z^3/2)(4+z) e^-z . ] We have thus shown that, while the naîve energy-momentum tensor was not renormalizable, adding consistently the first order corrections derived in section <ref> renders the energy-momentum tensor renormalisable. The renormalized energy-momentum tensor (<ref>) is an improvement when compared with the naive one in (<ref>) as it allowed for consistent renormalization. However, it is fair to say that we still lack a rigorous framework within which one can study the evolution of gravitational perturbations through different epochs of the early Universe. § CONCLUSION AND OUTLOOK In this work we have studied Einstein's gravity (<ref>) coupled to massive Dirac fermions (<ref>), where – to mimic the standard model – we included both scalar and pseudoscalar masses (<ref>). Our goal is to understand the dynamics of gravitational perturbations in general cosmological backgrounds driven by fermionic matter. [With some effort, our analysis can be adapted to other types of matter present in the standard model, such as scalar and vector fields <cit.>.] The first step is to expand the gravitational and fermionic actions to the second order in gravitational perturbations around a general (spatially flat) cosmological background (<ref>), resulting in Eqs. (<ref>)–(<ref>) and (<ref>)–(<ref>). The 2PI form of the fermionic action, which is more convenient for applications in cosmology, is given in (<ref>)–(<ref>). The second order action in gravitational perturbations contains non-derivative terms (<ref>)–(<ref>) that do not vanish on-shell, i.e. when the background equations of motion are used, suggesting that there is an on-shell graviton mass. While this observation is unexpected, that does not mean that there is a physical graviton mass. In order to clarify this question, in section <ref> we study the equation of motion for linear gravitational perturbations (<ref>). The fermionic energy-momentum tensor contains both zeroth order terms in the gravitational fields, which are local, and first order terms, which are nonlocal. Including only the local contributions does not solve the question of the graviton mass, implying that the non-local contributions are important. 
Moreover, from the analysis of section <ref> it is unclear how to consistently identify the local contribution that would cancel the graviton mass. A better understanding of that question is advanced in section <ref>, in which we use conservation of the energy-momentum tensor to show that an additional local contribution to the energy-momentum tensor emerges, which removes the dynamical graviton mass. Moreover, while the naîve (perturbative) energy-momentum is non-renormalizable, the improved one (<ref>)–(<ref>) is renormalizable, as is shown in section <ref>. While we now do understand how to solve some of the problems that arise when studying the dynamics of linear gravitational perturbations on cosmological backgrounds, a general framework is still lacking, and that is what we discuss next. A systematic framework for studying the dynamics of gravity on curved backgrounds governed by the action, S[g_μν,ψ] = S_ g[g_μν] +S_ m[g_μν,ψ] , where S_ g[g_μν] denotes the gravitational action, e.g. the Hilbert-Einstein action (<ref>), and S_ m[g_μν,ψ] is the matter action (ψ denotes any matter field), e.g. the Dirac action (<ref>), can be obtained in the context of perturbative QFT as follows. Firstly, one expands the gravitational field g_μν into a background field g^(0)_μν and perturbations δ g_μν: g_μν=g^(0)_μν+δ g_μν. The corresponding expansion for cosmological backgrounds is (cf. Eqs. (<ref>)–(<ref>)), g_μν(x) = a^2(η_μν +κ h_μν) , ( g^(0)_μν = a^2η_μν, δ g_μν = a^2κ h_μν) . The desired set of dynamical equations consists of two equations, which can be obtained by applying the background field method and a perturbative expansion to the action (<ref>). The first equation is the dynamical background equation, and it is in the form of the semi-classical Einstein equation <cit.>, G_μν^(0)+Λ g_μν^(0) = 8π G [T_μν^ cl + ⟨𝕋^*[T̂_μν]⟩]_g_μν=g^(0)_μν , where G_μν^(0) =[ κ^2/√(-g)δ S_ g/δ g_μν]_g_μν=g^(0)_μν is the Einstein tensor for the background metric g_μν^(0), T_μν^ cl and ⟨𝕋^*[T̂_μν]⟩ are the classical and quantum contributions to the energy-momentum tensor, and 𝕋^* stands for 𝕋-star time ordering, according to which vertex derivatives (if any) are pulled out of the time-ordered two-point function contributing to the energy-momentum tensor. Eq. (<ref>) makes physical sense only when ⟨𝕋^*[T̂_μν]⟩ is renormalized, i.e. all the divergences it contains are removed by local counterterms, and it is well known how to do that in semiclassical gravity <cit.>. ⟨𝕋^*[T̂_μν]⟩ can be evaluated in some perturbative scheme, e.g. the one-loop truncation, for which the corresponding diagram is shown in figure <ref>. While it is not known how to solve (<ref>) self-consistently on general gravitational backgrounds, different approximation schemes have been advanced by Anderson <cit.> and others <cit.>, but only the technique of stochastic inflation adapted to late time cosmology <cit.> allows for a consistent solution of semiclassical gravity (<ref>) in cosmological settings. 
The second equation governs the evolution of gravitational perturbations, and it can be written as, -0.3cm ℒ_ cov^μνρσ(x)δ g^ b_ρσ(x) +∑_ c c∫ d^4x^'√(-g^(0)(x^')) [^μν_ bΣ_ c^ρσ](x;x^') δ g^ c_ρσ(x^') +𝒪((δ g^ c_ρσ)^2) = 0 , where ℒ_ cov^μνρσ(x) denotes the (covariant) Lichnerowicz operator (<ref>), ℒ_ cov^μνρσ(x)×δ^4(x-x^')/√(-g)≡[κ^2/√(-g(x))√(-g(x^'))δ^2 S_ g/δ g_μν(x) δ g_ρσ(x')]_g_μν=g^(0)_μν , and [^μν_ bΣ_ c^ρσ](x;x^') is the graviton self-energy which is, in the Schwinger-Keldysh formalism and in the one-loop truncation, given by, -.2cm [_μν^ bΣ^ c_ρσ](x;x^') = κ^2/√(-g_ b)√(-g^'_ c)⟨ . { i δ S_ m/δ g_ b^μν(x)δ S_ m/δ g_ c^ρσ (x')+δ^2 S_ m/δ g_ b^μν(x) δ g_ c^ρσ(x')}|_g_μν=g^(0)_μν⟩ , where g_ b=g_ b(x) and g^'_ c=g_ c(x^'). This equation has been much less studied than (<ref>) in cosmological settings <cit.>, but it is as important for our purpose. Eq. (<ref>) is written in the Schwinger-Keldysh notation, so that it can be used both for evolution of the graviton one-point function (in which case the Keldysh indices on δ g^ c_ρσ can be dropped) and for evolution of the graviton two-point functions, in which case one ought to multiply (<ref>) from the right by δ g^ d_ρσ(x^'') and take an expectation value [Alternatively, Eq. (<ref>) can be obtained by varying the suitably truncated 2PI effective action which includes the graviton one- and two-point functions.] to obtain a set of evolution equations for the graviton two-point functions i[_μν^ bΔ^ c_γδ](x;x^'), -0.3cm ℒ_ cov^μνρσ(x) i[_ρσ^ bΔ^ c_γδ](x;x^') 0.6cm +∑_ c c∫ d^4x^''√(-g^(0)(x^'')) [^μν_ bΣ_ c^ρσ](x;x^'') i[_ρσ^ cΔ^ d_γδ](x^'';x^') = iħ cδ^ cdδ^4(x-x^') , where i[_μν^ bΔ^ c_γδ](x;x^') (b,c = ±) are the graviton two-point functions. For example, when b = ∓ and c = ± one obtains the positive and negative frequency graviton Wightman functions, i[_μν^ -Δ^ +_γδ](x;x^') = ⟨δĝ_μν(x) δĝ_γδ(x^') ⟩ , i[_μν^ +Δ^ -_γδ](x;x^') = ⟨δĝ_γδ(x^') δĝ_μν(x) ⟩ . Just as in the case of the background equation (<ref>), the self-energy in Eq. (<ref>) must be renormalized. Renormalization of the one-loop graviton self-energy demands at least four geometric counterterms <cit.>, S_ ct[g_μν] = ∫ d^D x √(-g)(c_1 R^2 +c_2 W_μνρσW^μνρσ+c_3 R +c_4 ) , where W_μνρσ denotes the Weyl curvature tensor, [One can equivalently use the Riemann tensor counterterm action, ∫ d^D x √(-g) c^'_2 R_μνρσR^μνρσ.] with the goal to remove all nontransverse parts by local counterterms. The corresponding one-loop Feynman diagrams for the graviton self-energy are shown in figure <ref>. -0.2cm Gravity is a gauge theory, so both equations (<ref>) and (<ref>) (or equivalently (<ref>)) must in addition satisfy certain consistency conditions. Thus Eq. (<ref>) must obey the following transversality conditions, ∇_(0)^μ G^(0)_μν=0 , ∇_(0)^μ g^(0)_μν=0 , ∇_(0)^μ T_μν^ cl =0 , ∇_(0)^μ⟨𝕋^* [T̂_μν]⟩]_g_μν=g^(0)_μν=0 , which, in the presence of matter field condensates, may require a delicate analysis <cit.>. The first condition in (<ref>) is the contracted Bianchi identity, and must hold for an arbitrary metric, and the last two are the energy-momentum conservation laws, and the classical and quantum contributions must be separately conserved. Likewise, there are two transversality conditions that are expected to hold for Eq. 
(<ref>), ∇^(0)_μℒ_ cov^μνρσ(x)=0 , ∇^(0)_μ i[^μν_ bΣ_ c^ρσ](x;x^') =0 , The former identity follows immediately from the contracted Bianchi identity in (<ref>) and the definition of the Lichnerowicz operator (<ref>), while the latter is the Ward identity for the graviton self-energy <cit.>. Notice that the graviton two-point functions (<ref>) need not be transverse (they are transverse only in the special class of exact de Donder gauges <cit.>). Experience shows <cit.> that a transverse self-energy is obtained only after tuning the cosmological constant counterterm such that the one-loop matter fluctuations do not contribute an additional cosmological constant. This observation is particularly important when considering the question of the graviton mass, as it tells us that the graviton masslessness is protected by the gauge symmetry it satisfies. While at the quantum level transversality of the graviton self-energy is imposed by the Ward (or Slavnov-Taylor) identity, at the classical level it is the second Noether identity that guarantees transversality of the evolution equation for gravitational perturbations. The classical equation is obtained as the classical limit of the quantum equation (<ref>) for the one-point function, which can be obtained from (<ref>) by setting δ g^ b_μν(x) →δ g_μν(x), -0.3cm ℒ_ cov^μνρσ(x)δ g_ρσ(x) +∫ d^4x^'√(-g^(0)(x^')) [^μνΣ_ ret^ρσ](x;x^') δ g_ρσ(x^') +𝒪((δ g_ρσ)^2) = 0 , where [^μνΣ_ ret^ρσ](x;x^') = [^μν_ +Σ_ +^ρσ](x;x^') -[^μν_ +Σ_ -^ρσ](x;x^') is the retarded self-energy. There are however physical situations in which one can dynamically generate vacuum energy. Examples of such processes in the early Universe setting are the electroweak transition, in which the mass generation mechanism also changes the vacuum energy, and the strong transition, induced by chiral quark condensates which also change the vacuum energy. This means that one can choose to tune the vacuum energy to zero either before or after such a transition, but not both, implying that the self-energy cannot be made transverse in both regimes. The question what are the ramifications of this fact for the dynamics of gravitational perturbations will be addressed elsewhere <cit.>. The principal message of this work is that one cannot obtain consistent evolution equations for (classical or quantum) gravitational perturbations on cosmological backgrounds generated by quantized matter fields without properly quantizing matter both in the semiclassical equation for gravity (<ref>), but also in the equation of motion for perturbations on quantized matter backgrounds (<ref>). These equations are studied in more detail in the example of scalar electrodynamics in Ref. <cit.>. § ACKNOWLEDGMENTS The authors thank Rick Vinke, whose master thesis was used as the point of departure and inspiration for this work. This work is part of the Delta ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW) — NWO project number 24.001.027. LL is funded by NSFC grant NO. 12165009, Hunan Natural Science Foundation NO. 2023JJ30487. 99 Starobinsky:1980te A. A. Starobinsky, “A New Type of Isotropic Cosmological Models Without Singularity,” Phys. Lett. B 91 (1980), 99-102 doi:10.1016/0370-2693(80)90670-X Guth:1980zm A. H. Guth, “The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems,” Phys. Rev. D 23 (1981), 347-356 doi:10.1103/PhysRevD.23.347 LIGOScientific:2016lio B. P. Abbott et al. 
[LIGO Scientific and Virgo], “Tests of general relativity with GW150914,” Phys. Rev. Lett. 116 (2016) no.22, 221101 [erratum: Phys. Rev. Lett. 121 (2018) no.12, 129902] doi:10.1103/PhysRevLett.116.221101 [arXiv:1602.03841 [gr-qc]]. Mukhanov:1981xt V. F. Mukhanov and G. V. Chibisov, “Quantum Fluctuations and a Nonsingular Universe,” JETP Lett. 33 (1981), 532-535 Planck:2019nip N. Aghanim et al. [Planck], “Planck 2018 results. V. CMB power spectra and likelihoods,” Astron. Astrophys. 641 (2020), A5 doi:10.1051/0004-6361/201936386 [arXiv:1907.12875 [astro-ph.CO]]. Planck:2018jri Y. Akrami et al. [Planck], “Planck 2018 results. X. Constraints on inflation,” Astron. Astrophys. 641 (2020), A10 doi:10.1051/0004-6361/201833887 [arXiv:1807.06211 [astro-ph.CO]]. DES:2022urg T. M. C. Abbott et al. [DES and SPT], “Joint analysis of Dark Energy Survey Year 3 data and CMB lensing from SPT and Planck. III. Combined cosmological constraints,” Phys. Rev. D 107 (2023) no.2, 023531 doi:10.1103/PhysRevD.107.023531 [arXiv:2206.10824 [astro-ph.CO]]. Mukhanov:2005sc V. Mukhanov, “Physical Foundations of Cosmology,” Cambridge University Press, 2005, ISBN 978-0-521-56398-7 doi:10.1017/CBO9780511790553 Weinberg:2008zzc S. Weinberg, “Cosmology,” Oxford University Press (2008), ISBN-13 978-0198526827, ISBN-10 0198526822 Mukhanov:1990me V. F. Mukhanov, H. A. Feldman and R. H. Brandenberger, “Theory of cosmological perturbations. Part 1. Classical perturbations. Part 2. Quantum theory of perturbations. Part 3. Extensions,” Phys. Rept. 215 (1992), 203-333 doi:10.1016/0370-1573(92)90044-Z Miao:2005am S. P. Miao and R. P. Woodard, “The Fermion self-energy during inflation,” Class. Quant. Grav. 23 (2006), 1721-1762 doi:10.1088/0264-9381/23/5/016 [arXiv:gr-qc/0511140 [gr-qc]]. RickVinke:2020 R. S. Vinke, “A Field-Theoretic Approach to Fermionic Dark Matter,” Utrecht University Theses repository (2020), https://studenttheses.uu.nl/handle/20.500.12932/36948 Tsamis:1992xa N. C. Tsamis and R. P. Woodard, “The Structure of perturbative quantum gravity on a De Sitter background,” Commun. Math. Phys. 162 (1994), 217-248 doi:10.1007/BF02102015 BarrosoMancha:2020fay M. Barroso Mancha, T. Prokopec and B. Swiezewska, “Field-theoretic derivation of bubble-wall force,” JHEP 01 (2021), 070 doi:10.1007/JHEP01(2021)070 [arXiv:2005.10875 [hep-th]]. Park:2015kua S. Park, T. Prokopec and R. P. Woodard, “Quantum Scalar Corrections to the Gravitational Potentials on de Sitter Background,” JHEP 01 (2016), 074 doi:10.1007/JHEP01(2016)074 [arXiv:1510.03352 [gr-qc]]. Candelas:1975du P. Candelas and D. J. Raine, “General Relativistic Quantum Field Theory-An Exactly Soluble Model,” Phys. Rev. D 12 (1975), 965-974 doi:10.1103/PhysRevD.12.965 Prokopec:2022yon T. Prokopec and V. H. Unnithan, “Majorana propagator on de Sitter space,” Eur. Phys. J. C 82 (2022) no.11, 1015 doi:10.1140/epjc/s10052-022-10970-1 [arXiv:2203.15678 [hep-th]]. Prokopec:2022yon T. Prokopec and V. H. Unnithan, “Majorana propagator on de Sitter space,” Eur. Phys. J. C 82 (2022) no.11, 1015 doi:10.1140/epjc/s10052-022-10970-1 [arXiv:2203.15678 [hep-th]]. Koksma:2009tc J. F. Koksma and T. Prokopec, “Fermion Propagator in Cosmological Spaces with Constant Deceleration,” Class. Quant. Grav. 26 (2009), 125003 doi:10.1088/0264-9381/26/12/125003 [arXiv:0901.4674 [gr-qc]]. Kainulainen:2002th K. Kainulainen, T. Prokopec, M. G. Schmidt and S. Weinstock, “Semiclassical force for electroweak baryogenesis: Three-dimensional derivation,” Phys. Rev. 
D 66 (2002), 043502 doi:10.1103/PhysRevD.66.043502 [arXiv:hep-ph/0202177 [hep-ph]]. Prokopec:2003pj T. Prokopec, M. G. Schmidt and S. Weinstock, “Transport equations for chiral fermions to order h bar and electroweak baryogenesis. Part 1,” Annals Phys. 314 (2004), 208-265 doi:10.1016/j.aop.2004.06.002 [arXiv:hep-ph/0312110 [hep-ph]]. Fennema:2024 Abe Fennema and Tomislav Prokopec, in progress (2024). Birrell:1982ix N. D. Birrell and P. C. W. Davies, “Quantum Fields in Curved Space,” Cambridge Univ. Press, 1984, ISBN 978-0-521-27858-4, 978-0-521-27858-4 doi:10.1017/CBO9780511622632 Anderson:2002hh P. R. Anderson, C. Molina-Paris and E. Mottola, “Linear response and the validity of the semiclassical approximation in gravity,” [arXiv:gr-qc/0204083 [gr-qc]]. Anderson:2011wq E. Anderson, “On the Semiclassical Approach to Quantum Cosmology,” Class. Quant. Grav. 28 (2011), 185008 doi:10.1088/0264-9381/28/18/185008 [arXiv:1101.4916 [gr-qc]]. Tsamis:2005je N. C. Tsamis and R. P. Woodard, “Dimensionally regulated graviton 1-point function in de Sitter,” Annals Phys. 321 (2006), 875-893 doi:10.1016/j.aop.2005.08.004 [arXiv:gr-qc/0506056 [gr-qc]]. Anderson:2015yda P. R. Anderson, C. Molina-Paris and D. H. Sanders, “Breakdown of the semiclassical approximation during the early stages of preheating,” Phys. Rev. D 92 (2015), 083522 doi:10.1103/PhysRevD.92.083522 [arXiv:1502.01892 [gr-qc]]. Anderson:2020hgg P. R. Anderson, E. D. Carlson, T. M. Ordines and B. Hicks, “Semiclassical predictions regarding a preinflationary era and its effects on the power spectrum,” Phys. Rev. D 102 (2020) no.6, 063528 doi:10.1103/PhysRevD.102.063528 [arXiv:2005.12370 [gr-qc]]. Pla:2020tpq S. Pla, I. M. Newsome, R. S. Link, P. R. Anderson and J. Navarro-Salas, “Pair production due to an electric field in 1+1 dimensions and the validity of the semiclassical approximation,” Phys. Rev. D 103 (2021) no.10, 105003 doi:10.1103/PhysRevD.103.105003 [arXiv:2010.09811 [gr-qc]]. Glavan:2013mra D. Glavan, T. Prokopec and V. Prymidis, “Backreaction of a massless minimally coupled scalar field from inflationary quantum fluctuations,” Phys. Rev. D 89 (2014) no.2, 024024 doi:10.1103/PhysRevD.89.024024 [arXiv:1308.5954 [gr-qc]]. Glavan:2014uga D. Glavan, T. Prokopec and D. C. van der Woude, “Late-time quantum backreaction from inflationary fluctuations of a nonminimally coupled massless scalar,” Phys. Rev. D 91 (2015) no.2, 024014 doi:10.1103/PhysRevD.91.024014 [arXiv:1408.4705 [gr-qc]]. Glavan:2015cut D. Glavan, T. Prokopec and T. Takahashi, “Late-time quantum backreaction of a very light nonminimally coupled scalar,” Phys. Rev. D 94 (2016), 084053 doi:10.1103/PhysRevD.94.084053 [arXiv:1512.05329 [gr-qc]]. Glavan:2017jye D. Glavan, T. Prokopec and A. A. Starobinsky, “Stochastic dark energy from inflationary quantum fluctuations,” Eur. Phys. J. C 78 (2018) no.5, 371 doi:10.1140/epjc/s10052-018-5862-5 [arXiv:1710.07824 [astro-ph.CO]]. Belgacem:2021ieb E. Belgacem and T. Prokopec, “Quantum origin of dark energy and the Hubble tension,” Phys. Lett. B 831 (2022), 137174 doi:10.1016/j.physletb.2022.137174 [arXiv:2111.04803 [astro-ph.CO]]. Belgacem:2022fui E. Belgacem and T. Prokopec, “Spatial correlations of dark energy from quantum fluctuations during inflation,” Phys. Rev. D 106 (2022) no.12, 123514 doi:10.1103/PhysRevD.106.123514 [arXiv:2209.01601 [gr-qc]]. Vedder:2022spt C. J. G. Vedder, E. Belgacem, N. E. Chisari and T. 
Prokopec, “Fluctuating dark energy and the luminosity distance,” JCAP 03 (2023), 016 doi:10.1088/1475-7516/2023/03/016 [arXiv:2209.00440 [astro-ph.CO]]. Tsamis:1996qk N. C. Tsamis and R. P. Woodard, “One loop graviton selfenergy in a locally de Sitter background,” Phys. Rev. D 54 (1996), 2621-2639 doi:10.1103/PhysRevD.54.2621 [arXiv:hep-ph/9602317 [hep-ph]]. Park:2010pj S. Park and R. P. Woodard, “Solving the Effective Field Equations for the Newtonian Potential,” Class. Quant. Grav. 27 (2010), 245008 doi:10.1088/0264-9381/27/24/245008 [arXiv:1007.2662 [gr-qc]]. Park:2011ww S. Park and R. P. Woodard, “Scalar Contribution to the Graviton Self-Energy during Inflation,” Phys. Rev. D 83 (2011), 084049 doi:10.1103/PhysRevD.83.084049 [arXiv:1101.5804 [gr-qc]]. Leonard:2014zua K. E. Leonard, S. Park, T. Prokopec and R. P. Woodard, “Representing the Graviton Self-Energy on de Sitter Background,” Phys. Rev. D 90 (2014) no.2, 024032 doi:10.1103/PhysRevD.90.024032 [arXiv:1403.0896 [gr-qc]]. Park:2015kua S. Park, T. Prokopec and R. P. Woodard, “Quantum Scalar Corrections to the Gravitational Potentials on de Sitter Background,” JHEP 01 (2016), 074 doi:10.1007/JHEP01(2016)074 [arXiv:1510.03352 [gr-qc]]. Tan:2021ibs L. Tan, N. C. Tsamis and R. P. Woodard, “Graviton self-energy from gravitons in cosmology,” Class. Quant. Grav. 38 (2021) no.14, 145024 doi:10.1088/1361-6382/ac0233 [arXiv:2103.08547 [gr-qc]]. Miao:2024nsz S. P. Miao, N. C. Tsamis and R. P. Woodard, “Summing Gravitational Effects from Loops of Inflationary Scalars,” [arXiv:2405.01024 [gr-qc]]. Kavanagh:2024 Riley Kavanagh and Tomislav Prokopec, in preparation (2024). Miao:2024atw S. P. Miao, N. C. Tsamis and R. P. Woodard, “Alternate computation of gravitational effects from a single loop of inflationary scalars,” JHEP 07 (2024), 099 doi:10.1007/JHEP07(2024)099 [arXiv:2405.00116 [gr-qc]]. Glavan:2023lvw D. Glavan and T. Prokopec, JHEP 10 (2023), 063 doi:10.1007/JHEP10(2023)063 [arXiv:2306.11162 [hep-ph]]. Capper:1973pv D. M. Capper, G. Leibbrandt and M. Ramon Medrano, “Calculation of the graviton selfenergy using dimensional regularization,” Phys. Rev. D 8 (1973), 4320-4331 doi:10.1103/PhysRevD.8.4320 Capper:1974vb D. M. Capper and M. R. Medrano, “Gravitational slavnov-ward identities,” Phys. Rev. D 9 (1974), 1641-1647 doi:10.1103/PhysRevD.9.1641 Grillo:1999yw N. Grillo, “Finite one loop calculations in quantum gravity: Graviton selfenergy, perturbative gauge invariance and Slavnov-Ward identities,” [arXiv:hep-th/9912097 [hep-th]]. Burns:2014bva D. Burns and A. Pilaftsis, “Matter Quantum Corrections to the Graviton Self-Energy and the Newtonian Potential,” Phys. Rev. D 91 (2015) no.6, 064047 doi:10.1103/PhysRevD.91.064047 [arXiv:1412.6021 [hep-th]]. Mora:2012zi P. J. Mora, N. C. Tsamis and R. P. Woodard, “Graviton Propagator in a General Invariant Gauge on de Sitter,” J. Math. Phys. 53 (2012), 122502 doi:10.1063/1.4764882 [arXiv:1205.4468 [gr-qc]]. Quiros:2007zz M. Quiros, “Field theory at finite temperature and phase transitions,” Acta Phys. Polon. B 38 (2007), 3661-3703
http://arxiv.org/abs/2407.12762v1
20240717173942
Quasi-Linear Size PCPs with Small Soundness from HDX
[ "Mitali Bafna", "Dor Minzer", "Nikhil Vyas" ]
cs.CC
[ "cs.CC" ]
§ ABSTRACT We construct 2-query, quasi-linear sized probabilistically checkable proofs (PCPs) with arbitrarily small constant soundness, improving upon Dinur's 2-query quasi-linear size PCPs with soundness 1-Ω(1). As an immediate corollary, we get that under the exponential time hypothesis, for all ε>0 no approximation algorithm for 3-SAT can obtain an approximation ratio of 7/8+ε in time 2^n/log^C n, where C is a constant depending on ε. Our result builds on a recent line of works showing the existence of linear sized direct product testers with small soundness <cit.>. The main new ingredient in our proof is a technique that embeds a given PCP construction into a PCP on a prescribed graph, provided that the latter is a graph underlying a sufficiently good high-dimensional expander. Towards this end, we use ideas from fault-tolerant distributed computing, and more precisely from the literature of the almost everywhere agreement problem <cit.>. We show that graphs underlying HDXs admit routing protocols that are tolerant to adversarial edge corruptions, and in doing so we also improve the state of the art in this line of work. Our PCP construction requires variants of the aforementioned direct product testers with poly-logarithmic degree. The existence and constructability of these variants is shown in an appendix by Zhiwei Yun. § INTRODUCTION The PCP Theorem <cit.> is a cornerstone of theoretical computer science, with many applications in hardness of approximation, cryptography and interactive protocols. It will be convenient for us to take the following combinatorial view of PCPs, using the language of the Label Cover problem. An instance of Label Cover Ψ = (G=(L∪ R,E), Σ_L, Σ_R, Φ = {Φ_e}_e∈ E) consists of a bipartite graph G, alphabets Σ_L, Σ_R and constraints Φ_e⊆Σ_L×Σ_R, one for each edge. Each one of the constraints is a projection constraint, meaning that for every e∈ E there is a map ϕ_e: Σ_L→Σ_R such that Φ_e = {(σ,ϕ_e(σ)) | σ∈Σ_L}. Given a label cover instance Ψ, the goal is to find assignments A_L: L→Σ_L and A_R: R→Σ_R that satisfy as many of the constraints as possible, namely that maximize the quantity val_Ψ(A_L,A_R) = 1/|E||{e=(u,v)∈ E | (A_L(u),A_R(v))∈Φ_e}|. We denote val(Ψ) = max_A_L,A_R val_Ψ(A_L,A_R). Finally, we denote by gap-LabelCover[c,s] the promise problem wherein the input is an instance Ψ of label cover promised to either have val(Ψ)≥ c or else val(Ψ)≤ s, and the goal is to distinguish between these two cases. In this language, versions of the PCP theorem assert that the problem gap-LabelCover[c,s] is NP-hard in some cases, and there are a few parameters of interest: * Completeness - the completeness parameter is c, and one often wants it to be as large as possible. In this paper we will always have perfect completeness, that is, c=1. * Soundness - the soundness of a PCP is the parameter s, and one wants it to be as small as possible. * Alphabet size - the alphabet size is defined as max(|Σ_L|, |Σ_R|), and one often wants it to be of constant size. * Instance size - finally, the instance size refers to the blow-up of the reduction showing that gap-LabelCover[c,s] is NP-hard.
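To fix ideas, the following toy computation (a minimal sketch; the two-edge instance and all names in it are made up for illustration) spells out the definitions above: each edge carries a projection map ϕ_e, val_Ψ(A_L,A_R) is the fraction of edges e=(u,v) with ϕ_e(A_L(u))=A_R(v), and val(Ψ) is its maximum over assignments, computed here by brute force.

```python
# Toy illustration of the Label Cover definitions above (names are ours).
from itertools import product

# A tiny instance: L = {u0, u1}, R = {v0}, Sigma_L = {0,1,2}, Sigma_R = {0,1}.
edges = [("u0", "v0"), ("u1", "v0")]
phi = {
    ("u0", "v0"): {0: 0, 1: 1, 2: 0},   # projection map phi_e for each edge
    ("u1", "v0"): {0: 1, 1: 1, 2: 0},
}
Sigma_L, Sigma_R = [0, 1, 2], [0, 1]
L = sorted({u for u, _ in edges})
R = sorted({v for _, v in edges})

def val(A_L, A_R):
    """Fraction of edges whose projection constraint is satisfied."""
    good = sum(1 for (u, v) in edges if phi[(u, v)][A_L[u]] == A_R[v])
    return good / len(edges)

# val(Psi): brute force over all assignments (feasible only for toy instances).
best = max(
    val(dict(zip(L, a_l)), dict(zip(R, a_r)))
    for a_l in product(Sigma_L, repeat=len(L))
    for a_r in product(Sigma_R, repeat=len(R))
)
print(best)   # 1.0 here, achieved e.g. by A_L = {u0: 1, u1: 0}, A_R = {v0: 1}
```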
The assertion of NP-hardness means that there is a polynomial time reduction mapping 3-CNF formulas ϕ to instances of label cover Ψ such that: if ϕ is satisfiable, then val(Ψ)≥ c, and if ϕ is unsatisfiable, then val(Ψ)≤ s. Letting the size of the 3-CNF formula ϕ be denoted by n, the size of the PCP is measured by the size of Ψ as a function of n. The original proof of the PCP Theorem <cit.> was able to achieve perfect completeness, soundness s≤ 1- for some absolute (but tiny) constant >0, constant size alphabet and polynomial instance size. Using the parallel repetition theorem of Raz <cit.>, one is able to get a stronger version of the PCP theorem wherein the soundness parameter s can be taken to be an arbitrarily small constant, and the rest of the parameters remain qualitatively the same. One feature of parallel repetition that we wish to highlight is that it increases the alphabet size and the instance size at a polynomial rate, and decreases the soundness at the same rate. The question of whether one can come up with soundness amplification techniques that increase the instance size more mildly (while still decreasing the soundness) is often referred to as “derandomized parallel repetition” in the literature. This question will be central to the current paper, and we remark that there are known barriers to general results along these lines <cit.>. Morally speaking, these results show that there is no general amplification technique that obtains the same amplification rate as standard parallel repetition but is more size efficient. Still, one may hope that there are size efficient procedures that amplify soundness in a slightly weaker way. The main result of this paper is such a procedure. §.§ Near-linear Size PCPs Following the proof of the PCP theorem, the question of consructing size efficient PCPs has naturally emerged. Polishchuk and Spielman <cit.> were the first to construct nearly linear-sized 2-query PCPs with soundness 1- and size n^1+c(), where c() approaches 0 as tends to 0. In her combinatorial proof of the PCP theorem, Dinur <cit.> established a quasi-linear version of the PCP theorem, namely a 2-query PCP with size n· poly(log n), soundness s= 1-Ω(1) and constant alphabet size. Her proof used a novel gap-amplification procedure for PCPs via graph powering, which we discuss below. Bogdanov <cit.> observed that the soundness achievable by this approach plateaus at 1/2, suggesting that other ideas are necessary for a quasi-linear size PCP with arbitrarily small soundness. Moshkovitz and Raz <cit.> were the first to construct 2-query PCPs with near linear size and small soundness. Specifically, they proved the hardness of Label Cover with soundness s = 1/k^Ω(1), size n· 2^Θ(√(log n)) and alphabet 2^Θ(k), for any k≤log n. The most notable feature of this work is that it allows one to even get sub-constant soundness that vanishes with the instance size (at the price of having quite a large alphabet). The size of the PCP though is larger than the size of Dinur's PCP, blowing up by a factor of 2^Θ(√(log n)) whereas Dinur only incurs a factor of poly(log n). The above discussion brings us to the main result of this paper: For all δ>0, there is C = C(δ)>0 and a polynomial time procedure such that given an instance ϕ of 3-SAT of size n produces a label cover instance Ψ with the following properties: * The size of Ψ is at most n (log n)^C and the alphabet size of Ψ is at most O_δ(1). * If ϕ is satisfiable, then val(Ψ) = 1. * If ϕ is unsatisfiable, then val(Ψ)≤δ. 
In words, Theorem <ref> gives a version of the PCP theorem of Dinur <cit.> in the low soundness regime. The structure of the proof of Theorem <ref> The proof of Theorem <ref> has three components. The first component involves modifying a PCP construction to transform the underlying graph into an explicit graph of our choosing. Building on <cit.>, we show that one can embed an arbitrary 2-CSP on a prescribed graph G, if G has a fault-tolerant routing protocol. We then show such routing protocols for graphs derived from high-dimensional expanders, using their edge-expansion properties and the presence of numerous well-distributed dense subgraphs. This is the primary contribution of our work that draws inspiration from fault-tolerant distributed computing <cit.>, and in the process also improves upon the best-known construction of edge fault-tolerant sparse networks <cit.>. The second component is a size-efficient direct product tester from HDX, which was conjectured to exist in <cit.> and recently established in a series of works <cit.>. By appropriately combining the first and second components, we construct a size-efficient PCP with small soundness but a large alphabet. The third and final component is a classical alphabet reduction technique. We use the results of <cit.> to reduce the alphabet size to a constant, thus concluding the proof of <Ref>. §.§ Implications Many hardness of approximation results in the literature start off with the hardness of Label Cover with small soundness, which is typically achieved by parallel repetition or by appealing to the result of <cit.>. We are able to reproduce any such result, that does not use any other features of parallel repetition, with only quasi-linear blow-up. These implications are easiest to phrase in terms of the exponential time hypothesis (ETH) of <cit.> and using the work of Håstad <cit.>, we have the following two corollaries. Assuming ETH, for any >0 there exists C>0 such that solving gap-3SAT[1,7/8+] requires time at least 2^n/log^C n.[We remark that in his proof of the hardness of 3-SAT <cit.>, Håstad uses a feature of the outer PCP construction that our PCP lacks. However, as observed by Khot <cit.>, a weaker property called “smoothness” suffices for the analysis of the reduction, which our PCP construction in Theorem <ref> has. This is because in the end we compose with the Hadamard code and the associated manifold versus point test, which is smooth.] In the 3-LIN problem one is given a system of linear equations over 𝔽_2 where each equation contains 3 variables, and the goal is to find an assignment satisfying as many of the constraints as possible. We have: Assuming ETH, for any >0 there exists C>0 such that solving gap-3LIN[1-,1/2+] requires time at least 2^n/log^C n. Our proof techniques can be used to obtain improvements for the almost everywhere reliable transmission problem <cit.>. This problem is very similar to the routing problem (given in Definition <ref>) which is central to this paper. The almost everywhere reliable transmission problem involves designing a sparse graph G along with communication protocols between all pairs of vertices of G, that are resilient against corruptions in the network. Our results nearly match the performance of the best-known protocols for vertex corruptions <cit.>, while addressing the more challenging scenario of edge corruptions, thus improving upon the results of <cit.>. For further details, see Section <ref>. 
For all n ∈ℕ, there exists a graph G = (V, E) on Θ(n) vertices with degree n and O(log n)-round protocols {ℛ_u, v}_u,v∈ V on it such that, for all adversaries corrupting |E| edges, all but O() of the message transfers between pairs of vertices (u, v) will be successful. Further, running a single ℛ_u, v protocol can be done in n computation across all nodes. The rest of this introductory section is organized as follows. In Section <ref> we discuss hardness amplification in PCPs. In Section <ref> we discuss fault-tolerant routing protocols and in Section <ref> we discuss the ideas going into the proof of Theorem <ref>. §.§ History of Gap Amplification §.§.§ Parallel Repetition The parallel repetition theorem of Raz <cit.> is a powerful technique in hardness of approximation and interactive protocols. Most relevant to us is its application to PCPs, wherein it is used to boost the soundness of a given PCP construction. Indeed, given a label cover instance Ψ and an integer t∈ℕ, we consider the t-fold repeated game Ψ^⊗ t which consists of the graph G_t = (L^t∪ R^t, E_t) where E_t = {((u_1,…,u_t),(v_1,…,v_t)) | (u_i,v_i)∈ E,∀ i=1,…,t}, as well as alphabets Σ_L^t, Σ_R^t and constraints Φ' = {Φ'_e}_e∈ E_t where Φ_(u⃗,v⃗) ={((σ_1,…,σ_t),(τ_1,…,τ_t)) | (σ_i,τ_i)∈Φ_(u_i,v_i) ∀ i=1,…,t}. It is clear that if val(Ψ) = 1 then val(Ψ^⊗ t)=1, and the content of the parallel repetition theorem asserts that if val(Ψ)≤ 1-, then val(Ψ^⊗ t)≤ (1-')^t, where ' = poly(). Thus, parallel repetition can be used to decrease the soundness to be as close to 0 as we wish. However, as size(Ψ^⊗ t) = size(Ψ)^t, parallel repetition cannot be used on a given PCP construction to get small soundness while maintaining quasi-linear size. In light of this issue, it makes sense to look for sparse analogs of parallel repetition that still amplify soundness. This task is known as derandomizing parallel repetition in the literature – given a label cover instance Ψ and an integer t, one would like to come up with sparse subsets of L^t and R^t such that the induced label over instance Ψ^⊗ t on them would still have significantly smaller soundness than the original instance Ψ. Ideally, one would like the subsets to be as small as O_t(|L|), O_t(|R|) while retaining arbitrarily small constant soundness s. Towards this end, it makes sense to consider the simpler combinatorial analog of this question known as direct product testing, which we define next. §.§.§ Direct Product Testing In an effort to simplify the proof of the PCP theorem, Goldreich and Safra <cit.> introduced the notion of direct product testing. In direct product testing, one wishes to encode a function f [n]→Σ (which, in the context of PCPs is thought of as an assignment) via local views in a way that admits local testing. The most natural direct product encoding, which we refer to as the Johnson direct product scheme, has a parameter k∈ℕ which is thought of as a large constant. The function f is encoded via the assignment F[n]k→Σ^k defined as F({a_1,…,a_k}) = (f(a_1),…,f(a_k)).[We fix an arbitrary ordering on [n].] The natural 2-query direct product test associated with this encoding is the following consistency check: * Sample B⊆ [n] of size √(k). * Sample A,A'⊇ B independently of size k. * Read F[A] and F[A'] and check that F[A]|_B = F[A']|_B. It is clear that if F is a valid encoding of a function f, then the tester passes with probability 1. The work of <cit.> shows that this test is also sound. 
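Before stating the soundness guarantee precisely, the following small Monte-Carlo simulation (all parameter choices and names are ours, picked only for illustration) contrasts the behaviour of the Johnson direct product test described above on a valid encoding of some f, which passes with probability 1, with its behaviour on a table of unrelated random answers, which agrees on the sampled set B only rarely.

```python
import random

# Monte-Carlo simulation of the Johnson direct product test described above.
n, k, trials = 30, 9, 2000
b = int(round(k ** 0.5))                       # |B| = sqrt(k)
f = {i: random.randint(0, 1) for i in range(n)}

def honest(A):                                 # valid encoding: F[A] = f restricted to A
    return tuple(f[a] for a in A)

_cache = {}
def scrambled(A):                              # a fixed table with unrelated random answers
    if A not in _cache:
        _cache[A] = tuple(random.randint(0, 1) for _ in A)
    return _cache[A]

def pass_rate(F):
    passed = 0
    for _ in range(trials):
        B = random.sample(range(n), b)
        rest = [x for x in range(n) if x not in B]
        A1 = tuple(sorted(B + random.sample(rest, k - b)))   # A  containing B
        A2 = tuple(sorted(B + random.sample(rest, k - b)))   # A' containing B
        v1 = dict(zip(A1, F(A1)))
        v2 = dict(zip(A2, F(A2)))
        passed += all(v1[x] == v2[x] for x in B)             # check F[A]|_B = F[A']|_B
    return passed / trials

print(pass_rate(honest))      # 1.0: a valid encoding always passes
print(pass_rate(scrambled))   # roughly 2^{-|B|}: agreement on B is unlikely
```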
Namely, for all δ>0, taking k∈ℕ large enough, if an assignment F[n]k→Σ^k passes the Johnson direct product test with probability at least δ, then there is a function f' [n]→Σ such that _A [Δ(F[A], f'|_A)≤ 0.01] ≥(δ), where Δ(F[A], f'|_A) = 1/|A|#{i∈ A | F[A](i)≠ f'(i)} is the relative Hamming distance between F[A] and f'|_A. The main drawback of the Johnson direct product encoding is its size blow up: the size of the encoding of F is roughly the size of f to the power k. This is the same behaviour that occurs in parallel repetition, which raises a combinatorial analog of the derandomized parallel repetition problem: is there a sparse collection of k-sets 𝒮_k⊆[n]k, ideally |𝒮_k| = O_k(n), such that the encoding of f [n]→Σ given by F𝒮_k→Σ^k defined as F({a_1,…,a_k}) = (f(a_1),…,f(a_k)), admits a sound 2-query consistency test. §.§.§ Graph Powering Underlying the proof of Dinur <cit.> is a derandomized direct product tester in the 99% soundness regime. Dinur proved that the set system formed by constant sized neighbourhoods of vertices in a spectral expander supports a natural 2-query test with soundness bounded away from 1[Although her original result is not stated in terms of direct product testing, a later work of <cit.> observes that her proof indeed implies such a tester.]. She used these ideas to show a gap amplification result for PCPs achieved via graph powering. To present it, we expand the definition of label cover to graphs that are not necessarily bipartite. A constraint satisfaction problem (2-CSP in short) Ψ = (G=(V,E), Σ, Φ) is composed of a graph G, an alphabet Σ, and a collection of constraints Φ = {Φ_e}_e∈ E, one for each edge. Each constraint is Φ_e⊆Σ×Σ, describing the tuples of labels to the endpoints of e that are considered satisfying. Dinur started with an instance of a 2-CSP Ψ over a d-regular expander graph G with soundness 1-1/ n and constant alphabet. Given a parameter t∈ℕ, she considered the 2-CSP Ψ' over the graph G' whose vertex set is V and u,v are adjacent if they have a path of length t between them in G. The alphabet of a vertex u is Σ^d^t/2, which is thought of as an assignment to the neighbourhood of u of radius t/2. The allowed symbols on u are assignments to the neighbourhood of u satisfying all of the constraints of Ψ inside it. Finally, the constraint between u and v is that the assignment they give to their neighbourhoods agree on any common vertex to them. She showed that if most constraints of Ψ' are satisfied (equivalently, the direct product test passes), the assignment to the neighborhoods must be consistent with a global assignment to the vertices, which in turn must satisfy a large fraction of the edges of Ψ. This implies that if val(Ψ)≤ 1-δ, then val(Ψ')≤ 1-min(2δ, c) where c>0 is an absolute constant. Thus, each invocation of graph powering improves the soundness of the 2-CSP, and after Θ(log(1/δ)) iterations the resulting 2-CSP will have value at most 1-Ω(1). Every iteration of graph powering blows up the alphabet size though, which Dinur resolved via an alternating step of alphabet reduction. §.§.§ The Soundness Limit of Graph Powering Following Dinur's result <cit.>, Bogdanov <cit.> observed that graph powering on an arbitrary expander fails to decrease the soundness below 1/2. Towards this end, he considers any locally-tree-like expander graph G with large girth, such as the construction in <cit.>, and defines a CSP Ψ over G whose alphabet is Σ = {0,1}, and constraints are inequality constraints. 
The graph G has n vertices, girth g = Θ(log n), and the largest cut in it has fractional size 1/2 + o(1). The latter fact implies that val(Ψ)=1/2+o(1). On the other hand, as long as t< g/2, for each vertex v the t/2-neighborhood of v has two possible assignments. If u and v are within distance t, then there is a 1-to-1 correspondence between the assignments of u and the assignments of v. Thus, randomly choosing one of these possible assignments for each u leads to an assignment that satisfies 1/2 of the constraints in G^t in expectation, and in particular val(G^t)≥ 1/2. This means that graph powering fails to decrease the soundness below 1/2. §.§.§ Subspace-based Direct Product Testers The work of Impagliazzo, Kabanets and Wigderson <cit.> made progress on derandomized direct product testing in the 1% regime by analyzing a more efficient, albeit still polynomial-sized version of the Johnson direct product tester based on subspaces, which we refer to as the Grassmann direct product tester. In this context, we identify the universe [n] with 𝔽_q^d (where 𝔽_q is a field), so that an assignment f [n]→Σ is interpreted as a function f𝔽_q^d→Σ. The Grassmann direct product encoding of f is a table F that specifies the restriction of f to each d'-dimensional subspace of 𝔽_q^d, where d'<d is a parameter chosen appropriately. The Grassmann direct product test is the natural consistency check, wherein one chooses B of prescribed dimension, then two d'-dimensional subspaces A and A' containing B, and checks that F[A] and F[A'] agree on B. Their work proves that this test has small soundness in a sense similar to (<ref>). The work <cit.> also strengthened the connection between gap amplification and direct product testing. Namely, they showed an amplification procedure for PCPs using the Johnson direct product tester. Just like parallel repetition though, this procedure incurs a polynomial blow-up in the instance size, and one could hope to use the more efficient Grassmann direct product encoding to improve upon this size blow-up. This turns out to be harder, and the work <cit.> leaves this as an open possibility. To use the Grassmann direct product tester effectively, one must start with a 2-CSP whose constraint graph is compatible with the structure of subspaces, in the sense that a d'-dimensional subspace must contain multiple edges of the initial 2-CSP, otherwise one cannot hope for any gap amplification. §.§.§ Derandomized Parallel Repetition using Subspace-based Direct Product Testers Dinur and Meir <cit.> obtained this compatibility by reducing an arbitrary 2-CSP with soundness 1-Ω(1) to a 2-CSP on the De-Bruijn graph with soundness 1-Ω(1/log n). The benefit of switching to a De-Bruijn graph G is that, if V(G) is denoted by _q^d, then the set of edges E(G) forms a linear subspace of _q^2d. This made G compatible with the Grassmann direct product tester, which then allowed the authors to amplify the soundness of any 2-CSP on G to soundness close to 0. The idea of embedding CSPs on De-Bruijn graphs first appeared in a similar context in <cit.>. Towards this end, these works used the property that De-Bruijn graphs have efficient routing protocols. The work of <cit.> demonstrates that, in some cases, derandomized direct product testing theorems can be used to construct PCPs. Their construction however falls short of getting size efficient PCPs since the Grassmann direct product encoding itself has a polynomial size blow-up. 
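To make the testing paradigm concrete, here is a minimal sketch of the 2-query consistency check underlying the Johnson direct product tester described above; replacing the random k-sets by subspaces or by faces of a complex gives the Grassmann and HDX variants. The table F, the universe size n and the parameter k are illustrative placeholders, and the sketch only spells out the test itself rather than reproducing any construction from the cited works.

```python
import itertools
import math
import random

def johnson_dp_test(F, n, k, trials=1000):
    """Run the 2-query Johnson direct product test on a table F.

    F maps a frozenset A of size k (a subset of range(n)) to a dict
    {element: symbol} giving the claimed restriction of some f to A.
    Returns the fraction of trials in which the two queries agreed on B.
    """
    passes = 0
    b = int(math.isqrt(k))  # |B| = sqrt(k), as in the test described above
    for _ in range(trials):
        B = random.sample(range(n), b)
        rest = [x for x in range(n) if x not in B]
        A = frozenset(B) | frozenset(random.sample(rest, k - b))
        A2 = frozenset(B) | frozenset(random.sample(rest, k - b))
        # Consistency check: F[A] and F[A'] must agree on every element of B.
        if all(F[A][x] == F[A2][x] for x in B):
            passes += 1
    return passes / trials

# A valid encoding of a global function f passes with probability 1.
n, k = 12, 9
f = {x: x % 3 for x in range(n)}
F = {frozenset(A): {x: f[x] for x in A}
     for A in itertools.combinations(range(n), k)}
print(johnson_dp_test(F, n, k))  # prints 1.0
```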
§.§.§ Linear Sized Direct Product Testers from HDX The work <cit.> identified high dimensional expanders as a potential avenue towards constructing size efficient direct product testers. Roughly speaking, high-dimensional expansion is a generalization of the usual notion of spectral expansion in graphs. An HDX is an expander graph G which in addition has a lot of well-connected constant-sized cliques, with X(k) denoting the k-sized cliques in G and X={X(k)}_k=1^d referred to as the clique complex of G. Indeed, their work showed that for any sufficiently good HDX the natural direct product encoding using the set system X(k) (which typically has size O_k(n)) admits a natural 2-query tester with soundness 1-Ω(1). They conjectured that there are high-dimensional expanders that support a direct product tester in the more challenging, 1% regime of soundness. Motivated by this problem and potential applications to PCPs, the works <cit.> studied the soundness of the natural direct product tester on X(k). First, the works <cit.> identified that for the test to have a small soundness, it is necessary that the HDX is a “coboundary expander”, which is a notion of topological expansion. In particular any graph which is locally tree like, such as the one used by Bogdanov <cit.>, is automatically not a coboundary expander, giving a more comprehensive explanation to his counterexample. Secondly, the works <cit.> established that sufficiently strong high-dimensional expansion and coboundary expansion imply that X(k) admits a direct product tester with small soundness. Following this, the works <cit.> constructed sparse HDX, that are also coboundary expanders. Their construction is a variant of the Chapman-Lubotzky complex <cit.> with an appropriate choice of parameters. In particular, they established that: For all δ>0, there is Δ∈ such that for all large enough k,d∈ the following holds. There is an infinite sequence of d-dimensional clique complexes {X_n}_n∈ N with degree Δ such that the set system X_n(k) admits a 2-query direct product test with soundness δ.[We remark that in the works <cit.>, only the case of Boolean alphabets Σ = {0,1} was considered. Their proof however is easy to adapt to any alphabet, and in Section <ref> we explain these adaptations for the sake of completeness.] With Theorem <ref> in hand, one may hope to obtain a derandomized parallel repetition result, just like Dinur and Meir <cit.> turned the tester of <cit.> into a small soundness PCP. Thus, one again wishes to be able to convert a given CSP Ψ into a CSP Ψ' whose underlying graph is compatible with the HDXs from Theorem <ref>. As we elaborate below, this task entails several challenges, and the main contribution of the current work is to perform such a conversion via routing protocols for the almost-everywhere reliable transmission problem. §.§ Almost-Everywhere Reliable Transmission In the almost everywhere reliable transmission problem from <cit.>,[For ease of application, our formalization slightly deviates from the standard one used in the literature.] the goal is to design a sparse graph G = (V,E), that allows transference of messages in a fault tolerant way. More precisely, for any permutation π V→ V and an alphabet Σ, the goal is to design a L-round protocol; at every round of the protocol each node can send messages in Σ to its neighbours in G, and at the end of the protocol the message of v should be transmitted to π(v) for most v. 
This guarantee should hold even if a small fraction of vertices or edges in the graph behave maliciously. The parameters of interest in this problem are as follows: * Work Complexity: The work complexity of a protocol is defined as the maximum computational complexity any node in the graph G incurs throughout the protocol. * Degree: This is the degree of G, which we aim to minimize. * Tolerance: A protocol is considered ((n), ν(n))-vertex (edge) tolerant if, when an adversary corrupts up to (n)-fraction of vertices (edges) of the graph (allowing them to deviate arbitrarily from the protocol), at most ν(n)-fraction of the transmissions from u→π(u) are disrupted. With this in mind, the simplest protocol one can design on is a “pebble-routing protocol” by showing that for any permutation π:V→ V, there exists a set of L-length paths in G from u →π(u) such that at any time step every vertex is used exactly once across all the paths. This protocol has work complexity O(L) and is (, L)-vertex-tolerant for all >0. The work of Dwork et al. <cit.> shows that the constant-degree “butterfly network” has pebble-routing protocols with L=O(log n), thus giving a network that can tolerate 1/log n-fraction of vertex corruptions. All protocols which improve upon this result do so by considering more complicated protocols that use error correction. The first such protocol which could tolerate a linear number of vertex corruptions was given by <cit.>, but it has exp(n) work complexity. In <cit.> a routing network with poly(log n) degree and O(n) work complexity is constructed, and in <cit.> a routing network with O(log n) degree and poly(log n) work complexity. As per tolerance, the last two works show that for all less than some universal constant c>0, their network is (, + O(/log n))-vertex tolerant. §.§ Our Techniques We begin by describing the connection between routing protocols and PCPs. We then discuss our construction of routing protocols over HDX and elaborate on the rest of the proof of <Ref>. §.§.§ The Connection between Fault-Tolerant Routing Protocols and PCPs The works of <cit.> both used the fact that the De-Bruijn graph G on n vertices has a pebble-routing protocol of length O(log n) to transform the constraint graph of any 2-CSP to the De-Bruijn graph. Their argument proceeds as follows. Suppose that we have a CSP instance Ψ over a d-regular graph H on n vertices. First, break the graph H into a union of d disjoint perfect matchings π_1,…,π_d [n]→ [n],[Strictly speaking, we can only guarantee that this is possible to do in the case that H is bipartite, and we show an easy reduction to this case.] thought of as permutations on V(H). For each permutation π_i, we have a pebble-routing protocol _i on the De-Bruijn graph, where a vertex v transmits its supposed label to π_i(v) along the path from v→π_i(v). With this in mind, in the new CSP Ψ' the alphabet of each vertex v consists of the messages that it sent and received throughout each one of the d routing protocols. The constraints of Ψ' over (u,v)∈ E(G) are that for all t, the label sent by u at step t matches the label received by v from u at step t+1. Thus, for each i the vertex π_i(v) will also have as part of their assignment the symbol they supposedly got from v at the end of the routing protocol, and we only allow labels that satisfy the constraint (v,M_i(v)) in Ψ. It is easy to see that if O(/dlog n)-fraction of the edges of Ψ' are violated, at most -fraction of the paths are unsuccessful. 
As d = O(1), this implies that * If val(Ψ) = 1, then val(Ψ') = 1. * If val(Ψ)≤ 1-, then val(Ψ')≤ 1-Ω(/log n). * If the alphabet of Ψ is of constant size, then the alphabet of Ψ' is of polynomial size. Note that any constant degree graph (including the graph underlying X_n) can at best have a pebble-routing protocol of length Θ(log n), which is the source of the log n-factor loss in soundness above. This loss is unaffordable to us since the subsequent gap amplification via the direct-product tester in <Ref> would lead to a huge size blow-up. Fortunately we observe that their argument generalizes to arbitrary routing protocols giving us the following general connection: Suppose G is a regular graph on 2n vertices such that for all permutations π:V(G)→ V(G), G has a (,ν)-edge-tolerant protocol with work complexity W that can be constructed in time (n). Then there is a polynomial time reduction that given any 2-CSP Ψ' on a k-regular graph H with |V(H)|≤ n produces a 2-CSP Ψ on G such that, * If (Ψ')=1 then (Ψ)=1. * If (Ψ')≤ 1-O(ν) then (Ψ)≤ 1-. * If the alphabet of Ψ' is Σ then the alphabet of Ψ is Σ^kW. We can now hope to instantiate the above lemma with routing protocols for HDX that are tolerant to a constant fraction of edge-corruptions to embed a 2-CSP onto an HDX without losing much in the soundness. §.§.§ Challenges in Using Existing Routing Protocols for PCPs With Lemma <ref> in mind, the result of <cit.> achieves the type of parameters we are after. There are two significant differences between their setting and ours, though: * Picking the graph: in the setting of the almost everywhere reliable transmission problem, one is free to design the routing graph G as long as it has the desired parameters. In our case, we need to use the graphs G underlying complexes that supports a direct product tester with low soundness, for which we currently know of only one example – the complexes from Theorem <ref>. * The corruption model: The prior works discussed in <Ref> have mostly considered the vertex corruption model, whereas to apply <Ref> one must design protocols resilient in the more challenging edge corruption model. This model was studied by <cit.> who give a protocol with work complexity is n^Θ(1), which is again unaffordable to us. The main new tool in the current paper is efficient routing protocols on high dimensional expanders in the edge corruption model. Below, we elaborate on the two protocols that we give in this paper. §.§.§ Clique Based Routing Network on HDX A key idea introduced by <cit.> (also used by <cit.>) is that to construct tolerant routing protocols on a graph G, it is beneficial for it to contain many well inter-connected large cliques. In these cliques, messages can be transmitted very quickly, and they can be used for error correction. As the complexes X_n from Theorem <ref> naturally have many well-connected cliques, it stands to reason that they too admit routing protocols with similar performance to the protocols in <cit.>. Indeed, this turns out to be true; see Section <ref> for a formal statement of our clique-based routing scheme. Our analysis of this scheme though requires the complex X_n to have dimension d = Θ((loglog n)^2) and to achieve that, the complexes from Theorem <ref> have size n d^C· d^2 = n2^ poly(loglog n), which falls short of being quasi-linear. We therefore resort to an alternative routing scheme, based on yet another highly connected structure found in HDX – links. 
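As a toy illustration of why cliques are useful for error correction in the routing protocols discussed above, the sketch below transmits a single symbol, held by every vertex of a clique, to one receiver over possibly corrupted edges, and the receiver decodes by majority vote. The clique, the corruption set and all names are hypothetical placeholders chosen for illustration; this is not a description of any of the cited protocols.

```python
import random
from collections import Counter

def send_through_clique(symbol, clique, receiver, corrupted_edges):
    """Toy model: every vertex of the clique holds `symbol` and forwards it
    to `receiver`; a corrupted edge delivers adversarial garbage instead.
    The receiver decodes by majority vote over the received copies."""
    received = []
    for v in clique:
        if (v, receiver) in corrupted_edges or (receiver, v) in corrupted_edges:
            received.append("#garbage#")      # adversarially chosen value
        else:
            received.append(symbol)
    return Counter(received).most_common(1)[0][0]

# As long as fewer than half of the edges into the receiver are corrupted,
# the majority vote recovers the symbol.
clique = list(range(100))
corrupted = {(v, "r") for v in random.sample(clique, 30)}
print(send_through_clique("a", clique, "r", corrupted))   # prints "a"
```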
§.§.§ Link-based Routing Network on HDX Our link-based routing scheme requires milder expansion properties from X, and these properties can be achieved by the complexes from Theorem <ref> with quasi-linear size. More concretely, take a complex X = X_n as in Theorem <ref> with spectral expansion γ = (log n)^-C and dimension d = C for a large constant C>0.[In Theorem <ref> the degree is thought of as a constant, giving an HDX with arbitrarily small (but constant) expansion. In Section <ref>, Yun shows how to modify this construction to achieve better spectral expansion at the expense of having poly-logarithmic degree.] Let G=(X(1),X(2)) be the underlying graph with n vertices and n n edges. We show how to use the graph G for routing, and towards this end we use the following properties of X: * Expansion: the complex X is a d-partite one-sided γ-local spectral expander. We defer the formal definition of high-dimensional expansion to Section <ref>, but for the purposes of this overview it is sufficient to think of this as saying that many graphs associated with X, such as the underlying graph G as well as the underlying graph of each link L_u, are partite graphs with second largest eigenvalue at most γ. Here and throughout, the link of u refers to the complex consisting of all faces of X containing u, where we omit the vertex u from them. * Very well connected links: the links of X are highly connected. By that, we mean that inside every vertex-link L we can set up a collection of constant length paths inside it satisfying: (a) the collection includes a path between almost all pairs of vertices in L, and (b) no edge is overly used. At a high level, the first property implies that links sample the vertices of G very well and the second property allows us to pretend that links behave very closely to cliques. With this in mind, we now present an overview of our link-based routing network. We first consider the simplified case that the graph G is regular; even though our graph G is not regular, it is a helpful case to consider and we later explain how to remove this assumption. Formalizing the setup: Let V=X(1) and fix a permutation π V→ V. In our setup, for each vertex v in X, the vertices in its link L_v hold a common message m_v. If Δ denotes the degree of G, then note that every vertex holds Δ symbols, and for each one of these it knows which link the message is associated with. By the end of the protocol on G, we want that for at least 1-1/ n-fraction of v, the majority of vertices in L_π(v) hold m_v. Setting up external paths: Recall the graph G = (X(1),X(2)). By the spectral expansion of X it follows that the graph G is an expander. At this point we can use the results of <cit.>, which show O(log n)-length pebble-routing protocols for regular, sufficiently good spectral expanders[In reality we use a simple deterministic polynomial time algorithm to find paths with much weaker guarantees than a pebble-routing protocol (see Theorem <ref>). This is because while it is likely that the constructions of <cit.> can be made algorithmic and deterministic, the conclusion of <Ref> suffices for our purposes.]. At each point in the routing protocol each vertex holds multiple symbols (being part of several links), and we stress that it always knows which symbol it is supposed to transmit at each point of the protocol. With this set-up we describe the protocol. At a high level, the goal in the t-th step is to transmit the message from a link L_w to the link L_w' that occur consecutively in a path above. 
Note that it must be the case that the vertices w and w' are neighbours in G, and hence we may consider the link L_w,w'. The analysis in an idealized “clique” setting: to get a sense for the protocol, we first pretend that each one of the links L_w, L_w' and L_w,w' in fact forms a clique. In this case, the t^th step of our protocol begins by the vertices in L_w sending their message to the vertices in L_w,w'. Each vertex in L_w,w' computes the majority value among the symbols they received in this step, and then forwards this symbol to all of the vertices in L_w'. Each vertex in L_w' also computes the majority value among the symbols it receives, at which point the t^th step of the protocol is over. The key idea behind the analysis of this protocol is that vertices compute the correct value they were supposed to so long as they are not over-saturated with corrupted edges. More precisely, let ℰ⊆ X(2) be the set of corrupted edges, which therefore is at most fraction of the edges. We say a 1-link L_w is good if at most √() fraction of the edges in it are corrupted. Let V_w⊆ L_w be the set of vertices for which at most ^1/4-fraction of the edges adjacent to them in L_w are corrupted; note that if L_w is good then by an averaging argument, |V_w|≥ (1-^1/4)|L_w|. We refer to the vertices outside V_w as the doomed vertices of L_w. For a 2-link L_w,w', we say it is good if both L_w,L_w' are good and at most ^1/8-fraction of the vertices in it are doomed with respect to L_w or L_w'. Using spectral arguments it is easy to show that a 1-1/ n-fraction of 1-links and 2-links are good. One can now show that if L_w, L_w' and L_w,w' are all good then the majority value is transmitted from L_w to L_w'. We can conclude the argument by a simple union bound – since at most 1/ n-fraction of 1-links and 2-links are bad and the external paths L_v → L_π(v) are of length O(log n), at most 1/ n-fraction paths contain at least one bad link, therefore all but 1/ n-fraction of the transmissions are successful[Technically, the number of 2-links is asymptotically greater than the number of paths, therefore this union bound does not work. We handle this issue by moving to the zig-zag product of G where the number of paths is equal to the number of 2-links.]. Back to the real “link” setting: Links are not as well connected as cliques, but they turn out to be connected enough to almost make the above protocol go through. Indeed, the “clique” assumption in the above paragraph was mainly used to argue that the transmission from L_w to L_w,w' and from L_w,w' to L_w can each be done cleanly in a single round. While this property is no longer true for links, we use the “very well connected links” property above to circumvent it. At a high level, we simply replace the immediate transmission between a pair of vertices u,v in the above idealized setting, with a short path between u → v inside L_w. Using the fact that edges in L_w are used uniformly across the internal paths in L_w, we can show that very few of these short paths can be corrupted and a similar error analysis for the link to link transfer goes through. Lifting the regularity assumption: Finally we remark that in our actual protocol we use the zig-zag product of <cit.>, a powerful tool from derandomization. It allows us to define a closely related graph Z which is regular and has constant degree. 
This move is compatible with routing: paths in Z have natural correspondence to paths in G, and the uniform distribution over Z corresponds to the stationary distribution over G. Although the routing still takes place on G, moving to Z allows us to deal with the irregularity of G and with other technical issues that come up in implementing the union bound above. We note here that the link-to-link routing protocol works for any simplicial complex with the properties stated above – sufficiently good local spectral expansion of X and the well-connectedness of links of X. In particular, one can pick any other complex X with these properties such as the d-partite LSV complexes <cit.> and the complexes of Kaufman and Oppenheim <cit.>, to get the same tolerance guarantees as above. §.§.§ Composing the Links Based Routing Protocol with Known PCP Constructions Using our link based routing protocol along with <Ref> on the existing PCP results gives a quasi-linear size PCP on the graphs underlying the complexes from Theorem <ref> with soundness bounded away from 1. There exists >0 such that the following holds. Let {X_n'}_n'∈ N be the infinite family of clique complexes from Theorem <ref>. Then for sufficiently large d∈ℕ, there is C>0 and a polynomial time reduction mapping a 3SAT instance ϕ of size n to a CSP Ψ over (X_N(1),X_N(2)), for some d-dimensional X_N, such that: * If ϕ is satisfiable, then Ψ is satisfiable. * If ϕ is unsatisfiable, then val(Ψ)≤ 1-. * The size of the graph underlying Ψ is at most n (log n)^C, the alphabet Σ of it satisfies that log(|Σ|)≤ (log n)^C, and the decision complexity of the constraints (i.e., the circuit complexity of a circuit checking if a given pair of labels satisfies a constraint) is at most (log n)^C. We stress that the key point of Theorem <ref> is that the CSP is over the graph underlying the complex from Theorem <ref>, making it potentially compatible with derandomized parallel repetition. One can take the size efficient PCP construction of Dinur <cit.> as the starting point towards the proof of <Ref>. Interestingly though, we can exploit the fact that the link-to-link protocol is (,1/ n)-tolerant to get an improved version of <Ref> as an unexpected, positive side-effect. In <Ref>, we show that one can embed an arbitrary 2-CSP with soundness 1-1/ n to a 2-CSP on G with soundness 1-Ω(1), where G is the graph underlying any complex that supports a link-to-link protocol. This provides a 1-shot amplification procedure giving an alternative to the step by step approach of Dinur <cit.>. The CSP Ψ however has a large alphabet, so (just like in Dinur's proof) we require an alphabet reduction procedure to bring the alphabet back to constant, while not affecting the soundness or size by much. Thus to prove <Ref> one can also start with a size efficient PCP construction with weaker soundness guarantees, such as that of Ben-Sasson and Sudan <cit.>, and apply <Ref>. §.§.§ Derandomized Parallel Repetition using HDX Given the 2-CSP in <Ref> on the graphs underlying the complexes from Theorem <ref>, we can now use the direct-product testing theorem to amplify the soundness of the 2-CSP to any constant close to 0. To be more specific, suppose that we have a CSP Ψ over the graph underlying the complex X, one can naturally define a label cover instance, Ψ', coming from faces of X as follows. Given a parameter k, the PCP verifier reads symbols from the tables F X(k)→Σ^k and G X(√(k))→Σ^√(k), and performs the following test: * Sample A∼π_k and B⊆ A of size √(k) uniformly. 
* Read F[A] and G[B]. Check that all of the constraints of Ψ inside A are satisfied by the local labeling F[A], and that F[A]|_B = G[B]. The instance Ψ' has asymptotically the same size and alphabet. We show that if the direct product test on X(k) has soundness δ, then given that val(Ψ')≥δ, one can get an assignment to X(1) that is consistent with X(k) and in particular satisfies a large fraction of the edges of Ψ, implying that (Ψ)≥ 1-O(δ). Using <Ref> along with <Ref> thus gives us the following conclusion: For all δ>0 there exists C>0 such that the following holds. There is a polynomial time reduction mapping a 3SAT instance ϕ of size n to a label cover instance Ψ with the following properties: * If ϕ is satisfiable, then Ψ is satisfiable. * If ϕ is unsatisfiable, then val(Ψ)≤δ. * The size of the graph underlying Ψ is at most n (log n)^C, the alphabet Σ of it satisfies that log(|Σ|)≤ (log n)^C, and the decision complexity of the constraints (i.e., the circuit complexity of a circuit checking if a given pair of labels satisfies a constraint) is at most (log n)^C. §.§.§ Applying Alphabet Reduction Using Theorem <ref> and alphabet reduction we conclude the main result of this paper, Theorem <ref>. More specifically, we use the 2-query PCP composition technique of Moshkovitz and Raz <cit.> and its abstraction by Dinur and Harsha <cit.>. To apply the latter result, we use two folklore constructions of decodable PCPs, one based on the Reed-Muller code <cit.> and a similar construction based on the Hadamard code. § PRELIMINARIES In this section we give a few basic preliminaries that will be used throughout the paper. Notations: We use standard big-O notations: we denote A = O(B) or A B if A≤ C· B for some absolute constant C>0. Similarly, we denote A = Ω(B) or A B if A≥ c B for some absolute constant c>0. We also denote k≪ d to denote the fact that d is taken to be sufficiently large compared to any function of k. If A is a finite set and i ≤ |A|, the notation B⊆_i A means that we sample a subset of size i of A uniformly. Given a domain X and a measure μ over it, we denote by L_2(X;μ) the space of real-valued functions over X endowed with the expectation inner product. §.§ Properties of Expanders We need the following well known version of the expander mixing lemma for bipartite graphs. Let G = (U,V,E) be a bipartite graph in which the second singular value of the normalized adjacency matrix is at most λ, and let μ be the stationary distribution over G. Then for all A ⊆ U and B ⊆ V we have that |_(u, v) ∈ E[u ∈ A, v ∈ B] -μ(A)μ(B)| ≤λ√(μ(A)(1-μ(A))μ(B)(1-μ(B))). We also use the following standard sampling property of bipartite expanders. Let G = (U,V,E) be a weighted bipartite graph with second singular value at most λ. Let B ⊆ U be a subset with μ(B)=δ and set T = {v ∈ V ||_u neighbour of v[u ∈ B]-δ| > }. Then [T]≤λ^2δ/^2. §.§ High-Dimensional Expanders A d-dimensional simplicial complex X = (X(0),…,X(d)) with vertex set X(1) = [n] is a downwards closed collection of subsets of [n]. We follow the convention that X(0) = {∅}, and for each i>1 the set of i-faces X(i) is a collection of subsets of X(1) of size i. The size of X is the total number of faces in X. The degree of a vertex v∈ X(1) is the number of faces containing it, and the degree of X is the maximum degree over all v∈ X(1). For a d-dimensional simplicial complex X = (X(0),X(1),…,X(d)), 0≤ i≤ d-2 and I∈ X(i), the link of I is the (d-i)-dimensional complex X_I whose faces are given as X_I(j-i) = {J∖ I | J∈ X(j), J⊇ I}. 
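To make the face and link definitions concrete, the following short sketch builds X(i) from a list of top faces and computes a link according to the definition above. The example complex (all triangles on five vertices) is arbitrary and serves only to spell out the set-theoretic definitions.

```python
from itertools import combinations

def faces(top_faces, i):
    """X(i): all i-element subsets of the given top (maximal) faces."""
    out = set()
    for D in top_faces:
        out.update(frozenset(c) for c in combinations(D, i))
    return out

def link(top_faces, I, j):
    """X_I(j - |I|): the faces J \\ I with J in X(j) and J containing I."""
    I = frozenset(I)
    return {J - I for J in faces(top_faces, j) if I <= J}

# A 3-dimensional complex given by its 3-faces: all triangles on 5 vertices.
top = [frozenset(T) for T in combinations(range(5), 3)]
print(sorted(map(sorted, faces(top, 2))))        # X(2): all 10 edges
print(sorted(map(sorted, link(top, {0}, 2))))    # X_{0}(1): vertices 1..4
```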
For a d-dimensional complex X=(X(0),X(1),…,X(d)) and I∈ X of size at most d-2, the graph underlying the link of I is the graph whose vertices are X_I(1) and whose edges are X_I(2). We associate with X a collection of distributions over faces. The distribution μ_d is the uniform distribution over X(d), and for each i<d the distribution μ_i is a distribution over X(i) which results by picking D∼μ_d, and then taking I⊆ D of size i uniformly. The distribution μ_I,j-i associated to the link of I ∈ X(i) is the conditional distribution, μ_j | J ⊇ I. Density, average and average on links: for a set S ⊆ X(i), let μ_i(S) denote its density with respect to μ_i. For j>i, I∈ X(i) and S ⊆ X_I(j-i) let μ_I,j-i(S) denote the density of S with respect to μ_I,j-i. We often omit the subscript and simply write μ(S) and μ_I(S) in these cases when there is no risk of confusion. We extend the definition for functions, and for F: X(j) → we define μ(F) = _J ∼μ_j[F(J)] as well as μ_I(F) = _J ∼μ_I,j-i[F(J)]. For a set K ∈ X(k) for k ≥ j, we define the restricted function F|_K{J∈ X(j) | J⊆ K}→ℝ by F|_K(J) = F(J); we think of {J∈ X(j) | J⊆ K} as being endowed with the natural condition measure of μ_j on this set, and hence define μ(F|_K) =_J∼μ_j J ⊆_j K[F(J)]. We say a d-dimensional simplicial complex X is a γ one-sided local spectral expander if for every I∈ X of size at most d-2, the second eigenvalue of the normalized adjacency matrix of the graph (X_I(1), X_I(2)) is at most γ. §.§ Properties of Local Spectral Expanders Recall that we associated with each d-dimensional simplicial complex X a sequence of measures {μ_k}_1≤ k≤ d, where μ_k is a probability measure over X(k). Note that for all 0 ≤ t ≤ r ≤ d, a sample according to μ_t can be drawn by first sampling R ∼μ_r, and then sampling T⊆_t R uniformly. The converse is also true: a sample from μ_r can be drawn by first sampling T ∼μ_t, and then sampling R from μ_r conditioned on containing T. These observations give rise to the standard “up” and “down” operators, which we present next. The operator U_i^i+1 is a map from L_2(X(i); μ_i) to L_2(X(i+1); μ_i+1) defined as U_i^i+1f(u) = _v ⊆_i u[f(v)] for all u∈ X(i+1). For j≥ k+1, we define U_k^j via composition of up operators: U_k^j = U_j-1^j ∘…∘ U_k^k+1. The operator D_i^i+1 is a map from L_2(X(i+1); μ_i+1) to L_2(X(i); μ_i) defined as D_i^i+1f(u) = _v ⊇_i+1 u[f(v)] for all u ∈ X(i). For j≥ k+1, we define D_k^j via composition of down operators: D_k^j = D_k^k+1∘…∘ D^j_j-1. Abusing notations, we use the notations U^j_k, D^j_k to denote the operators, as well as the real valued matrices associated with them. A key property of the down and up operators is that they are adjoint: For all k ≤ j ≤ d, U_k^j and D^j_k are adjoint operators: for all functions f X(k)→ℝ and g X(j)→ℝ it holds that U_k^jf,g = f,D^j_kg. We need the following lemma regarding the second eigenvalue of the down-up walks U^j_kD^j_k on X(j) (j ≥ k), that can be found in <cit.>. Roughly speaking, the lemma asserts that for a one-sided spectral expander X, the singular values of the operators U_α i^i and D_α i^i for α∈ (0,1) are upper bounded by the eigenvalues of the corresponding operators in the complete complex, up to an additive factor if poly(i)γ: Let (X, μ) be a d-dimensional γ one-sided local spectral expander. For all i ≤ d and α∈ (1/i, 1), the largest singular value of U^i_α i and D^i_α i is at most √(α)+(i)γ. Thus the down-up random walk U^i_α iD^i_α i on X(i) has second largest singular value at most α + (i)γ. 
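The up and down operators, and the adjointness stated above, can be verified numerically on a small example. The sketch below rebuilds the toy complex of all triangles on five vertices, forms U_i^{i+1} and D_i^{i+1} as matrices with respect to the measures μ_i and μ_{i+1}, and checks that ⟨U_i^{i+1} f, g⟩ = ⟨f, D_i^{i+1} g⟩. It is an illustrative computation under the conventions above, not part of any proof.

```python
import numpy as np
from itertools import combinations

# Top faces of a small 3-dimensional complex: all triangles on 5 vertices.
top = [frozenset(T) for T in combinations(range(5), 3)]

def faces(i):
    return sorted({frozenset(c) for D in top for c in combinations(D, i)},
                  key=sorted)

def mu(i):
    # mu_i: pick a top face uniformly, then a uniform i-subset of it,
    # so mu_i(I) is proportional to the number of top faces containing I.
    F = faces(i)
    w = np.array([sum(f <= D for D in top) for f in F], dtype=float)
    return F, w / w.sum()

def up_down(i):
    """Matrices of U_i^{i+1} and D_i^{i+1} in the bases faces(i), faces(i+1)."""
    Fi, mi = mu(i)
    Fj, mj = mu(i + 1)
    # U averages f over the (i+1) many i-subsets of an (i+1)-face.
    U = np.array([[1.0 / (i + 1) if v <= u else 0.0 for v in Fi] for u in Fj])
    # D averages g over (i+1)-faces containing u, weighted by mu_{i+1}.
    D = np.array([[mj[b] if u <= Fj[b] else 0.0 for b in range(len(Fj))]
                  for u in Fi])
    D /= D.sum(axis=1, keepdims=True)            # condition on containing u
    return U, D, mi, mj

U, D, mi, mj = up_down(1)
f = np.random.randn(len(mi))
g = np.random.randn(len(mj))
# Adjointness: <U f, g>_{mu_2} == <f, D g>_{mu_1}
print(np.allclose((U @ f * g) @ mj, (f * (D @ g)) @ mi))   # True
```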
§.§ Partite Complexes In this section we define partite complexes and state some of their properties. A d-dimensional complex X is said to be d-partite if X(1) can be partitioned into d disjoint parts called “colors”, X(1)=∪_i∈ [d] X_i(1), such that every d-face in X(d) contains one vertex from each color. For a face I ∈ X we let col(I) denote the set of colors of the vertices it contains. For an i-sized set S ⊆ [d], we will use X_S(i) ⊆ X(i) to denote i-faces I for which col(I)=S. The following result is a trickle-down theorem for partite complexes from <cit.>. Let X be a d-partite complex, and let L,R⊆[d] be two disjoint color classes. We define the bipartite weighted graph (X_L(|L|), X_R(|R|)) to be the graph where the weight of an edge (u,v) is equal to w(u,v)=_x∼μ_d[x_L = u, x_R = v]. Let X be a d-partite simplicial complex, and suppose that for all v ∈ X(1) the graph underlying X_v is a λ-one sided (d-1)-partite expander, for λ < 1/2. Suppose that the underlying graph of X is connected. Then for every i≠ j, the bipartite graph between X_{i}(1) and X_{j}(1) is a λ/1-λ-bipartite expander. For color sets L and R that are larger than 1, the following lemma bounds the eigenvalues the graph (X_L(|L|), X_R(|R|)) provided bounds on the second eigenvalue of associated graphs on links of X. Let X be a d-partite complex. Suppose that for each link L of X and for any two colors i,j∉col(L), we have that the bipartite graph (L_i(1),L_j(1)) has second largest eigenvalue at most λ. Then, for any pair of disjoint sets L,R ⊆ [d], the bipartite graph (X_L(|L|), X_R(|R|)) has second largest eigenvalue at most (d)λ. §.§ Variants of the Chapman-Lubotzky Complex In this section we discuss variants of the Chapman and Lubotzky complex <cit.> and some of their properties. In their paper, Chapman and Lubotzky <cit.> construct an infinite family of complexes that are 2-dimensional coboundary expanders over _2. In the works <cit.> the authors extend this construction, and show that for all m∈ℕ, one can construct variants of the Chapman and Lubotzky complexes that are 2-dimensional coboundary expanders over S_m. They used this fact to prove that the natural 2-query direct product tester has small soundness δ (that can be taken to be arbitrarily close to 0 so long as one takes m large enough), as defined formally below. Given a supposed encoding F X(k)→Σ^k, the direct product tester associated with X proceeds as follows: * Sample D∼π_d. * Sample B⊆ D of size √(k) uniformly. * Independently and uniformly sample k-faces A, A' satisfying B⊆ A,A'⊆ D. * Check that F[A]|_B = F[A']|_B. We say that the (k,√(k))-direct product test on X has soundness δ if the following holds. Let F:X(k)→Σ^k be any function that passes the (k,√(k))-direct-product test above. Then there exists a function f:X(1)→Σ such that, _A∼ X(k)[Δ(F[A],f|_A)≤δ] ≥(δ). In the lemma below, we state the properties that we need from the complexes of <cit.>. This includes the aforementioned fact about direct product testing and the fact that these complexes are polynomial-time constructible, which was established by Dikstein, Dinur and Lubotzky <cit.>. We also need some other features that are easier to conclude. For all δ∈ (0,1) there exists ℓ∈ℕ such that the following holds for all C>0. For large enough k,d∈ and for large enough n ∈, one can construct in time poly(n) a d-dimensional complex X for which n ≤ |X(1)|≤ O_ℓ,d(n) such that for any prime q =Θ(log^C n) the following holds: * The complex X is d-partite. 
* Every vertex participates in at most q^O(d^2) d-cliques. * For every vertex link L of X there is a group (L) that acts transitively on the d-faces L(d). * For links L≠ L_∅ and any i≠ j, the bipartite graph (L_i(1),L_j(1)) is uniformly weighted and has diameter O(d/|i-j|). * For all links L of X, every bipartite graph (L_i(1),L_j(1)) for i≠ j ∈ [d] ∖col(L) has second largest eigenvalue at most 2/√(q). * The second largest singular value of G=(X(1),X(2)) is at most 1/d+2/√(q)≤2/d. * The (k,√(k))-direct product test on X has soundness δ. First fix the parameters δ,C, d, n and a prime q=Θ(log^C n). To construct a complex with Θ(n) vertices and parameter q, we follow the presentation in <cit.>. Therein the authors pick a pro-ℓ subgroup H_0 ⊆ SU_g(H^ℓ(ℚ_ℓ)) and define the lattice Γ_0 in Section 5.1.3. Then using Fact 5.12 and Claim 2.31 they show that for some finite i, one can pick a normal subgroup H_i of H_0 with [H_0:H_i]=ℓ^i, for which the action of the corresponding lattice Γ_i does not merge vertices at distance at most 4. That is, for all 1≠γ∈Γ_i, γ and v in C_d, dist(γ,γ v) ≥ 4. Then to increase the size of the vertex set, they choose j which is large enough and output the complex Γ_j ∖C_d so that the number of vertices is between n and ℓ n. Since apriori the size of Γ_i ∖C_d could already be much larger than n, we instead use an explicit choice of subgroups as constructed in <Ref>, with H(1) replacing H_0, and H(i) in <Ref> replacing H_i from above. Here we used the fact that H(i)'s form a filtration of H(1), such that for all k≥ 1, H(k) is a pro-ℓ subgroup and for all k≥ 2, H(k) is an open compact normal subgroup of H(k-1). Let X_i denote the complex Γ_i ∖C_d where Γ_i is the lattice corresponding to H_i. <Ref> gives us that there is an i for which the compact open subgroup H_i⊂ G(ℚ_ℓ) satisfies that for any element 1≠γ∈Γ_i, and any vertex v∈C_d(0), dist(v,γ v)≥ 4. And moreover, |X_i(d)|≤ C_ℓ, dq^O(d^2) for some constant C_ℓ, d depending only on ℓ and d, implying that |X_i(0)|≤ n. To construct a complex X with Θ(n) vertices we can set X=X_j for some j which is large enough so that X_j(0) has size between n and n·ℓ^O(d^2). This is possible since [H_i' : H_i'+1] = ℓ^O(d^2) for all i', by the properties of the construction in <Ref>, therefore increasing i by 1 increases the size of X_i by a factor of at most ℓ^O(d^2). Since the construction of H(1) and H(i) in <Ref> is explicit, the time to construct Γ_i only depends on ℓ,d (which are constants). Therefore the construction of Γ_j and the final complex X is in (n) time. X is d-partite: For the first item, we note that the complex X is a quotient of the affine spherical building C̃_d with some group Γ. The affine building C̃_d is a d-partite complex and the symplectic group over _q, SP(2d, _q), acts transitively on the top faces of C̃_d. The complexes constructed in <cit.> are quotients of C̃_d by subgroups of SP(2d, _q), and in particular it follows that all of these quotients are also d-partite. Properties of vertex-links: The next three items follow because the link of each vertex in v is a product of two spherical buildings of dimension at most d over 𝔽_q (see <cit.> and <cit.>). These are well-known properties of spherical buildings and can be found in both the papers. Local spectral expansion of X: For the fifth item, we first note that the statement holds for every non-empty link L of X since these are tensor products of spherical buildings and the eigenvalue computations for them are well-known (see <cit.> or  <cit.>). 
Using this we get that the underlying graph of each vertex link L of X is a 1/√(q)-one-sided d-partite expander. The conclusion for the empty-link now follows by an application of the trickling down result for partite complexes, <Ref>. Second largest singular value of G: For the sixth item, write X(0) = V_1∪…∪ V_d, let M be the normalized adjacency operator of X, and let f X(0)→ℝ be a function with [f] = 0 and [f^2] = 1. As X is d-partite, we have that μ_1(V_i) = 1/d for all i, and we denote a_i = _v∼μ_1[f(v) | v∈ V_i]. ⟨ f, Mf ⟩ =_i,j∈ [d] i≠ j[_(u,v)∼ M_i,j[f(u)f(v)]] = _i,j∈ [d] i≠ j[_(u,v)∼ M_i,j[(f(u)-a_i)(f(v)-a_j)]] +_i,j∈ [d], i≠ j[a_ia_j], where M_i,j is the normalized adjacency operator of the bipartite graph G_i,j. Using the fourth item we have | _i,j∈ [d] i≠ j[_(u,v)∼ M_i,j[(f(u)-a_i)(f(v)-a_j)]] | ≤2/√(q)_i,j∈ [d] i≠ j√(_u∈ V_i(f(u)-a_i)^2)√(_v∈ V_j(f(v)-a_j)^2) ≤2/√(q)_i_u∈ V_i(f(u)-a_i)^2 ≤2/√(q)_i_u∈ V_i[f(u)^2] =1, where the second transition is by Cauchy Schwarz. Next, we have 0 = _i,j[a_ia_j] = 1/d_i[a_i^2] +_i,j∈ [d], i≠ j[a_ia_j], so _i,j∈ [d], i≠ j[a_ia_j] = -1/d_i[a_i^2]. Finally, _i[a_i^2] =∑_iμ(V_i)_v[f(v) | v∈ V_i]^2 ≤∑_iμ(V_i)_v[f(v)^2 | v∈ V_i] =[f^2]=1, so overall |_i,j∈ [d], i≠ j[a_ia_j]|≤1/d. Plugging this and (<ref>) into (<ref>) finishes the proof of the sixth item. Soundness of the Direct-Product Test: The seventh item follows from <cit.> as well as from <cit.>. The works <cit.> established the statement above as is, whereas the proof of <cit.> established the statement for the alphabet Σ = {0,1}. For the sake of completeness, in Section <ref> we explain how to adapt the argument from the latter papers to the case of general alphabets Σ. § BACKGROUND ON ROUTING PROTOCOLS In this section we discuss routing protocols, the pebble routing problem and a relaxation of it that is sufficient for us. §.§ Pebble Routing Protocols We start the discussion by formally defining routing/communication protocols on a graph G, as well as defining other related notions. Given a graph G, an r-round routing protocol on G is a set of rules where at each round vertices can send and receive messages to and from their neighbours in G. To decide what messages to send forward, each vertex is allowed to perform an arbitrary computation on all of its received messages. The work complexity[We note that the notion of work complexity used in prior work of <cit.> is slightly different from ours.] of is the total computation that any vertex performs throughout the protocol, as measured by the circuit size for the equivalent Boolean functions that the vertex computes. At round 0, the protocol starts out with an arbitrary function f:V(G)→Σ on the vertices of G, and after is implemented, at the final round r, we end up with a function g:V(G)→Σ on the vertices, also referred to as the function computed by the protocol . We now formally define the pebble routing problem, first studied in <cit.>. We say that a graph G has an r-round pebble-routing protocol, denoted by (G) = r, if the following holds. For all permutations π: V(G) → V(G) there is an r-round communication protocol on G such that in each round every vertex receives exactly one message symbol in Σ, and then sends this message forward to exactly one of its neighbors. If the protocol starts with f:V(G)→Σ, then at the r^th-round we have the function g:V(G)→Σ on the vertices satisfying g(π(u))=f(u). 
Note that this protocol can also be thought of as a set of |V(G)| paths, each one transmitting a message from u →π(u), where at each round any vertex is involved in exactly one path. We encourage the reader to think of pebble routing in this way. We remark that any pebble-routing protocol has work complexity at most rlog|Σ|.[We are accounting for the input length at a vertex in its computation cost.] In a sense, pebble routing is the simplest possible protocol, where no computation is performed by any vertex as it only forwards its received message. The works <cit.> study the pebble routing problem and prove that a sufficiently good expander graph admits an O(log n)-length protocol. For our purposes, it suffices to consider a relaxation of the pebble routing problem, in which the protocol is required to satisfy that g(π(u))=f(u) for all but o(1) fraction of the vertices. Also, we relax the condition that at each round, each vertex is used once. Instead, we only require that throughout the protocol, each vertex is a part of at most poly(log n) paths. Formally, we need the following result: There exists a universal constant α>0 such that the following holds. Let G = (V, E) be a regular, expander graph with second largest singular value σ_2(G) ≤α. Let c ≥ 0 be a fixed constant and π: V → V be a permutation. Then there is a (|E|)-time algorithm to construct a protocol with O(log n)-length paths from u to π(u) for all but O(1/log^c(n))-fraction of u. Furthermore, every vertex in V is used in at most t = O(log^c+1 n) paths in . We note that <cit.> is the case of c=0 in the above theorem. The proof for the case of c>0 is an easy generalization of the argument therein, and we give the formal argument in Section <ref> for the sake of completeness. §.§ Routing Protocols under Adversarial Corruption We now formally introduce the notion of almost-everywhere (a.e.) reliable transmission in the presence of adversarial corruptions. The goal in this problem is to design sparse networks and routing protocols for them, which allow a large fraction of honest nodes to communicate reliably even under adversarial corruptions of the communication network. Our setting is slightly different from most prior works in two ways. First, we need to consider a more general model of adversarial corruptions, where an arbitrary constant fraction of the edges may behave maliciously (as opposed to vertices being corrupted). Second, we are given a permutation π on V, and we need to design a routing protocol that succeeds in transmitting all but a small fraction of messages from u →π(u) correctly. Formally: We say an edge (u,v) ∈ G is uncorrupted if whenever u transfers a message σ across (u,v), then v receives σ; otherwise, we say the edge (u,v) is corrupted. We say that a graph G has a ((n),ν(n))-edge-tolerant routing protocol with work complexity w(n) and round complexity r(n) if the following holds. Let π: [n] → [n] be any permutation of V(G). Then there is an r-round communication protocol on G such that for all functions f:V(G)→Σ, after running the protocol for r-rounds each vertex v computes a value g(v)∈Σ with the following guarantee: any adversary that corrupts at most (n)-fraction of the edges, _u ∈ V(G)[f(u) ≠ g(π(u))] ≤ν(n). As in <Ref>, the work complexity w(n) of the protocol is the maximum computation that any node performs throughout the protocol, as measured by circuit size. 
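To make the edge-corruption model concrete, the toy sketch below evaluates the naive forward-along-a-path protocol: every message travels along a fixed path from u to π(u), the transfer fails as soon as it crosses a corrupted edge, and we report the disrupted fraction. The graph, the paths (shortest paths rather than a genuine pebble-routing schedule) and the parameter values are hypothetical choices made only for illustration.

```python
import random
from collections import deque

def bfs_path(adj, s, t):
    """Shortest path from s to t in an adjacency-list graph (None if absent)."""
    prev, queue = {s: None}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def disrupted_fraction(adj, pi, corrupted):
    """Fraction of transmissions u -> pi(u) whose path hits a corrupted edge;
    in this naive protocol a transfer fails iff some edge on its path fails."""
    fail = 0
    for u in adj:
        path = bfs_path(adj, u, pi[u])
        edges = zip(path, path[1:])
        fail += any((a, b) in corrupted or (b, a) in corrupted for a, b in edges)
    return fail / len(adj)

# Toy example: a cycle with antipodal chords, a random permutation,
# and 5% of the edges corrupted by the adversary.
n = 200
adj = {u: [(u - 1) % n, (u + 1) % n, (u + n // 2) % n] for u in range(n)}
pi = dict(zip(range(n), random.sample(range(n), n)))
all_edges = [(u, v) for u in adj for v in adj[u] if u < v]
corrupted = set(random.sample(all_edges, len(all_edges) // 20))
print(disrupted_fraction(adj, pi, corrupted))
```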
Recall that if G is a constant degree graph which admits a pebble routing protocol of length at most r, then G admits a simple routing protocol that is (,r)-tolerant under edge corruptions. This simple connection though is insufficient for us: on a constant degree graph, the best we can hope for is (G) = Θ(log n), which necessitates taking ≤Θ(1/log n). Using such a protocol in conjunction with Lemma <ref> only leads to a 2-CSP on G with soundness 1-Θ(1/log n), which is too weak for our purposes. In the next section we will construct more involved protocols on the graphs underlying sufficiently good HDX, that are tolerant to a constant fraction of edge corruptions and have poly-logarithmic work complexity. Our main idea is to perform a message transfer on highly connected dense subgraphs in G, which naturally makes the protocol error-resilient, similar to the protocols of <cit.>. § LINK TO LINK ROUTING ON CL-COMPLEXES The goal of this section is to present a routing protocol over graphs underlying high-dimensional expanders. We begin by giving a high-level overview of the idea in Section <ref>, followed by a detailed description of the protocol and its analysis. §.§ High-level Overview Throughout this section, we denote by _ν(σ_1,…,σ_k) the value σ such that σ_i = σ or at least ν fraction of i's if such value exists, and ⊥ otherwise. Typically ν will be close to 1 (say, ν = 0.99). Recall that ultimately, we want to use our routing protocol to embed a regular 2-CSP instance Ψ' on the graph G. Since the instance Ψ' is over a regular graph and the graph G is not, there is some incompatibility between the two. In particular, it doesn't make much sense to identify vertices of Ψ' with vertices of G in an arbitrary way. To circumvent this issue we use the zig-zag product: we choose an appropriate family of expander graphs ℋ and take the zig-zag product graph Z = Gℋ. Thus, we get a graph Z which is a regular expander graph, and additionally there is a natural correspondence between vertices in Z and in G. Indeed, a vertex v in Z is naturally composed of a cloud-name, which we denote by v_1 and is actually a name of a vertex from G, and additionally an inner index that we denote by v_2 (corresponding to an edge of v_1∈ G). We associate a vertex of the 2-CSP Ψ' with a vertex of v∈ V(Z) (this makes sense now as these two graphs are regular), and this in turn is associated to the vertex v_1 in G. As discussed in <Ref>, we break Ψ into k permutations π_1,…, π_k each one thought of as a permutation on V(Z), and our goal is to transfer the value of v to π(v) for some fixed permutation π from {π_1,…,π_k}. We will instead think of the value of v, as held by all the vertices in the link of v_1, denoted by L_v_1, and this value needs to be transferred to the link L_π(v)_1. Once we have this kind of transfer, in <Ref> we show how to reduce Ψ to a 2-CSP on G. Formally, the initial function A_0 that has to be routed will be on the links associated to each v∈ V(Z). First define the set ={(v,u):v∈ V(Z),u∈ X_v_1(1)}. Then A_0:→Σ should be thought of as a function that for every v∈ V(Z), assigns the link X_v_1(1) values that are mostly the same across vertices of the link, in the sense that there exists σ_v ∈Σ such that _0.99(A_0(v,u)|u∈ X_v_1(1))=σ_v. For the overview, it is helpful to think of all the vertices in L_v_1 holding the same value, σ_v[We need to start with the weaker condition of almost all vertices holding σ_v to make the embedding result in <Ref> go through. 
Our proof easily ports over to this more general setting, so it is useful to think of all vertices in L_v_1 holding σ_v for the overview.]. Note that every value A_0(v,u) is held at the vertex u∈ G (v is a label for this value) and every vertex in G holds multiple such values. Even though the actual routing takes place on G, it is convenient to keep the graph Z in mind for the analysis, think of A_0 as a function on the links X_v_1 (for all v∈ Z) and the protocol as a link to link transfer. After the routing protocol ends, we will have a function A_T:→Σ. Our main result in this section, <Ref>, shows that the majority value on the link X_v_1 gets transferred to the link X_π(v)_1 for most v∈ Z. With this setup in mind, we first find a collection of paths on Z, each path being from v →π(v), using the relaxed pebble routing protocol in <Ref>. This is possible since Z is a regular expander graph. We now use these paths to implement the link to link transfer. Transmitting on Links: Each path P= u_1 →…→ u_T in , for T = O(log n) and u_T=π(u_1), can equivalently be thought of as a path over vertex-links: L_u_1→…→ L_(u_T)_1. Pretending for a moment that each link is in fact a clique, the protocol proceeds as follows: vertices of L_u_1 send their message to L_u_1,u_2; the vertices in L_u_1,u_2 each compute a majority value and pass it on to the vertices in L_u_2, and so on. We show that for any adversarial strategy, this protocol succeeds in transmitting the correct message on almost all of the paths in . The key here is that vertices are allowed to take majority values, so as long as they are not over-saturated with corrupted edges, they will compute the correct value. Returning to our actual scenario, the vertex links in X do not actually form cliques and so we cannot use the protocol as described above. To remedy this situation, we show that for each vertex link L in X we can set up a collection of short paths _L such that for almost all vertex pairs u,v∈ L, the collection _L contains a short path between u and v. Furthermore, no edge is used in the paths _L too often. The collection of paths _L allows us to pretend that the link L almost forms a clique, in the sense that we transmit a message between pairs u,v∈ L using the path from _L between them (as opposed to directly as in the case of cliques). Gap amplification: Finally we remark that one additional benefit of having A_0 on links is that only o(1)-fraction of the message transfers from L_v_1→ L_π(v)_1 are unsuccessful. We show that our protocol is (,1/ n)-tolerant, a guarantee which is impossible if the initial function was on V(G). Therefore associating the vertices of Ψ' to links, as we show in Section <ref>, translates to a gap amplification result for PCPs. More precisely, if we start with a 2-CSP Ψ' such that (Ψ')≤ 1-1/ n, then we get a 2-CSP Ψ on G with (Ψ)≤ 1-Ω(1), where Ψ is obtained using our link to link routing protocol. This gives the same amplification as achieved in the gap amplification procedure of Dinur <cit.>, but the alphabet size is exp( n)). This can be brought down to constant-sized alphabet by incurring a size blow-up by a factor n using the alphabet reduction technique (which we anyway have to do, see Section <ref>). Throughout this section we fix X a complex as in Theorem <ref> with the parameters δ∈ (0,1) chosen arbitrarily,[The parameter δ dictates the soundness of our final PCP. The results in this section and the next hold for all δ.] 
q=(log n)^C for sufficiently large constant C>0, d a large constant, and |X(1)|=n. We also fix the graph G = (X(1),X(2)). §.§ Routing Inside a Link In this section we describe a routing procedure inside individual links. This routing procedure will help us to facilitate the intuition that links in X are almost as well connected as cliques, supporting the approach presented in Section <ref>. We now show that inside each vertex link L, we can construct (in polynomial time) a set of short paths between almost all pairs of vertices in L. More precisely, we construct a collection of paths between all pairs of vertices U ∈ L_i(1), V ∈ L_j(1) for two indices i,j that are far apart. Our algorithm will give up on a small fraction of such pairs; we refer to the path between them as “invalid” and denote P(u,v)=⊥. Let >0 and fix a pair of indices i,j ∈ [d-1] with |i-j| ≥ (d-1). Then there is a (|L_ij(2)|)-time algorithm that constructs a set of paths _ij={P(u,v)}_u∈ L_i(1),v∈ L_j(1), where each valid path is of length O(1/) and at most O() fraction of paths are invalid. Furthermore, each edge in L_ij(2) is used in at most O(|L_i(1)||L_j(1)|/^3|L_ij(2)|) paths in _ij. Consider any pair of indices i,j ∈ [d] with |j-i| ≥ (d-1). Fix ℓ=Θ(1/) and t=Θ(|L_i(1)||L_j(1)|/^3|L_ij(2)|). By the fourth item in Theorem <ref> the diameter of (L_i(1),L_j(1)) is at most O(1/). The algorithm to construct the paths simply picks the shortest paths between a pair of vertices iteratively, and deletes any edges that has been used at least t-times. Note that since paths are of length ≤ℓ, in an ideal scenario where every edge occurrs equally often, each edge would belong to O(|L_i(1)||L_j(1)|/|L_ij(2)|) paths. We take t to be larger so as to allow some slack, which still gives us the uniformity over edges as in the statement of the lemma. We now proceed to the formal argument. For a set of edges ⊆ L_ij(2), let L_ij() denote the bipartite graph with vertices (L_i(1),L_j(1)) and the set of edges . Formally our algorithm is as follows: * Instantiate _ij=∅, =L_ij(2). * For every u∈ L_i(1), v∈ L_j(1) do the following: * Find the shortest path P(u,v) between u and v in the graph L_ij(). If the length of P(u,v) is at most ℓ then add it to the set _ij, else set P(u,v)=⊥. * If any edge e in L_ij(2) has been used in at least t paths in _ij then remove it from . It is easy to see that the above algorithm runs in polynomial time in |L_ij(2)| and that every edge in L_ij(2) is used in at most t paths in _ij. It remains to show that the algorithm finds a path of length at most ℓ between almost all pairs of vertices. Let _f denote the set when the algorithm terminates and _0=L_ij(2) denote the set at the start. It is easy to check that the number of edges that the algorithm removes from _0 is at most |L_i||L_j|ℓ· 1/t = Θ(^2|L_ij(2)|), implying that _e∼ L_ij(2)[e∉_f]≤^2. Since the diameter of (L_i(1),L_j(1)) is at most ℓ, for every u ∈ L_i, v∈ L_j we may fix an arbitrary shortest path P_u,v (length ≤ℓ) between them. By the third item in Theorem <ref> there is a group of symmetries (L) that acts transitively on the top faces of L. Thus, we may consider the path g ∘ P_u,v for any g ∈(L), with goes between g∘ u and g∘ v. We consider the distribution over paths, obtained as sample u ∼ L_i, v ∼ L_j, g ∼(L), and output the path P= g ∘ P_u,v. Note that a random edge drawn from a random path P ∼, is a uniformly random edge in L_ij(2) due to the transitive symmetry of the group (L). Therefore, _P ∼ e∼ P [1_e∉_f] ^2. 
Since the marginal of the starting and ending point of P ∼ is the uniform distribution over u ∼ L_i,v ∼ L_j and the length of the path is O(1/), we can rearrange the left hand side above and apply a union bound to get _u∼ L_i,v∼ L_j[_P ∼|u,v[1_∃ e ∈ P, e ∉_f]]ℓ^2 . Thus, by an averaging argument for at least a (1-O())-fraction of the pairs u,v, _P ∼|u,v[1_∃ e ∈ P, e ∉_f] ≤ 1/2. This means that when the algorithm terminates, for at least (1-O()) fraction of vertex pairs u,v there is at least one path of length at most ℓ between them. We argue that for each such pair u,v, the collection _i,j already contains a path between u and v upon the termination of the algorithm, since we have only deleted edges. We now use the short paths between all pairs of vertices in L to argue that an adversary that corrupts a small fraction of the edges can only corrupt a small fraction of the paths in . Recall that some of the paths in might be invalid, and to simplify notation we account these as paths that are corrupted by default. Thus, we say that a path in is corrupted if it equals ⊥ or if at least one of the edges in it is corrupted. Fix >0 and a vertex link L of X. Then there is a (|L|)-time algorithm to construct a set of paths ={P_U,V}_U≠ V ∈ L in which each valid path has length at most O(1/^1/8). Furthermore, any adversary that corrupts at most -fraction of the edges in L, corrupts at most O(^1/8)-fraction of the paths in . For every pair of indices i,j ∈ [d-1] with |j-i| ≥^1/8(d-1), run the polynomial time algorithm in <Ref> with the parameter ^1/8, to get a set of paths _ij of length O(1/^1/8) each, between all pairs u∈ L_i(1), v∈ L_j(1) such that at most O(^1/8)-fraction of the paths are invalid and every edge in L_ij(2) is used in at most O(|L_i(1)||L_j(1)|/^3/8|L_ij(2)|) paths. For every pair i,j∈ [d-1] with |j-i|<^1/8d, set _ij={P(u,v)}_u∈ L_i(1),v∈ L_j(1) with P(u,v)=⊥. Fix any adversary that corrupts at most -fraction of the edges in L, and let denote the set of corrupted edges. Let the fraction of corrupted edges inside L_ij(2) be denoted by μ_ij(). Let denote the set of indices i,j ∈ [d-1] that satisfy, |j-i| ≥^1/8(d-1) and μ_ij() ≤√(). By Markov's inequality it follows that the a random pair i,j∼ [d] is in with probability at least 1-^1/8-√()≥ 1-O(^1/8). Fix any pair (i,j) ∈ and consider the set of paths _ij. Recall that say that a path P(u,v) is corrupted if either P(u,v)=⊥ or any of its edges is corrupted. Since at most √()|L_ij(2)| edges are corrupted and each such edge is used in at most O(|L_i(1)||L_j(1)|/^3/8|L_ij(2)|) paths, we get that the number of corrupted paths in _ij is at most √()|L_ij(2)| ·Θ(|L_i(1)||L_j(1)|/^3/8|L_ij(2)|)^1/8|L_i(1)||L_j(1)|. It follows from the union bound that the fraction of corrupted paths is at most _i,j∼ [d][(i,j)∉]+_i,j∼ [d] u∼ L_i(1) v∼ L_j(1)[P(u,v) is corrupted| (i,j)∈] ^1/8. §.§ Moving to the Zig-Zag Product of G As discussed earlier in Section <ref>, the fact that graph G is not regular introduces several technical difficulties; this prohibits us from using Theorem <ref> directly, and it also poses difficulties later on when we use routing schemes to embed a regular 2-CSP instance onto G. In this section we circumvent these issues by considering the zig-zag product of G with appropriate expanders. The resulting graph Z will be a virtual, regular graph; by virtual, we mean that the graph Z will help us in choosing appropriate paths in G (for the routing) and in the analysis. The routing itself is still done on the graph G. 
For a set S ∈ X(s), let Δ_S=|X_S(d-s)|. Henceforth we will think of G as an undirected graph without weights, and for that we replace each edge (u,v) with Δ_{uv} parallel edges. Since the probability of drawing (u,v) from μ_2 is Δ_{u,v}/(\binom{d}{2}|X(d)|), this gives us the same random walk matrix over X(1). We use the zig-zag product to get a regular graph Z from G. The additional benefit of this operation is that Z is a constant-degree graph while also being an expander. The zig-zag product was first defined in <cit.> and is typically stated for regular graphs, but below we state a similar construction for irregular graphs G. We follow the exposition from the lecture notes <cit.>, except that we use the extension to irregular graphs. §.§.§ The Replacement Product and the Zig-Zag Product The following fact is well-known. For all σ>0, there exist k,m_0∈ℕ and a family of k-regular graphs ℋ={H_m}_m ≥ m_0 which is polynomial-time constructible, where for each m≥ m_0 the graph H_m has m vertices and σ_2(H_m)≤σ. For σ to be determined later, fix the family ℋ of expander graphs as in Lemma <ref>, and fix an undirected (possibly irregular) graph G with minimum degree at least m_0. Our presentation of the zig-zag product follows <cit.> almost verbatim, and it is convenient to first define the replacement product of G with ℋ, denoted by G ⓡ ℋ. Assume that for each vertex u of G, there is some ordering on its deg(u) neighbors. Then the replacement product G ⓡ ℋ is constructed as follows: * Replace a vertex u of G with a copy of H_deg(u), that is, the graph from ℋ on deg(u) many vertices (henceforth called a cloud). For u ∈ V(G), c ∈ V(H_deg(u)), let (u,c) denote the c^th vertex in the cloud of u. * Let (u, v) ∈ E(G) be such that v is the c_1^th neighbor of u and u is the c_2^th neighbor of v. Then ((u,c_1), (v,c_2)) ∈ E(G ⓡ ℋ). Also ∀ u ∈ V(G), if (c_1, c_2) ∈ E(H_deg(u)), then ((u, c_1), (u, c_2)) ∈ E(G ⓡ ℋ). Note that the replacement product constructed as above has 2|E(G)| vertices and is (k+1)-regular. The zig-zag product G ⓩ ℋ is constructed as follows: * The vertex set V(G ⓩ ℋ) is the same as that of G ⓡ ℋ. * ((u, c_1), (v, c_4)) ∈ E(G ⓩ ℋ) if there exist c_2 and c_3 such that ((u, c_1), (u, c_2)), ((u, c_2), (v, c_3)) and ((v, c_3), (v, c_4)) are edges in E(G ⓡ ℋ), i.e. (v, c_4) can be reached from (u, c_1) by taking a step in the cloud of u to go to (u,c_2), then a step between the clouds of u and v to go to (v,c_3), and finally a step in the cloud of v to reach (v,c_4). It is easy to see that the zig-zag product is a k^2-regular graph on 2|E(G)| vertices; a schematic construction is sketched below. Given that G is an expander, the zig-zag product graph G ⓩ ℋ is also an expander. The proof when G is a regular graph can be found in <cit.>. The proof for the irregular case above is exactly the same, hence we omit it here. If G is a graph on n vertices with σ_2(G)≤α, and ℋ={H_m}_m ≥ m_0 is a family of k-regular graphs with σ_2(H_m) ≤β for all m ≥ m_0, then G ⓩ ℋ is a k^2-regular graph with second largest singular value at most α + β + β^2. Along with the above statement, we will use the following obvious but important fact: For G=(X(1),X(2)) and Z=G ⓩ ℋ, the distribution that samples a uniformly random vertex (v,c) of Z and outputs v is equal to the distribution v ∼ X(1). Similarly, the distribution that samples a uniformly random edge ((u,c_1),(v,c_2)) of Z and outputs (u,v) is equal to the distribution over edges (u,v)∼ X(2). §.§ The Routing Protocol using Links Fix α>0 from Theorem <ref>. Consider the graph G=(X(1),X(2)), viewed as an undirected and unweighted graph with multi-edges.
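The construction of Z = G ⓩ ℋ described above is purely combinatorial, and before specifying the protocol we record a schematic sketch of it. The sketch assumes, only for illustration, that G is given in a rotation-map format (rot[(u, i)] = (v, j) iff the i-th edge at u is the j-th edge at v) and that the expander family ℋ is given as a dictionary of adjacency lists; these input conventions and names are ours, not part of the formal definition.

def zigzag_product(deg, rot, H):
    # Schematic zig-zag product of an (irregular) graph G with a family H of k-regular expanders.
    #   deg[u] : degree of vertex u in G (counting parallel edges),
    #   rot    : rotation map of G, rot[(u, i)] = (v, j) iff the i-th edge of u is the j-th edge of v,
    #   H[m]   : adjacency list of the k-regular expander on m vertices (H[m][c] is a list of k neighbours).
    # Returns the adjacency lists of the k^2-regular graph Z on the vertex set {(u, c) : c in range(deg[u])}.
    Z = {(u, c): [] for u in deg for c in range(deg[u])}
    for (u, c1) in Z:
        for c2 in H[deg[u]][c1]:          # "zig": a step inside the cloud of u
            v, c3 = rot[(u, c2)]          # the deterministic step between the clouds of u and v
            for c4 in H[deg[v]][c3]:      # "zag": a step inside the cloud of v
                Z[(u, c1)].append((v, c4))
    return Z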
Let ℋ be the family of expanders from <Ref> with σ_2(ℋ)≤α/2 and Z=G ⓩ ℋ. For every vertex v∈ Z we will let v_1 denote its component in X(1) and v_2 denote its position in the cloud of v_1, that is, v=(v_1,v_2). We have the following link-to-link transfer lemma: There exist ε_0,α>0 such that for all C ∈ℕ there exist constants C',d_0∈ℕ such that for large enough n ∈ℕ the following holds. Let X be a complex from <Ref> with dimension d ≥ d_0, |X(1)|=n and the parameter q=log^C' n. Let Z=G ⓩ ℋ be the zig-zag product of G with the family ℋ of expander graphs with σ_2(ℋ)≤α/2 as in <Ref>. Let π:V(Z)→ V(Z) be any permutation. Then there is a routing protocol on G=(X(1),X(2)) with round complexity T=O(log n) and work complexity q^O(d^2)log |Σ| such that for all initial functions A_0:{(v,u): v∈ V(Z), u∈ X_v_1(1)}→Σ satisfying Pr_v∼ V(Z)[maj_0.99(A_0(v,u)| u∼ X_v_1(1))≠⊥] ≥ 1-η, and for all possible adversaries that corrupt at most an ε-fraction of edges with ε≤ε_0, the protocol computes a function A_T over the same domain satisfying Pr_v∼ V(Z)[maj_0.99(A_T(π(v),w)| w∼ X_π(v)_1(1))=maj_0.99(A_0(v,u)| u∼ X_v_1(1))]≥ 1-η-1/log^C n. Fix a permutation π on V(Z) and an initial function A_0 as above. Setting up paths over 1-links: Using <Ref> we know that the zig-zag product Z=G ⓩ ℋ is a k-regular graph, for k=Θ(1), with σ_2(Z)≤2/d+α/2+α^2/4≤α. By Theorem <ref>, we may fix a relaxed-pebble-routing protocol 𝒫 that has |V(Z)| paths, each of length T' ≤ O(log N), where every vertex and edge in Z is used in at most log^C_1 n paths, for some constant C_1 that depends on C, and at most a 1/(3log^C n)-fraction of the paths are invalid. It will be convenient for us notationally to have all of the paths in 𝒫 have the same length T', and we do so by repeating the final vertex of the paths the necessary number of times. Each path in 𝒫 is of the form P= u_1→…→ u_T', with u_i∈ V(Z), u_T'=π(u_1) and, for all i, (u_i,u_i+1) is an edge in the zig-zag product Z=G ⓩ ℋ. This implies that ((u_i)_1,(u_i+1)_1) must be an edge in G. Therefore, given P, we will think of the following link-to-link message transfer, X_(u_1)_1(1) → X_(u_1)_1,(u_2)_1(1) → X_(u_2)_1(1) →…→ X_(u_T')_1(1), to implement the required transfer from X_(u_1)_1(1) to X_(u_T')_1(1). Let T:=2T'. Expand each path in 𝒫 as shown above into 1-links connected by 2-links. For even t ∈ [0,T] let L_j,t denote the 1-link that occurs in the (t/2)-th time-step of the j^th path in 𝒫, and for odd t∈ [T] let L_j,t denote the intermediate 2-link that is the intersection of the 1-links L_j,t-1 and L_j,t+1. Setting up paths inside 1-links: For every 1-link X_u for u∈ X(1), we use the algorithm in <Ref> with the parameter √(ε), to construct a collection of short paths 𝒫_u = {P_u(v,w)}_v,w∈ X_u(1) between all pairs of vertices v,w inside the link X_u. We will refer to these as the internal paths in X_u. The description of the routing protocol: for each path j and time-step t > 0, each vertex u∈ L_j,t takes the majority of the values it receives from v ∈ L_j,t-1; this occurs through the path P_L_j,t-1(v,u) if t is odd or the path P_L_j,t(v,u) if t is even (some paths may be invalid, in which case we interpret the received value on them as "⊥"). Then u passes on this value to w ∈ L_j,t+1 through the path P_L_j,t(u,w) if t is even and the path P_L_j,t+1(u,w) if t is odd. To formalize this, let the "outgoing message" from u at time-step 0, path j be OUT(u,j,0)=A_0(v_j,u), where v_j ∈ V(Z) is the start vertex on the j^th path. For a vertex u ∈ X(1), for every path j∈ [N] and time-step t∈ [T] where u∈ L_j,t, u maintains a set of "incoming messages" IN(u,v,j,t) that it receives from vertices v∈ L_j,t-1.
The vertex u then sets its outgoing message as OUT(u,j,t)=maj_v∼ L_j,t-1(IN(u,v,j,t)) if more than a 1/2-fraction (computed according to the distribution on L_j,t-1) of the list has the same value, and else sets it to ⊥. This outgoing message is then sent to every vertex in L_j,t+1 through the paths 𝒫_L_j,t if t is even and 𝒫_L_j,t+1 if t is odd. Bad links and bounding them: We now begin the analysis of the protocol, and for that we need to introduce a few notions. First, at most an ε-fraction of the edges are corrupted, and we denote the set of corrupted edges by ℰ⊆ X(2). A 1-link X_u is called bad if it contains too many corrupted edges, more precisely if μ_u(ℰ) ≥√(ε), and good otherwise. Pr_u ∼ X(1)[X_u is bad] ≤ poly(d)/q. Deferred to <Ref>. For any 1-link X_u and v,w∈ X_u(1), we say an internal path P_u(v,w) is corrupted if it equals ⊥ or any of the edges on it are corrupted. We define the set 𝒟_u ⊆ X_u(1) of doomed vertices of X_u as those vertices v∈ X_u(1) for which at least an ε^1/32-fraction of the paths P_u(v,w) for w ∼ X_u(1) are corrupted, i.e. 𝒟_u = {v ∈ X_u(1): Pr_w∼ X_u(1)[P_u(v,w) is corrupted] ≥ε^1/32}. The following claim asserts that a good 1-link cannot have too many doomed vertices. If X_u is a good link then Pr_v ∼ X_u(1)[v ∈𝒟_u] ≲ε^1/32. Deferred to <Ref>. A 2-link X_u,v is said to be bad if either X_u or X_v is a bad 1-link, or one of μ_uv(𝒟_u) or μ_uv(𝒟_v) is at least ε^1/64. Pr_(u,v) ∼ X(2)[X_u,v is bad] ≤ poly(d)/q. Deferred to <Ref>. Link to Link Transfer on Good Paths: We now use Claims <ref>, <ref>, <ref> to finish the analysis of the protocol. Recall that every path contains a 1-link at an even time-step and a 2-link at an odd time-step. Consider any path j where (1) maj_0.99(A_0(v_j,u))≠⊥, (2) it is a valid path in 𝒫, and (3) for all time steps t, L_j,t is a good link. On such a path we will show that maj_0.99(OUT(u,j,t)| u∈ L_j,t)=maj_0.99(OUT(v,j,t+1)| v∈ L_j,t+1) for all t. After that, we argue that almost all paths satisfy these properties. The proof of the former fact is broken into two steps: the message transfer from L_j,t to L_j,t+1 and then the message transfer from L_j,t+1 to L_j,t+2, and we argue about each step separately. The argument proceeds by induction on t. Fix some even t ∈ [T-2] and let maj_0.99(OUT(v,j,t)| v∈ L_j,t)=σ≠⊥. The vertices u ∈ L_j,t+1 that are not doomed with respect to L_j,t (u ∉𝒟_L_j,t) will receive the value σ on at least a 1-O(ε^1/32)-0.01 ≥ 1/2-fraction of the paths P_L_j,t(v,u) and therefore will compute the correct majority, setting OUT(u,j,t+1)=σ. Since L_j,t+1 is a good 2-link, we know that at most an O(ε^1/64)≤ 0.01-fraction of its vertices are doomed, which gives that maj_0.99(OUT(u,j,t+1)|u∈ L_j,t+1)=σ. The argument for odd t is exactly the same. Fix some odd t ∈ [T-2] and let maj_0.99(OUT(v,j,t)| v∈ L_j,t)=σ≠⊥. The vertices u ∈ L_j,t+1 that are not doomed with respect to L_j,t (u ∉𝒟_L_j,t) will receive the value σ on at least a 1-O(ε^1/32)-0.01 ≥ 1/2-fraction of the paths P_L_j,t(v,u) and therefore will compute the correct majority, setting OUT(u,j,t+1)=σ. Since L_j,t+1 is a good 1-link, <Ref> implies that at most an O(ε^1/32) ≤ 0.01-fraction of its vertices are doomed, which immediately implies that maj_0.99(OUT(u,j,t+1)|u∈ L_j,t+1)=σ. Setting A_T(π(v_j),v)=OUT(v,j,T) we conclude that for the j^th path, maj_0.99(A_T(π(v_j),v))=maj_0.99(A_0(v_j,u)). Bounding the number of Good Paths: We now finish off the proof by calculating the fraction of paths j satisfying conditions (1), (2) and (3) above. By the assumption of the lemma we know that there is at most an η-fraction of paths violating (1) and, by the construction of 𝒫, at most a 1/(3log^C n)-fraction of paths violating (2).
To account for condition (3) it will be useful to switch back to the equivalent view of as a set of paths over Z. We call a vertex v∈ Z bad if the corresponding 1-link X_v_1 is bad and an edge (v,w)∈ Z bad if the corresponding 2-link X_v_1,w_1 is bad. Using Claims <ref> and <ref> we get _v ∼ Z[v is bad] ≤(d)/q, _(v,w) ∼ Z[(v,w) is bad] ≤(d)/q, since by <Ref> we have equality of the distributions in question. Now note that condition (3) is equivalent to saying that path j contains only good vertices and good edges (from Z) in it. The protocol uses every vertex at most log^C_1n times which implies that at most (d)/q· |V(Z)|·log^C_1n≤|V(Z)|/3log^C n-paths contain bad vertices on them (by setting q to be a large enough polynomial of log n). Similarly since uses an edge in Z at most log^C_1n times, we get that at most (d)/q· |E(Z)|·log^C_1n≤|V(Z)|/3log^C n where we used that |E(Z)|=Θ(|V(Z)|) and q is large enough. Therefore by a union bound we get that at most η+1/log^C N-fraction paths fail in transmitting the majority symbol to the link L_π(v_j) correctly, giving us the conclusion in the lemma. §.§.§ Proofs of Omitted Claims In this section we give the proofs of several claims used throughout the proof of Lemma <ref>. [<Ref> restated] _u ∼ X(1)[X_u is bad] ≤(d)/q. Recall that is a set of corrupted edges, and let _jk denote the set of edges ∩ X_jk(2) with μ_jk() denoting its measure in X_jk(2). Note that _j≠ k∈ [d][μ_jk()]=μ()≤. Fix some i ∈ [d] and consider the bipartite graph, B_i=(X_i(1), ∪_j≠ k ∈ [d]∖ i X_jk(2)). We say that a vertex in the right side of B_i is corrupted if the corresponding edge belongs to . First note that for all u∈ X_i(1), the link X_u is bad if the fraction of u's neighbors in B_i that are corrupted is at least √(). Therefore let us bound the probability that this event occurs. By item 5 of <Ref> and <Ref>, the bipartite graphs (X_i(1),X_jk(2)) have second largest singular value at most (d)/√(q) for all j ≠ k. Therefore the second largest singular value of B_i is also at most (d)/√(q). Applying <Ref> we get _u ∼ X_i(1)[X_u is bad] ≤(d)/q, where we used that the expected fraction of bad neighbors is _j ≠ k ∈ [d]∖ i[μ_jk()]≤ 2. Since the above bound holds for all i, we get the conclusion in the lemma. [<Ref> restated] If X_u is a good link then, _v ∼ X_u(1)[v ∈_u] ^1/32. Since X_u is a good link we know that μ_u()≤√(). By the construction of the internal paths _u from <Ref> we know that at most O(^1/16)-fraction of the paths P_u(v,w) for v,w∼ X_u(1) are corrupted. Therefore by Markov's inequality we get that for at most O(^1/32)-fraction of v∼ X_u(1), _w∼ X_u(1)[P_u(v,w) is corrupted]≥^1/32. Therefore v∈_u with probability at most O(^1/32). [<Ref> restated] _(u,v) ∼ X(2)[X_u,v is bad] ≤(d)/q. Recall that the link X_u,v is bad if either X_u or X_v is a bad 1-link or one of μ_uv(_u) or μ_uv(_v) is at least ^1/64. Using <Ref> we can bound the probability that one of X_u or X_v is bad, so let us now bound the probability of the latter events. Fix a good 1-link X_u henceforth and let us bound the fraction of v∼ X_u(1) for which μ_uv(_u) is large. Using <Ref> we get that μ_u(_u)^1/32. The link X_u is a (d-1)-partite complex with colors [d-1] and the property that for all i≠ j ∈ [d-1] the bipartite graph (X_u,i(1),X_u,j(1)) has second largest eigenvalue at most O(1/√(q)). Let μ_j(_u) denote the measure of _u ∩ X_u,j(1) inside X_u,j(1). One can check that _j∼ [d-1][μ_j(_u)]=μ_u(_u). 
Fix a color i ∈ [d-1] of X_u and consider the bipartite graph, B_i=(X_u,i(1), ∪_j ∈ [d-1]∖ i X_u,j(1)). This graph has second largest singular value at most O(1/√(q)). We say that a vertex in the right side of B_i is doomed if it belongs to _u. Noting that for all v∈ X_i(1), μ_uv(_u) is the fraction of doomed neighbors of v in B_i and applying <Ref> we get _v ∼ X_u,i(1)[μ_uv(_u) ≥^1/64] ≤(d)/q, where we used that the expected fraction of bad neighbors is _j ∼ [d-1]∖{i}[μ_j(_u)]≤ 2μ_u(_u)^1/32. Since the above bound holds for all i, we get that, _v ∼ X_u(1)[μ_v(_u) ≥^1/64] ≤(d)/q. We can now bound the fraction of 2-links that are bad by a union bound: _(u,v)∼ X(2)[X_u,v is bad]≤ 2_u∼ X(1)[X_u is bad]+2_v ∼ X_u(1)[μ_uv(_u) ≥^1/64| X_u is good]≤(d)/q, where we used <Ref> to bound the first term. §.§ A Routing Protocol based on Cliques In this section, we formally state the performance of the clique-to-clique routing protocol. This protocol is similar in spirit to the link-to-link protocol and it is where most of our intuition comes from. There is _0 ∈ (0,1) such that for all C ∈, for large enough n ∈ the following holds. Let X be a d-dimensional complex with |X(1)|=n, |X(d)|=N, d=Θ(loglog^2 n) and N ≤ n^2, that is a γ-one-sided local spectral expander with γ < 1/(d). Let π:X(d)→ X(d) be any permutation. Then there is a routing protocol on G=(X(1),X(2)) with round complexity T=O(log N) and work complexity max_u O(Td|X_u(d-1)|) such that for all initial functions A_0:X(d)× [d]→Σ satisfying _D∼ X(d)[_0.99(A_0(D,u)| u∈ D)≠⊥] ≥ 1-η, and for all possible adversaries that corrupt at most -fraction of edges with ≤_0, the protocol computes the function A_T:X(d)× [d]→Σ satisfying _D∼ X(d)[_0.99(A_T(π(D),v)| v∈π(D))=_0.99(A_0(D,u)| u∈ D)]≥ 1-η-1/log^C N. We omit the formal proof and instead provide a brief sketch, as it is very similar to the proof of Lemma <ref> (also, we do not use this statement later on in the paper). Analogously to the proof therein, one sets up a collection of paths, this time in the clique to clique graph, whose vertices are X(d) and edges correspond to cliques that intersect in size α d (the zig-zag product trick is not necessary in this case). One defines the notion of “bad vertices”, which are vertices that have many corrupted edges adjacent to them, and subsequently defines the notion of bad cliques, which are cliques that contain many bad vertices. In contrast to the proof of Lemma <ref>, an upper bound of the fraction of bad cliques is not established by spectral bounds this time; instead one appeals to the Chernoff-type bound of <cit.>, which gives bounds of the form 2^-Θ_(√(d)). This is ultimately the reason that the argument requires the dimension of the complex to be somewhat super constant. § EMBEDDING A PCP ON AN HDX The works of <cit.> used routing networks to transform the graph underlying 2-CSPs to an explicit graph. Towards this end, they used a pebble routing protocol on De-Bruijn graphs. In this section we generalize their argument and show that any tolerant routing protocol on G gives rise to a PCP embedding result on G. We then apply this connection to our specific link-to-link routing protocol. We show that in this case, this connection is in fact stronger and gives a gap amplification statement. §.§ Connection between Routing Protocols and PCPs The transformation we describe for modifying the underlying graph of a 2-CSP has one significant downside: it increases the alphabet size considerably. 
Since the routing length is always Ω(log n) on constant degree graphs, the alphabet size always increases to be at least polynomial. In fact, since in our case the work complexity is poly-logarithmic, the alphabet size will increase further and will be 2^(log n). Therefore, to facilitate alphabet reduction steps later on, we need a more refined notion related to the alphabet size of CSPs, called the decision complexity. Additionally, to simplify the presentation, we also generalize the notion of 2-CSPs, and allow for varying alphabets Σ(u) ⊆Σ for the vertices u ∈ G.[This helps us to restrict the provers' strategy to be one that satisfies additional constraints. This technique is usually referred to as folding, and it makes the soundness analysis slightly cleaner.] An instance Ψ = (G=(V,E), Σ, {Σ(u)}_u∈ V, {Ψ_e}_e∈ E) of a generalized 2-CSP consists of a weighted graph G, alphabets Σ,{Σ(u)} with Σ(u)⊆Σ for all u∈ V, and constraints Ψ_e⊆Σ×Σ, one for each edge. The decision complexity of Ψ is defined as the maximum, over all edges e = (u,v), of the sum of the following circuit complexities: the circuit complexity of checking membership in Ψ_(u,v) i.e. the circuit complexity of deciding if (σ,σ') ∈Ψ_(u,v), the circuit complexity of checking membership in Σ(u), and the circuit complexity of checking membership in Σ(v). Informally, the decision complexity of a CSP is the complexity of the constraints of it. In many PCP reductions, the alphabet size is a constant, and as the decision complexity is always upper bounded by poly(|Σ|), it is often omitted from the discussion. In our case the decision complexity will be poly(log|Σ|), and it is closely related to the notion of work complexity in the context of the almost everywhere reliable transmission problem (as we exhibit next). The following lemma translates routing protocols for a graph G to PCPs on the graph G, generalizing <cit.>. Suppose G is a regular graph on 2n vertices that has an (,ν)-edge-tolerant routing protocol on the alphabet Σ with work complexity W, that can be constructed in time (n). Then there is a (n) time reduction that, given a 2-CSP instance Ψ' on a k-regular graph H and alphabet Σ, with |V(H)|≤ n, produces a 2-CSP instance Ψ on G such that: * If (Ψ')=1 then (Ψ)=1. * If (Ψ')≤ 1-8ν then (Ψ)≤ 1-. * The alphabet size of Ψ is at most |Σ|^kW. * The decision complexity of Ψ is O(W)+O(k|Σ|). We begin by reducing the 2-CSP Ψ' to a 2-CSP Φ on a bipartite 2k-regular graph G' on |V(G)| vertices such that if Ψ' is satisfiable then so is Φ and if (Ψ')≤ 1-8ν then (Φ)≤ 1-ν. The goal here is to obtain a regular bipartite graph (which can hence be decomposed into perfect matchings), as well as align the number of vertices of Φ to match the number of vertices in G. Reducing to Φ: Let n= a· |V(H)|+r for some a,r∈ with r≤ |V(H)|. Define a graph G” on the vertex set [n] which is formed by taking a-many disjoint copies of the k-regular graph H and an arbitrary k-regular graph (possibly with multi-edges) on the remaining r vertices. Define a 2-CSP Ψ” on G” which is equal to Ψ' on each of the disjoint copies of H and has the “equality” constraint on the edges of the remaining r vertices. It is easy to see that if (Ψ')=1 then (Ψ”)=1 and if (Ψ')≤ 1-8ν then (Ψ”)≤ 1-4ν. We will now reduce the 2-CSP Ψ” to a 2-CSP Φ whose constraint graph G'=(L∪ R, E') is bipartite. Let L,R be equal to V(G”). The left vertex set is defined as L = V(G”)×{1} and the right vertex set is defined as R = V(G”)×{2}. 
Let E' be the set of edges ((u,1),(v,2)) where (u,v) is an edge in G” and additionally add k edges between each pair ((u,1),(u,2)). For the former edges put the constraints corresponding to Ψ” and for the latter put in equality constraints. The completeness is clear, so let us prove the soundness of the reduction. Assume that (Φ)≥ 1-ν via the assignment A. Define the assignment B on G” as B(u)=A((u,1)). For at least 1-2ν-fraction of u, A((u,1))=A((u,2)), denoted by the set . Sampling a constraint ((u,1),(v,2)), with probability at least 1-4ν we have that v is in and the constraint is satisfied in Φ, in which case B satisfies the constraint (u,v) in Ψ”. In particular, it follows that val(Ψ”)≥ 1-4ν, finishing the proof of the soundness. Routing Setup: Let [2n] denote the vertex set of both G, G'; abusing notation we will use the same letters to denote a vertex in G and in G'. Since the graph G' is a bipartite 2k-regular graph, we may partition the edge set of G' into 2k perfect matchings, which we denote by π_1,…, π_2k. We think of π_1,…,π_2k both as matchings and also as the permutations on V(G). Note that since each π_i is a perfect matching we have that π_i^2=id. Let _i denote the protocol that routes the matching π_i on V(G). We know that _i has round complexity T and every vertex u∈ X(1) receives at most T_u messages (over all rounds), and has work complexity W. We now describe the CSP Ψ. The graph: The graph underlying the CSP Ψ' is G. The alphabet: Let Σ be the alphabet of Φ. The alphabet of each vertex u is a subset of Σ^O(kT_u) and we think of a label to u as describing the messages received by u throughout all the protocols _i. The messages that u sends to its neighbor v at every round is a deterministic function of the messages it has received so far. With this in mind, an assignment to the CSP can be thought of as a transcript of the messages that were routed in the protocol, and our goal in designing the constraints is to check that the message that is sent by a vertex u to its neighbor v at round t, is received correctly by v at round t+1. Formally, we think of an assignment to the CSP as a maps A_0 V(G) →Σ, and maps _i,t 2E(G)→Σ^* for each i∈ [2k] and round t∈ [T]; here 2E(G) denoted the set of ordered tuples [u,v] for each (u,v)∈ E(G). As part of its alphabet, every vertex u holds the value A_0(u)∈Σ and the symbols _i,t[u,v] that specify the messages that u receives from its neighbors v in the protocol _i at round t. Viewing the rules of each protocol as maps: We now define certain maps that are useful for specifying the constraints of Ψ. First, the rules of each protocol _i can be described using the following maps _i,t 2E(G)×Σ^* →Σ^*∪{⊥}, for i∈ [2k], t∈ [T-1]; the symbol _i,t[u,v,σ] is the message that u sends to v in round t in the protocol _i if the messages u received in previous rounds are given by σ, for a valid σ of the correct length, and otherwise _i,t[u,v,σ] = ⊥. Given the transcript _i,t'[u,·] for all t'≤ t (which is part of the assignment to u), the message that u sends to v is _i,t[u,v,A_0[·,u]∘_i,1[u,·]…∘_i,t[u,·]] which we will denote by _i,t[u,v] for brevity. We stress that this value only depends on the assignment given to u. Finally define the output of the protocol _i as the map A_i,T. Formally, for all i∈ [2k], consider the map A_i,T V(G) ×Σ^* →Σ where the symbol A_i,T[u,σ] specifies the output of the routing algorithm at a vertex u in _i when given as input the transcript σ, which in our case equals A_0[·,u]∘_i,1[u,·]∘…_i,T-1[u,·]. 
Again for brevity, we omit the dependence on σ as it is clear from context, and use A_i,T(u) to denote the corresponding output symbol. We emphasize that the maps A_i,T and _i,t are not part of the alphabet, but since they are a deterministic function of the messages received at a vertex, we use them while defining the constraints. Intuition towards defining constraints: In an ideal proof we want A_0 to be a satisfying assignment to Φ, and the maps _i,t,_i,t to be the transcript when the protocol _i is executed on G. It is therefore convenient to view the maps A_0, A_i,T and _i,t,_i,t from the point of view of what happens in the routing protocol. We want to ensure that the message transmission across every edge behaves as it is supposed to – for the edge (u,v) the outgoing message that v sends to u at any round should equal the message that u receives at the next round and vice versa. Note that this check only depends on the alphabet of u and v. Secondly, suppose that the routing protocol was successful and that A_0 was indeed a satisfying assignment to Φ. Then for every u, the protocol _i successfully transmitted the symbol A_0(π_i(u)) from the vertex π_i(u) to the vertex π_i(π_i(u))=u, that is, A_i,T(u) equals A_0(π_i(u)). In particular, we would have that (A_0(u),A_i,T(u)) would satisfy the constraint (u,π_i(u)) in Φ. Since this only depends on u, we enforce this as a hard constraint on the alphabet of u via folding. Folding: We constrain the label set of u to only be tuples where (A_0(u),A_i,T(u)) satisfies the constraint Φ(u,π_i(u)) in Φ, for all i∈ [2k]. By that, we mean that only labels that satisfy this condition are allowed in an assignment to Ψ. The constraints of Ψ: For an edge (v,u)∼ E(G), read the labels of u,v and for each i∈ [2k] and t∈ [T-1] check that _i,t+1(u,v) = _i,t(v,u) and _i,t+1(v,u) = _i,t(v,u). In words, the constraint on v,u checks that the message that u receives from v at round t+1 is the message that v sent to it at the prior round and vice versa. The decision complexity of the constraints is the sum of the circuit complexity of (1) computing _i,t(v,u), _i,t(u,v) and A_i,T(u), A_i,T(v) over i and t, (2) checking (A_0(u),A_i,T(u))∈Φ(u,π_i(u)) and (A_0(v),A_i,T(v))∈Φ(v,π_i(v)) over all i, and (3) checking if _i,t+1(u,v)=_i,t(v,u) and vice versa for all i,t. This in total amounts to O(W)+O(k|Σ|). This completes the description of Ψ, and we now analyze the completeness and the soundness of the reduction. Completeness: Suppose that val(Φ)=1 and that A:V(G') →Σ is satisfying assignment for Φ. We take A_0(u)=A(u) for all u∈ V(G) and define the maps _i,t,_i,t and A_i,T according to the execution of the routing protocols _i for each i∈ [2k] when instantiated with A_0. To argue that this is a valid assignment we must check that it satisfies the folding constraints; to check that its value is 1, we must verify that it satisfies all the constraints of Ψ. The latter condition is clear since the assignments _i,t satisfy all of the routing constraints of Ψ by definition. To check the folding constraint, fix a vertex u and i∈ [2k]. Since A_i,T is the output of _i when executed on a graph with no corrupted edges we get that A_i,T(u)=A_0(π_i(u))=A(π_i(u)) for all u. Since A is a satisfying assignment, (A_0(u),A_i,T(u)) satisfies the constraint Φ(u,π_i(u)) as required. Soundness: Suppose that (Ψ) ≥ 1-. Let (A_0, {_i,t}_i∈ [2k], t∈ [T]) be the assignment achieving this value. Let {_i,t}, A_i,T be the deduced maps with A_0,_i,t as input. 
We will show that this implies that (Φ) ≥ 1-ν by exhibiting that the assignment B defined as B(u)=A_0(u) has high value for Φ. Let ℰ⊆ E(G) be the set of edges violated by (A_0, {A_i,t}_i∈ [2k], t∈ [T]); we know that μ(ℰ) ≤. Fix any i∈ [2k]. We know that for all edges (u,v)∉ℰ, any message that u sends to v is received correctly by v, in which case the tables _i,t describe a correct simulation of the routing protocol initiated with the assignment A_0 and the set of corrupted edges ℰ. For every i∈ [2k], the tolerance guarantee of the routing protocol _i gives that _u∼ [2n][A_0(u) ≠ A_i,T(π_i(u))]≤ν. By folding, for all i and u∈ [2n], (A_0(u),A_i,T(u))∈Φ(u,π_i(u)). Therefore letting (B) denote the fraction of edges violated by B, we get that, viol(B) =_i∼ [2k],u∈ [2n][(B(u),B(π_i(u))) ∉Φ(u,π_i(u))] ≤_i,u[(A_0(u),A_i,T(u))∉Φ(u,π_i(u))]+_i,u[A_i,T(u) ≠ A_0(π_i(u))] ≤ν, where _i,u[(A_0(u),A_i,T(u))∉Φ(u,π_i(u))] = 0 by folding. The conclusion now follows by the fact that (Φ)≥ 1-ν implies that (Ψ')≥ 1-8ν. §.§ Embedding PCPs on an HDX, with amplification In this section we show how to use the link-to-link routing protocol from <Ref> to convert a 2-CSP Ψ to a 2-CSP Ψ' on a graph underlying an HDX. The idea is similar to the idea in the proof of Lemma <ref>, but since our graph G=(X(1),X(2)) may not be regular we cannot directly apply the lemma. As remarked earlier, by associating a vertex of the CSP to a link of X though, we can handle these regularity issues, as well as get an amplification result. More precisely, we have: There exists >0 such that for all C>0 and δ∈ (0,1) there are constants C',d_0,n_0∈ such that the following holds for all n≥ n_0 and d≥ d_0. Let X be the d-dimensional complex from <Ref> with parameters q=log^C' n and δ, and 2n≤ |X(1)|≤ O_δ(1) n (which can be constructed in time (n)). Then there is a (n) time procedure mapping any 2-CSP Ψ' with n vertices on a k-regular graph H and alphabet Σ, to a 2-CSP Ψ on the graph G = (X(1),X(2)) with alphabet size at most |Σ|^kq^O(d^2), satisfying the following properties: * Completeness: If (Ψ')=1 then (Ψ)=1. * Soundness: If (Ψ')≤ 1-1/log^C n, then (Ψ) ≤ 1-. * The decision complexity of Ψ is at most kq^O(d^2)|Σ|. We first use <Ref> to construct a complex X in (n)-time which is a d-dimensional complex with 2n≤ |X(1)| ≤ O_δ(1)n and q=Θ(log^C'n) for C' chosen to be large enough. As before, for S ∈ X(i) let Δ_S be the number of d-faces containing S, i.e. Δ_S = |X_S(d-i)|. Let G=(X(1), X(2)) and let Z=G be the zig-zag product of G (thought of as an undirected graph with multi-edges) with the family of expanders from <Ref>, as described in <Ref>. The number of vertices of Z is equal to the number of multi-edges in G, which is ∑_u∈ X(1)Δ_uv=d2|X(d)| and is denoted by N throughout. We start by reducing to a 2-CSP Φ on a bipartite 2k-regular graph G' on |V(Z)| vertices (which is at least 2n) such that if Ψ' is satisfiable then so is Φ and if (Ψ')≤ 1-8ν then (Φ)≤ 1-ν. This reduction is the same as that in <Ref>, and we omit the details. Since the graph G' is a bipartite 2k-regular graph, we may partition the edge set of G' into 2k perfect matchings, which we denote by π_1,…, π_2k. Abusing notation, we think of π_1,…,π_2k also as the permutations on V(Z) corresponding to the matchings. Note that since each π_i is a perfect matching, we have that π_i^2=id. Let _i denote the protocol from <Ref> that routes the matching π_i on V(Z) with the parameter 2C in place of C. We know that _i has round complexity T=O(log n) and work complexity q^O(d^2)log|Σ|. 
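The decomposition of the 2k-regular bipartite graph G' into the perfect matchings π_1,…,π_2k used above is constructive: by König's theorem a D-regular bipartite (multi)graph always splits into D perfect matchings, and they can be peeled off one at a time with any bipartite-matching routine. The following sketch does this with Kuhn's augmenting-path algorithm; the input format and helper names are ours and are meant only as an illustration.

def perfect_matching(adj, left):
    # Kuhn's augmenting-path algorithm; adj[u] is the list of right-neighbours of a left vertex u
    # (parallel edges may repeat). Returns a dict {left vertex: matched right vertex}.
    match_right = {}                       # right vertex -> matched left vertex
    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False
    for u in left:
        if not try_augment(u, set()):
            raise ValueError("no perfect matching")
    return {u: v for v, u in match_right.items()}

def peel_matchings(adj, left, degree):
    # Decompose a `degree`-regular bipartite (multi)graph into `degree` perfect matchings,
    # as used for the permutations pi_1, ..., pi_2k above.
    adj = {u: list(vs) for u, vs in adj.items()}   # work on a copy
    matchings = []
    for _ in range(degree):
        m = perfect_matching(adj, left)
        matchings.append(m)
        for u, v in m.items():
            adj[u].remove(v)                        # remove one copy of the matched edge
    return matchings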
We now describe the CSP Ψ. The graph: the graph underlying the CSP Ψ is G = (X(1),X(2)). The alphabet: For every u∈ X(1) let T_u denote the number of messages that u receives over all rounds. The alphabet of each vertex u is a subset of Σ^O(kT_u). First recall that a vertex j∈ V(Z) is a tuple (j_1,j_2) with j_1∈ X(1) and j_2 in the cloud of j_1. We think of an assignment to the CSP as a collection of maps, A_0→Σ, for ={(j,u):j∈ V(Z),u∈ X_j_1(1)} and _i,t 2E(G)→Σ^* for each i∈ [2k] and round t∈ [T]. As a part of its alphabet, every vertex u∈ X(1) holds the symbols {A_0(j,u)| j∈ V(Z), u∈ X_j_1(1)}. Additionally it holds the symbols _i,t[u,v], which denotes the message that u receives from its neighbors v in the protocol _i at round t. Viewing the rules of each protocol as maps: We define the maps _i,t 2E(G)×Σ^* →Σ^*∪{⊥}, for i∈ [2k], t∈ [T-1] in the following way. The symbol _i,t[u,v,σ] is the message that u sends to v in round t and protocol _i if the messages received by it in previous rounds are given by σ, for a valid σ of the correct length; otherwise _i,t[u,v,σ] = ⊥. We let _i,t[u,v] denote the message that u sends to v at round t, on the transcript _i,·[u,·]. Similarly define the output of the protocol _i as the map A_i,T like in <Ref>. Formally, for all i∈ [2k], consider the map A_i,T×Σ^* →Σ where the symbol A_i,T[(j,u),σ] specifies the output of the routing algorithm at a vertex u in _i with respect to j∈ V(Z) (for some u∈ X_j_1(1)) when given as input the transcript σ. On the transcript _i,·[u,·] we use A_i,T(j,u) to denote the corresponding output symbol. Intuition towards defining constraints: In an ideal proof we want A_0(j,u) to be the same on all u∈ X_j_1(1) and A_0(j,·) to be a satisfying assignment for Φ. The maps _i,t,_i,t should be the transcript when the protocol _i is executed on G. We want to ensure that the message transmission across every edge behaves as it is supposed to – for the edge (u,v) the outgoing message that v sends to u at any round should equal the message that u receives at the next round and vice versa. Secondly, suppose that A_0(j,·) was a satisfying assignment to Φ and that no edge in the protocol is corrupted. Then, for every j for which u∈ X_j_1(1), the protocol _i successfully transmitted the symbol A_0(π_i(j),·) from the link X_π_i(j)_1 to the link X_j_1, that is, A_i,T(j,·) equals A_0(π_i(j),·). In particular, we would have that (A_0(j,u),A_i,T(j,u)) satisfies the constraint (j,π_i(j)) in Φ. Since this only depends on u, we enforce this as a hard constraint on the alphabet of u. Folding: We constrain the label set of u to only be tuples where (A_0(j,u),A_i,T(j,u)) satisfies the constraint Φ(j,π_i(j)) in Φ, for all i∈ [2k] and j∈ V(Z) where u∈ X_j_1(1). By that, we mean that only labels that satisfy this condition are allowed in an assignment to Φ. The constraints of Ψ: For an edge (v,u)∼ X(2), read the labels of u,v and check that, * For each j∈ V(Z) for which u,v are both in X_j_1(1): A_0(j,u) = A_0(j,v), and for all i∈ [2k] it holds that A_i,T(j,u) = A_i,T(j,v). * For each i∈ [2k] and t∈ [T-1]: _i,t+1(u,v) = _i,t(v,u) and _i,t+1(v,u) = _i,t(v,u). In words, the constraint on v,u check that they hold the same value when they are inside the same link at the beginning and end of the protocols, and that the message that u receives from v at round t+1 is the message that v sent to it at the prior round and vice versa. 
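To make the shape of these constraints concrete, the following sketch checks a single edge constraint of Ψ given the labels of its two endpoints. The record layout (the field names 'A0', 'AT', 'IN' and 'OUT') and the parameter names are assumptions made only for this sketch; the actual alphabet is the tuple of received messages described above, and the two transcript checks are the symmetric consistency conditions between what one endpoint sent and what the other recorded as received.

def check_edge_constraint(u, v, label_u, label_v, shared_clouds, T, num_protocols):
    # Illustrative check of the constraint on an edge (u, v) of G in the CSP described above.
    # A label is a dict with: label['A0'][j]    -- the symbol held for cloud vertex j,
    #                         label['AT'][i][j] -- the decoded symbol for protocol i and cloud vertex j,
    #                         label['IN'][i][t][w], label['OUT'][i][t][w] -- the transcript with neighbour w.
    # `shared_clouds` is the set of j in V(Z) whose link contains both u and v.
    for j in shared_clouds:
        # (a) u and v agree on the initial and final symbols of every cloud vertex they share
        if label_u['A0'][j] != label_v['A0'][j]:
            return False
        for i in range(num_protocols):
            if label_u['AT'][i][j] != label_v['AT'][i][j]:
                return False
    for i in range(num_protocols):
        for t in range(T - 1):
            # (b) what u records as received from v at round t+1 must equal what v sent to u at
            #     round t, and symmetrically with the roles of u and v exchanged
            if label_u['IN'][i][t + 1][v] != label_v['OUT'][i][t][u]:
                return False
            if label_v['IN'][i][t + 1][u] != label_u['OUT'][i][t][v]:
                return False
    return True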
One can check that the decision complexity of the constraints is the sum of the circuit complexity of 1) computing _i,t(v,u) and _i,t(u,v) over i and t, 2) checking the routing constraints hold, 3) (A_0(j,u),A_i,T(j,u))∈Φ(j,π_i(j)) and (A_0(j,v),A_i,T(j,v))∈Φ(j,π_i(j)) over all i,j. Since the work complexity of _i is q^O(d^2)log|Σ| and the complexity of computing (2) is at most kq^O(d^2)|Σ| the decision complexity amounts to kq^O(d^2)|Σ|. This completes the description of Ψ, and we now analyze the completeness and the soundness of the reduction. Completeness: Suppose that val(Φ)=1, and let A:V(G') →Σ be a satisfying assignment. We take A_0(j,u)=A(j) for all u∈ X_j_1(1) and define the maps _i,t,_i,t and A_i,T according to the execution of the routing protocols _i for each i∈ [2k] when instantiated with A_0. To argue that this is a valid assignment we must check that it satisfies the folding constraints; to check that its value is 1, we must verify that it satisfies all the constraints of Ψ. The latter condition is clear since the assignment A_0 is equal on all the vertices in the link X_j_1 for all j, and the assignments _i,t satisfy all of the routing constraints of Ψ by definition. To check the folding constraints, fix a vertex u, take j∈ V(Z) with u∈ X_j_1(1), and take i∈ [2k]. Since A_i,T is the output of _i when executed on a graph with no corrupted edges we get that A_i,T(j,u)=A_0(π_i(j),u')=A(π_i(j)) for all u'∈ X_π_i(j)_1. Since A is a satisfying assignment, (A_0(j,u),A_i,T(j,u)) satisfies the constraint Φ(j,π_i(j)) as required. Soundness: Suppose that (Ψ) ≥ 1-, where is less than _0, the absolute constant in <Ref>. Let (A_0, {_i,t}_i∈ [2k], t∈ [T]) be the assignment achieving this value. Let {_i,t}, A_i,T be the deduced maps with A_0,_i,t as input. We will show that this implies that (Φ) ≥ 1-1/8log^C n by exhibiting a high-valued assignment B for it. In fact, our assignment for G' will be B(j) = _u∈ X_j_1(1)(A_0(j,u)) if a clear majority of at least 99% inside X_j_1 exists, and ⊥ otherwise. Let us upper bound (B), where we count every edge on the vertices assigned ⊥ as violated. Let ℰ⊆ X(2) be the set of edges violated by A_0, {A_i,t}_i∈ [2k], t∈ [T]; we know that μ(ℰ) ≤. We first upper bound the probability that B(j)=⊥. Suppose that j∈ V(Z) is such that, μ_j_1(ℰ)≤ 0.05. Then there exists σ_j∈Σ such that A_0(j,v)=σ_j for at least 0.99-fraction of v ∼ X_j_1 (since the spectral gap of the graph (X_u(1),X_u(2)) is at least 1/2). This in turn implies that B(j)≠⊥. Therefore, _j ∼ V(Z)[B(j)=⊥]≤_j∼ V(Z)[μ_j_1(ℰ) ≥ 0.05]=_u∼ X(1)[μ_u(E)≥ 0.05] ≤1/log^2C n, where we used <Ref> in the second transition and <Ref> in the last transition. For convenience of analysis, we also define B(i,π_i(j),T) as _u∈ X_j_1(1)(A_i,T(π_i(j),u)) if at least 99% of the vertices in X_j_1(1) have the same value in A_i,T, and ⊥ otherwise. As is the case above, for B(i,j,T) to be ⊥ it holds that at least 0.05-fraction of the edges in X_π_i(j)_1 are in ℰ too, so for all i∈ [2k], _j ∼ V(Z)[B(i,j,T)=⊥] ≤1/log^2C n. Fix any i∈ [2k]. We know that for all edges (u,v)∉ℰ, any message that u sends to v is received correctly by v, in which case the tables _i,t describe a correct simulation of the routing protocol in <Ref> initiated with the assignment A_0. Provided that is small enough, Lemma <ref> tells us that _j∼ V(Z)[B(j) ≠ B(i,π_i(j),T)]≤_j∼ V(Z)[B(j)=⊥]+1/log^2C N≤2/log^2C n. 
By folding, for all i,j and u∈ X_j_1(1), (A_0(j,u),A_i,T(j,u))∈Φ(j,π_i(j)), therefore we get that _i,j[(B(j),B(i,j,T))∉Φ(j,π_i(j))] ≤_i,j[B(j)≠⊥]+_i,j[B(i,j,T)≠⊥]≤2/log^2C n. Using the union bound we conclude that viol(B) =_i∼ [2k],j∈ [N][(B(j),B(π_i(j))) ∉Φ(j,π_i(j))] ≤_i,j[(B(j),B(i,j,T))∉Φ(j,π_i(j))]+_i,j[B(i,j,T) ≠ B(π_i(j))] ≤4/log^2C n, which implies that (Φ) ≥ 1-1/8log^C n. This implies that (Ψ')≥ 1-1/log^C n as required. § AMPLIFICATION OF 2-CSPS ON COMPLEXES WITH DIRECT PRODUCT TESTERS In this section, we show how to amplify the soundness of a 2-CSP on G=(X(1),X(2)) where X is an HDX that supports a direct product test. If X is a sparse complex, then this result is a derandomized parallel repetition procedure for 2-CSPs on G. In the time of writing this paper we only know of one family of sparse complexes with this property: the Chapman-Lubotzky complexes from <Ref>. Thus, in <Ref> we instantiate this idea with these complexes. §.§ Gap Amplification to Low Soundness Fix any complex X for which the (k,√(k))-direct-product tester on X has soundness δ, and consider any 2-CSP Ψ on G=(X(1),X(2)). Our reduction will produce a Label Cover instance Φ on the bipartite inclusion graph G'=(X(k),X(√(k))) with left alphabet Σ^k and right alphabet Σ^√(k). The constraint on (U,V) check whether the label on U ∈ X(k) satisfies all the constraints in G (since for all u,v∈ U, (u,v)∈ X(2)) and further if projected to B it equals the label given to B. As we did in <Ref>, to simplify the presentation of the proof, we define a generalized version of the Label Cover problem from <Ref>. In particular we allow for varying alphabets Σ_L(U) ⊆Σ_L to the left-side of vertices U ∈ L. This helps us to restrict the prover to provide a label to U that satisfies additional constraints (which in our case would be that the label in Σ^k given to U satisfies all the constraints inside U) which makes our soundness analysis cleaner to carry out. An instance Φ = (G=(L∪ R,E,w), Σ_L, Σ_R, {Σ_L(U)}_U∈ L, {Φ_e}_e∈ E) of generalized label cover consists of a weighted bipartite graph G, alphabets Σ_L, Σ_R, {Σ_L(U)} with Σ_L(U)⊆Σ_L for all U∈ L, and constraints Φ_e⊆Σ_L×Σ_R, one for each edge. Each one of the constraints is a projection constraint, meaning that for every e=(U,V)∈ E there is a map ϕ_eΣ_L(U)→Σ_R such that Φ_e = {(σ,ϕ_e(σ)) | σ∈Σ_L}. We remark that a hardness result for generalized label cover can be easily converted to a hardness result for the standard definition. When the alphabet size is super-constant though, one needs to be careful so as to preserve the decision complexity of the constraints while performing this translation. Therefore, since our alphabet size is large, in all the intermediate step in our reductions from now on we use the generalized Label Cover problem. After performing alphabet reduction to get a generalized Label Cover instance with constant-sized alphabet in <Ref>, we use this simple translation to go back to the standard label cover problem. The main result of this section is the following statement: For all δ>0 there exists k∈ℕ such that the following holds. Suppose that X is a complex for which the (k,√(k))-direct product tester has soundness δ^2. 
Then there is a polynomial time procedure that, given a generalized 2-CSP instance Ψ over the weighted graph G=(X(1), X(2)) with alphabets Σ,{Σ(u)}_u∈ G and decision complexity D, produces an instance of generalized Label Cover Φ over the weighted inclusion graph (X(√(k)), X(k)) with left alphabet Σ^k and right alphabet Σ^√(k) such that: * The projection map ϕ_(A,B) associated to the edge (A,B) is defined as the restriction of the assignment to A to the coordinates in B. That is, ∀σ∈Σ_L(A), ϕ_(A,B)(σ)=σ|_B. * For all A ∈ X(k), the circuit complexity of checking membership in Σ_L(A) is O(k^2D). * If val(Ψ) = 1, then val(Φ) = 1. * If val(Ψ)≤ 1-4δ, then val(Φ)≤δ. Our label cover instance Φ has vertices L = X(k), R = X(√(k)) and edges between them given by inclusion. Letting Σ be the alphabet of Ψ, we take Σ_L=Σ^k to be the alphabet of the left side of Φ and Σ^√(k) to be the alphabet of the right side of Φ. For every vertex A = (a_1,…,a_k) ∈ L let Σ_L(A) be the set of assignments (σ_1,…,σ_k) ∈Σ^k where for every i, σ_i∈Σ(a_i) and for every i ≠ j, (σ_i,σ_j) satisfies the constraint Ψ_(a_i,a_j) on the edge (a_i,a_j)∈ G. The decision complexity of membership in Σ_L(A) is easily seen to be O(k^2 D). The constraints Φ_e are defined as in the lemma statement. The completeness of the reduction is clear, and we move on to the soundness analysis. Suppose that val(Φ) ≥δ, and fix assignments F:X(k) →Σ^k and G: X(√(k)) →Σ^√(k) realizing val(Φ), where F(A) ∈Σ_L(A) for all A ∈ X(k). Thus, val(Φ) = Pr_B ∼ X(√(k)), A ⊃_k B[F[A]|_B = G[B]] ≥δ. Using Cauchy-Schwarz we conclude that Pr_D ∼ X(d), B ⊂_√(k) D, B ⊂ A,A' ⊂ D[F[A]|_B = F[A']|_B] ≥𝔼_D ∼ X(d), B ⊂_√(k) D[𝔼_B ⊂ A,A' ⊂ D[1[F[A]|_B=G[B]]· 1[F[A']|_B=G[B]]]] = 𝔼_D ∼ X(d), B ⊂_√(k) D[Pr_B ⊂ A ⊂ D[F[A]|_B=G[B]]^2] ≥(𝔼_D ∼ X(d), B ⊂_√(k) D[Pr_B ⊂ A ⊂ D[F[A]|_B=G[B]]])^2 = (Pr_D ∼ X(d), A ⊂_k D, B ⊂_√(k) A[F[A]|_B=G[B]])^2 ≥δ^2. This implies that F passes the direct product test and therefore, using the soundness of the test, we get a function f:X(1) →Σ such that Pr_A∼ X(k)[Δ(F[A],f|_A)≤δ k]≥ poly(δ). Let ℰ⊆ X(2) be the set of constraints that f violates. By construction F[A] satisfies all the constraints inside A, therefore whenever it holds that Δ(F[A],f|_A)≤δ k we get that f satisfies at least a (1-δ)^2 ≥ 1-2δ fraction of the constraints inside A. In particular, we conclude that Pr_A∼ X(k)[μ(ℰ|_A) ≤ 2δ]≥ poly(δ). Suppose for the sake of contradiction that μ(ℰ)>4δ. Applying <Ref> we get Pr_A ∼ X(k)[μ(ℰ|_A) ≤ 2δ] ≲ 1/(kδ), since the bipartite graph (X(2), X(k)) has second largest eigenvalue at most O(1/√(k)) by <Ref>. Since k is chosen to be large enough as a function of δ, in particular at least poly(1/δ), this is a contradiction to (<ref>). Thus we get that μ(ℰ) ≤ 4δ, which in turn means that val(Ψ) ≥ 1-4δ. § ALPHABET REDUCTION VIA DECODABLE PCPS In this section we discuss the construction of PCPs for Circuit-SAT with small alphabet but large size (polynomial, or even exponential). The tools presented in the paper so far lead to size-efficient PCPs with large alphabets, and our goal here is to facilitate the use of the efficient composition theorems of <cit.> to reduce the alphabet size. To apply the abstract composition theorem of <cit.> we require PCP constructions in which one has a "decodable verifier". By that, we mean that the PCP verifier not only probabilistically checks whether a proof of satisfiability is correct or not, but is also able to decode a symbol of the satisfying assignment with high probability. We present the formal definition in <Ref>.
These constructions will be used as inner PCPs in our composition. We remark that so far in the paper we discussed 2-query PCPs using the framekwork as label cover, and in <cit.> the proof composition is presented in the language of “robust PCPs”. The language of robust PCPs can be seen to be an equivalent formulation of label cover, but it is easier to use in the context of composition. Thus, for convenience we carry out most of the argument in the language of robust PCPs, formally defined in Section <ref>. The material presented in Sections <ref> and <ref> is almost verbatim repeat of <cit.>, but we give it here for the sake of completeness. §.§ Robust PCPs We now discuss the notion of robust PCPs, which will be the outer PCPs in our composition. First defined in <cit.>, robust PCPs have been implicit in all PCP constructions. The only difference between robust PCPs and standard PCPs is in the soundness condition: while the standard soundness condition measures how often the PCP verifier accepts a false proof, the robust soundness condition measures the average distance between the local view of the verifier and an accepting local view. The definition given below is from <cit.>: For functions r, q, m, a, s : ℤ^+ →ℤ^+ and δ : ℤ^+ → [0,1], a verifier V is a robust probabilistically checkable proof (robust PCP) system for a language L with randomness complexity r, query complexity q, proof length m, alphabet size a, decision complexity s and robust soundness error δ if V is a probabilistic polynomial-time algorithm that behaves as follows: On input x of length n and oracle access to a proof string π∈Σ^m(n) over the (proof) alphabet Σ where |Σ| = a(n), V reads the input x, tosses at most r(n) random coins, and generates a sequence of locations I = (i_1, …, i_q) ∈ [m]^q(n) and a predicate f : Σ^q →{0,1} of decision complexity s(n), which satisfies the following properties: Completeness: If x ∈ L then there exists π such that _(I,f)[f(π_I) = 1] = 1. Robust Soundness: If x ∉ L then for every π _(I,f)[(π_I, f^-1(1))] ≤δ, where the distribution over (I,f) is determined by x and the random coins of V. Next we define the notion of proof degree and regularity for a robust PCP. Given a robust PCP system, we will refer to the maximum number of local windows any index in the proof participates in, as the proof degree, denoted by d(n). More precisely, for each i ∈ [m(n)], if we let R_i = { r ∈{0, 1}^r(n)| i ∈ I(r) }, then d(n) = max_i |R_i|. Furthermore, if |R_i| = d(n) for all i, we will say the PCP system is regular. Equivalence of Label Cover and Robust PCPs: the notion of robust PCP is in fact equivalent to generalized label cover (<Ref>) as shown in <cit.>, and we now give some intuition for this equivalence. If a language L has a robust PCP, then here is a reduction from L to generalized Label Cover: the set of left vertices is the set of random strings of the robust PCP, the set of right vertices is the set of the proof locations. An edge (r,i) exists if the proof location i is probed on random string r. The label to a left vertex r is an accepting local view of the verifier on random string r while a label to the right vertex i is the proof symbol in the corresponding proof location i. An edge (r,i) is consistent if the local view is consistent with the proof symbol. 
Conversely, given a reduction from L to generalized label cover, we can get a robust PCP verifier for L as follows: the verifier expects as proof a labeling of the set of right vertices, the verifier chooses a random left vertex, queries all its neighbors and accepts if there exists a label to the left vertex that satisfies all the corresponding edges. We summarize this discussion with the following lemma (see <cit.> for a formal proof): For every δ : ℤ^+ →ℝ^+, and r, q, m, a : ℤ^+ →ℤ^+, the following two statements are equivalent: * Gap-Generalized-Label-Cover[1,δ] is 𝐍𝐏-hard for instances with the following parameters: * left degree at most q(n), * right degree at most d(n) * right alphabet Σ(n) with |Σ| = a(n), * left alphabet {Σ_L(U)}_U∈ L, * size of right vertex set at most m(n), and * size of left vertex set at most 2^r(n). * Every L ∈𝐍𝐏 has a robust PCP with completeness 1, robust soundness error δ and the following parameters: * query complexity q(n), * proof degree at most d(n) * proof alphabet Σ(n) with |Σ| = a(n), * maximum number of accepting local views max_U∈ L(|Σ_L(U)|), * proof length m(n), and * randomness complexity r(n) Furthermore, suppose that Σ_L=Σ^k and Σ_R=Σ^t for some alphabet Σ and k,t∈, all the constraints ϕ_(u,v) of the Label Cover instance check if the label of u restricted to v is equal to the label of v, and the circuit complexity of checking membership in the language Σ_L(U) is at most D, then the decision complexity of the robust PCP is O(D+q(n)). It is important to note that this is a syntactic correspondence between the notions of generalized Label-Cover and robust PCPs and there is no loss of parameters in going from one framework to another. In particular, going from label cover to a robust PCP and back, one gets back the original label cover instance. Even though these two notions are syntactically equivalent, some results are easier to state/prove in one framework than the other. In Section <ref> we proved a hardness of generalized label cover with large alphabet, but applying alphabet reduction will be easier to carry out in the robust PCP framework. §.§ Decodable PCP We now describe the notion of a decodable PCP (dPCP) from <cit.>, which will serve as our inner PCP in the composition. It is sufficient to define dPCPs for the problem _Σ for our purposes, and as such we focus the discussion on it. The problem _Σ is concerned with circuits C whose input is a string from Σ^n. It will often be more convenient for us to think of circuits over large alphabet as the equivalent Boolean circuit C̃{0,1}^nlog |Σ|→{0,1} in which each input wire of C is split into log |Σ| wires in C̃ in the obvious way. With this in mind, we define the circuit size of C to be the size of C̃, and define the _Σ(N,S) problem in the following way: An instance of _Σ(N,S) is a circuit C:Σ^N→{0,1} of size at most S. The goal is to decide whether there exists an input x∈Σ^N such that C(x) = 1. Given an instance C of , a probabilistically checkable proof for C∈ often takes a string y such that C(y) = 1 and encodes it using a probabilistically checkable proof. We refer to such a y as an NP-witness of the fact that C ∈. A standard PCP verifier for the language would verify that the input circuit is satisfiable, with the help of a PCP, which is typically (but not necessarily) an encoding of the NP-witness y. A PCP decoder for CircuitSAT is a stronger notion. Just like a PCP verifier, it expects the PCP to be an encoding of the NP witness. 
However, in addition to that, after performing its local check, a PCP decoder is expected to decode back a location in the NP witness. A PCP decoder for _Σ over a proof alphabet σ is a probabilistic polynomial-time algorithm D that on input a circuit C: Σ^k →{0,1} of size n and an index j ∈ [k], tosses r = r(n) random coins and generates (1) a sequence of q = q(n) locations I = (i_1, …, i_q) in a proof of length m(n) over the alphabet σ and (2) a (local decoding) function f : σ^q →Σ∪{} whose corresponding circuit has size at most s(n), referred to henceforth as the decision complexity of the decoder. With this in mind we can now define decodable PCPs, where a verifier either rejects a proof, or decodes a symbol that belongs to a small list of satisfying assignments for the CircuitSAT instance. [Decodable PCPs] For functions δ : ℤ^+ → [0,1] and L: ℤ^+ →ℤ^+, we say that a PCP decoder D is a decodable probabilistically checkable proof (dPCP) system for CircuitSAT_Σ with soundness error δ and list size L if the following completeness and soundness properties hold for every circuit C : Σ^k →{0,1}: * Completeness: For any y ∈Σ^k such that C(y) = 1 there exists a proof π∈σ^m, also called a decodable PCP, such that _j,I,f [f(π_I) = y_j] = 1, where j ∈ [k] is chosen uniformly at random and I, f are distributed according to C_j and the verifier’s random coins. * Soundness: For any π∈σ^m, there is a list of 0 ≤ℓ≤ L strings y^1, …, y^ℓ satisfying C(y^i) = 1 for all i, and furthermore that _j,I,f [f(π_I) ∉{, y_j^1, …, y_j^ℓ}] ≤δ. * Robust Soundness: We say that D is a robust dPCP system for CircuitSAT_Σ with robust soundness error δ, if the soundness criterion in can be strengthened to the following robust soundness criterion, 𝔼_j,I,f [agr(π_I, BAD(f))] ≤δ, where BAD(f) := { w ∈σ^q | f(w) ∉{, y_j^1, …, y_j^ℓ}}. §.§ Constructions of Decodable PCPs from Reed-Muller and Hadamard Codes In this section we discuss two well-known constructions of decodable PCPs. These constructions are based on classical primitives in PCP literature, and we include them in full details for the sake of completeness. First, we have the following construction of dPCPs based on Hadamard codes. For all δ >0, for q = 1/δ^O(1) and for all alphabets Σ, the language _Σ(N,S) has a regular decodable PCP with the following parameters: * Robust soundness error δ. * Proof alphabet size q. * Proof length q^O(S^2). * Randomness complexity O(S^2log(q)). * Query complexity and decision complexity q^O(log|Σ|). * List size 1/δ^O(1). Deferred to <Ref>. Second, we have the following construction of dPCPs based on Reed-Muller codes. For all δ >0 and all alphabets Σ, _Σ(N,S) has a regular decodable PCP with the following parameters: * Robust soundness error δ. * Proof alphabet size and proof length at most S^O(1). * Randomness complexity at most O(log S). * Query and decision complexity at most (log(S))^O(log|Σ|). * List size at most 1/δ^O(1). Deferred to <Ref>. § THE FINAL PCP: PUTTING IT ALL TOGETHER In this section we combine all the components from the previous sections to get a 2-query PCP of quasi-linear size, constant alphabet and small soundness, thereby proving Theorem <ref>. We begin by presenting a few tools from <cit.> that are necessary for us, namely their regularization and alphabet reduction lemmas and their composition theorem. §.§ Regularization Procedures for PCPs First we state the following lemma <cit.> to convert an arbitrary constraint graph to a 2-CSP on a regular graph with constant degree. 
There exist constants c,k ∈ and a polynomial time procedure that when given as input a 2-CSP instance Ψ over a constraint graph G' with |V(G')|+|E(G')|=n over alphabet Σ, outputs a 2-CSP Ψ' over a constraint graph G' with |V(G')|≤ 2|E(G)| and |E(G')|=Θ(kn) over alphabet Σ such that, * G is k-regular. * If (Ψ)=1 then (Ψ')=1. * If (Ψ)=1-ρ then (Ψ')≤ 1- ρ/c. Next we state a similar procedure that converts a robust PCP into a robust PCP that is also regular. Additionally it also reduces the alphabet of a robust PCP. There exists a constant C > 0 such that for all : ℤ^+ → [0,1], the following holds. Suppose L has a robust PCP verifier V with randomness complexity r, query complexity q, proof length m, average proof degree d, robust soundness error δ over a proof alphabet Σ. Then L has a regular reduced robust PCP verifier, which we shall denote by regular_(V) with: * randomness complexity log m + log d, * query complexity Cq log |Σ|^1/4, * proof length Cq^2 2^r log |Σ|^1/10, * proof degree C/^4, * proof alphabet of size at most C/^6, * and robust soundness error δ +. §.§ PCP Composition We need the following efficient and abstract composition theorem due to <cit.>: For all >0 the following holds. Suppose 3SAT has a regular robust PCP verifier V with robust soundness error Δ, proof alphabet Σ, query complexity Q, decision complexity S(n) and suppose CircuitSAT_Σ(Q,S(n)) has a robust PCP decoder with proof alphabet σ, robust soundness error δ and list size ℓ. Then, 3SAT has a robust PCP verifier V' = V ⊛, with query complexity O(q/^4), robust soundness error Δℓ + 4ℓ + δ and other parameters as stated in <Ref>. Furthermore, if the PCP decoder is regular, then so is the composed verifier V'. Note that all the parameters (for V) with capitalized letters are functions of n and the parameters (for ) with uncapitalized letters are functions of S(n). The parameters of the composed PCP should be read accordingly. §.§ Proof of Theorem <ref> We start from a known size efficient PCP construction; either the construction of <cit.> that has soundness 1-1/ n or or the construction of <cit.> that has soundness 1-Ω(1), will do. For a graph G=(V,E), let (G) = |V|+|E|. Below we state the result of <cit.> in its more convenient formulation in terms of hardness of 2-CSPs; this formulation can be found in <cit.>. There exist constants c_1, c_2 > 0 such that there is a polynomial time reduction mapping a 3SAT instance φ of size n to a 2-CSP instance Ψ over the graph G = (V, E) and alphabet Σ where * We have (G) ≤ n(log n)^c_1 and |Σ| = O(1). * If φ is satisfiable, then (Ψ) = 1. * If φ is not satisfiable, then (Ψ) ≤ 1-1/(log n)^c_2. The work of <cit.> showed how to get to constant soundness while maintaining quasi-linear size. Again we state her result in the more convenient 2-CSP formulation. There exist constants c_1, c_2, c_3 > 0 such that there is a polynomial time reduction mapping a 3SAT instance φ of size n to a 2-CSP instance Ψ over the graph G = (V, E) and alphabet Σ where, * (G) ≤ n(log n)^c_1 and |Σ| = c_2. * If φ is satisfiable, then (Ψ) = 1. * If φ is not satisfiable, then (Ψ) ≤ 1-c_3. Using Dinur's PCP in conjunction with <Ref>, we get a 2-CSP instance with soundness 1-Ω(1) whose constraint graph is the base graph of the complex from Theorem <ref>. There exists >0 such that for all δ∈ (0,1) there exist constants C,C'>0 so that the following holds for all sufficiently large integers d and n. 
Let {X_n'}_n'∈ N be the infinite sequence of complexes from <Ref>, where every X_n' is a d-dimensional complex on n' vertices with parameters q=Θ(log^C' n') and δ, that is constructible in time (n'). Then there is a polynomial time reduction mapping any 3SAT instance φ of size n to a 2-CSP Ψ over the graph G=(X_n'(1),X_n'(2)), for some complex X_n' from the family, such that: * We have that n'≤ nlog^C n, the alphabet Σ satisfies that log(|Σ|)≤ q^Cd^2, and the decision complexity of the constraints of Ψ is at most q^Cd^2. * If φ is satisfiable, then Ψ is satisfiable. * If φ is unsatisfiable, then val(Ψ)≤ 1-. Applying Dinur's reduction from <Ref> to φ and then applying the regularization procedure in <Ref>, in (n) time we get a 2-CSP Ψ' whose constraint graph G' is k-regular for an absolute constant k, with |V(G')|≤ nlog^O(1) n, and alphabet size |Σ'|=O(1). We have that (Ψ')=1 if (φ)=1 and (Ψ')=1-' if (φ)<1, for some universal constants '∈ (0,1). We will now apply the polynomial time reduction in <Ref> to Ψ'. This gives us a 2-CSP Ψ on the constraint graph G=(X_n'(1),X_n'(2)), where X_n' is a d-dimensional complex with |V(G')|≤ n' ≤ O_δ(1)|V(G')| and parameters q=Θ(log^C'n') for some large enough constant C' and δ. The alphabet size of Ψ satisfies log|Σ|=kq^O(d^2)log|Σ'| = q^O(d^2) and the decision complexity is kq^O(d^2)|Σ|=q^O(d^2). If Ψ' is satisfiable then so is Ψ, and if (Ψ')≤ 1-' then (Ψ)≤ 1- for some absolute constant >0, as required.[One can check that the proof above works even if we apply the result of <cit.>, <Ref>, instead of <Ref>, since <Ref> only requires that (Ψ')≤ 1-1/(log n)^c to get the desired conclusion.] Now that we have a constant soundness PCP on the graphs underlying the complexes from <Ref>, we can apply the gap amplification procedure from <Ref> to get a 2-CSP with small soundness (but large alphabet size). This uses the fact that these complexes support a direct product test with small soundness. For all δ∈ (0,1) there exist constants C,C'>0 so that the following holds for all sufficiently large integers k, d and n. Let {X_n'}_n'∈ N be the infinite sequence of complexes from <Ref>, where every X_n' is a d-dimensional complex on n' vertices with parameters q=Θ(log^C' n') and δ that is constructible in time (n'). There is a polynomial time reduction mapping a 3SAT instance φ of size n to a generalized label cover instance Ψ over the weighted inclusion graph (X_n'(k), X_n'(√(k))) for some n'≤ nlog^C n, such that, * If φ is satisfiable, then Ψ is satisfiable. * If φ is unsatisfiable, then val(Ψ)≤δ. * The left alphabet of Ψ is Σ^k and right alphabet is Σ^√(k) for some alphabet Σ with log|Σ|≤ q^Cd^2. * The projection map ϕ_(A,B) associated to the edge (A,B) in Ψ is defined as: ∀σ∈Σ_L(A), ϕ_(A,B)(σ)=σ|_B. Furthermore, for all A ∈ X(k), the circuit complexity of checking membership in Σ_L(A) is at most q^Cd^2. Fix δ and then fix k,d∈ to be sufficiently large constants depending on δ, as dictated by <Ref> and <Ref>. Applying the polynomial time reduction in <Ref> on φ, we get a 2-CSP Ψ' on the weighted graph (X_n'(1),X_n'(2)), where X_n' is a d-dimensional complex from <Ref> with n'≤ nlog^O(1)n and parameters q=Θ(log^C' n) and δ. The alphabet size of Ψ' satisfies log|Σ|≤ q^O(d^2) and the decision complexity is at most q^O(d^2). If φ is satisfiable then so is Ψ' and if not then (Ψ')≤ 1- for some absolute constant >0. <Ref> states that the (k,√(k))-direct product test on X_n' has soundness δ. 
Thus applying the polynomial time reduction in <Ref> on Ψ' we get a generalized label cover instance Ψ with (Ψ)≤δ if (Ψ')≤ 1- which is at most 1-4δ (by lowering δ if required). The other properties required of Ψ follow immediately from <Ref>. We now apply alphabet reduction using the standard technique of proof composition of PCPs, and for that we switch to the framework of robust PCPs using <Ref>. Alphabet reduction for label cover corresponds to query/decision complexity reduction for the equivalent robust PCP, therefore applying proof composition with the PCP above as an outer PCP and the decodable PCP based on the Reed-Muller code from <Ref> as an inner PCP, we can reduce the queries to (logloglog n), while maintaining the almost-linear size. For all δ>0 there exists C ∈, such that for sufficiently large n∈, 3SAT on n variables has a regular robust PCP with proof length ≤ n(log n)^C, randomness complexity ≤log_2(n) + Cloglog n, query and decision complexity ≤ (logloglog n)^C and robust soundness error δ. Let δ' be a function of δ that we will set later. Applying the polynomial time reduction in <Ref> on φ with soundness parameter δ' and parameters k,d chosen to be a large enough constants, we get a generalized label cover instance Ψ' on the weighted graph (X_n'(k),X_n'(√(k))) for n'≤ nlog^O(1)n, and alphabet Σ satisfying, log|Σ| and decision complexity at most log^O(1)n. Note that the distribution over the left side of vertices in Ψ' equals μ_k, which is not uniform. The randomness complexity of sampling from μ_k is at most log_2(|X(d)|)+log_2(dk)=log_2(nlog^O(1)n), therefore we can replace the left side by putting in a vertex for every random string. Now, using the equivalence between generalized Label Cover and robust PCPs from <Ref> this gives us a robust PCP P_φ for 3SAT with the parameters in <Ref>. Now we can conclude by applying the composition from <Ref> with the Reed-Muller based dPCP from <Ref>. Our goal is to reduce the decision complexity of P_φ to (logloglog n). To get some intuition of the parameters, note that one step of composition with the Reed-Muller dPCP roughly reduces the original decision complexity D to roughly log(D), while increasing the proof size by a factor of (D). Since the PCP P_φ has a decision complexity of n, if we apply the composition twice we will reduce the decision complexity to polylogloglog n, while incurring a factor of n blow-up in size. Regularizing: before applying composition we must ensure that we have a regular robust PCP with constant-sized alphabet. The robust PCP P_φ produced in Lemma <ref> may not have these properties. To remedy this we first apply <Ref> with the parameter δ' that regularizes the PCP and also reduces its alphabet while paying nominally in the proof length, to get the PCP verifier P'_φ. A first composition step with Reed-Muller based robust dPCP: we apply composition, namely <Ref>, with the parameter _1=δ'^c_1 for large enough c_1 > 0, with the decodable PCP D_δ_1 from <Ref> with soundness δ_1=δ'^c_2 for small enough c_2 ∈ (0,1), to get the PCP verifier V_1 = P'_φ⊛ D_δ_1 with soundness δ'(1/δ_1)+_1(1/δ_1)+δ_1 which is δ'^c for some constant c∈ (0,1). The parameter evolution of both these operations is summarized below in <Ref>. A second composition step with Reed-Muller based robust dPCP: we again apply the alphabet reduction procedure in <Ref> with the parameter δ'^c to get a regular robust PCP verifier V'_2. 
After that we apply one more step of composition, with the parameter _2=δ'^c_3, using the Reed-Muller dPCP D_δ_2 with the soundness parameter δ_2=δ'^c_4. This gives us the regular robust PCP verifier V_2=V'_2⊛ D_δ_2 with soundness error δ'^c' for some constant c'∈ (0,1). Setting δ'=δ^1/c' finishes the proof. The parameter evolution is summarized below in <Ref>. The PCP construction in <Ref> is size efficient and has a moderately small alphabet size (which is not constant yet). Thus, we now apply another step of query reduction and composition, this time using the Hadamard code based dPCP as an inner PCP from <Ref>. The result of this process will be a robust PCP with constant query complexity and small soundness, which in the language of label cover corresponds to constant size alphabet and small soundness, thereby establishing <Ref>. For all δ>0, there exists C>0 and a polynomial time procedure such that given an instance φ of 3-SAT of size n produces a label cover instance Ψ with the following properties: * The size of Ψ is at most nlog^C n and the alphabet size of Ψ is at most (1/δ). * If φ is satisfiable, then val(Ψ) = 1. * If φ is unsatisfiable, then val(Ψ)≤δ. Let δ'>0 be a function of δ that we will set later. Given a 3SAT instance φ, using <Ref> we get a robust and regular PCP P_φ with soundness δ'. We first apply alphabet reduction using <Ref> with the parameter δ' to get a robust PCP P”_φ with constant-sized alphabet. We then apply composition, with the parameter δ'^c_1 for c_1∈ (0,1) chosen to be a large enough absolute constant, and then apply composition with the Hadamard-based decodable PCP from <Ref> with soundness δ_1, denoted by D_δ_1. This gives us a regular PCP V with constant query complexity and robust soundness error δ by setting δ'=δ^c for some absolute constant c >0. The evolution of parameters is summarized below in <Ref>. Using <Ref> to view the robust PCP V as a generalized label cover instance, gives us the instance Ψ as required. Note that this is a generalized label cover instance, with |Σ_L|≤ O_δ(1), but where for every vertex U ∈ L the alphabet Σ_L(U) might be a subset of Σ_L. To convert this to a usual Label cover instance, we can simply allow the left alphabet to be all of Σ_L, where for every U ∈ L we fix a mapping G_U: Σ_L →Σ_L(U) with G_U(σ)=σ for all σ∈Σ_L(U), and interpret the prover's assignment A(U) as the assignment G_U(A(U)). It is easy to see that the modified instance has the same soundness. § ACKNOWLEDGEMENTS We thank Pavel Etingof for bringing us into contact with Zhiwei Yun. We sincerely thank Zhiwei Yun for helpful communication about Theorem <ref> and for kindly agreeing to write an appendix to this paper showing that variants of the Chapman-Lubotzky complexes can be constructed with q= n. We thank Shiva Chidambaram for helpful conversations about pro-p groups. Dor Minzer is supported by NSF CCF award 2227876, and NSF CAREER award 2239160. Nikhil Vyas is supported by a Simons Investigator Fellowship, NSF grant DMS-2134157, DARPA grant W911NF2010021, and DOE grant DE-SC0022199. alpha § PROOF OF THEOREM <REF> This section is devoted to the proof of Theorem <ref> in the case of large alphabet Σ. We first recall how the proof proceeds in the case Σ={0,1}. In that case, the argument consists of two parts. The first part is the work of <cit.>, which shows that complexes that possess coboundary expansion support a direct product tester over Σ={0,1}. 
The second part is the work <cit.>, which constructs sufficiently good coboundary expanders using variants of the Chapman-Lubotzky complexes <cit.>. Combining the two results gives Theorem <ref> in the case that Σ = {0,1}. The only part of the argument that changes for larger alphabets is the first one, and in particular <cit.>. Below we verify that the result of <cit.> in fact works for all Σ, and combined with <cit.> this implies <Ref>. The main result of this section is the following strengthening of <cit.>: There is c>0 such that for all δ>0 there are m,r∈ℕ such that for sufficiently large k, sufficiently large d and γ small enough function of d the following holds. If a d-dimensional simplicial complex X is a γ-spectral expander and (m,r,2^-o(r),c) weak UG coboundary expander, then the direct product test over X(k) with respect to every alphabet Σ has soundness δ. Namely, if F X(k)→Σ^k passes the (k,√(k)) direct product tester with respect to X with probability at least δ, then there is f X(1)→Σ such that _A∼μ_k [Δ(F[A], f|_A)≤δ]≥(δ). §.§ Direct Product Testing over Moderately Sized Alphabets In this section we prove Theorem <ref> for “moderately large” alphabet. By that, we mean that final result will depend on the alphabet size |Σ|. We use the notation from <cit.>, and more specifically: the definition of agreement of a global function g:[d]→Σ with respect to G:X(k)→Σ^k, denoted by _ν(g,G), the definition of Unique-Games coboundary expansion, and the definition of list-agreement-testing. We refer the reader to <cit.> for a formal presentation of these notions. There is c>0 such that for all δ>0 there are m,r∈ℕ such that for all R∈, sufficiently large k, sufficiently large d and γ small enough function of d the following holds. If a d-dimensional simplicial complex X is a γ-spectral expander and (m,r,exp(-o(r)),c) weak UG coboundary expander, then the direct product test over X(k) with respect to every alphabet Σ with |Σ|≤ R has soundness δ. Namely, if F X(k)→Σ^k passes the (k,√(k)) direct product tester with respect to X with probability at least δ, then there is f X(1)→Σ such that _A∼μ_k [Δ(F[A], f|_A)≤δ]≥(δ). Note here that d is allowed to depend on the alphabet size, which means that |Σ| cannot be arbitrarily large as a function of n or d. The proof of Lemma <ref> follows the argument in <cit.> closely. That proof has two major components: * The first of which is <cit.>, which reduces the 1% agreement testing problem to the 99%-list-agreement testing. * The second of which is <cit.>, which deduces the soundness of the list-agreement test from coboundary expansion. Below we explain how to adapt each one of these components in the case of moderate size alphabets. §.§.§ ModifyingBM23, Lemma B.2 Their proof utilizes <cit.> that says that the value of a dense max-k-CSP is preserved up-to small additive factors under sub-sampling of the variables of the CSP. This result is proved only for CSPs over {0,1}. Below we use the generalization of the statement from <cit.> to larger alphabet due to <cit.>. We state it in a format analogous to <cit.> and show how it follows from <cit.>. For all k,R ∈ and d ≥max((k),log R), consider a k-CSP Ψ over the alphabet [R] with dk constraints that each depend on a unique k-set of variables. Then _Q ⊂_d/2 [d][|(Ψ|_Q) - (Ψ)| ≤1/d^1/8] ≥ 1-O(1/d^1/8). Let ζ=1/d^1/4. We start by proving that (Ψ|_Q) is at least (Ψ)-ζ with high probability. Let f be the maximizer of (Ψ). 
Using Lemma <ref>, with the set of constraints B ⊆[d]k that f satisfies, we get that, _Q ⊂_d/2 [d][(Ψ|_Q) ≤(Ψ)-ζ]≤k(Ψ)/dζ^2≤k/√(d). For proving the other direction, let p denote _Q ⊂_d/2 [d][(Ψ|_Q)≥(Ψ)+√(ζ)]. Combining with the above equation, we can lower bound the expectation of (Ψ|_Q) as follows, _Q ⊂_d/2[d][(Ψ|_Q)] ≥ p((Ψ)+√(ζ))+(1-k/√(d)-p)((Ψ)-ζ). To upper bound the expectation we use <cit.> which asserts that _Q⊂_d/2 [d][(Ψ|_Q)] ≤(Ψ)+1/√(d), in the parameter regime under consideration here. Combining the lower and upper bound on the expectation and solving for p we get that, p ≤ O(√(ζ)) as required. Analogously to <cit.>, Lemma <ref> implies the following result (we omit the formal proof). For all k∈, alphabets Σ, d ≥max((k),log |Σ|), and all functions G: [d]k→Σ^k that satisfy _t(g,G) ≤α for all g [d]→Σ, the following holds: _B⊆_d/2 [d][max_g _t(g|_B,G|_B) < α+1/d^1/8] ≥ 1-O(1/d^1/8). We remark that the dependence of d on the alphabet size in the above lemma is ultimately why <Ref> only works for moderately sized alphabet. Using this statement we get the following lemma that reduces the 1% agreement test to the 99% list agreement test. For all δ>0, all alphabets Σ, for sufficiently large k,d ∈, sufficiently small γ compared to d, some i ∈ [1/δ^80], and τ = 1/tower_i(1/δ), the following holds. Suppose that X is a d-dimensional simplicial complex which is a γ-spectral expander, and F: X(k) →Σ^k passes the (k,√(k))-agreement-test with probability δ. Then, there exists lists (L[D])_D ∈ X(d) satisfying: * Short, non-empty lists: With probability 1-O(τ) over the choice of D∼ X(d), the list L[D] is non-empty and has size at most O(1/δ^12). * Good agreement: For all D∈ X(d) and every f ∈ L[D], we have that _ν(f, F|_D) ≥Ω(δ^12) for ν = 1/k^Ω(1). * Distance in the lists: With probability at least 1-O(τ) over the choice of D∼ X(d), the list L[D] has distance at least Ω(1/log(1/τ)). Furthermore the lists above pass the List-Agreement-Test with parameter Θ(τ), with probability 1-τ. In the proof of <cit.>, the direct product testing result for the complete complex from <cit.> is used to get a list of functions L[D], such that for most D ∼ X(d) the lists satisfy properties (1), (2) and (3) from the lemma statement. The result of <cit.> is alphabet independent hence this part of the proof ports over easily. To show that these lists are consistent with each other, i.e. they pass the list-agreement-test, they use <cit.> that asserts that the maximum agreement that G has with any global function g:[d]→{0,1} doesn't increase after restricting to a random subset B⊂_d/2 D with high probability. We replace that invocation with the analogous statement for large alphabet– namely <Ref> in place of  <cit.>, gives us that the list-agreement test passes with probability 1-τ. §.§.§ Modifying[BM23, Lemma B.5] We now explain how the analysis of the list agreement testing problem is reduced to coboundary expansion, again following the argument in <cit.> closely. Assume there exists a collection of lists {L[D]}_D ∈ X(d) that satisfy the premise of Lemma <ref>, and assume that X is a γ-spectral expander for γ < 1/(d) and a weak (O(1/δ^12),t, exp(-o(t)), c) UG coboundary expander for t=Θ(tower_i-1(1/δ)^2). Then there exists G: X(1) →Σ such that _D ∼ X(d)[Δ(G(D),L[D]) ≤δ/3] ≥ 1-O(c^1/2 + exp(-√(t)) + γ). The proof uses the UG coboundary expansion of X to reduce the 99% list agreement testing problem to the 99%-agreement testing problem on HDX. This part of the proof is alphabet-independent. 
It then uses the result of <cit.> that showed soundness for the 99%-agreement test on HDX to get a global function G that agrees with some element of the lists on D with high probability. It is easy to verify that the <cit.> result holds for any alphabet Σ, therefore in the large alphabet case too we get a global function G as required, finishing the proof of the lemma. We can now prove Lemma <ref>. Combining Lemma <ref> and Lemma <ref> we get that there are lists L[D] as in the former lemma and a function G X(1)→Σ as in the latter lemma. Sampling D∼ X(d) we get that with probability at least 1/2 we have that G|_D is δ/3-close to some f∈ L[D]. Conditioned on that, with probability at least Ω(δ^12) we have that Δ(f|_K, F[K])≤ν and Δ(f|_K, G|_K)≤ 2δ/3-ν, in which case Δ(G|_K, F[K])≤δ, as required. §.§ Direct Product Testing over All Alphabets In this section we complete the proof of <Ref> by a reduction to <Ref>. We start with a simple claim that relates the singular value of a large induced subgraph in terms of the singular value of the full graph. Let G=(V,E) be a graph and :L^2(V) → L^2(V) be a random walk with stationary distribution μ over V and second singular value σ. Then for all subgraphs H ⊆ V(G) with μ(H) ≥ 2σ, the random walk _H: L^2(H) → L^2(H) defined as conditioned on H, i.e. _H f(v)=_u∼(v)[f(u)|u ∈ H], has singular value at most O(σ/μ(H)). We know that the second singular value of is σ_2() = sup_f ⊥1f, f/f,f and the same holds for _H. Let f ⊥1∈ L^2(H) be a vector achieving σ_2(_H) and define f∈ L^2(G) as the vector which is equal to f on H and 0 outside it. Note that f is also perpendicular to 1. Let (u,v) ∼ E(G) and (u,v)∼ E(H) denote an edge picked according to the random walk and _H respectively. We have that, σ_2(_H)=f,_H f/f,f =_(u,v)∼ E(H)[f(u)f(v)]/_u ∼π_H[f(u)^2] =_(u,v)∼ E(G)[f(u)f(v)|u∈ H,v∈ H]/_u ∼π[f(u)^2|u ∈ H] =_(u,v)∼ E(G)[f(u)f(v)]/_(u,v)∼ E(G)[u ∈ H,v∈ H]·_u ∼π[u ∈ H]/_u ∼π[f(u)^2]. By the expander mixing lemma we have that, _(u,v)∼ E(G)[u ∈ H,v∈ H] ≥μ(H)^2 - σ_2()μ(H)(1-μ(H)) ≥μ(H)^2/2. Plugging this in we get, σ(_H)≤2/μ(H)·_(u,v)∼ E(G)[f(u)f(v)]/_u ∼π[f(u)^2]≤2/μ(H)σ_2(). We are now ready to prove <Ref>. We first explain the high level overview of the argument. Given the function F that passes the test with probability δ, we choose a large constant R∈_+ and create a function G X(k)→ [R]^k using a random hash function h:Σ→ [R]. We apply the direct product testing result, <Ref> on G to get a global function g agreeing with G on a large fraction of k-sets. Then using g we finally deduce a global function f taking values in Σ that agrees with F on a large fraction of k-sets. Hashing to smaller alphabet: Given δ we choose δ_1 ≪δ appropriately, and then set η_1 = (δ_1) as dictated by <Ref>, and finally choose R ≫ 1/η_1. The constants k,d for the complex are chosen to be large in terms of δ_1,δ,R as required by <Ref>. To summarize our parameters satisfy, δ≫δ_1 ≫η_1 ≫ 1/R ≫ 1/k ≫ 1/d. Fix such a choice henceforth. Let h:Σ→ [R] be a randomly chosen function. For A ∈Σ^k let h(A) denote the string obtained by applying h to every coordinate separately. Consider the function G_h:X(k)→ [R]^k defined as G_h = h ∘ F. We know that the distance between two distinct strings S_1,S_2 ∈Σ^t can only decrease under hashing, therefore for every h, G_h passes the agreement test with probability ≥δ. Moreover for all S_1 ≠ S_2 ∈Σ^t, _h[Δ(h(S_1),h(S_2))<Δ(S_1,S_2)-δ_1]≤1/δ_1R≤1/√(R). Therefore, _h[_B∼ X(√(k)) A,A'⊃_k B[Δ(G_h[A]|_B,G_h[A']|_B)≤Δ(F[A]|_B,F[A']|_B)-δ_1]]≤1/√(R). 
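For completeness, here is the short Markov-type calculation behind the bound Pr_h[Δ(h(S_1),h(S_2))<Δ(S_1,S_2)-δ_1]≤ 1/(δ_1 R) used above; it is implicit in the argument and included only as a reading aid. For every coordinate on which S_1 and S_2 differ, the two symbols collide under a uniformly random h:Σ→[R] with probability exactly 1/R. Let Z be the fraction of coordinates on which S_1 and S_2 differ but h(S_1) and h(S_2) agree; by linearity of expectation, E_h[Z] = Δ(S_1,S_2)/R ≤ 1/R. Since Δ(h(S_1),h(S_2)) = Δ(S_1,S_2) - Z, Markov's inequality gives Pr_h[Z > δ_1] ≤ E_h[Z]/δ_1 ≤ 1/(δ_1 R), which is the bound stated above.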
Fix any hash function h for which the inner probability above is at most 1/√(R). Since G_h passes the agreement test with probability δ, we can apply the agreement testing result for moderate alphabet, Lemma <ref>, with δ_1 to get a function g:X(1)→ [R] such that _A∼ X(k)[Δ(G_h[A],g|_A)≤δ_1] ≥η_1. Let ={A ∈ X(k): Δ(G_h[A],g|_A)≤δ_1}; then the above inequality translates to μ() ≥η_1. For each A∈, applying Chernoff-type bound gives that _B∼ X(√(k)) A⊃_k B[Δ(G_h[A]|_B,g|_B)≤ 2δ_1 | A ∈] ≥ 1-exp(-√(k)). From here on we will draw the tuple (B,A,A') from the distribution B∼ X(√(k)), A,A'⊃_k B and omit this notation. Applying the triangle inequality and a union bound we get, _B,A,A'[Δ(G_h[A]|_B,G_h[A']|_B)≤ 4δ_1 | A,A' ∈] ≥ 1-exp(-√(k)). By the expander mixing lemma we have that _B,A,A'[A,A' ∈] ≥μ()^2 - O(1/√(k)) ≥(η_1). We now show that F[A]|_B,F[A']|_B are close, analogously to (<ref>), and for that we use (<ref>) and a union bound as follows: _B,A,A'[Δ(F[A]|_B,F[A']|_B)>5δ_1| A,A'∈] ≤_B,A,A'[Δ(F[A]|_B,F[A']|_B)> Δ(G_h[A]|_B,G_h[A']|_B)+δ_1| A,A'∈] +_B,A,A'[Δ(G_h[A]|_B,G_h[A']|_B)> 4δ_1 | A,A' ∈] ≤1/(η_1) √(R)+exp(-√(k)) ≤δ_1, where the last inequality holds since R,k are both much larger than δ_1,δ and therefore also (η_1). For the rest of the proof it will be convenient to work with the simplicial complex Y(k)= and its downward closure, endowed with the set of measures {π_i}_i∈ [k], where π_k is the conditional distribution μ_k|, and as is usually the case, for all i<k, π_i is defined as A ∼π_k, I ⊂_i A. Rewriting the above equation we get that F passes the direct-product test (allowing for approximate equality on the intersection) with probability close to 1: _B ∼ Y(k) A,A' ⊃_k B[Δ(F[A]|_B,F[A']|_B)>5δ_1] ≤δ_1. We will use this observation to find a global function f that agrees with F on Y(k). The proof is essentially the same as proving the soundness of the direct product test on HDX in the 99% regime. We use <Ref> to get that the relevant random walks on Y have good expansion since they are derived from restricting X to which has large measure. Finding a global function on Y(1): To find a global function that agrees with F on Y(k), let us first define a set of good indices in I ⊆ Y(1) as follows: * For every i∈ Y(1) define the quantity, p_i := _B ∼ Y_i(√(k)-1) B ⊂ A,A' ∼ Y(k)[F[A]|_i≠ F[A']|_i]. If p_i > √(6δ_1), do not include i in I. * Consider μ_i(), the measure of in the link of i (with respect to X). Do not include i in I if μ_i() < μ()/2. Let _i:={A∈| i∈ A} which also equals Y_i(k-1). Our global function f is defined as: f(i)=(F[A]|_i | A ∼_i) if the majority exists and arbitrary otherwise. For every i∈ I, we will show that this is an overwhelming majority, i.e. with high probability over A∼_i, F[A]|_i=f(i). To do so we will first bound the second singular value of the down-up random walk _i on Y_i(k-1) defined as: B ∼ Y_i(√(k)-1), B ⊂ A,A'∼ Y_i(k-1). Let '_i be the random walk: B ∼ X_i(√(k)-1), B ⊂ A,A'∼ X_i(k-1). One can check that _i = '_i| A,A'∈_i. Lemma <ref> implies that the second singular value of '_i is bounded by O(1/k). Since μ(_i) ≥μ()/2 ≫ 1/k, by Claim <ref> we get that the induced random walk _i has singular value σ_2(_i) ≤ O(1/kμ(_i)) ≤ 1/√(k). We are ready to prove that f(i) agrees with F[A]|_i for most A ∼_i. For every σ∈Σ let _i,σ be the set of A ∈_i where F[A]|_i=σ and let π_i,σ denote its measure inside Y_i(k-1). Using Cheeger's inequality we get that, 1/2_(A,A') ∼_i[A ∈_i,σ,A' ∉_i,σ] ≥ (1-λ_2(_i))π_i,σ(1-π_i,σ). 
Using that λ_2(_i)≤ 1/√(k) and summing up the above over σ∈Σ we get, 1/2∑_σ_(A,A') ∼_i[A ∈_i,σ,A' ∉_i,σ] ≥ (1-1/√(k))(1-∑_σπ_i,σ^2). The LHS above is equal to _(A,A')∼_i[F[A]|_i ≠ F[A']|_i] which equals p_i. By the assumption that i is in I, p_i is less than √(6δ_1) so rearranging the above equation we get max_σπ_i,σ≥ 1-O(√(δ_1)). Since f(i)=max_σπ_i,σ it follows that _A ∼ Y_i(k-1)[F[A]|_i = f(i)] ≥ 1-O(√(δ_1)). Bounding the measure of I⊆ Y(1): We bound the measure of I, π_1(I), by showing that each of the two conditions defining I is violated with small probability. Let us start with condition (1). First we bound the expectation of p_i as follows, _i∼ Y(1)[p_i]=_i∼ Y(1) B∼ Y_i(√(k)-1) A,A'⊃_k B[F[A]|_i≠ F[A']|_i]=_B∼ Y(√(k)) A,A'⊃_k B[Δ(F[A]|_B,F[A']|_B)] ≤ 5δ_1, where we used (<ref>) in the last inequality. Applying Markov's inequality we get that, _i∼ Y(1)[p_i >√(6δ_1)]≤√(6δ_1). Now we will bound the probability of violating condition (2). Using Lemma <ref> we have that, _i ∼ X(1)[μ_i() ≤μ()/2]≤ O(1/kη_1)≤1/√(k). We can translate this bound to i ∼ Y(1) in a straightforward way: _i ∼ Y(1)[μ_i() ≤μ()/2]= ∑_i∈ Y(1)π_1(i)μ_i()/μ()[μ_i() ≤μ()/2]≤1/2_i∼ X(1)[μ_i() ≤μ()/2] ≤1/2√(k). By a union bound we conclude that _i∼ Y(1)[i ∉ I] ≤ O(1/√(k))+√(6δ_1)√(δ_1). Direct-Product Test Soundness on Y: We are ready to conclude that Δ(F[A],f|_A) δ_1^1/4 for a large fraction of A ∼ Y(k). We do so by calculating the expectation of Δ(F[A],f|_A): _A ∼ Y(k)[Δ(F[A],f|_A)] =_A ∼ Y(k)_i∼ A[[F[A]|_i ≠ f(i)]] =_i∼ Y(1)[_A ∼ Y_i(k-1)[F[A]|_i ≠ f(i)]] ≤_i ∼ Y(1)[i ∉ I]+_i ∼ Y(1)[_A ∼ Y_i(k-1)[F[A]|_i ≠ f(i)] | i ∈ I] √(δ_1), where we used (<ref>) and (<ref>) in the last inequality. Applying Markov's inequality we get _A ∼ Y(k)[Δ(F[A],f|_A) ≤δ_1^1/4] ≥ 1-O(δ_1^1/4). Moving from the complex Y to the complex X we conclude that _A ∼ X(k)[Δ(F[A],f|_A) ≤δ_1^1/4] ≥μ()(1-O(δ_1^1/4)) ≥η_1/2, which gives us the desired conclusion if we set δ_1=Θ(δ^4). § CONSTRUCTION OF DECODABLE PCPS In this section we prove Lemmas <ref> and <ref>. Both constructions are based on low-degree testing, and in Section <ref> we begin by covering the necessary background about it. In Section <ref> we construct a decodable PCP that has exponential size but constant alphabet size, establishing Lemma <ref>. This construction is based on the Hadamard Code. In Section <ref> we construct a decodable PCP that has polynomial size and quasi-polynomial alphabet size, establishing Lemma <ref>. This construction is based on the Reed-Muller code. §.§ Preliminaries of Low Degree Testing Let be a finite field. A linear plane P ⊆^m is associated with two points x,y ∈^m and is equal to the set P={t_1 x+t_2 y: t_1,t_2 ∈}. Suppose f: ^m → is a purported linear function. Let be an oracle that assigns every plane P in ^m a linear function (P) that is supposedly the restriction of f onto P. Then one can perform the following “plane-vs-point” test to verify if f is indeed linear. Given a function f:^m → and a planes oracle the plane-vs-point tester proceeds as follows: * Sample a uniformly random linear plane P ⊂^m and a random point x ∈ P. * Accept if (P)(x) = f(x), reject otherwise. In general we can perform a subspaces-vs-point test given an arbitrary distribution over subspaces that is well-behaved in the following sense: Let π be a distribution over tuples of vectors (x_1,…,x_t)∈_q^t. Abusing notation, we use π to also denote the induced distribution of (x_1,…,x_t) where (x_1,…,x_t) are sampled according to π. 
Let _1 be the joint distribution over ((x_1,…,x_t),P) where (x_1,…,x_t) ∼π and P=(∑ c_i x_i, ∑ c'_i x_i) for c_i,c'_i ∈_q chosen independently and uniformly. Let _2 be the distribution over (Ω, P) where a plane P is drawn uniformly from _q^n and Ω is then drawn from π conditioned on containing P. We say that π is η-good if the total-variation distance of _1 and _2 is at most η. The following linearity-testing theorem, first proved in <cit.>, provides a list-decoding guarantee for the plane-vs-point linearity test. One can get soundness for the generalized subspaces-vs-point test using a simple reduction, so long as the associated distribution over subspaces is good. We use a version of the linearity-testing as stated in <cit.>. There exists c>0 such that the following holds. For all m ∈_+ and primes q that are large enough the following holds. Let 𝔽 be a field of size q and let δ∈ (0,1) be such that δ≥1/q^c. For any function f : 𝔽^m →𝔽, there exists a list of linear functions L_1, L_2, …, L_t for t = O(1/δ^3) such that the following holds for any planes oracle (even for a randomized one): _, P,x ∈ P[(P)(x) ≠ f(x) ∨∃ i ∈ [t], L_i|_P ≡(P)] ≥ 1 - δ. Furthermore the same holds for subspaces Ω sampled from an η-good distribution π, _, Ω∼π,x ∈Ω[(Ω)(x) ≠ f(x) ∨∃ i ∈ [t], L_i|_P ≡(P)] ≥ 1 - δ-η. The proof of the first statement can be found in <cit.>, and we provide a proof of the second statement by reducing it to the first. Let π be an η-good distribution over subspaces and let be any subspace oracle. Consider a randomized planes oracle defined as follows: given a plane P, (P) is defined as (Ω)|_P for Ω∼π | Ω⊃ P. By the soundness of linearity testing, there is a list of t = (1/δ) linear functions L_1, …, L_t such that _, P, x∈ P[(P)(x) = f(x) ∨∃ i, L_i|_P ≡(P)] ≥ 1 - δ/2. Let _1 and _2 be the distribution over (Ω,P) as specified in <Ref>. Rewriting the above we get, _,(Ω,P)∼_2,x[A(Ω)(x) = f(x) ∨∃ i : L_i|_P ≡(Ω)|_P] ≥ 1 - δ/2, and since π is η-good we conclude that _,(Ω,P)∼_1,x[(Ω)(x) = f(x) ∨∃ i : L_i|_P = (Ω)|_P] ≥ 1 - δ/2 - η. Above we have the agreement of L_i with (Ω) on a random P chosen from Ω instead of over all of Ω. However, by a standard Schwartz-Zippel argument, for any i and Ω, since P contains a random point in Ω we get _,P[L_i ≠(Ω) ∧ L_i|_P = (Ω)|_P ] ≤1/q. Hence, by a union bound over i, we have: _,Ω∼π,x[(Ω)(x) = f(x) ∨∃ i L_i|_Ω = (Ω) ] ≥ 1 - δ/2 - η - t/q≥ 1-δ-η. Suppose that now we want to verify whether a function f: ^m → has degree at most d. Let be an oracle that assigns every affine plane P⊂^m (namely, a set of the form P={x+t_1y+t_2z | t_1,t_2∈} for some x,y,z∈^m) a polynomial of degree at most d, denoted by (P). The polynomial (P) is supposedly the restriction of f onto P. Then one can perform the same plane-vs-point test as above. The theorem below, first proved in <cit.>, provides a list-decoding guarantee for the plane-vs-point test, and by the same reduction as above we also get soundness for the subspaces-vs-point low-degree test. The analogous statement for η-good distributions for subspaces follows by a reduction to the plane-vs-point test along the lines of the proof of <Ref>, hence we omit it. There exists c>0 such that the following holds. Let be a field of size q. Let m, d ∈ℤ^≥ 0 and δ∈ (0, 1) be such that δ > (md/q)^c. 
For any function f : 𝔽^m →𝔽, there exists a list of polynomials Q_1, Q_2, …, Q_t of degree at most d where t = O(1/δ), such that the following holds for any planes table (even a randomized one): _, P,x ∈ P[(P)(x) ≠ f(x) ∨∃ i ∈ [t], Q_i |_P ≡(P)] ≥ 1 - δ. Furthermore the same holds for subspaces sampled from an η-good distribution π, _, Ω∼π,x ∈Ω[(Ω)(x) ≠ f(x) ∨∃ i ∈ [t], Q_i |_P ≡(P)] ≥ 1 - δ-η. §.§ Proof of Lemma <ref>: Hadamard-based dPCP In this section we build a dPCP for Circuit-SAT that encodes a satisfying assignment of n bits using 2^O(n^2) symbols from an alphabet of size O(1). Our reduction goes through the Quadratic Equations problem, defined as follows: An instance (X,E) of Quadratic Equations over a field , abbreviated as _m,n(), is a system of m quadratic equations E in the variables X = (x_1,…,x_n) and the goal is to decide whether the system is satisfiable or not. The value of an instance Q of is the maximum fraction of equations satisfied by any x∈ and is denoted by (Q). For all δ >0, for q = 1/δ^O(1) and for all alphabets Σ, the language _Σ(N,S) has a regular decodable PCP with the following parameters: * Robust soundness error δ. * Proof alphabet size q. * Proof length q^O(S^2). * Randomness complexity O(S^2log(q)). * Query complexity q^O(log|Σ|). * Decision complexity q^O(log|Σ|). * List size 1/δ^O(1). We start with an overview of this proof. We reduce the problem to Gap-Quadratic-Equations () over _q, i.e. the problem of deciding whether a system of quadratic equations over _q is satisfiable or has value at most O(1/q). Then we reduce this to Gap-Generalized Label Cover, with the right side of vertices being points in _q^n+n^2 and the left side of vertices is low-dimensional subspaces, where n is the number of variables in the instance Q. The assignment X ∈_q^n to Q is thought of as coefficients of a linear function; in fact, to facilitate checking quadratic equations in X we encode the vector (X,X^⊗ 2)∈_q^n+n^2 using the Hadamard code. The prover is supposed to provide the evaluation of this function on the left and right side of vertices. Additionally the left side is constructed so that it contains a random equation and vectors corresponding to the locations of X that the verifier wants to decode at. Using the soundness of linearity testing, by querying a random left vertex and a random right vertex inside it, the verifier can reject if the prover assignment is not an evaluation of (X,X^⊗ 2) or if X does not satisfy Q. If it does not reject, then with high probability, it is able to decode the required values of X. We now proceed to the formal proof. Reduction to Quadratic Equations: Let Σ=[r] and C be an instance of _Σ(N,S), i.e. C is a Boolean function, C:Σ^N→{0,1} also represented by an S-sized circuit C that computes the equivalent function, C:{0,1}^Nlog r→{0,1}. Let the Σ-valued variables be denoted by Y=(y_1,…,y_N), and each y_i is associated to a block of log r Boolean variables, denoted by (z_(i,1),…,z_(i,log r)). We will use this identification to move back and forth between the Y and Z variables. Let us start by reducing the problem to (_2) while preserving satisfiability. This reduction is standard, following the proof of the Cook-Levin theorem. Formally, we get an instance Q_1=(X,E_1) on n=O(S) Boolean variables denoted by X = (Z,B) (where B is a set of auxiliary variables that the reduction produces) and m_1=O(n) equations, and we have the property that X is a satisfying assignment for Q_1 if and only if Z is satisfies the circuit C. 
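As a reading aid for the construction that follows, here is a minimal Python sketch (ours; the field size, the toy equation and all names are illustrative and not part of the reduction) of the encoding idea mentioned in the overview above: once an assignment X is augmented by its tensor square X ⊗ X, any quadratic equation in X becomes a single inner product against a fixed coefficient vector, which is exactly the kind of check the Hadamard-based verifier below performs on random subspaces.

q = 101                                     # a small prime, standing in for F_q

def tensor(X):
    # the flattened tensor square X ⊗ X, entry (i, j) at position i*len(X)+j
    return [x * y % q for x in X for y in X]

def encode(X):
    # the vector (X, X ⊗ X) of length n + n^2
    return [x % q for x in X] + tensor(X)

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) % q

# Toy quadratic equation in n = 3 variables:  2*x0*x1 + x1^2 + 3*x2 = 7 (mod q).
# Its coefficient vector E lives in F_q^{n+n^2}: linear part first, then the
# n x n quadratic part in row-major order.
n = 3
E = [0, 0, 3,            # linear coefficients of x0, x1, x2
     0, 2, 0,            # quadratic coefficients: row for x0 (x0*x0, x0*x1, x0*x2)
     0, 1, 0,            # row for x1
     0, 0, 0]            # row for x2
b = 7

X = [0, 2, 1]            # this assignment satisfies the toy equation
assert inner(encode(X), E) == b % q
assert (2 * X[0] * X[1] + X[1] ** 2 + 3 * X[2]) % q == b % q
print("single inner-product check passed for X =", X)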
Generating a gap for : Fix a prime number q∈ with q=1/δ^C for a large enough absolute constant C>0 to be chosen later. First, consider the instance Q_2=(X,E_2) over _q where E_2 = E_1 ∪{x_i^2-x_i=0}_i ∈ [n] with |E_2|=m_2. It is easy to see that X is a satisfying assignment for E_1 if and only if it satisfies E_2. Let C be any linear code with the generating matrix G ∈_q^m × m_2 with m=O(m_2) and distance ≥ 1-2/q (such a code can be constructed in polynomial time by concatenation of standard error correcting codes). Then consider the instance Q=(X,E) with |E|=m, where the j^th equation in E is the linear combination of the equations from Q_2 in which the k^th equation is multiplied by G_j,k. If X satisfies Q_2 then it also satisfies Q, but if Q_2 is unsatisfiable then using the distance of C we get that every assignment X satisfies at most a 2/q-fraction of the equations in E, i.e. (Q) ≤ 2/q.

Construction of Label-Cover Instance: We now construct a generalized label cover instance Ψ using the Hadamard code. The left vertex set L of Ψ is (log r+O(1))-dimensional linear subspaces of _q^n+n^2 endowed with a distribution π_L and the right side R is points in _q^n+n^2. To describe π_L we start with some notation. Recall that the i^th variable y_i for i ∈ [N] is associated to a block of variables (z_(i,1),…,z_(i,log r)) whose indices are a subset of [n]. Each variable z_(i,k) corresponds to the vector e_(i,k)∈_q^n+n^2 that has a 1 in the (i,k)-location (that occurs in the first block of n indices) and 0 everywhere else. Let S_i = {e_(i,k):k≤log r}. Additionally, let the j^th equation in E be (X,X⊗ X),E_j = b_j for some E_j ∈_q^n+n^2,b_j ∈_q. To pick a random vertex from π_L, sample i ∼ [N], j ∼ [m], y ∼_q^n, z,z' ∼_q^n+n^2 and then pick the subspace Ω_i,j,y,z,z'⊂_q^n+n^2 defined as (S_i,E_j,(y,0),(0,y⊗ y),z,z'). For notational convenience, we drop the subscript in Ω when clear from context. We now discuss the alphabets for Ψ, also viewed as a prover assignment. As an assignment to the right-side, the prover is supposed to provide us with a linear function L=(A,A⊗ A) mapping a point C ∈_q^n+n^2 to L,C=∑_i∈ [n] C_i L_i+∑_i,j∈ [n] C_ijL_ij, where A is a satisfying assignment for Q. On the left side the prover is supposed to provide the restriction of L to each subspace. Formally,

* Right Alphabet: For each point in V=_q^n+n^2 the prover provides a value in _q. That is, an assignment of the prover to the vertices on the right side is thought of as a points oracle f:_q^n+n^2→_q.

* Left Alphabet: For each subspace Ω_i,j,y,z,z' the prover provides a degree 1 polynomial (Ω) via its coefficients ((Ω)-many) on the subspace. For convenience of notation we represent (Ω) as a vector in _q^n+n^2, although this choice is not unique. The evaluations of (Ω) must satisfy,

* (Ω),E_j = b_j.

* (Ω), (0,y⊗ y)= (Ω),(y,0)^2.

Note that the right alphabet size is q and the left alphabet size is at most q^O(log r). Given this we have the following PCP decoder: at input i ∈ [N],

* Randomly sample Ω_i,j,y,z,z'∼π_L|i and x ∼Ω_i,j,y,z,z'.

* If (Ω),x≠ f(x) output ⊥, else output the symbol F(Ω,x) ∈Σ corresponding to the tuple ((Ω),e_(i,1),…,(Ω),e_(i,log r)).

Completeness: Suppose the instance C we started with is satisfiable, and let A' be a satisfying assignment. In that case the instance Q we generated is satisfiable, and we can pick an assignment B to the auxiliary variables so that the assignment A=(A',B) satisfies Q. Assign the right-side of the label cover according to the linear function L=(A,A⊗ A), i.e.
every point v ∈ V is assigned the value L,v. For each subspace Ω∈ U assign the linear function (Ω)=L|_Ω. It is easy to check that the left assignment satisfies all the conditions that the left alphabet is supposed to. Furthermore, _i ∼ [N] Ω_i,j,y,z,z'∼π_L|i x ∼Ω_i,j,y,z,z'[F(Ω,x) = (A(i,1),…,A(i,log r))] = 1. Soundness: We will now verify the soundness condition, and assume that the initial instance C is unsatisfiable. Fix an assignment f to the right vertices of the label cover instance. We start by verifying that the distribution π_L is good. The distribution π_L is O(1/q)-good. Consider the distribution _1 that samples Ω_i,j,y,z,z'∼π where Ω_i,j,y,z,z'=(S_i,E_j,(y,0),(0,y⊗ y),z,z'), and then samples P ⊆Ω with P=(c_1 e_(i,1)+…+c_r+4z+c_r+5z',c'_1e_(i,1)+…+c'_r+4z+c'_r+5z') for uniformly and independently chosen c_i,c'_i ∈_q. If c_r+4,c'_r+5 are both not equal to zero and both z,z' ≠ 0, which happens with probability at least 1-O(1/q), then the marginal on P ∼_1 is the same as a uniformly random plane. Therefore the total variation distance between the distributions P ∼_1, Ω∼_1|P and P ∼_2, Ω∼_2|P is at most O(1/q), as required. By Claim <ref> we may use <Ref> to get a list of linear functions L_1,…, L_t ∈_q^n+n^2 for t = O(1/δ^3) such that for all plane oracles , _Ω, x[(Ω),x≠ f(x) ∨∃ j such that L_j|_Ω≡(Ω)] ≥ 1-δ/4-O(1/q). We will now prune the above list of linear functions so that we are only left with L_j such that: * L_j = (A_j,A_j ⊗ A_j) for some A_j ∈_q^n. * L_j satisfies the quadratic system Q, i.e. L_j,E_k=b_k for all k ∈ [m]. Denote by the set of indices j ∈ [t] for which L_j satisfies both of the conditions above. First note that if L_j is good then A_j is a satisfying assignment for Q. Therefore let us bound the probability that for some j ∉, (Ω) ≡ L_j|_Ω. Fix such an index j. Suppose condition (1) is violated for L_j = (A_j,B_j), i.e. B_j ≠ A_j⊗ A_j. Then consider the degree 2 polynomials B_j(y) = B_j,y ⊗ y and A'_j(y) = A_j ⊗ A_j,y ⊗ y=A_j,y^2 for y ∈_q^n. By the Schwartz-Zippel lemma B_j(y)≠ A_j'(y) for at least (1-2/q)-fraction of y. Since Ω∼π_L contains a random y we get that L_j,(0,y⊗ y)≠A_j,y^2, thus implying that L_j,(0,y⊗ y)≠L_j,(y,0)^2 with probability at least 1-O(1/q) over π_L. However, since our assignment always satisfies (Ω),(0,y⊗ y)=(Ω),(y,0)^2, we see that the probability, for a random Ω, that L_j|_Ω≡(Ω) is at most O(1/q). Let us now suppose that L_j violates (2). Then it can only satisfy 2/q-fraction of the equations in E since (Q) ≤2/q (when it is unsatisfiable). Again since a random Ω contains a random equation E_k ∼ E, and (Ω) satisfies E_k, we get that L_j|_Ω≡(Ω) with probability at most O(1/q). Thus, we have shown that for any bad L_j, _Ω[ L_j|_Ω≡(Ω)] 1/q. Hence, a simple union bound gives us that a modification of (<ref>) holds, _Ω,x[(Ω),x≠ f(x) ∨∃ j ∈: L_j|_Ω≡(Ω)] ≥ 1 - δ/4 - O(t/q) ≥ 1-δ/2, where the last inequality holds by choosing q ≥Ω(1/δ^4). Reformulating (<ref>) we get that there is a list of satisfying assignments (A_j)_j ∈ for Q such that for all , _i ∼ [N] Ω_i,j,y,z,z'∼π_L|i x ∼Ω_i,j,y,z,z'[F(Ω,x) ∈{⊥}∪{(A_j(i,1),…,A_j(i,log r)): j∈}] ≥ 1-δ/2, which completes the proof of soundness of Ψ. Modifying Ψ to be regular: The label cover instance Ψ may not be regular, but this is easy to fix as we explain now. First for simplicity we put in a vertex on the left for every choice of randomness so that we now have a uniform distribution over the left-side of vertices (instead of π_L). 
Note that the degree of a subspace Ω on the left is equal to q^(Ω), therefore we can make it regular by throwing away the subspaces that have small dimension, which are at most a O(1/q)-fraction of all the subspaces. To make the instance right-regular first note that the distribution on the right side of Ψ is :=O(1/q)-TV-close to uniform (the proof is the same as that of <Ref>). Let d_u be the degree of u ∈ R, let d be the average right degree, and let N be the number of vertices on the right. We discard the right vertices u for which |d_u-d| ≥ d√(), which is at most ≤√()-fraction of all points in _q^n+n^2. Next we add some dummy vertices on the left, and then to each vertex on the right we add at most √() d edges to the dummy vertices so that the resulting right vertex set is regular. By discarding the high-degree vertices on the right we might have ruined the left regularity by a bit, so we add some dummy vertices on the right to bring the left degree back to the original. This whole operation costs us at most 1/√(q) in the soundness, which means that (<ref>) holds with probability ≥ 1-δ. Converting to a robust PCP: Using the equivalence between generalized Label Cover and robust PCPs in <Ref>, one gets that this is a regular robust decodable PCP, thus finishing the proof. §.§ Proof of Lemma <ref>: Reed-Muller based dPCP In this section we build a dPCP for _Σ that has polynomial size and uses an alphabet of quasi-polynomial size. We follow the exposition from the lecture notes <cit.> very closely. §.§.§ Zero on Subcube Test The Zero-on-Subcube Testing problem is the following: given a subset H ⊆ and a function f we want to test if f ≡ 0 on H^m and (f)≤ d. This section contains several helpful tools about the zero-on-subcube testing problem, and as the proofs are straightforward we omit them. The claim below is a useful starting point in designing a local test for this problem. A polynomial f of degree at most d is identically zero over H^m if and only if there exist polynomials P_1, P_2, …, P_m of degree at most d such that f(x) = ∑_i = 1^m g_H(x_i) P_i(x_1, x_2, …, x_m), where g_H(x) denotes the univariate polynomial ∏_h∈ H(x-h). Let us now state a local test for verifying if f≡ 0 on H^m along with (f)≤ d. Let be a planes oracle such that for each affine plane P, (P) is an (m+1)-tuple (P_0, P_1, P_2, …, P_m) of polynomials of degree at most d such that P_0 = ∑_i g_H(x_i) P_i. Similarly let f be a points oracle that is an (m+1)-tuple of functions f = (f, f_1, …, f_m) from 𝔽^m to 𝔽. The zero-on-subcube test then proceeds as follows, * Sample an affine plane P uniformly at random and a random point x from it. * Query the planes oracle for (P) and the points oracle for f(x). * Accept iff (P)(x) = f(x). It is easy to prove the soundness of the Zero-on-Subcube test using the soundness of the plane-vs-point test, namely <Ref>, and an application of the Schwartz-Zippel lemma. There exists c>0 such that the following holds. Let be a field of size q, let H ⊆, let m, d ∈ℤ^≥ 0 and let δ∈ (0, 1) be such that δ≥((d+|H|)m/q)^c. For any function f : 𝔽^m →𝔽, there exists a list of polynomial maps Q^(1), …, Q^(t), with (Q^(i)) ≤ d and Q^(i)≡ 0 on H^m, where t = O(1/δ) such that the following holds for any planes oracle (even for a randomized one): _, P,x ∼ P[(P)(x) ≠f(x) ∨∃ i ∈ [t], Q^(i)|_P ≡(P)] ≥ 1 - δ. Furthermore the same holds for subspaces sampled from an η-good distribution π, _, Ω∼π, x ∼Ω[(Ω)(x) ≠f̅(x) ∨∃ i ∈ [t], Q̅^̅(̅i̅)̅|_P ≡(Ω)] ≥ 1 - δ-η. The proof can be found in <cit.>. 
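Before turning to the proof of the lemma, the following minimal Python sketch (ours; the field size, the subcube H, the witness polynomials and all names are toy choices, and only an honest prover is instantiated) illustrates the query pattern of the zero-on-subcube test above: the verifier samples a random affine plane and a random point on it, checks that the planes oracle and the points oracle agree there, and insists that the planes-oracle answer satisfies the identity P_0 = ∑_i g_H(x_i) P_i.

import random

q, m = 13, 3                       # toy field size and number of variables
H = [0, 1, 2]                      # the subcube H, a subset of F_q

def g_H(t):
    # g_H(t) = prod_{h in H} (t - h) over F_q
    out = 1
    for h in H:
        out = out * (t - h) % q
    return out

# Honest witness: choose P_1, ..., P_m and set f = sum_i g_H(x_i) * P_i,
# which vanishes on H^m by construction.
P_witness = [lambda x: (x[0] + 2 * x[1]) % q] + [lambda x: 1] * (m - 1)

def f(x):
    return sum(g_H(x[i]) * P_witness[i](x) for i in range(m)) % q

def points_oracle(x):
    # the tuple (f, P_1, ..., P_m) evaluated at the point x
    return [f(x)] + [Pi(x) for Pi in P_witness]

def planes_oracle(plane):
    # honest prover: answer with the true restriction to the plane,
    # represented here simply by evaluation at the queried point
    return lambda s, t: points_oracle(plane(s, t))

def run_test(trials=2000, seed=1):
    rng = random.Random(seed)
    rejects = 0
    for _ in range(trials):
        a, b, c = ([rng.randrange(q) for _ in range(m)] for _ in range(3))
        plane = lambda s, t: [(a[i] + s * b[i] + t * c[i]) % q for i in range(m)]
        s, t = rng.randrange(q), rng.randrange(q)      # a random point on the plane
        ans = planes_oracle(plane)(s, t)
        x = plane(s, t)
        if ans != points_oracle(x):                    # consistency between the oracles
            rejects += 1
        if ans[0] != sum(g_H(x[i]) * ans[i + 1] for i in range(m)) % q:
            rejects += 1                               # validity of the planes-oracle answer
    return rejects

print("rejections for the honest prover:", run_test())  # expected: 0

A cheating prover would be caught by exactly these two checks with the probability quantified in the lemma above; the sketch is only meant to make the query structure concrete.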
§.§.§ Proof of Lemma <ref> In this section we prove Lemma <ref>, restated below. For all δ >0 and all alphabets Σ, _Σ(N,S) has a regular decodable PCP with the following parameters: * Robust soundness error δ. * Proof alphabet size and proof length at most S^O(1). * Randomness complexity at most O(log S). * Query and decision complexity at most (log(S))^O(log|Σ|). * List size at most 1/δ^O(1). The proof is broken into several steps. Reduction to 3-SAT: Let Σ=[r] and be an instance of _Σ(N,S), i.e. is a Boolean function, :Σ^N→{0,1} also represented by an S-sized circuit that computes the equivalent function, :{0,1}^Nlog r→{0,1}. Let the Σ-valued variables be denoted by Y=(y_1,…,y_N), and each y_i is associated to a block of log r Boolean variables, denoted by (z_(i,1),…,z_(i,log r)). We will use this identification to move back and forth between the Y and Z variables. Let us start by reducing the problem to 3-SAT while preserving satisfiability, again using the Cook-Levin reduction. Formally, we get a 3-SAT instance φ on n=O(S) Boolean variables denoted by X = (z,b) where b is a set of auxiliary variables that the reduction produces, and we have the property that X is a satisfying assignment for φ if and only if Z satisfies . Arithmetization: Next, we perform an “arithmetization” procedure on φ. Let 𝔽 be a field of size q = log^C n, for some large absolute constant C>0 to be chosen later. Fix any subset H of 𝔽 such that H contains {0, 1}, |H|=Θ(log n) and there is an integer m=Θ(log n/loglog n) such that |H|^m = n; we will identify [n] with H^m and use these interchangeably. We will get a polynomial representation of the formula φ. For each possible clause of 3 variables, the polynomial encodes whether the clause belongs to φ or not. We think of the formula φ as a function mapping [n]^3 ×{0, 1}^3 to {0, 1} as follows: φ(i, j, k, b_1, b_2, b_3) = 1 if x_i^b_1∨ x_j^b_2∨ x_k^b_3 is a clause in φ, 0 otherwise, where x_i^0 and x_i^1 represent the negative and positive instances of x_i, respectively. Since we have identified H^m with [n] and H contains {0, 1}, we can think of φ as a function from H^3m+3 to 𝔽 (define φ to be 0 outside the points mentioned above). As in the case of the assignment, we can define a polynomial φ over 3m + 3 variables of degree O(m|H|) that agrees with φ on H^3m. Similarly, every Boolean assignment A : [n] →{0, 1} can also be thought of as a function mapping H^m to 𝔽. Let A(x) also denote the polynomial of degree O(m |H|) on ^m that agrees with A when evaluated on inputs from H^m. Given the polynomials φ and A define p_φ, A on ^3m+3 as follows, p_φ, A(i, j, k, b_1, b_2, b_3) = φ(i, j, k, b_1, b_2, b_3)(A(i) - b_1)(A(j) - b_2)(A(k) - b_3). Note that (p_φ, A) ≤ O(m|H|). We have the following claim (we omit the straightforward proof). Let A be any polynomial defined on m variables. Assume the polynomial p_φ, A is constructed from A as above. Then, p_φ, A is identically zero on H^3m+3 if and only if A|_H^m is a satisfying assignment for the formula φ. Construction of Label-Cover Instance: Given φ we will construct the label cover instance Ψ. The left-side L will be O(log r)-dimensional linear subspaces of ^3m+3 endowed with a distribution π_L and the right side R will be ^3m+3. First define the linear map ρ:^3m+3→^3m+3 as ρ(i,j,k,b_1,b_2,b_3)=(k,i,j,b_1,b_2,b_3). Recall that the i^th variable for i ∈ [N] is associated to a block of variables {(i,1),…,(i,log r)}⊆ [n] which in turn corresponds to a log r-sized set S_i ⊂ H^m. 
To pick a random subspace from π_L first sample i ∼ [N], extend each element in S_i randomly to 3m+3 coordinates to get a log r-sized set S_i⊂^3m+3. Then sample y,y',z ∼^m, and z' ∼^m such that the first m coordinates of z' are the same as the first m coordinates of z and the remaining coordinates are uniformly chosen from ^2m+3. Then pick the subspace Ω_S_i,y,y',z,z' defined as (S_i,y,y',z,z',ρ(z),ρ^2(z)). For notational convenience, we drop the subscript in Ω_S_i,y,y',z,z' when clear from context. The PCP we construct is based on the zero-on-subcube test for subspaces, where the prover is supposed to prove that p_φ,A is zero over H^m while also allowing us to decode coordinates of A. * Right Alphabet: For each point in R = ^3m+3, the prover provides a (3m+5)-tuple of values in . This can also be thought of as a “points oracle” or a collection of functions f : 𝔽^3m+3→𝔽^3m+5, f=(f_-1,f_0,…,f_3m+3). * Left Alphabet: For each subspace Ω_S_i,y,y',z,z'∈ L, the prover provides (Ω) which is a (3m+5)-tuple of polynomials (p_-1, p_0, p_1, …, p_3m+3) of degree O(m|H|) defined on Ω such that: * p_0(x) = ∑_1 ≤ j ≤ 3m+3 g_H(x_j)p_j(x) for each x ∈ L(Ω). * p_-1(z) = p_-1(z'). * p_0(z) = φ(z)(p_-1(z) - z_3m+1)(p_-1(ρ(z)) - z_3m+2)(p_-1(ρ^2(z)) - z_3m+3). This can be thought of as a “subspaces oracle”. Each polynomial p_i is provided via its values on the subspace. Note that the right alphabet has size q=log(S) and the left alphabet for each subspace is a subset of [q]^(3m+5)D, where D denotes the number of points in any O(log r)-dimensional subspace, which is at most q^O(log r). It is easy to check that given σ∈ [q]^(3m+5)D, one can decide whether it belongs to Σ_L(Ω) (that is, it satisfies the three properties that the left-alphabet is supposed to) with a circuit of polynomial size, i.e. size equal to (3m+5)q^O(log r)log q=(log S)^O(log r). Given this our PCP decoder is simple – given i ∈ [N]: * Randomly sample Ω_S_i,y,y',z,z'∼π_L|i and x ∼Ω_S_i,y,y',z,z'. * If (Ω)(x) ≠f(x), output ⊥, else output the symbol F(Ω,x) ∈Σ that corresponds to the tuple (p_-1(z))_z ∈S_i. Completeness: Suppose is satisfiable. Then φ is also satisfiable, and we let A be some satisfying assignment for it (whose first Nlog r variables correspond to a satisfiable assignment of ). Additionally let A:^3m+3→ be the polynomial A(i,j,k,b_1,b_2,b_3) = A(i). Let f_-1 = A and f_0 = p_φ, A. We know that p_φ,A is zero on H^m therefore by <Ref> we get the witness polynomials f_1 = P_1, …, f_3m+3 = P_3m+3, with p_φ,A = ∑_1≤ j≤ 3m+3g_H(x_j)P_j(x). Then assign the right-side of the label cover to be f̅ = (f_-1,f_0,f_1,…,f_3m+3). To assign the left-side of Ψ, for each subspace Ω let (Ω) = (p_-1,…,p_3m+3) with p_i being the restriction of f_i to Ω. It is easy to check that the p_i's satisfy all the conditions they are supposed to. Furthermore, _i ∼ [N] Ω_S_i,y,y',z,z'∼π_L|i x ∼Ω_S_i,y,y',z,z'[F(Ω,x) = (A(i,1),…,A(i,log r))] = 1. Soundness: Towards etablishing the soundness of the reduction, we first prove that π_L is good. The distribution π_L is O(1/q)-good. The proof is identical to the proof of <Ref>. Fix an assignment f to the label cover instance. Using <Ref> we get that there exists a short list of polynomial maps Q̅^̅(̅1̅)̅,…, Q̅^̅(̅t̅)̅ for t = O(1/δ), such that for all j, Q^(j)_0 is zero on the subcube H^m and (Q̅^̅(̅j̅)̅)≤ d, and for all plane oracles _Ω∼π_L, x∼Ω[(Ω)(x) ≠f̅(x) ∨∃ j such that Q̅^̅(̅j̅)̅|_Ω≡(Ω)] ≥ 1 - δ/2. 
We will now prune the above list of polynomial maps so that we are only left with those tuples Q^(j) such that Q^(j)_0 is p_φ, A for some satisfying assignment A of the formula φ, and yet the above condition holds for this smaller list of polynomials. Let be the set of indices j ∈ [t] for which Q̅^̅(̅j̅)̅ satisfies: * For all x ∈^3m+3, Q̅^̅(̅j̅)̅_̅0̅(x) = φ(x)(Q̅^̅(̅j̅)̅_̅-̅1̅(x) - x_3m+1)(Q̅^̅(̅j̅)̅_̅-̅1̅(ρ(x)) - x_3m+2)(Q̅^̅(̅j̅)̅_̅-̅1̅(ρ^2(x)) - x_3m+3). * For all z_1 ∈^m and z_2,z_3 ∈^2m+3, Q̅^̅(̅j̅)̅_̅-̅1̅(z_1, z_2) = Q̅^̅(̅j̅)̅_̅-̅1̅(z_1, z_3). First note that if j is good then Q̅_̅0̅^̅(̅j̅)̅ can be associated with a satisfying assignment A^(j) for φ. Therefore let us bound the probability that for some j ∉, (Ω) ≡Q̅^̅(̅j̅)̅|_Ω. Fix such an index j. Suppose condition (1) is violated for Q̅^̅(̅j̅)̅. By the Schwartz-Zippel lemma, this implies that with probability at least 1 - d/q over the choice of a random Ω, the above inequality continues to hold when restricted to Ω, since Ω contains a random z such that z, ρ(z), and ρ^2(z) lie in Ω. However, since our assignment always satisfies condition (1) with equality, we see that the probability, for a random Ω, that Q̅^̅(̅j̅)̅|_Ω≡(Ω) is at most d/q. Similarly, if (2) above is violated, then it is violated with probability at least 1-d/q over Ω∼π_L, since Ω contains a random z, z' that agree on the first m coordinates. Hence, the probability that Q̅^̅(̅j̅)̅|_Ω≡(Ω) is at most d/q. Thus, we have shown that for any bad j, _Ω[Q̅^̅(̅j̅)̅|_Ω≡(Ω)] d/q. Hence, a simple union bound gives us that a modification of (<ref>) holds: _Ω,x[(Ω)(x) ≠f̅(x) ∨∃ j ∈: Q̅^̅(̅j̅)̅|_Ω≡(Ω)] ≥ 1 - δ/2 - O(td/q) ≥ 1-δ, where in the last inequality we used that δ≥(m,|H|/q). Reformulating (<ref>) we get that there is a list of satisfying assignments (A^(j))_j ∈ such that _i ∼ [N] (i,y,y',z,z')∼π_L|i x ∼Ω_i,y,y',z,z'[F(Ω,x) ∈{⊥}∪{(A^(j)(i,1),…,A^(j)(i,log r)): j∈}] ≥ 1-δ, which completes the proof of soundness. Modifying Ψ to be regular: The proof of regularization of Ψ is the same as that of <Ref>, hence we omit it here. Converting to Robust dPCP: Using the equivalence between generalized Label Cover and robust PCPs in <Ref>, one gets that this is a regular robust decodable PCP. § PROOF OFTHEOREM 3.3 Notation: For a graph G = (V, E) let E(u, S) denote the fraction of edges incident of u which are also incident on set S ⊆ V. For a set of vertices ⊆ V, let G() denote the induced graph with vertices . For a graph G = (V, E) and a subset of vertices T ⊆ V define Q(T) by the following algorithm: Set Q(T) = V∖ T. We now iteratively remove a vertex u from Q(T) if E(u, V∖ Q(T)) ≥ 1/5, until Q(T) halts. The proof of <Ref> requires the following two lemmas, which are easy consequences of <cit.> and <cit.>. There exists an absolute constant α > 0 such that the following holds. Let G = (V, E) be a regular expander graph with second largest singular value σ_2(G) ≤α. Let T ⊂ V be any set such that |T| ≤α n. At convergence |Q(T)| ≥ |V|-μ|T| for some universal constant μ. There exists an absolute constant α>0 such that the following holds. Let G = (V, E) be a regular expander graph with second largest singular value σ_2(G) ≤α. Let T ⊂ V be any set such that |T| ≤α n. For all v_1, v_2 ∈ Q(T) there exists a path of length O(log n) between v_1 and v_2 in G(V∖ T). The conclusion follows immediately from <cit.> by setting T_2 = ∅ and T_1 = T therein. Let |V| = n, |E| = m, and δ = n/m. For a pair u, π(u), we wish to construct a path between them, which we denote by P(u). 
Fix ℓ = Θ(log n) and t = Θ(log^c+1(n)). To construct the paths P(u), we present an algorithm that simply picks the shortest paths between a pair of vertices iteratively and deletes any vertex that has been used at least t times. Note that since paths are of length ℓ, in an ideal scenario where every edge occurred equally then each edge would belong to O(δ·log n) paths. Therefore, by taking t to be larger, we allow some slack, which still gives us the uniformity over edges that is sufficient for the later arguments. We now proceed to the formal argument. Our algorithm proceeds as follows: * Instantiate ∀ u, P(u)=⊥, =V. * For every u ∈ V do the following: * If u ∉ set P(u)=⊥. * Otherwise, find the shortest path p between u and π(u) in the graph G(). If the length of p is at most ℓ then set P(u) = p, else set P(u)=⊥. * If any vertex v in has been used ≥ t times in the paths {P(u)}_u, then remove it from . It is easy to see that the above algorithm runs in polynomial time in |E| and that every vertex is used at most t times over all paths in {P(u)}_u. It remains to argue that the algorithm finds paths of length at most ℓ between all but O(1/log^c(n)) fraction of the (u, π(u)) pairs. Let _f denote the set when the algorithm terminates, and let _0=V denote the set at the start. It is easy to check that the number of vertices that the algorithm removes from _0 to get _f is at most |V|·ℓ/t = Θ(n/log^c(n)) implying that for T = V∖_f, we have that |T| = |V∖_f| ≤ O(n/log^c (n)). Using <Ref> we get that |V∖ Q(T)| ≤μ· O(n/log^c(n)) = O(n/log^c(n)). Finally, using <Ref> we get that for all v_1, v_2 ∈ Q(T) there exists a path of length O(log n) between v_1 and v_2 in G(_f). Hence, our algorithm would also have found a path of length ℓ between them. Since the set V∖ Q(T) can touch at most 2 · O(1/log^c(n)) of the (u, π(u)) pairs, we get that we can find a path of length ℓ for all but O(1/log^c(n)) fraction of the (u, π(u)) pairs. §.§ Proof ofTheorem 1.5 Note that we say an edge (u,v) ∈ G is uncorrupted if whenever u transfers a message σ across (u,v), then v receives σ. Conversely when it is corrupted it behaves arbitrarily. The two main differences between <Ref> and <Ref> are: * Unlike <Ref>, where the value of a single variable is held by all the vertices in a link, in <Ref> each vertex holds its own message. * In <Ref> we transferred a message from u to π(u) for a given permutation π, while in <Ref> we want to transfer messages between all u, v pairs. To resolve the first difference, the first step in the protocol is to have each vertex u send its message to all the vertices in its link L_u. Similarly, the last step will be to transfer the message from the link of v, L_v, to v. Let E_u be the edges going from u to L_u. Then, each edge only occurs in two of the sets in {E_u}_u. Hence, it is easy to show by Markov's inequality that if ≤ fraction of the edges are corrupted, at most O() links L_u will have greater than 0.01 fraction of incorrect values. For transferring the message from L_v to v, each vertex in L_v sends its message to v, and v takes the majority of these messages. Hence, if a majority of vertices in L_v hold the correct message, v will receive the correct message. This reduces the problem to the transfer of messages from L_u to L_v. To resolve the aforementioned second difference between <Ref> and <Ref>, consider n-1 permutations π_1, π_2, …, π_n-1, such that all pairs (u, v), where u ≠ v, occur in exactly one of these permutations. 
We can apply <Ref> with each of these permutations π_i to obtain a protocol ℛ_i for message transfer from L_u to L_π_i(u) such that the message is transmitted correctly for all but O() fraction of u's. This gives us a protocol for transferring messages from L_u to L_v by using the appropriate ℛ_i such that the message is transmitted correctly for all but O() fraction of the (u, v) pairs. Finally, using Lemma <ref> and the fact that links are of n size, it follows that each message transmission only requires n computation. § CONSTRUCTION OF VARIANTS OF THE CHAPMAN-LUBOTZKY COMPLEXES By Zhiwei Yun equationsection Å𝔸 𝔹 ℂ 𝔻 𝔼 𝔽 𝔾 ℍ 𝕀 𝕁 𝕂 𝕃 𝕄 ℕ 𝕆 ℙ ℚ ℝ 𝕊 𝕋 𝕌 𝕍 𝕎 𝕏 𝕐 ℤ 𝒜 ℬ 𝒞 𝒟 ℰ ℱ 𝒢 ℋ ℐ 𝒥 𝒦 ℒ ℳ 𝒩 𝒪 𝒫 𝒬 ℛ 𝒮 𝒯 𝒰 𝒱 𝒲 𝒳 𝒴 𝒵 𝐀 𝐁 𝐂 𝐃 𝐄 𝐅 𝐆 𝐇 𝐈 𝐉 𝐊 𝐋 𝐌 𝐍 𝐎 𝐏 𝐐 𝐑 𝐒 𝐓 𝐔 𝐕 𝐖 𝐗 𝐘 𝐙 G _a §.§ Setup Fix an odd prime ℓ. Let D be the quaternion algebra over ramified at ℓ and ∞. Let V=D^g be a D-vector space of dimension g, equipped with a standard Hermitian form h(x_1,⋯, x_g)=∑_i=1^gN(x_i). Let G=(V,h) be the unitary group for (V,h), which is a reductive group over . Then G is a form of the symplectic group _2g over . We have a conjugation-invariant algebraic function (over ) : G→Å^1 that sends A: V→ V, written as a g× g matrix with entries in D under a basis of V, to the reduced trace of the sum of diagonal entries of A. For each prime pℓ, G_ is isomorphic to _2g,. The trace function base changed to is the usual trace for _2g,. We choose a subgroup K_p⊂ G() that is isomorphic to _2g() under some isomorphism G_≅_2g,. These subgroups {K_p}_pℓ are chosen so that for some (equivalently any) integral model of G over [1/N], K_p=(_p) for all but finitely many p. The Lie group G() is a compact form of _2g(). By writing = j, we may identify G() with a subgroup of the compact unitary group _2g. The trace function on G() becomes the usual trace of a 2g× 2g unitary matrix. In particular, if A∈ G() has (A)=2g, then all eigenvalues of A are equal to 1, hence A=I is the identity element. For each compact open subgroup H⊂ G(), let K_H=G()× H×∏_pℓ K_p, _H=K_H∩ G(). Let p be a prime different from ℓ. Let be the building of G_≅_2g,. Then G() acts on simplicially. Let K'_H=G()× H× G()×∏_qℓ,p K_q, '_H=K'_H∩ G(). Consider the action of '_H on via the embedding '_H⊂ G()⊂ G(). Suppose the image of H under the trace map is contained in 2g+ℓ^b with ℓ^b>4gp^3, then for any 1∈'_H and any vertex v∈(0), d(v, v)≥ 4. Suppose ∈'_H and v∈(0) are such that d(v, v)< 4. Consider the rational number (). By Lemma <ref> below, the p-adic valuation of () is at least -3. For any prime qℓ and q p, since ∈_2g(_q) under an isomorphism G(_q)≅_2g(_q), we have that the q-adic valuation of () is ≥0. By assumption, the ℓ-adic valuation of ()-2g is at least b. Finally, by considering trace on G()⊂_2g, we have |()|≤ 2g. Combining the above information we see that 2g-() takes the form a/p^3 for some ℓ^b|a∈ and |a/p|≤ 4g, or |a|≤ 4gp^3. So if ℓ ^b>4gp^3, we must have a=0, which means ()=2g. Viewing as an element in _2g, () is the sum of 2g eigenvalues of all of which have complex norm 1. The fact ()=2g then forces all eigenvalues of to be equal to 1, hence =I∈ G()⊂_2g. If ∈ G() and v∈(0) is a vertex such that d(v, v)≤ k, then ()∈ p^-k. Fix an isomorphism G_≅_2g,. Let v=v_0, v_1,⋯, v_k= v be a sequence of vertices in such that v_i is adjacent to v_i-1 for i=1,⋯, k. Recall that each vertex of corresponds uniquely to a -lattice L⊂^2g such that pL^∨⊂ L⊂ L^∨⊂ p^-1L. 
Here, for a -lattice L⊂^2g, we write L^∨={x∈^2g|⟨ x, y⟩∈,∀ y∈ L} (where ⟨ -, -⟩ is the symplectic form on ^2g). Two vertices v and v' are adjacent if and only if their corresponding lattices L and L' satisfy either L⊂ L'⊂ L^∨ or L'⊂ L⊂ L^∨. In either case, we have L'⊂ p^-1L. Let L_i be the lattice corresponding to v_i. Since v_i is adjacent to v_i-1, we have L_i⊂ p^-1L_i-1. Therefore L=L_k⊂ p^-kL_0=p^-kL. Under a -basis of Ł, is then a matrix with p^-k-entries, hence ()∈ p^-1. §.§ Construction of H at l Let _D,ℓ⊂ D_ℓ=D_ be the maximal order. Let ∈_D,ℓ be an element such that N() has ℓ-adic valuation 1. For example if D is generated over by i,j such that i^2=-1, j^2=-ℓ and ij=-ji (where ℓ≡ 3 4), then we can take to be j. Now ^i_D,ℓ is the two-sided ideal of _D,ℓ consisting of elements whose reduced norm is in ℓ^i. We have ^i_D,ℓ/^i+1_D,ℓ≅_ℓ^2. The reduced traces of elements in ^i_D,ℓ lie in ℓ^⌈ i/2⌉. Identify G() with a subgroup of M_g(D_ℓ) using the standard basis of V=D^g. For i≥0, let H(i)⊂ G() be the subgroup consisting of elements A∈ M_g(_D,ℓ) such that A≡ 1 mod ^i_D,ℓ. Then (H(i))⊂ 2g+ℓ^⌈ i/2⌉. We have H(0)/H(1)≅_g(_ℓ) (unitary group for a Hermitian space of dimension g over _ℓ^2), whose cardinality is ℓ^O(g^2). Direct calculation shows that H(i)/H(i+1)≅^2(^g_ℓ^2) if i is odd, which has cardinality ℓ^g^2+g, and H(i)/H(i+1) can be identified with g× g skew-Hermitian matrices with entries in _ℓ^2, if i>0 is even, which has cardinality ℓ^g^2. We conclude that [H(0):H(i)]= ℓ^i/2· O(g^2). The above discussion also shows that H(1)/H(i) is an ℓ-group for i≥1. Since H(1)=_i H(1)/H(i), H(1) is a pro-ℓ-group. Therefore, for any i≥1, H(i) is also a pro-ℓ-group. §.§ Upper bound for |G'H cB(g)| Let (g) be the set of g-dimensional (maximal) simplices of . We have |'_H(0)(g)|≤ Cp^O(g^2) for a constant C independent of p (and depending only on g and ℓ). Identify G_ with _2g, and let [0] be the set of type 0 vertices of , namely vertices corresponding to self-dual -lattices in ^2g under the symplectic form. Then [0]=_2g()/_2g()=G()/K_p. By construction we have an injection '_H(0)[0]='_H(0) G()/K_p G() G(Å)/K_H(0). The right side is independent of p and depends only on ℓ and g. Denote by C is cardinality of the right side. Consider the map v_0: (g)→[0] sending each g-dimensional simplex to its unique type 0 vertex. The fibers of this map are in bijection with X(_p) where X is the flag variety of _2g. Therefore fibers of v_0 have cardinality p^O(g^2). Passing to the quotient, the fibers of the map '_H(0)(g)→'_H(0)[0] then also have cardinality ≤ p^O(g^2). There exists a compact open subgroup H⊂ G() such that: * For any 1∈'_H and any vertex v∈(0), d(v, v)≥ 4. * |'_H(g)|≤ C_ℓ, gp^O(g^2) for some constant C_ℓ, g depending only on ℓ and g. Take a positive integer i such that 4gp^3<ℓ^⌈ i/2⌉≤ 4gp^3ℓ. Let H=H(i) as constructed in <ref>. Since (H(i))⊂ℓ^⌈ i/2⌉ and ℓ^⌈ i/2⌉>4gp^3, H=H(i) satisfies the hypothesis of Proposition <ref>, therefore (1) is satisfied. For (2), we have |'_H(g)|≤ [H(0):H]|'_H(0)(g)|. By (<ref>), we have [H(0):H]=ℓ^i/2· O(g^2)≤ (4gp^3ℓ)^O(g^2). By Lemma <ref> we have |'_H(0)(g)|≤ Cp^O(g^2) for a constant C depending only on ℓ and g. Using (<ref>) and (<ref>) we get |'_H(g)|≤ [H(0):H]|'_H(0)(g)|≤ (4gp^3ℓ)^O(g^2)· Cp^O(g^2)=C_ℓ, gp^O(g^2).
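As a small numerical companion to the proof of the final proposition, the level i is taken to be the smallest integer with ℓ^⌈i/2⌉ > 4gp³, which then automatically satisfies ℓ^⌈i/2⌉ ≤ 4gp³ℓ. The following snippet merely computes this choice; the function name is an assumption made for illustration.

import math

def congruence_level(ell, g, p):
    """Smallest i >= 1 with ell**ceil(i/2) > 4*g*p**3.
    By minimality, ell**ceil((i-1)/2) <= 4*g*p**3 when i > 1, and ceil(i/2)
    exceeds ceil((i-1)/2) by at most one, so ell**ceil(i/2) <= 4*g*p**3*ell."""
    i = 1
    while ell ** math.ceil(i / 2) <= 4 * g * p ** 3:
        i += 1
    return i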
http://arxiv.org/abs/2407.11936v1
20240716172650
Thermal Imaging and Radar for Remote Sleep Monitoring of Breathing and Apnea
[ "Kai Del Regno", "Alexander Vilesov", "Adnan Armouti", "Anirudh Bindiganavale Harish", "Selim Emir Can", "Ashley Kita", "Achuta Kadambi" ]
cs.CV
[ "cs.CV" ]
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, Shell et al.: Bare Demo of IEEEtran.cls for IEEE Journals Thermal Imaging and Radar for Remote Sleep Monitoring of Breathing and Apnea Kai Del Regno, Alexander Vilesov, Adnan Armouti, Anirudh Bindiganavale Harish, Selim Emir Can, Ashley Kita, Achuta Kadambi K. Del Regno, A. Vilesov, S. E. Can, A. Armouti, A. B. Harish, and A. Kadambi, at the time of the project, were with the Department of Electrical and Computer Engineering, University of California, Los Angeles. Email: ktdraper@g.ucla.edu. A. Kita is with the David Geffen School of Medicine at the University of California, Los Angeles. July 22, 2024 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Polysomnography (PSG), the current gold standard method for monitoring and detecting sleep disorders, is cumbersome and costly. At-home testing solutions, known as home sleep apnea testing (HSAT), exist. However, they are contact-based, a feature which limits the ability of some patient populations to tolerate testing and discourages widespread deployment. Previous work on non-contact sleep monitoring for sleep apnea detection either estimates respiratory effort using radar or nasal airflow using a thermal camera, but has not compared the two or used them together. We conducted a study on 10 participants, ages 34 - 78, with suspected sleep disorders using a hardware setup with a synchronized radar and thermal camera. We show the first comparison of radar and thermal imaging for sleep monitoring, and find that our thermal imaging method outperforms radar significantly. Our thermal imaging method detects apneas with an accuracy of 0.99, a precision of 0.68, a recall of 0.74, an F1 score of 0.71, and an intra-class correlation of 0.70; our radar method detects apneas with an accuracy of 0.83, a precision of 0.13, a recall of 0.86, an F1 score of 0.22, and an intra-class correlation of 0.13. We also present a novel proposal for classifying obstructive and central sleep apnea by leveraging a multimodal setup. This method could be used accurately detect and classify apneas during sleep with non-contact sensors, thereby improving diagnostic capacities in patient populations unable to tolerate current technology. digital-health, remote health, multi-modal, sleep apnea, sleep monitoring § INTRODUCTION Sleep apnea is a condition in which breathing is interrupted during sleep. One form of sleep apnea is obstructive sleep apnea (OSA), which occurs due to airway narrowing or obstruction during sleep <cit.>. This is in contrast to central sleep apnea (CSA) which occurs when pauses in breathing are due to problems with communication between the brain and muscles for respiration <cit.>. Both forms of sleep apnea cause the affected person to have reduced breathing (hypopnea) or pauses in their breathing (apnea). These disorders are clinically defined and categorized into severities based on the apnea-hypopnea index (AHI) - the number of respiratory events per hour of sleep <cit.>. 
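As a small illustration, the AHI and the associated severity grade can be computed directly from a count of scored respiratory events. The cut-offs below follow the bands quoted later in this paper (mild 5–15, moderate 15–30, severe ≥ 30); the function itself and the labelling of AHI < 5 as "normal" are illustrative assumptions rather than part of the original.

def ahi_severity(num_events, sleep_hours):
    """Apnea-hypopnea index = respiratory events per hour of sleep,
    graded with the severity bands used in the dataset description."""
    ahi = num_events / sleep_hours
    if ahi < 5:
        return ahi, "normal"
    if ahi < 15:
        return ahi, "mild"
    if ahi < 30:
        return ahi, "moderate"
    return ahi, "severe"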
Untreated sleep apnea can result in daytime sleepiness, leading to a higher risk of motor vehicle and workplace accidents, as well as quality of life impacts, higher risk of cardiovascular health issues, and metabolic dysregulation, resulting in an increased risk of diabetes <cit.>. OSA, in particular, is a tremendous public health problem that affects roughly 17% of women and 34% of men and is likely underdiagnosed <cit.>. The gold standard for diagnosing sleep apnea disorders is polysomnography (PSG) conducted in a sleep lab <cit.>. At-home testing with a portable monitor, known as home sleep apnea testing (HSAT), is also considered acceptable so long as the portable monitor, at minimum, measures nasal airflow, respiratory effort, and blood oxygenation. In both of these methods, signals are recorded throughout the patient's sleep and scored by a trained sleep technician according to the American Academy of Sleep Medicine (AASM) criteria. An apnea is scored if the nasal airflow signal amplitude drops by at least 90% for at least 90% of a duration of least 10 seconds <cit.>. Apnea may be classified as obstructive if there is continued or increased respiratory effort during the duration, as central if respiratory effort is absent during the duration, or mixed if respiratory effort is initially absent but returns while nasal airflow is still reduced. Hypopnea is scored if the nasal airflow signal amplitude drops by at least 30% for at least 90% of a duration of at least 10 seconds and the blood oxygenation desaturates by at least 4% across that duration. Respiratory-effort-related arousal is scored if there is a sequence of breaths lasting at least 10 seconds where nasal airflow decreases or respiratory effort increases, and arousal from sleep occurs <cit.>. The AASM recommends respiratory monitoring in PSG through the use of oronasal thermal sensors for apnea identification, nasal pressure transducers for hypopnea identification, esophageal manometry or dual thoracoabdominal inductance plethysmography belts for respiratory effort monitoring; a pulse oximeter for blood oxygenation monitoring; a microphone, piezoelectric sensor, or nasal pressure transducer for monitoring snoring; an arterial, transcutaneous, or end-tidal PCO2 sensor for hypoventilation detection <cit.>. We visualize salient examples of how breathing and apneas manifest on sleep lab airflow and respiratory effort sensors in <ref>. For HSAT, the AASM recommends at least a nasal airflow sensor, a respiratory effort sensor, some type of oxygen saturation sensor, and a heart rate sensor, either using photoplethysmography or electrocardiography. Optionally, they recommend a sensor for body position, a sensor for sleep/wake monitoring, and a sensor for snoring using either nasal pressure, a microphone, or a piezoelectric sensor. A contactless method of detecting and differentiating obstructive, central, and mixed events would allow for individuals who do not tolerate contact sensors (e.g. young children and individuals with intellectual disabilities) to be assessed for sleep apnea. Sleep apnea is underdiagnosed and undertreated in these populations due to poor patient ability to tolerate current PSG or HSAT <cit.>. A contactless method of evaluation for sleep apnea may allow for repeat studies to be performed more easily to assess how well an intervention (e.g., sleeping on one’s side, using an oral appliance, or undergoing a surgical procedure) changes one's sleep apnea. 
This would allow patients to try different management techniques and see what works best for them. Furthermore, information could be transmitted wirelessly if interpretation by a technician is required or automatically through the use of an analysis app. Therefore, finding alternatives to conventional sleep monitoring that are both non-contact and can be conducted in a patient's home can greatly improve patient care and speed of diagnosis. Inspired by the use of multimodal camera+radar setup for remote vital sensing <cit.>, we investigate using thermography-based nasal airflow measurements as a replacement for the contact nasal airflow sensors and radar-based respiratory effort measurements as a replacement for chest motion sensors. We present the following contributions of this work: * A comparison of radar and thermal modalities for remote sleep monitoring detection of breathing and sleep apnea. * A non-contact multimodal thermal and radar stack for detection and clinically relevant classification of apnea. * A dataset composed of 10 sleeping patients with hardware-synchronized thermal videos, frequency-modulated continuous wave (FMCW) radar data, and ground truth waveforms and annotations by a certified sleep technician at a sleep center. In addition, we open-source our data-collection framework, code base, and circuit schematics for collecting hardware-synchronized radar and thermal data. § RELATED WORKS §.§ Thermography for Airflow Estimation Thermal imaging has been explored for many medical applications <cit.>. In clinical PSG, a thermistor measures nasal airflow by detecting the thermal fluctuations near nostrils due to breathing <cit.>. These fluctuations are also visible in thermal videos of a patient’s face <cit.>. Mozafari et al. decompose the videos into rank-1 tensors and perform a spectral analysis on their power spectral densities to estimate the signal <cit.>. Szankin et al. uses a slow-fast convolutional neural network <cit.> as a regressor to estimate the airflow signal <cit.>. Methods for estimating airflow from thermal videos have also been validated on newborns <cit.> and preterm infants <cit.>. In addition to the previously discussed unimodal methods, several multimodal methods have been proposed that augment low-resolution thermal videos with RGB <cit.> and depth <cit.> cues for lower-cost hardware deployment. §.§ Visual and Wireless Sensing for Respiratory Effort Respiratory effort can be defined as the muscle movements of the chest that drive respiration <cit.>, visible to the human eye as the expansion of the chest, abdomen, and neck as the lungs fill with air. Clinically, they are detected using esophageal manometry or inductance plethysmography <cit.>. Several video-based algorithmic <cit.>, and data-driven approaches  <cit.> have been proposed to extract the respiratory effort signal. In addition to visual methods, thoracic and abdominal vibrations can be measured using wireless sensors, such as impulse-radio radars <cit.>, Doppler radars <cit.>, and FMCW radars  <cit.>. These approaches include both algorithmic  <cit.> and data-driven <cit.> solutions to estimate the respiratory effort. §.§ Automatic Apnea Detection Several methods have been developed to detect sleep apneas from ground truth breathing data using wavelet features  <cit.>, and neural networks (e.g., MLPs, ANNs, CNNs, and LSTMs) <cit.> of many architectures and envelope detection  <cit.>. Methods have also been developed to detect sleep apneas from features of heart rate waveforms <cit.>. 
Several methods have been developed using nasal airflow information from infrared thermography to detect apneas in conjunction with either signal processing  <cit.> or deep learning  <cit.>. An et al. propose a method to detect sleep apneas using nasal airflow information from infrared optical gas imaging <cit.>. Parallel work has also been conducted to detect apnea events from acoustic recordings  <cit.>. Ambient recordings of the patient have been analyzed to identify breathing signals and isolate periods where the breathing stops or is obstructed. While they can be effective for apnea detection, these methods do not directly measure the same clinically relevant physiological signals as the AASM recommended contact sensors <cit.>. Kang et al. provide a method for detecting apnea events using a respiratory effort signal measured with an impulse-radio radar only <cit.>. A high-quality signal is extracted by performing a range-doppler analysis, followed by a Kalman filtering operation <cit.>. Binary classifiers are then trained to predict apneas based only on this estimated respiratory effort signal  <cit.>. Akbarian et al. provide a method for detecting apnea events from near-infrared videos of patients by computing the optical flow between frames of the videos and using a convolutional neural network to classify 10-second durations of the optical flow between apneic and non-apneic breathing with technician-annotated data as supervision <cit.>. Carter et al provide a method for detecting apnea events using both respiratory and photoplethysmograph waveforms extracted from near-infrared videos <cit.> While airflow and respiratory effort methods can provide good accuracy in remote apnea detection, an important vital sign that is missed by these methods is SpO2. For non-contact methods, the most common method to achieve remote SpO2 extraction is through two NIR cameras with different wavelength sensitivities. Typically, these methods first detect a remote-photoplethysmography signal <cit.>, followed by application of the ratio-of-ratios method <cit.>. This general framework has been applied in several instances <cit.> with various algorithmic innovations to extract a better remote photoplethysmography signal. Other work also detects SpO2 using spectroscopic methods with a multi-aperture camera <cit.>. § BACKGROUND We provide a brief background of the primary mechanisms of the thermal and radar modalities relevant to understanding this work in sections <ref> and <ref>. §.§ Thermal Imaging We overview the thermal camera image signal processing and underlying physics used to estimate changes in the temperature of the human body. All objects with a temperature higher than absolute zero (-273.15^∘ C) emit electromagnetic radiation that scales with the temperature of the body. The relationship is described by Planck's law. For a given wavelength λ, an object with emissivity ε and temperature T has spectral radiant exitance I(λ, T) <cit.>: I(λ, T) = 2πε hc^2/λ^51/e^hc/(λ kT) - 1 W m^-2, where h=6.63· 10^-34 J s (Planck's constant), k=1.38· 10^-23 J K^-1 (Boltzmann constant) and c=3· 10^8 m s^-1. However, since cameras integrate incoming radiance over a range of wavelengths, <ref> can be rewritten to obtain the total radiant flux emitted by a surface as: I(T) =∫_λ M(λ, T) dλ = εσ T^4, also known as the Stefan-Boltzmann law, where σ=5.67· 10^-8 W m^-2 K^-4 is the Stefan-Boltzmann constant. The thermal radiation is proportional to the fourth power of temperature scaled by emissivity. 
Estimation of temperature can proceed by taking the fourth root of <ref> as. T = √(I(T)/εσ). Thermal cameras are excellent tools to measure changes in temperature in an environment, regardless of whether the camera is radiometric or not. For humans, most of this power is located in the infrared region which constrains thermal cameras to sense in the 8-14 μ m region. Several prior works  <cit.> have shown that thermal cameras can be used to detect the respiratory waveform due to the rhythmic change in temperature around the facial region due to breathing. For further details about thermal imaging characteristics and methodology, we defer the reader to Vollmer et al.  <cit.>. §.§ Radar Frequency modulated continuous wave (FMCW) radar emits and receives (reflected) chirps, that are linearly frequency modulated electromagnetic (EM) waves. The unique framework of FMCW radars modulates the transmitted chirp with the received chirp to produce a signal that allows for rough estimation of absolute distance on a centimeter resolution by observing its frequency as well as measuring fine-grained changes in distance at micrometer resolutions by observing changes in the phase of the chirp. The transmitted and received signal, s(t) and u(t) can be modeled as: s(t) = A_s cos(2π f_c t + π k t^2), 0 < t < T_c. u(t) = A_u cos(2π f_c (t-t_d) + π k (t-t_d)^2), t_d < t < T_c. where k is the frequency slope (the rate of change of frequency of the chirp), f_c is the starting frequency of the chirp, T_c is the duration of the chirp transmission, and t_d is the time delay between the start of transmission and arrival of the reflection. The time delay is proportional to the round trip distance, t_d = 2R/c, where R and c are the range of the object and speed of light respectively. The radar modulates the received chirp with the still transmitting signal. The resulting signal is proportional to s(t) · u(t) and contains 2 components: a beat signal component with a frequency equal to the frequency difference of s(t) and r(t), Δ f = k t_d, and a high frequency component situated near 4π f_c. The higher frequency component is filtered out and thus generating m(t). For brevity, the following equations are represented with just the in-phase component. The signal m(t), can then be written as: m(t) ∝ cos(2π f_c t_d + 2 π (k t_d) t + π k t_d^2), t_d < t < T_c. Equation <ref> can be rewritten into a more succinct form: m(t) ∝ cos (ω t + ϕ), t_d < t < T_c, ω = 4 πkR/c, ϕ = 4πR/λ. The signal's phase and frequency depend on the range, R. They can be extracted through a discrete Fourier transform (DFT) of the signal after being passed through an analog to digital converter (ADC). The frequency term, ω=2πΔ f, provides the range through the following relation R = cΔ f/2k. The phase term ϕ is inversely proportional to the wavelength of the radar, λ=c/f_c. The range of an object can be parameterized as R(t) = R_o + r(t), where r(t) models changes due to vibrations. To extract a breathing rate, r(t) needs to be sampled with multiple chirps, thus creating a range matrix that is depicted in <ref>. Typically, the frequency term cannot be used to extract the possibly sub-centimeter displacement of the chest since the frequency resolution is on the order of centimeters. Instead, we use the highly sensitive phase to determine the oscillations of r(t) <cit.>. § METHODS We begin by describing breathing waveform and respiratory rate (RR) extraction in <ref>, followed by apnea detection in <ref> for both the radar and thermal modalities. 
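Before detailing these steps, the FMCW relations of the preceding background section can be made concrete: the coarse range follows from the beat frequency, R = cΔf/2k, and the fine chest displacement from the unwrapped phase, r = λφ/4π. The following single-target sketch is an illustration under assumed parameter names and is not the processing chain used in this work.

import numpy as np

def range_and_displacement(chirps, fs, slope, fc, c=3e8):
    """chirps: (num_chirps, num_samples) complex ADC samples of the beat signal.
    Coarse range from the dominant beat frequency; fine displacement from the
    unwrapped phase of that range bin across chirps."""
    spectra = np.fft.fft(chirps, axis=1)          # range FFT per chirp
    power = np.mean(np.abs(spectra) ** 2, axis=0)
    n = chirps.shape[1]
    bin_idx = np.argmax(power[: n // 2])          # dominant (positive-frequency) bin
    df = bin_idx * fs / n                         # beat frequency
    R = c * df / (2 * slope)                      # coarse range, cm-level resolution
    phase = np.unwrap(np.angle(spectra[:, bin_idx]))
    lam = c / fc
    displacement = lam * phase / (4 * np.pi)      # micrometer-scale chest motion
    return R, displacement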
An overview of the process is shown in <ref>. We conclude with a description of our proposed apnea classification method in <ref>. §.§ Breathing and Respiratory Rate (RR) Estimation While both thermal cameras and radars yield RR estimates, they do so by sensing slightly different physical phenomena. Thermal cameras monitor intensity changes, while radars track the instantaneous displacement of the chest (and/or the abdomen) to calculate the breathing rate. §.§.§ Pre-processing Thermal Recordings The breathing rate information is primarily located in a small region below the nostrils caused by temperature changes during inhalation of room temperature air or exhalation of warm air from the lungs. Therefore, all videos of patients are manually cropped to a tight region around the nose as shown in <ref>. §.§.§ Airflow and RR from Thermography A simple approach taken by prior works <cit.> collapse the video to obtain a 1D temporal signal, x[t], by spatially averaging each frame. This is followed by filtering operations to limit frequencies to the accepted range of breathing rates. However, this approach does not translate perfectly to our setting because we do not have control over patient motion, which severely degrades the signal. Since motion and disturbances can be modeled as spikes or delta functions, we can compensate for them by averaging the derivative signal over a N=25 frame window following by a derivative operation to remove low frequency trends. We found that this operation dampens artifacts from motion as well as compensates for any spurious calibrations that the thermal camera requires. The operation can be written as: y[t] = 1/N∑_i=-⌊ N/2 ⌋^⌊ N/2 ⌋ x[t+i]-x[t+i-1]. §.§.§ Respiratory Effort and RR from Radar Sensors The initial processing of FMCW radar data is described in <ref>. Here, we describe additional processing steps that are required for the detection of respiratory effort in a sleeping patient. Once a range matrix is constructed, the first step is to find the primary range bin that a person is located in. This is usually chosen as the range bin with the maximum power <cit.>. However, this assumes that the patient is the main object in view, which is not always the case in a sleep monitoring setting where the radar is placed on the side of the patient. We improve upon this by taking a window of range bins, M, around the maximum power range bin and choosing the range bin with the maximum SNR. We calculate the SNR of of the ith unwrapped range bin, x_i[t] as: α_i = ∑_f ∈ F_signal|X_i[f]|^2/∑_f ∈ F_noise|X_i[f]|^2, where X_i[f] is the DFT representation of x_i[t], F_signal is a small set of frequency bins centered around the frequency bin that contains the most power in the range of 0.1-0.5 Hz, and F_noise contains the remaining frequency bins in the breathing frequencies. The final breathing signal, y[t] is: y[t] = α_i^*· x_i^*[t]. where i^* = argmax_i α_i. We additionally process this signal with <ref> to also help with motion and phase unwrapping artifacts. §.§ Sleep Apnea Detection We employ an envelope detection algorithm for detecting sleep apnea events remotely using a thermal camera or a radar sensor. With conventional algorithms, such as the Hilbert Transform, limited to narrow-band fluctuations <cit.>, we instead opt to detect critical points in the signal and process them for continuous predictions of the lower and upper envelopes of the signal. In addition to envelop detection, detecting motion is crucial for detection to filter false positives. 
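The two signal-conditioning steps above, the N-frame averaged derivative and the SNR-weighted range-bin selection, can be sketched as follows. The breathing band limits, array shapes, and function names are assumptions consistent with the text rather than the authors' implementation.

import numpy as np

def windowed_derivative(x, N=25):
    """y[t] = mean over an N-frame window of the first difference of x."""
    dx = np.diff(x, prepend=x[0])
    kernel = np.ones(N) / N
    return np.convolve(dx, kernel, mode="same")

def select_breathing_bin(range_phases, fs, band=(0.1, 0.5)):
    """Pick the unwrapped range bin with the highest SNR in the breathing band
    and return the SNR-weighted signal alpha_i* * x_i*[t]."""
    best_snr, best_sig = -np.inf, None
    n = range_phases.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    for x in range_phases:                      # each candidate bin, already unwrapped
        X = np.abs(np.fft.rfft(x)) ** 2
        peak = np.argmax(np.where(in_band, X, 0.0))
        idx = np.arange(len(X))
        signal_bins = (idx >= peak - 1) & (idx <= peak + 1)
        noise = X[in_band & ~signal_bins].sum()
        snr = X[signal_bins].sum() / max(noise, 1e-12)
        if snr > best_snr:
            best_snr, best_sig = snr, x
    return best_snr * best_sig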
§.§.§ Motion Detection After the thermal video is captured, it is processed and cropped in real time to eliminate any artifacts related to the background of the video. In some instances, significant amounts of patient motion can hinder the data quality of the thermal camera due to the lack of visibility of ora-nasal airflow from the nose of the patient. To combat this, we designed a peak-based motion detection algorithm. A common way patient motion manifests itself in the breathing signal is in the form of singular high-magnitude peaks that significantly alter the envelope of the thermal signal. In order to filter out these peaks, for each key point, we compute the average distance to its K nearest neighbors and filter out signal chunks surrounding the key point whose distance metric is unusually high. That is, given an array of peaks, s[p] ∈ℝ^P with P local minima or maxima, the filtered set of peaks is given by: {s[i] | ∑^k = i - 1 + K/2_k = i - K/2 |s[k+1] - s[k]|/K > β=2.5 } §.§.§ Envelope Detection After filtering for motion, we extract the breathing signal as described in <ref>. To find the envelope of the breathing signal, s, we need to determine its keypoints. We reconstruct the lower envelope with the minima points and the upper envelope with maxima points. The noisy nature of the signal necessitates a denoising operation, both pre and post-keypoint detection. Once we filter out unwanted key points, a continuous version of the lower and upper envelopes of the signal is constructed via linear interpolation. We then use the envelope difference normalized by its mean for apnea detection. Several data-driven methods <cit.> also exist for detecting sleep apneas from contact-based respiratory signals. However, due to the limited size of our non-contact dataset, we cannot replicate machine-learning driven algorithms. §.§ Sleep Apnea Classification Differentiating between OSA and CSA is difficult with access to only one modality. However, the reader may notice in <ref> that during a CSA, both the respiratory effort and nasal airflow signals decrease in amplitude, while for OSA, only the nasal airflow decreases in amplitude. We propose leveraging this observation to perform sleep apnea classification using both the thermal and radar modalities, where the remote sensors replace the nasal airflow and respiratory effort sensors, respectively. Classification then reduces to simple boolean algebra. Given the apnea predictions from radar, y_Radar(t), and thermal, y_Thermal(t), (where y(t)=1 and y(t)=0 denote apnea present and no apnea present, respectively), then we can formula CSA and OSA classification as: y_CSA(t) = y_Radar(t)· y_Thermal(t) y_OSA(t) = y_Thermal(t)·(1-y_Radar(t)) § EXPERIMENTS AND RESULTS §.§ Hardware Setup As part of our clinical validation, six-hour recordings of patients participating in their PSG study were captured. Our hardware setup primarily consists of a thermal camera and radar placed in the periphery of the bed, visualized in <ref>. This is in addition to the existing ground truthing equipment used in PSG studies. A radiometrically calibrated Teledyne FLIR Boson with a 512 × 640 resolution was positioned to the side of the bed and aimed at the face. This choice of placement was necessitated by the constraints of existing PSG procedures. As a result, the patient’s nose was not always visible; for example, if they turned to the side facing opposite the camera, their face would be obscured. 
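Returning briefly to the classification rule stated at the end of the previous section, the multimodal decision reduces to element-wise boolean algebra on the per-frame apnea predictions of the two sensors. A direct sketch, with assumed binary array inputs, is given below.

import numpy as np

def classify_apnea(y_thermal, y_radar):
    """y_thermal, y_radar: binary arrays (1 = apnea detected in that frame).
    CSA: both airflow (thermal) and respiratory effort (radar) are absent.
    OSA: airflow is absent while respiratory effort continues."""
    y_thermal = np.asarray(y_thermal, dtype=int)
    y_radar = np.asarray(y_radar, dtype=int)
    y_csa = y_radar * y_thermal
    y_osa = y_thermal * (1 - y_radar)
    return y_csa, y_osa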
Beyond these experimental constraints, as a product set, we conjecture that a ceiling-mounted thermal camera could alleviate concerns about occlusions. We also place an AWR1443BOOST FMCW radar beside the thermal camera and in the periphery of the patient, next to the bed. Similar placement constraints exist for the radar; however, unlike the thermal camera, the radar is not adversely affected by the orientation of the patient. We balance attenuating factors from lateral position shifts and elevation offsets by operating the radar with all 3 transmitters and 4 receivers enabled. This allows us to perform beamforming post-acquisition and improve the SNR. A synchronization signal was sent via an Arduino microcontroller to align the ground truth signals recorded by the external hardware used in contemporary PSG studies, as well as triggering the thermal and radar sensors at a 30 Hz rate. The alignment was performed on the vital sign recordings, as well as other ground truth labeling obtained from the PSG and technicians involved with the sleep study. A full-night or split-night PSG was recorded and annotated by a trained sleep technician in accordance with AASM guidelines  <cit.>. §.§ Dataset Details 10 patients with suspected OSA undergoing full-night or split-night PSG were enrolled in this study. Of these patients, 1 was diagnosed with mild OSA (5 ≤ AHI ≤ 15), 3 were diagnosed with moderate OSA (15 ≤ AHI ≤ 30), and 1 was diagnosed with severe OSA (AHI ≥ 30). The patients provided their written informed consent in accordance with our Institutional Review Board permissions, and all methods were performed in compliance with relevant guidelines and regulations of the University of California, Los Angeles. This study was approved by the UCLA Institutional Review Board, IRB#21-000018. In our dataset, we discarded one patient's data due to synchronization error and another patient's data due to improper setup of the thermal camera. We also discarded periods of the recordings where the patients' noses were not in the frame of the thermal video. The entire dataset contains 20 hours and 25 minutes of synchronized thermal videos, radar recordings, and ground truth PSG recordings. Of the patients in the final dataset, two patients had sleep apnea events during the valid recording periods. When calculating metrics in Table 1, we over-sampled 1-minute-long chunks of the data twenty times for every 5 minutes of data, resulting in an over-sampled dataset that is 59 hours 40 minutes long. Using our motion detection algorithm, we classified that 34 hours and 42 minutes of data in the over-sampled dataset do not contain significant levels of motion. §.§ Results Breathing Estimation Evaluation We present qualitative in <ref> as well as in a Bland-Altman plot in <ref>. We also show qualitative results of the estimated breathing waveforms in <ref>. We find that the thermal modality outperforms the radar in breathing rate estimation. We hypothesize that this is due to reflections from the 77 GHz radar being specular. This can lead to a reduced signal when a patient's chest is not perpendicular to the radar's optical axis. We also find that our radar smoothing and SNR weighting scheme, <ref>, improves upon prior breathing estimation methods <cit.>. Sleep Apnea Detection We demonstrate the qualitative results in <ref>. Overall, from <ref>, we found that thermal imaging provides more robust sensing of apneas than the radar when accounting for motion. 
While the radar was able to achieve the highest recall, it suffered from low precision. We believe that this is due to the sensitivity of the radar wave's phase to movement. Even small movements can cause changes in the amplitude of the signal, resulting in false positives. Using motion detection and handling is advantageous as it allows us to filter out parts of the signals that have anomalies due to patient motion. The motion handling algorithm removes adverse distribution shifts to the distribution of local maxima and minima that make up the envelope of the signal, causing our algorithm to classify false-positive apnea events. Furthermore, our motion handling algorithm is not only limited to our apnea detection algorithm but can also be used to inform other apnea detection or breathing rate estimation algorithms about the presence of motion and allow for proper handling. However, motion detection comes with the tradeoff of possibly discarding sleep apnea samples. For the final results, we used a threshold of 0.4 for thermal and 0.5 for radar for the motion detection algorithm and a window size of 23 for the upper and lower envelopes. Two patients had apnea events in the valid recording period. The first participant had 20 ground truth apneas (1 OSA and 19 CSA): 18 of the apneas were predicted by the thermal camera data and 16 of the apneas were predicted by the radar data by our algorithm. The second participant had 7 ground truth apneas (1 OSA and 6 CSA): 2 of the apneas were predicted by the thermal camera data and 4 of the apneas were predicted by the radar data. predicted apneas on the thermal camera data. Sleep Apnea Classification We also demonstrate an application of using both radar and thermal modalities for apnea classification between OSA and CSA. In <ref>, we can see an example of OSA and CSA from our dataset classification according to <ref> and <ref>. Due to the small number of OSA and CSA examples in our dataset, we can only show qualitative multimodal results of OSA and CSA classification. § LIMITATIONS Due to our radar and thermal setup, we are not able to incorporate blood oxygenation estimation techniques, although the AASM guidelines recommend that a blood oxygenation sensor be included in both PSG and HSAT. Therefore, since blood oxygenation is a criterion for hypopnea classification, hypopnea detection using completely non-contact methods is not possible in alignment with AASM criteria. However, it may be possible to use relaxed criteria to score hypopneas, omitting the blood oxygenation information. Furthermore, recent work has suggested that oxygen saturation can be remotely acquired in limited settings <cit.>. We hope that in future work, we can incorporate these methods to enable full non-contact monitoring. Additionally, as previously stated in <ref>, extreme sleeping positions can obscure the nose from the thermal camera, the modality found to be crucial for apnea detection. While remote sensing has the potential to benefit patients, it is still a new technology that warrants further studies to understand generalizability and fairness <cit.> of the technology to diverse patient populations. § CONCLUSION We propose a novel method of contactless nasal airflow and respiratory effort estimation of apnea to function as a real-time drop-in replacement for existing contact sensors. We verify the performance of these methods using data collected from patients with suspected sleep apnea. 
We demonstrate that these methods can be used to detect sleep apnea events and even potentially distinguish between some obstructive and central apnea events. These methods could lead to the development of a portable, repeatable, non-contact diagnostic tool for populations underdiagnosed with sleep disorders due to their inability to access or tolerate current PSG and HSAT diagnostics. § ACKNOWLEDGMENT The authors would like to thank Dr. Jean-Paul Chretien and Dr. Rebecca McFarland for their valuable feedback and comments throughout the course of the project. The authors would also like to thank UCLA Sleep Disorders Center and its staff, especially Julie Toomey and Weiguang Zhong; undergraduate students Rishabh Sharma, Rui Ma, Jianchong Ma, and Julia Craciun for their assistance in developing parts of the preprocessing code; Junaid Ahmad for his work designing protective circuits for data collection; medical student Clare Moffatt for help with data collection; and Dr. Laleh Jalilian for valuable advice in collecting clinical data. Authors on this research were supported by a DARPA Young Faculty Award; Achuta Kadambi was supported by an National Science Foundation (NSF) CAREER award IIS-2046737, Army Young Investigator Program Award, and Defense Advanced Research Projects Agency (DARPA) Young Faculty Award. acm
http://arxiv.org/abs/2407.13665v1
20240718164007
Quasi-optimal mesh generation for the virtual element method: A fully adaptive remeshing procedure
[ "Daniel van Huyssteen", "Felipe Lopez Rivarola", "Guillermo Etse", "Paul Steinmann" ]
math.NA
[ "math.NA", "cs.NA" ]
Projection-based model-order reduction for unstructured meshes with graph autoencoders [ July 22, 2024 ====================================================================================== § ABSTRACT The mesh flexibility offered by the virtual element method through the permission of arbitrary element geometries, and the seamless incorporation of `hanging' nodes, has made the method increasingly attractive in the context of adaptive remeshing. There exists a healthy literature concerning error estimation and adaptive refinement techniques for virtual elements while the topic of adaptive coarsening (i.e. de-refinement) is in its infancy. The creation of a quasi-optimal mesh is based on the principle of quasi-even error distribution over the elements which inherently relies on localized refinement and coarsening techniques. Thus, necessitating a fully adaptive remeshing procedure. In this work a novel fully adaptive remeshing procedure for the virtual element method is presented. Additionally, novel procedures are proposed for the identification of elements qualifying for refinement or coarsening based on user-defined targets. Specifically, a target percentage error, target number of elements, or target number of nodes can be selected. Numerical results demonstrate that the adaptive remeshing procedure can meet any prescribed target while creating a quasi-optimal mesh. The proposed fully adaptive procedure is of particular interest in engineering applications requiring an efficient simulation of a given accuracy, or desiring a simulation with the maximum possible accuracy for a given computational constraint. § INTRODUCTION An optimal mesh represents the pinnacle for the numerical analyst or engineer. An optimal mesh satisfies a specified accuracy target while meeting some `mesh optimality criterion'. Through this combination of objectives an optimal mesh represents the most computationally efficient solution to a specific problem. In the context of the finite element method (FEM), the generation of optimal meshes has been studied since (at least) the early 1970s. These works typically considered a mesh to be optimal if it met a specified accuracy target with the mesh optimality criterion of minimizing the number of element or nodes in the mesh <cit.>. A somewhat more recent approach has emerged in which a mesh is considered optimal if it meets the target accuracy with the optimality criterion of equal error distribution over all elements in the mesh <cit.>. In either case, the mesh optimality criterion results in the computation of an element resizing parameter that defines whether a particular element should be refined, coarsened (i.e. de-refined), or is optimally sized. In recent times the attention paid to, and number of publications concerning, mesh optimization comprising combined refinement and coarsening processes has seemingly waned. However, there remains great interest in adaptive refinement techniques. The decreased attention in combined processes is likely due to ever increasing advances in computational power that permit very fine mesh simulations of requisite accuracy. These fine mesh simulations may be sufficiently accurate, however, they are inherently inefficient and wasteful of resources. Furthermore, the perceived implementation difficulties, and limited number of works investigating adaptive coarsening techniques may deter one from implementing a traditional mesh optimization procedure comprising combined refinement and coarsening processes. 
Adaptive remeshing techniques for the FEM are already well-established and there exists a wide range of approaches to a-posteriori error estimation <cit.> and a variety of tools/packages for the creation of updated meshes <cit.>. Performing localized refinement or coarsening of finite element meshes is non-trivial as significant manipulation of not only the elements being adapted but also of the surrounding elements is required to preserve the method's conformity. In general, coarsening of finite element meshes is more complex than refinement. As such, most (possibly all) coarsening processes performed using finite elements only reverse previously performed refinement to return a mesh, or parts thereof, to an initially coarser state, see for example <cit.>. This is particularly problematic in the context of mesh optimization which necessitates the ability to locally refine or coarsen any given mesh which may not contain information about a previously coarser state. The introduction of the virtual element method (VEM) gave rise to many new opportunities in the context of adaptive remeshing. The VEM is an extension of the FEM that permits arbitrary polygonal and polyhedral element geometries in two- and three-dimensions respectively <cit.>. A feature of the VEM of particular interest in the context of adaptive remeshing is the permission of arbitrarily many nodes along an element's edge. That is, nodes that would be considered `hanging' in a finite element context are trivially incorporated into the VEM formulation <cit.>. The geometric robustness of the VEM has been demonstrated with the method exhibiting optimal convergence behaviour in cases of challenging, including strongly non-convex, element geometries <cit.>. Additionally, in cases of distorted, and possibly stretched, element geometries that could arise during adaptive remeshing (particularly during anisotropic remeshing) the VEM stabilization term can be easily tuned to improve the accuracy of the method <cit.>. Furthermore, the robustness of the VEM under challenging numerical conditions, such as near-incompressibility and/or near-inextensibility, is increasingly well reported <cit.>. The geometric flexibility and numerical robustness mean that the VEM is particularly well-suited to problems involving fully adaptive remeshing and mesh optimization. Adaptive remeshing techniques for the VEM is an area of rapidly growing interest. There are many works concerning a-posteriori error estimation <cit.> and several approaches have been presented for localized refinement of the unstructured polygonal element geometries permitted by the method <cit.>. Furthermore, recent works have proposed adaptive coarsening (i.e. de-refinement) techniques for VEM meshes with attention paid to the identification of elements, or groups of elements, to coarsen and the presentation of algorithms for performing the coarsening <cit.>. The geometric and numerical suitability of the VEM for problems involving mesh optimization, and the high degree of efficacy of even standalone adaptive refinement and coarsening procedures, strongly motivate the development of a mesh optimization procedure for the VEM. The generation of a truly optimal mesh would require complex algorithms for the precise resizing of individual elements based on the resizing parameter. Furthermore, since error-estimation is approximate, and depends on the mesh, the resizing would need to be performed iteratively. 
It is unlikely that this iterative procedure would stabilize and yield a fully optimal mesh within a reasonable (or even finite) number of iterations. Thus, in this work novel procedures are proposed for the creation of `quasi-optimal' meshes of virtual elements. These procedures consider various targets. Specifically, a target accuracy, target number of elements, or target number of nodes can be set. Additionally, the mesh optimality criterion of `quasi-equal' error distribution is chosen due to its simplicity of implementation and practical suitability. The procedures comprise the novel combination of adaptive refinement and coarsening algorithms previously proposed and numerically studied by the authors, as well as novel algorithms for the identification of elements to refine or coarsen. Finally, in this work the accuracy and efficacy of the novel fully adaptive procedure with the novel element selection algorithms is measured through an approximation of the well-known energy error. The structure of the rest of this work is as follows. The governing equations of linear elasticity are set out in Section <ref>. This is followed in Section <ref> by a description of the first-order virtual element method. The procedures used to generate, refine, and coarsen meshes are presented in Section <ref>. This is followed, in Section <ref>, by a description of the procedures used to compute local and global error estimations. Thereafter, the procedures for the selection of elements to resize via refinement and coarsening are presented in Section <ref> for various remeshing strategies. Section <ref> comprises a set of numerical results through which the performance of the various remeshing strategies is evaluated. Finally, the work concludes in Section <ref> with a discussion of the results. § GOVERNING EQUATIONS OF LINEAR ELASTICITY Consider an arbitrary elastic body occupying a plane, bounded, domain Ω⊂ℝ^2 subject to a traction and body force (see Figure <ref>). The boundary ∂Ω has an outward facing normal denoted by and comprises a non-trivial Dirichlet part Γ_D and a Neumann part Γ_N such that Γ_D∩Γ_N = ∅ and Γ_D∪Γ_N=∂Ω. In this work small displacements are assumed and the strain-displacement relation is given by ( ) = 1/2[∇ + [ ∇ ]^T] . Here the displacement is denoted by , is the symmetric infinitesimal strain tensor and ∇( ∙) = ∂( ∙)_i/∂ x_j e_i⊗e_j is the gradient of a vector quantity. Additionally, linear elasticity is assumed and the stress-strain relation is given by = ℂ : . Here, is the Cauchy stress tensor and ℂ is a fourth-order constitutive tensor. For a linear elastic and isotropic material (<ref>) is given by = λ( ) + 2μ , where ( ∙) denotes the trace, is the second-order identity tensor, and λ and μ are the well-known Lamé parameters. For equilibrium it is required that ÷ + = 0 , where ÷( ∙) = ∂( ∙)_ij/∂ x_je_i is the divergence of a tensor quantity. The Dirichlet and Neumann boundary conditions are given by = on Γ_D , and · = on Γ_N , respectively, with and denoting prescribed displacements and tractions respectively. Equations (<ref>)-(<ref>), together with the displacement-strain relationship (<ref>), constitute the boundary-value problem for a linear elastic isotropic body. §.§ Weak form The space of square-integrable functions on Ω is hereinafter denoted by ℒ^2(Ω). The Sobolev space of functions that, together with their first derivatives, are square-integrable on Ω is hereinafter denoted by ℋ^1(Ω). 
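Before stating the weak form, it is convenient to record the isotropic constitutive relation in Voigt form, since the matrix 𝔻 and its inverse appear repeatedly in the error estimates used later. The following plane-strain sketch is a generic illustration with assumed inputs, not code from this work.

import numpy as np

def lame_parameters(E, nu):
    """Lame parameters from Young's modulus E and Poisson's ratio nu."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

def constitutive_matrix_plane_strain(E, nu):
    """Voigt matrix D with ordering (eps_xx, eps_yy, gamma_xy) -> (sig_xx, sig_yy, sig_xy)."""
    lam, mu = lame_parameters(E, nu)
    return np.array([[lam + 2 * mu, lam,          0.0],
                     [lam,          lam + 2 * mu, 0.0],
                     [0.0,          0.0,          mu]])

def stress_from_strain(eps, D):
    """eps = [eps_xx, eps_yy, gamma_xy] (engineering shear); returns Voigt stress."""
    return D @ np.asarray(eps)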
Additionally, the function space 𝒱 is introduced and defined such that 𝒱 = [ ℋ^1_D(Ω) ]^d = { | v_i∈ℋ^1(Ω), = 0 on Γ_D} where d=2 is the dimension. Furthermore, the function _g∈[ ℋ^1(Ω) ]^d is introduced satisfying (<ref>) such that _g|_Γ_D=. The bilinear form a(·,·), where a : [ ℋ^1(Ω) ]^d×[ ℋ^1(Ω) ]^d→ℝ, and the linear functional ℓ(·), where ℓ : [ ℋ^1(Ω) ]^d→ℝ, are defined respectively by a(, ) = ∫_Ω( ) : ( ) dx , and ℓ( ) = ∫_Ω· dx + ∫_Γ_N· ds - a(_g, ) . The weak form of the problem is then: given ∈[ ℒ^2(Ω) ]^d and ∈[ ℒ^2(Γ_N) ]^d, find ∈[ ℋ^1(Ω) ]^d such that = + _g , ∈𝒱 , and a( , ) = ℓ( ) , ∀∈𝒱 . § THE VIRTUAL ELEMENT METHOD The domain Ω is partitioned into a mesh of non-overlapping arbitrary polygonal elements[If Ω is not polygonal the mesh will be an approximation of Ω.] E with ∪ E=Ω. Here E denotes the element domain and ∂ E its boundary, with ( ∙ ) denoting the closure of a set. An example of a typical first-order element is depicted in Figure <ref> with edge e_i connecting vertices V_i and V_i+1. Here i=1,…,n_ v with n_ v denoting the total number of element vertices. A conforming approximation of order k is constructed in a space 𝒱^h⊂𝒱 where 𝒱^h is built-up element-wise and comprises vector valued functions _h. The functions _h are those that are 𝒞^0 continuous on the domain Ω, are polynomials of degree ≤ k on element edges, and whose strain gradient divergence is a polynomial of degree ≤ k-2 on an element (see <cit.>). For the most general case of an approximation of arbitrary order k the space 𝒱^h|_E is defined as 𝒱^h|_E = {_h∈𝒱 | _h∈[ 𝒞^0(E) ]^2 , ∇^2 _h∈𝒫_k-2 on E , _h|_e∈𝒫_k(e) } . Here 𝒫_k(X) is the space of polynomials of degree ≤ k on the set X ⊂ ℝ^d with d=1, 2 and ∇^2=∇·∇ is the Laplacian operator. In this work a first-order, i.e. k=1, approximation is considered, thus (<ref>) simplifies to 𝒱^h|_E = {_h∈𝒱 | _h∈[ 𝒞^0(E) ]^2 , ∇^2 _h = 0 on E , _h|_e∈𝒫_1(e) } . All computations will be performed on element edges and it is convenient to write, for element E, _h|_∂ E = ·^E . Here, is a matrix of standard linear Lagrangian basis functions and ^E is a 2n_ v× 1 vector of the degrees of freedom associated with E. The virtual basis functions are not known, nor required to be known on E; their traces, however, are known and are simple Lagrangian functions. The virtual element projection for a first-order formulation Π : 𝒱^h|_E→𝒫_0(E) is required to satisfy ∫_EΠ _h·( ) dx = ∫_E(_h) ·( ) dx ∀∈𝒫_1 , where Π _h represents the ℒ^2 projection of the symmetric gradient of _h onto constants <cit.>. Since the projection is constant at element-level, after applying integration by parts to (<ref>), and considering (<ref>), the components of the projection can be computed as (Π _h)_ij = 1/21/|E|∑_e∈∂ E∫_e[ N_iA d_A^E n_j + N_jA d_A^E n_i] ds , where summation is implied over repeated indices. The virtual element approximation of the bilinear form (<ref>) is constructed by writing a^E(, ) : = a(, )|_E = ∫_E(_h) : [ ℂ : (_h) ] dx , where a^E(·,·) is the contribution of element E to the bilinear form a(·,·). Consideration of (<ref>) allows (<ref>) to be written as (see <cit.>) a^E(_h, _h) = ∫_EΠ _h : [ ℂ : Π _h] dx _Consistency term + ∫_E[ ( _h) : [ ℂ : ( _h) ] - Π _h : [ ℂ : Π _h] ] dx _Stabilization term , where the remainder term is discretized by means of a stabilization. §.§ The consistency term The projection (<ref>), and thus the consistency term, can be computed exactly yielding a_ c^E(_h, _h) = ∫_EΠ _h : [ ℂ : Π _h] dx = ^E·[ _ c^E·^E] . 
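For a first-order element the boundary integral defining the projection is evaluated exactly by the trapezoidal rule on each edge, so the projected strain reduces to a sum of edge contributions. The sketch below computes Πε for a single polygon from its vertex coordinates and nodal displacements, with counter-clockwise vertex numbering assumed; it is one plausible implementation rather than an excerpt of the authors' code.

import numpy as np

def projected_strain(coords, u):
    """coords: (n_v, 2) CCW vertex coordinates; u: (n_v, 2) nodal displacements.
    Returns the constant projected strain tensor Pi(eps) of the element."""
    n_v = len(coords)
    x, y = coords[:, 0], coords[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace
    eps = np.zeros((2, 2))
    for a in range(n_v):
        b = (a + 1) % n_v                              # edge from vertex a to vertex b
        edge = coords[b] - coords[a]
        length = np.linalg.norm(edge)
        n = np.array([edge[1], -edge[0]]) / length     # outward normal for CCW ordering
        u_avg = 0.5 * (u[a] + u[b])                    # trapezoidal rule: exact for linear trace
        eps += 0.5 * length * (np.outer(u_avg, n) + np.outer(n, u_avg))
    return eps / area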
Here _ c^E is the consistency part of the stiffness matrix of element E with ^E and ^E the degrees of freedom of _h and _h respectively that are associated with element E. §.§ The stabilization term The remainder term cannot be computed exactly and is approximated by means of a discrete stabilization term <cit.>. The approximation employed in this work is motivated by seeking to approximate the difference between the element degrees of freedom ^E and the nodal values of a linear function that is closest to ^E in some way (see <cit.>). The nodal values of the linear function are given by = 𝒟· . Here is a vector of the degrees of freedom of the linear function and 𝒟 is a matrix relating to with respect to a scaled monomial basis. For the full expression of 𝒟 see <cit.>. After some manipulation (see, again, <cit.>) the stabilization term of the bilinear form can be approximated as a_stab^E(_h, _h) = ∫_E[ ( _h) : [ ℂ : ( _h) ] - Π _h : [ ℂ : Π _h] ] dx ≈ ^E·[ _ s^E·^E] , where _ s^E is the stabilization part of the stiffness matrix of element E and is defined as _ s^E = μ[ - 𝒟·[ 𝒟^T·𝒟]^-1·𝒟^T] . The total element stiffness matrix ^E is then computed as the sum of the consistency and stabilization matrices. § MESH GENERATION, REFINEMENT AND COARSENING In this section the procedures used to generate meshes, refine elements and coarsen patches of elements are described. §.§ Mesh generation The mesh generation procedure used in this work is identical to that described in <cit.> and is summarized briefly here for the sake of self-containment. All meshes are created by Voronoi tessellation of a set of seed points. Seed points will be generated in both structured and unstructured sets to create structured and unstructured meshes respectively. In the case of structured meshes seeds points are placed to form a structured grid, while in the case of unstructured/Voronoi meshes seeds are placed arbitrarily within the problem domain. Hereinafter the terms `unstructured' and `Voronoi' meshes will be used interchangeably to refer to meshes created from arbitrarily placed seed points. An initial Voronoi tessellation of the seed points is created using PolyMesher <cit.>. Then, a smoothing algorithm in PolyMesher is used to iteratively modify the locations of the seed points to create a mesh in which all elements have approximately equal areas. Clearly, in the case of structured meshes the smoothing step is trivial. The mesh generation procedure is illustrated in Figure <ref> where the top and bottom rows depict the generation of structured and unstructured/Voronoi meshes respectively. §.§ Mesh refinement The mesh refinement procedure used in this work is identical to that described in <cit.> and is summarized briefly here for the sake of self-containment. Once an element has been marked for refinement the process is performed using a modified version of PolyMesher <cit.>. An overview of the element refinement procedure is illustrated in Figure <ref> for structured and unstructured/Voronoi meshes. The element marked for refinement is indicated in grey within the initial mesh. Refinement is performed by subdividing a marked element into smaller elements via Voronoi tessellation of a set of seed points, similar to the mesh generation process. For simplicity the number of seed points is chosen to be equal to the number of nodes of the element. 
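Returning to the stabilization term defined above, the matrix is simply μ times the orthogonal projector onto the complement of the range of 𝒟, and the total element stiffness is the sum of the consistency and stabilization parts. A minimal sketch, with 𝒟 assumed to be supplied, is given below.

import numpy as np

def stabilization_matrix(D, mu):
    """K_s = mu * (I - D (D^T D)^{-1} D^T): penalizes the part of the element
    displacement vector not representable by the closest linear function."""
    n = D.shape[0]
    P = D @ np.linalg.solve(D.T @ D, D.T)   # orthogonal projector onto range(D)
    return mu * (np.eye(n) - P)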
In the case of structured meshes the seeds are placed in a structured grid, while in the case of unstructured/Voronoi elements the seeds are placed randomly within the element. An initial Voronoi tessellation of the seeds is created and then smoothed using PolyMesher. After smoothing, a procedure is used to `optimize' the positions of the newly created nodes that lie on the edges of the original element (see <cit.>). The smoothed and optimized elements are depicted in the right-hand column of Figure <ref>. §.§ Mesh coarsening The mesh coarsening procedure used in this work is identical to that described in <cit.> and is summarized briefly here for the sake of self-containment and illustrated in Figure <ref>. The patch/group of elements to be coarsened/combined is indicated in grey. The geometry of the coarsened element is created by constructing a convex hull around the patch of elements as indicated in red. The geometries of the elements in the patch are modified to coincide with the convex hull using the edge straightening procedure proposed in <cit.>. Once the geometries of the marked elements, and the surrounding elements, have been modified the marked elements are deleted and one new element is created using the geometry of the convex hull. § ERROR ESTIMATION AND PREDICTION In this section the procedures for calculating the approximate local (element-level) and global errors are presented along with the procedure for the prediction of error after coarsening. In this work error is measured through the well-known energy error norm <cit.>. §.§ Global error estimation The global error in the ℋ^1 semi-norm, i.e. the energy error norm, is defined as e_ℋ^1 = [ 1/2 ∫_Ω [ σ^ex - σ^h ]^T 𝔻^-1 [ σ^ex - σ^h ] dΩ ]^0.5 , where σ^ex is the exact/analytical stress solution and 𝔻 is the constitutive matrix. In practical applications the exact stress is typically unknown and is replaced with an approximation σ^∗ (see Section <ref>). A relative energy error e_rel is introduced and defined as the ratio of the energy error e_ℋ^1 to the elastic energy of the deformed body U such that e_rel = e_ℋ^1/U , where the elastic energy is computed as U = [ 1/2 ∫_Ω [ σ^ex ]^T 𝔻^-1 [ σ^ex ] dΩ ]^0.5 . The global error e_ℋ^1 and global energy U can be computed as a sum of element-level contributions given by e_ℋ^1 = [ 1/2 ∑_i=1^n_el e_i ]^0.5 and U = [ 1/2 ∑_i=1^n_el U_i ]^0.5 respectively. Here n_el is the total number of elements in the domain with e_i and U_i respectively the element-level error and energy contributions. §.§ Local (element-level) error estimation The element-level error contribution to the global energy error is computed, approximately, for the i-th element as e_i ≈ |E_i|/n_v^i ∑_j=1^n_v^i [ [ σ^∗(𝐱_j) - σ^h(𝐱_j) ]^T 𝔻^-1 [ σ^∗(𝐱_j) - σ^h(𝐱_j) ] ] , where σ^∗ is an approximation of the exact stress σ^ex. Additionally, the area of the i-th element is denoted by |E_i| and n_v^i denotes the number of nodes/vertices. Furthermore, analogously to (<ref>), the energy error on a single element can be approximated as e_i_ℋ^1 ≈ [ 1/2 e_i ]^0.5 . Similarly, the element-level energy contribution is approximated as U_i ≈ |E_i|/n_v^i ∑_j=1^n_v^i [ [ σ^∗(𝐱_j) ]^T 𝔻^-1 [ σ^∗(𝐱_j) ] ] . §.§ Local (common node) error prediction An energy error prediction is introduced that aims to predict how much coarsening a particular patch of elements would increase the local and global approximations of the energy error (see <cit.>). Here, a patch refers to all of the elements connected to a particular node.
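As a hedged illustration of the error quantities defined in this section, the element-level sums and the resulting global and relative energy errors could be evaluated along the following lines. The dictionary-based element representation and the variable names are assumptions made for the sketch, and σ^h is taken as the constant element stress of the first-order formulation.

```python
import numpy as np

def energy_error_estimate(elements, sigma_star, D_inv):
    """Global energy error, elastic energy and relative error from element sums.
    elements: iterable of dicts with keys "area", "nodes" (node ids), "sigma_h"
              (constant Voigt stress of the element).
    sigma_star: recovered nodal stresses, sigma_star[j] is a Voigt 3-vector.
    D_inv: inverse constitutive matrix (3x3)."""
    e_sum, U_sum = 0.0, 0.0
    for el in elements:
        w = el["area"] / len(el["nodes"])
        e_i = sum((sigma_star[j] - el["sigma_h"]) @ D_inv @ (sigma_star[j] - el["sigma_h"])
                  for j in el["nodes"])
        U_i = sum(sigma_star[j] @ D_inv @ sigma_star[j] for j in el["nodes"])
        e_sum += w * e_i
        U_sum += w * U_i
    e_H1 = (0.5 * e_sum) ** 0.5          # global energy error
    U = (0.5 * U_sum) ** 0.5             # elastic energy
    return e_H1, U, e_H1 / U             # ..., relative error e_rel
```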
Thus, every node has an associated patch of elements and the energy error prediction is computed for each node. The energy error prediction e_p_i_ℋ^1 is approximated over patch i as e_p_i_ℋ^1 ≈ [ 1/2 |E_p_i|/n_v^p_i ∑_j=1^n_v^p_i [ [ σ^∗(𝐱_j) - σ^h_p_i(𝐱_j) ]^T 𝔻^-1 [ σ^∗(𝐱_j) - σ^h_p_i(𝐱_j) ] ] ]^0.5 . Here |E_p_i| denotes the area of patch i and n_v^p_i is the number of unique vertices/nodes associated with the patch. Additionally, σ^h_p_i denotes the `predicted' stress over the coarsened patch, computed as the weighted average of the element stresses on the patch. §.§ Super-convergent patch recovery In practical applications the exact stress σ^ex is typically unknown and is replaced with an approximation σ^∗. A simple and effective approach in the VEM context is to compute σ^∗ at only the nodal positions using a patch-based recovery technique based on super-convergent sampling points (see <cit.>). In this work a low-order VEM is considered where the approximation of the stress field is piece-wise constant. Thus, the approximation of σ^∗ should be piece-wise linear and can be computed at each node via a least-squares best fit over a patch of elements. The super-convergent stress at a node is computed by considering the patch of elements connected to the node. The locations of the centroids of the elements in the patch are treated as the super-convergent sampling points and the element-level stresses are assigned as the degrees of freedom of the sampling points. Since a linear fit is required, at least three sampling points are needed to determine a unique fit. Thus, in cases where a node is connected to fewer than three elements the patch is enlarged to increase the number of sampling points. Specifically, the patch is enlarged to include elements that are connected to any of the elements in the original patch. For clarity, a few examples of element patches and sampling points are depicted in Figure <ref>. Here, the node at which the super-convergent stress is to be computed is indicated as a blue circle, the elements in the patch connected to the node are indicated in dark grey, and (if applicable) the elements included in the enlarged patch are indicated in light grey. Additionally, the locations of the sampling points are indicated as red triangles. The super-convergent stress component σ^∗_i computed over a specific patch is given by σ^∗_i = 𝐩(x, y) 𝐚_i = [ 1 x y ][ a_i^1; a_i^2; a_i^3 ] where 𝐚_i are the degrees of freedom of the super-convergent stress component. The degrees of freedom are computed as 𝐚_i = 𝐀^-1 𝐛_i where 𝐀 = ∑_k=1^n_sp 𝐩(x_k, y_k)^T 𝐩(x_k, y_k) and 𝐛_i = ∑_k=1^n_sp 𝐩(x_k, y_k)^T σ^h_i(x_k, y_k) respectively. Here n_sp is the number of sampling points, x_k and y_k are the coordinates of the sampling points, and σ^h_i is the stress component at the sampling point (computed via (<ref>)). § PROCEDURE FOR THE SELECTION OF ELEMENTS TO REFINE AND COARSEN In this section the procedures proposed for the identification of elements qualifying for refinement, and element patches qualifying for coarsening, are presented for various remeshing targets. Specifically, procedures are presented for a target global error, a target number of elements, and a target number of nodes. The error-based target is suited to applications in which the engineer is performing analysis/simulation work with a specified requisite accuracy. The adaptive procedure will then generate a mesh that meets the accuracy target with a quasi-minimal computational load.
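The super-convergent patch recovery described above amounts to a small linear least-squares problem per node. A minimal sketch, assuming the patch centroids and the corresponding element stresses (in Voigt form) have already been collected and that at least three sampling points are available, is:

```python
import numpy as np

def recover_nodal_stress(centroids, sigma_h_patch, node_xy):
    """Linear least-squares fit sigma*(x, y) = a1 + a2*x + a3*y over the element
    centroids of a patch (the super-convergent sampling points), evaluated at the
    node of interest; stresses are Voigt 3-vectors, one row per sampling point."""
    c = np.asarray(centroids, dtype=float)            # n_sp x 2
    P = np.column_stack([np.ones(len(c)), c[:, 0], c[:, 1]])
    A = P.T @ P                                       # A = sum_k p^T p
    b = P.T @ np.asarray(sigma_h_patch, dtype=float)  # b_i = sum_k p^T sigma_i^h
    coeffs = np.linalg.solve(A, b)                    # 3 x 3, one column per component
    return np.array([1.0, node_xy[0], node_xy[1]]) @ coeffs
```

Evaluating the fitted plane at the node coordinates gives the recovered stress σ^∗ used in the error estimates and the error prediction above.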
The resource-based targets (i.e., the element and node targets) are suited to applications in which the engineer has a specific computational constraint. The adaptive procedure will then generate a mesh that meets the resource target with a quasi-minimal error. A restriction on the number of elements permitted is a common constraint. However, the computational load, in terms of array sizes and memory allocation, is directly related to the number of degrees of freedom of the system. Thus, motivating the additional presentation of a node-based target. §.§ Target error The global relative error target e_rel^targ is set by the user based on their requirements and would typically fall in the range of 1 - 10 %. From e_rel^targ and the elastic energy a specific global error target e_ℋ^1^targ is computed as e_ℋ^1^targ = % e_rel^targ·U . Assuming an optimal mesh with even error distribution, and considering (<ref>), a target element-level error contribution is computed as e_targ = 2 ( e_ℋ^1^targ)^2/n_el . Thus, if e_i = e_targ ∀ i ∈ [1, n_el] the mesh would be optimal and the specified error target would be satisfied. Finally, a corresponding target element level energy error is computed as e_loc_ℋ^1^targ = [ 1/2 e_targ]^0.5 . Here the subscript loc is introduced for clarity to distinguish the local e_loc_ℋ^1^targ and global e_ℋ^1^targ error targets. Since the objective of this work is to create a quasi-optimal mesh, with quasi-even error distribution, an allowable target error range is introduced. The bounds of this range are based on well-known convergence behaviours. Since a first-order VEM is considered it is expected that under uniform refinement the method would exhibit 𝒪(h^1) convergence. That is, if every element is refined the global error e_ℋ^1 should decrease by a factor of half. Subsequently, it is expected that if a single element i is refined its local error e_i_ℋ^1 should decrease by a factor of a quarter. Therefore, the upper and lower bounds of the allowable element-level error range are chosen to be ⌈ e_loc^targ⌉ = 2 e_loc_ℋ^1^targ and ⌊ e_loc^targ⌋ = 0.5 e_loc_ℋ^1^targ respectively. These bounds cover the error range spanned by one refinement, or one coarsening, iteration of a single element. That is, if an element has a local error equivalent to ⌈ e_loc^targ⌉ and is refined its `children'/`successor' elements would each have an error of ⌊ e_loc^targ⌋. Based on these error bounds an element is marked for refinement if e_i_ℋ^1 > ⌈ e_loc^targ⌉ and an element patch is marked for coarsening if e_p_i_ℋ^1 < ⌈ e_loc^targ⌉. In addition to the error-prediction criterion, an element patch can only be coarsened if it meets a geometric eligibility criterion. In short, an element patch is eligible for coarsening if the geometry of the coarsened patch does not modify the geometry of the problem domain (for details see <cit.>). An overview of the adaptive procedure for a specified target error is presented in Figure <ref>. The user selects a target accuracy, e.g. e_rel^targ = 3% and this value is set as the working target. The pre-processing, solution procedure, and post processing steps are all performed in a similar manner to a typical finite or virtual element program. A query is made to check if the system is stable. For the system to be stable the global number of nodes and global error must not deviate by more than 1% for structured meshes and 2% for Voronoi meshes for at least three successive iterations/loops. 
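The error-target marking logic of the preceding subsection can be summarized in a short routine. This is a sketch under stated assumptions: element errors and patch error predictions are assumed to be available as dictionaries, e_rel^targ is passed as a fraction (e.g. 0.03 for 3%), and is_eligible stands in for the geometric eligibility check, which is only described by reference here.

```python
def mark_for_error_target(e_rel_targ, U, element_errors, patch_predictions, is_eligible):
    """Flag elements for refinement and element patches for coarsening from the
    error-target bounds; element_errors maps element id -> e_i_H1 and
    patch_predictions maps node id -> predicted patch error e_p_i_H1."""
    n_el = len(element_errors)
    e_H1_targ = e_rel_targ * U                    # specific global error target
    e_targ = 2.0 * e_H1_targ ** 2 / n_el          # target element-level contribution
    e_loc_targ = (0.5 * e_targ) ** 0.5            # target element energy error
    upper, lower = 2.0 * e_loc_targ, 0.5 * e_loc_targ
    refine = [eid for eid, e in element_errors.items() if e > upper]
    coarsen = [nid for nid, e_pred in patch_predictions.items()
               if e_pred < upper and is_eligible(nid)]   # geometric check (assumed provided)
    return refine, coarsen, (lower, upper)
```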
If the system is not stable then elements are marked for refinement and element patches are marked for coarsening using the procedure described previously. If the system is stable it is checked for accuracy. The solution is considered sufficiently accurate if the approximation of the global error is within 1% of the specified accuracy target. I.e., if the target accuracy is e_rel^targ = 3% then the solution is accurate if 2.97%≤e_rel≤ 3.03%. In the case of Voronoi meshes a 2% deviation from the target accuracy is permitted. In rare cases the global error of a stable system is not sufficiently accurate. In these cases an updated working target accuracy is computed from which updated error bounds are determined. The updated working target is computed by subtracting half of the current discrepancy. For example, if the approximate global accuracy is 3.3% and the current working target is 3%, the updated working target will be 2.85%. Conversely, if the approximate global accuracy is 2.7% and the current working target is 3%, the updated working target will be 3.15%. It has been found through experimentation that the computation of an updated working target is most common in cases of larger target accuracies, typically for e_rel^targ > 8%. §.§ Target number of elements An overview of the program flow for the adaptive procedure for a target number of elements is presented in Figure <ref>. Due to the similarities in the procedures for a target number of elements or target number of nodes (see Section <ref>), as well as for brevity, the program flow in Figure <ref> is expanded to cover both types of resource-based objective. The procedure begins with the user inputting the desired type of resource target (i.e., element target or node target). Then, the specific target is entered. In this case the user inputs the target number of elements n_el^targ. Thereafter the adaptive procedure comprises two distinct phases. The first phase's objective is to meet the specified element target and the second phase's objective is to keep the number of elements approximately constant while optimizing the error distribution. During the first phase of the procedure refinement or coarsening is performed based on simple mesh assumptions and two parameters are introduced. The number of elements that a parent is sub-divided into during refinement is denoted by n_refine. The number of elements that are grouped together to form one new element during coarsening is denoted by n_coarsen. In the case of structured meshes it is known that refining an element will create four `children' elements n_refine^struct = 4 and coarsening one element patch will, most often, combine four smaller elements into one larger element n_coarsen^struct = 4. Therefore, if the current number of elements n_el is less than the target number of elements and n_el^targ / n_el≥ n_refine then all elements are marked for refinement. Alternatively, if n_el < n_el^targ and n_el^targ / n_el < n_refine the number of elements to mark for refinement is n_el^ref = [ n_el^targ - n_el] / [ n_refine - 1 ] and elements are marked based on their local energy error approximation e_i_ℋ^1 in descending order. If the current number of elements is greater than the target number of elements and n_el / n_el^targ≥ n_coarsen then all eligible element patches are marked for coarsening. 
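Returning to the error-target loop, the accuracy check and the working-target update described above reduce to a few lines; the 1% tolerance is the value quoted for structured meshes and is exposed here as a parameter of this illustrative sketch.

```python
def check_accuracy_and_update_target(e_rel, working_target, tol=0.01):
    """Accept the mesh if the global relative error is within tol (relative) of the
    working target; otherwise shift the working target by half the discrepancy,
    e.g. e_rel = 3.3% with a 3% target gives an updated target of 2.85%."""
    if abs(e_rel - working_target) <= tol * working_target:
        return working_target, True               # sufficiently accurate: terminate
    return working_target - 0.5 * (e_rel - working_target), False
```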
Alternatively, if n_el > n_el^targ and n_el / n_el^targ < n_coarsen then the number of element patches to mark for coarsening is n_patch^coarsen = [ n_el - n_el^targ] / [ n_coarsen - 1 ] and element patches are marked for coarsening based on their energy error prediction e_p_i_ℋ^1 in ascending order. In the case of Voronoi meshes the procedure is the same albeit with n_refine^vrn = 5 and n_coarsen^vrn = 3. The procedure is repeated iteratively until the resource usage is sufficiently accurate. In the case of a target number of elements the resource usage is sufficiently accurate if the current number of elements n_el is within 1% of the target number of elements n_el^targ Similarly, in the case of a target number of nodes the resource usage is sufficiently accurate if the current number of nodes n_v is within 1% of the target number of nodes n_v^targ. Once the resource usage is sufficiently accurate the first phase is complete. During the second phase elements are marked for refinement or coarsening based on their local errors in a similar manner to that described in Section <ref>. From the current energy error and number of elements an element-level error target is computed in the same manner as (<ref>). Thereafter, and as described in Section <ref>, upper and lower error bounds are computed, elements are identified for refinement, and element patches are identified for coarsening. Before the refinement and coarsening can be performed consideration must be made to keep the number of elements approximately constant. Specifically, the number of elements added by refinement must equal the number of elements removed by coarsening. Therefore, the lists of elements identified for refinement and element patches identified for coarsening must be trimmed. The number of elements added to the system if all identified elements are refined is computed as n_add = [ n_refine - 1 ] n_el^refine where n_el^refine denotes the number of elements identified for refinement. Similarly, the number of elements removed from the system if all identified element patches are coarsened is computed as n_rem = [ n_coarsen - 1 ] n_patch^coarsen where n_patch^coarsen denotes the number of element patches identified for coarsening. The total number of elements to modify is then computed as n_mod = min(n_add, n_rem). Updated numbers of elements to refine and element patches to coarsen are then computed as n_el^refine = n_mod / [ n_refine - 1 ] and n_patch^coarsen = n_mod / [ n_coarsen - 1 ] respectively. Elements are then marked for refinement based on their local energy error approximation e_i_ℋ^1 in descending order and element patches are marked for coarsening based on their energy error prediction e_p_i_ℋ^1 in ascending order. The procedure is repeated iteratively until the same stability criteria as described in Section <ref> are met. §.§ Target number of nodes Since the adaptive remeshing process involves refining and coarsening elements the number of nodes in a mesh cannot be directly controlled, rather it is a consequence of the number of elements in, and type of, the mesh. As such, the procedure for meeting a target number of nodes n_v^targ is based upon approximate relations between the number of elements and the number of nodes for a given mesh type. While generating the results for error target and element target based computations a database was built-up comprising the number of elements and nodes in a mesh. 
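The second-phase bookkeeping for a resource-based target, in which the candidate lists are trimmed so that refinement and coarsening leave the element count approximately unchanged, can be sketched as follows; the helper assumes the lists have already been sorted by descending element error and ascending predicted patch error respectively, and the names are illustrative.

```python
def balance_marking(refine_ids, coarsen_ids, n_refine, n_coarsen):
    """Trim the refinement/coarsening candidate lists so that the number of
    elements added by refinement equals the number removed by coarsening."""
    n_add = (n_refine - 1) * len(refine_ids)      # elements added if all are refined
    n_rem = (n_coarsen - 1) * len(coarsen_ids)    # elements removed if all are coarsened
    n_mod = min(n_add, n_rem)
    return (refine_ids[: n_mod // (n_refine - 1)],
            coarsen_ids[: n_mod // (n_coarsen - 1)])
```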
This database comprises entries for a broad range of error and element targets, uniform and adapted meshes, and a variety of problem types not presented in this work (for brevity). From the data an approximate exponential relationship was determined for the number of nodes per element given by r_n/e = a ( n_v)^b. The parameters of the relationship are presented in Table <ref>. Using the approximate relationship between the number of nodes per element and the number of nodes in a mesh the procedure for a target number of elements presented in Section <ref> can be trivially modified for a target number of nodes. Since the modification is trivial, presentation of the procedure for a target number of nodes is omitted for brevity. However, an overview of the program flow is presented in Figure <ref>. § NUMERICAL RESULTS In this section numerical results are presented to demonstrate the efficacy of the proposed adaptive remeshing and quasi-optimization procedures for various targets. The efficacy is evaluated in the ℋ^1 semi-norm, i.e. the energy error norm, as described in Section <ref>. In the examples that follow the material is isotropic with a Young's modulus of E=1 Pa, a Poisson's ratio of ν=0.3, and the shear modulus is computed as μ = E /2 [1+ν]. Additionally, example problems with quite large deformations are presented. While the material parameters and large deformations may not be realistic for the linear elastic material model and small strain theory used, they are useful to demonstrate the behaviour of the various adaptive remeshing procedures, and are helpful in providing an intuition of where meshes should be refined or coarsened. Furthermore, since the material is linear elastic and small strain theory is used, larger deformations have the same effect as magnifying smaller deformations, which is useful when studying the nature of the mesh adaptation. §.§ L-shaped domain The L-shaped domain problem comprises a domain of width w=1 m and height h=1 m where the horizontal and vertical thickness of the L are w/4 and h/4 respectively. The bottom and left-hand edges of the domain are constrained vertically and horizontally respectively, with the bottom left corner fully constrained. The upper and right-hand edges are subject to prescribed displacements of u̅_y=0.5 m and u̅_x=0.5 m respectively, with the displacements of the edges unconstrained in the x- and y-directions respectively (see Figure <ref>(a)). The deformed configuration of the body is illustrated in Figures <ref>(b) and (c) with the displacement magnitude || and von Mises stress respectively plotted on the colour axis. From these figures it is clear that the more complex (i.e., more heterogeneous) deformations, high stresses, and high stress gradients are localized to the internal corner of the L where the domain's geometry induces a stress singularity. Conversely, the deformation throughout the rest of the domain is much simpler (i.e., more homogeneous) and the stresses are much smoother and lower. As such, the L-shaped domain problem represents a thorough test and ideal application for an adaptive remeshing procedure. §.§.§ Target error The mesh evolution during the adaptive remeshing process for the L-shaped domain problem is depicted in Figure <ref> for an initially uniform Voronoi mesh with an error target of e_rel^targ = 3%. The adaptive procedure generates a very sensible and intuitive mesh evolution for this problem. 
The areas of the domain with the highest stresses and stress gradients are increasingly refined while the regions of the domain experiencing simpler (quasi-homogeneous) deformations and lower stresses are coarsened. Furthermore, the strongest concentration of elements and the highest refinement level is generated in the internal corner of the L and coincident with the stress singularity. The results of the adaptive remeshing process for the L-shaped domain problem are depicted in Figure <ref> for initially uniform Voronoi meshes of various densities with an error target of e_rel^targ = 3%. The top row of figures depicts the initial meshes while the bottom row depicts the final adapted meshes after the error target and termination criteria have been met. The final adapted meshes exhibit the same sensible and intuitive element distribution as observed in Figure <ref>. Most notably, the final adapted meshes are almost identical for all initial uniform meshes considered. Thus, the output, or final result, of the adaptive remeshing procedure is independent of the initial mesh and depends only on the specified error target. Furthermore, the cases of the `Intermediate' and `Fine' initial meshes demonstrate that the fully adaptive remeshing procedure is able to perform coarsening from any mesh and does not require knowledge of a previously coarser state. This ability distinguishes the proposed fully adaptive procedure from other procedures surveyed in the literature. The convergence behaviour of the energy error approximation vs the number of nodes in the mesh is depicted on a logarithmic scale in Figures <ref>(a)-(c) for the L-shaped domain problem on structured meshes. The convergence behaviour is plotted for cases of several initially uniform structured meshes of varying density (denoted by `Meshes A-F') for various error targets. For readability purposes the outline of the marker denoting the first step, or initial mesh, is indicated in black and a red marker is used to indicate the final adapted mesh result for each of the initial meshes. Additionally, the black `Reference' curve corresponds to the standard convergence behaviour under uniform refinement, i.e. all elements are refined. Furthermore, for each error target considered (not all have been shown here) an average final mesh result is computed, i.e. the average position of the red markers for each error target. These averaged results are plotted in Figure <ref>(d) along with the reference uniform convergence curve. Where applicable, the markers in Figure <ref>(d) are colored to match their corresponding targets depicted in Figures <ref>(a)-(c). From Figures <ref>(a)-(c) it is clear that the fully adaptive procedure is able to meet the specified global error targets from any initial mesh. Furthermore, the final adapted meshes contain an almost identical number of nodes for a specific error target. This, again, demonstrates that the performance of the fully adaptive remeshing procedure is independent of the initial mesh. From Figure <ref>(d) it is clear that the outputs of the fully adaptive procedure for various error targets exhibits a linear convergence rate. Since the procedure aims to generate a quasi-optimal mesh, it is expected that this is the (approximately) optimal convergence rate for this problem. The convergence behaviour of the energy error approximation vs the number of nodes in the mesh is depicted on a logarithmic scale in Figures <ref>(a)-(c) for the L-shaped domain problem on Voronoi meshes. 
The convergence behaviour is plotted for cases of several initially uniform Voronoi meshes of varying density for various error targets. Additionally, the averaged final adapted mesh result (red markers) for all error targets considered is plotted in Figure <ref>(d). Where applicable, the markers in Figure <ref>(d) are colored to match their corresponding targets depicted in Figures <ref>(a)-(c). The behaviours exhibited in Figure <ref> are almost identical to those observed in Figure <ref> for structured meshes. The only discernible difference between the two sets of figures is that in the case of Voronoi meshes there are very small differences in the number of nodes of the final adapted mesh results (as indicated by the positions of the red markers). These differences are a result of the inherent randomness in Voronoi meshes and the randomness involved in the refinement of Voronoi elements and are not a pathology of the adaptive procedure. Thus, from Figures <ref> and <ref> it is clear that the fully adaptive procedure can meet any specified error target on both structured and Voronoi meshes. The error distribution during the mesh adaptation process for the L-shaped domain problem is depicted in Figure <ref> for various error targets on structured meshes. The left column of figures depicts the evolution of the true/absolute maximum (top curves) and minimum (bottom curves) local element errors from initial meshes of varying density. Additionally, the optimal target element error is indicated by a solid maroon line as a function of mesh density. The dashed maroon lines indicate the target upper and lower element error bounds as described in Section <ref>. Finally, three sets of red markers indicate the final adapted mesh result for each of the initial meshes. The central markers represent the average element error over all the elements in the mesh. The upper and lower markers respectively represent the 5% trimmed maximum and minimum element errors. A slightly trimmed maximum and minimum are used to improve the readability of the graph while still accurately representing the underlying data. For all error targets considered the maximum and minimum errors respectively converge to the prescribed upper and lower error bounds. Furthermore, the average element-level error (as indicated by the central red markers) closely meets the element-level target for all considered global error targets. The right column of figures illustrates the nature of the distribution of the local element-level error over all of the elements in a mesh through a classical box and whisker plot. Here, the median and quartiles are computed in the standard way (i.e., from the full set of element-level data) while the maximum and minimum whiskers correspond to the 5% trimmed data (i.e., equivalent to the red markers). Additionally, the average error of all elements in the mesh is computed and indicated on the figure. While the average is not typically considered in a box and whisker plot it is helpful in understanding the spread of the data. In these figures pairs of results correspond to the error evolution for a particular mesh. For example, the first (left-most) data corresponds to the initial uniform mesh error distribution for `Mesh A' and the second data corresponds to the error distribution of the corresponding final adapted mesh. For all error targets considered the final adapted mesh error distributions fit within the specified upper and lower error bounds. 
Additionally, the upper and lower quartiles indicate that the majority of the element-level errors are very close to the element-level targets. For the cases of the initial meshes the average and median error differ significantly which is a classical indicator of inequality within the dataset. Conversely, in the cases of the final adapted meshes the average and median are almost identical, thus, indicating the equality of the data and further demonstrating the narrow distribution of the element-level error. The results presented in Figure <ref> demonstrated that the fully adaptive procedure was able to meet all specified error targets on structured meshes. This is indicated by the red markers denoting the final adapted mesh lying exactly on the target error line. The results presented in Figure <ref> for structured meshes demonstrated that the average element-level error almost exactly met the element-level target as the red markers denoting the average error strongly overlap the solid maroon target line. Furthermore, the element-level errors were satisfactorily equal as they all fell within the specified target error range. Thus, the fully adaptive procedure successfully generated quasi-optimal meshes for the specified target errors on structured meshes. The error distribution during the mesh adaptation process for the L-shaped domain problem is depicted in Figure <ref> for various error targets on Voronoi meshes. The results presented here are very similar to those presented in Figure <ref> for structured meshes. The only noteworthy difference between the two sets of results is that in the case of Voronoi meshes the minimum element-level error occasionally falls slightly below the prescribed lower bound. However, the difference between the lower bound and the minimum error is small and is not indicative of a pathology in the fully adaptive procedure. The difference is most likely a result of the choice of termination criteria presented in Section <ref>. The termination criteria is based only on the stability of the global error and of the number of elements in the mesh between remeshing iterations. Thus, the criteria does not consider if all of the element-level errors lie within the prescribed error bounds and it is possible that, if continued, all errors would fall within the bounds after several more remeshing iterations. The presented termination criteria were chosen for efficiency reasons. A termination criteria based on all element-level errors falling within the error bounds was investigated but was found to be less efficient and less practical. It was found that a larger allowable error band had to be prescribed and that the number of remeshing iterations required before termination was far greater (approximately 50% more) than in the case of the presented method, particularly in the case of Voronoi meshes. As such, the minimum element-level error occasionally falling below the lower bound was deemed an acceptable consequence of improved efficiency. Furthermore, the position of the lower quartiles in the box and whisker plots strongly indicate that the vast majority of element errors are above the lower bound. Similarly to the case of structured meshes, the results presented in Figure <ref> demonstrated that the fully adaptive procedure was able to meet all specified error targets on Voronoi meshes. Additionally, the results presented in Figure <ref> for Voronoi meshes demonstrated that the average element-level error almost exactly met the element-level target. 
Finally, the element-level errors were satisfactorily equal as they almost all fell within the specified target error range. Thus, the fully adaptive procedure successfully generated quasi-optimal meshes for the specified target errors on Voronoi meshes. §.§.§ Target number of elements The mesh evolution during the fully adaptive remeshing process for the L-shaped domain problem is depicted in Figure <ref> for an initially uniform structured mesh with an element target of n_el^targ = 1000. The differences in the remeshing steps between those presented here and those presented in Figure <ref> for an error target are immediately apparent. In Figure <ref> both refinement and coarsening are performed from the first remeshing iteration. Conversely, here only refinement is performed during the first phase of the adaptive procedure until the element target is met (this is the expected behaviour, see Section <ref>). It is clear that remeshing step 3 satisfies the error target and signifies the end of the first phase of the procedure as from step 4 onwards refinement and coarsening are performed simultaneously. During the second phase of the procedure refinement and coarsening are performed while keeping the number of elements approximately constant. During this phase the expected distribution of elements is achieved with the areas of the domain with the highest stresses and stress gradients becoming increasingly refined while the regions of the domain experiencing simpler deformations and lower stresses are coarsened. The results of the adaptive remeshing process for the L-shaped domain problem are depicted in Figure <ref> for initially uniform structured meshes of varying density with an element target of n_el^targ = 1000. The top row of figures depicts the initial meshes while the bottom row depicts the final adapted meshes after the element target and termination criteria have been met. The final adapted meshes exhibit the same sensible and intuitive element distribution as observed in Figure <ref>. Notably, the final adapted meshes are, again, almost identical for all initial uniform meshes considered. Thus, the output, or final result, of the fully adaptive remeshing procedure is independent of the initial mesh. The convergence behaviour of the energy error approximation vs the number of elements in the mesh is depicted on a logarithmic scale in Figures <ref>(a)-(c) for the L-shaped domain problem. The convergence behaviour is plotted for cases of several initially uniform structured meshes of varying density for various element targets. Additionally, the averaged final adapted mesh result (red markers) for all element targets considered is plotted in Figure <ref>(d). Where applicable, the markers in Figure <ref>(d) are colored to match their corresponding targets depicted in Figures <ref>(a)-(c). From Figures <ref>(a)-(c) it is clear that the fully adaptive procedure is able to meet the specified element targets from any initial mesh. During the first phase of the adaptive procedure uniform refinement or coarsening is performed until the point at which a uniform refinement or coarsening would overshoot the element target. During this phase the convergence curves follow the uniform reference refinement curve. Thereafter, a partial refinement or coarsening is performed such that the error target is met. This step is the first time the curves of the adaptive procedure deviate from the reference curve and signifies the end of the first phase. 
During the second phase the adaptive curves move vertically as the number of elements is held constant while selective refinement and coarsening are performed to evenly distribute the element-level error. This process has the effect of reducing the global error and is performed until the termination criteria are met (see Section <ref>). The final adapted meshes have an almost identical error for a specific element target. This, again, demonstrates that the performance of the fully adaptive remeshing procedure is independent of the initial mesh. From Figure <ref>(d) it is clear that the outputs of the fully adaptive procedure for various element targets exhibit a linear convergence rate. Since the procedure aims to generate a quasi-optimal mesh, it is again expected that this is the (approximately) optimal convergence rate for this problem. The error distribution during the mesh adaptation process for the L-shaped domain problem is depicted in Figure <ref> for various element targets on structured meshes. The nature of the convergence exhibited by the left column of figures is significantly different to that exhibited in Figure <ref> for a target error. The difference is a result of the two distinct adaptive phases for the case of a target number of elements. However, the error distributions of the final adapted meshes, as indicated by the red markers and the box and whisker plots, are qualitatively very similar to those of Figure <ref> for a target error. Specifically, the maximum and minimum errors fall within the upper and lower bounds, the upper and lower quartiles indicate a narrow distribution of error around the average, and the average error is almost identical to the target error. Additionally, the average and median errors are almost identical which, again, emphasises the narrow distribution of the element-level errors. The results presented in Figure <ref> demonstrated that the fully adaptive procedure was able to meet all specified element targets on structured meshes. This is indicated by the red markers denoting the final adapted mesh lying exactly on the target element line. The results presented in Figure <ref> for structured meshes demonstrated that the average element-level error almost exactly met the element-level target as the red markers denoting the average error strongly overlap the solid maroon target line. Furthermore, the element-level errors were satisfactorily equal as they all fell within the specified target error range. Thus, the proposed fully adaptive procedure successfully generated quasi-optimal meshes for the specified element targets on structured meshes. §.§.§ Target number of nodes The mesh evolution during the fully adaptive remeshing process for the L-shaped domain problem is depicted in Figure <ref> for an initially uniform Voronoi mesh with a node target of n_v^targ = 1000. The first four steps correspond to the first phase of the adaptive remeshing procedure in which the target number of nodes is met. Thereafter, refinement and coarsening are performed simultaneously while keeping the number of nodes approximately constant. During this phase the expected distribution of elements is, again, achieved with the areas of the domain with the highest stresses and stress gradients becoming increasingly refined while the regions of the domain experiencing simpler (i.e., more uniform/homogeneous) deformations and lower stresses are coarsened.
The results of the fully adaptive remeshing process for the L-shaped domain problem are depicted in Figure <ref> for initially uniform Voronoi meshes of varying density with a node target of n_v^targ = 1000. The top row of figures depicts the initial meshes while the bottom row depicts the final adapted meshes after the element target and termination criteria have been met. The final adapted meshes exhibit the same sensible and intuitive element distribution as observed in Figures <ref> and <ref>. The final adapted meshes are, again, almost identical for all initial uniform meshes considered. Thus, demonstrating that the final result of the fully adaptive remeshing procedure is independent of the initial mesh. The convergence behaviour of the energy error approximation vs the number of nodes in the mesh is depicted on a logarithmic scale in Figures <ref>(a)-(c) for the L-shaped domain problem. The convergence behaviour is plotted for cases of several initially uniform Voronoi meshes of varying density for various node targets. Additionally, the averaged final adapted mesh result (red markers) for all node targets considered is plotted in Figure <ref>(d). Where applicable, the markers in Figure <ref>(d) are colored to match their corresponding targets depicted in Figures <ref>(a)-(c). From Figures <ref>(a)-(c) it is clear that the fully adaptive procedure is able to meet the specified node targets from any initial mesh. Furthermore, the final adapted meshes have an almost identical error for a specific element target. Thus, demonstrating that the performance of the fully adaptive remeshing procedure is independent of the initial mesh. From Figure <ref>(d) it is clear that the outputs of the adaptive procedure for various node targets exhibits a linear convergence rate which, again, is expected to be the (approximately) optimal convergence rate for this problem. The error distribution during the mesh adaptation process for the L-shaped domain problem is depicted in Figure <ref> for various node targets on Voronoi meshes. The nature of the convergence exhibited by the left column of figures is similar to that exhibited in Figure <ref> for element targets, albeit slightly more erratic due to the nature of Voronoi meshes. Additionally, the error distributions of the final adapted meshes, as indicated by the red markers and the box and whisker plots, are qualitatively very similar to those of Figure <ref> for error targets and Figure <ref> for element targets. Specifically, the maximum errors fall within the upper bounds and the minimum errors are satisfactorily close to the lower bounds. Furthermore, the upper and lower quartiles indicate a narrow distribution of error around the average, and the average error is almost identical to the target error. Additionally, the average and median errors are almost identical which, again, emphasises the narrow distribution of the element-level errors. The results presented in Figure <ref> demonstrated that the fully adaptive procedure was able to meet all specified node targets on Voronoi meshes. This is indicated by the red markers denoting the final adapted mesh lying exactly on the target node line. The results presented in Figure <ref> for Voronoi meshes demonstrated that the average element-level error almost exactly met the element-level target as the red markers denoting the average error strongly overlap the solid maroon target line. 
Furthermore, the element-level errors were satisfactorily equal as they (almost) all fell within the specified target error range. Thus, the fully adaptive procedure successfully generated quasi-optimal meshes for the specified node targets on Voronoi meshes. §.§.§ Comparison of targets The energy error approximation of the averaged final adapted mesh result for various remeshing target types is plotted against the number of nodes in Figure <ref> on a logarithmic scale for the L-shaped domain problem for (a) structured and (b) Voronoi meshes. From these results it is clear that the fully adaptive procedure is equally effective for error-, node-, or element-based targets on structured or Voronoi meshes. That is, the fully adaptive procedure can meet any of the prescribed targets while generating a quasi-optimal mesh. §.§ Pseudo-dynamic punch In this section a pseudo-dynamic problem is presented to demonstrate the suitability of the proposed adaptive remeshing procedure for dynamic problems. Adaptive remeshing procedures are particularly beneficial for dynamic problems because the zones of high stresses are continuously changing. An adaptive procedure allows for a high degree of refinement in a zone of high stress that can then be coarsened once the high stress has passed. This allows for a high degree of accuracy at a low computational cost. In this work, the remeshing requirements of a dynamic problem are mimicked by changing the boundary conditions of a static problem. The pseudo-dynamic punch problem comprises a rectangular body of width w=2 m and height h=2 m vertically constrained along its bottom edge (see Figure <ref>). The body is subjected to alternating punches of width w_p=0.3 m modelled as uniformly distributed loads with a magnitude of Q_p=0.675 N/m. The centres of the punches are 1.1 m apart and 0.45 m from the left- and right-hand edges of the body. During application of the punch the region of the body experiencing the distributed load is horizontally constrained. During odd numbered load cycles only the left-hand punch is active and during even numbered load cycles only the right-hand punch is active as illustrated in Figures <ref>(a) and (b) respectively. Figures <ref>(c) and (d) respectively depict the deformed configuration of the body for odd and even load cycles. The mesh evolution during the fully adaptive remeshing process for the pseudo-dynamic punch problem is depicted in Figure <ref> for an initially uniform structured mesh with an error target of e_rel^targ = 5%. Here, three load cycles are considered with each column of figures corresponding to a particular load cycle. Each load cycle begins with the application of the corresponding boundary conditions and ends once the mesh adaptation is complete, i.e. the global error target and the termination criteria described in Section <ref> have been met. The mesh evolution is sensible and intuitive for this problem with the region around the active punch being most highly refined and the rest of the domain remaining relatively coarse. Most notably, through this problem the reversibility of the proposed fully adaptive remeshing procedure, and thus its suitability for dynamic problems, is demonstrated. The energy error convergence and distribution for the pseudo-dynamic punch problem with an initially uniform structured mesh with e_rel^targ = 5% are plotted in Figures <ref>(a) and (b) respectively for six load cycles. 
In Figure <ref>(a) the energy error approximation is plotted against the number of nodes on a logarithmic scale. The convergence curve for each load cycle is plotted in a different colour and the outline of the marker denoting the first step in each cycle is indicated in black. Additionally, the final adapted mesh result is indicated by a red marker. During the first cycle typical convergence behaviour is exhibited as the initially uniform coarse mesh is increasingly refined in the region around the punch until the target error and termination criteria are met. Thereafter, the boundary conditions are changed, while the mesh is not, thus the mesh is completely unsuitable for the new load conditions and a very high error is exhibited for the first step of the second cycle. The mesh adaptation process then iteratively improves the mesh until the error target and termination criteria are met. This process is repeated four more times with almost identical accuracy and efficiency exhibited by the final fully adapted meshes, as indicated by the almost identical locations of the red markers. In Figure <ref>(b) the distribution of the element-level energy error is illustrated through a box and whisker plot where pairs of results correspond to the error evolution for a particular load cycle. Here, the maximum and minimum element-level errors of the final adapted meshes fall within the prescribed error bounds for each load cycle. Furthermore, the upper and lower quartiles indicate a narrow distribution of error around the average, and the average error is almost identical to the target error. The results presented in Figure <ref>(a) demonstrated that the fully adaptive procedure was able to meet all specified error target for every load cycle. The results presented in Figure <ref>(b) demonstrated that the element-level errors, on average, met the element-level target and were approximately equal as they fell within the specified target error range. Therefore, the fully adaptive procedure successfully generated quasi-optimal meshes for the specified error target for every load cycle, thus, demonstrating its suitability for dynamic problems. § DISCUSSION AND CONCLUSION In this work a novel fully adaptive remeshing procedure has been proposed for the virtual element method. The remeshing procedure comprises the novel combination of refinement and coarsening procedures for structured and unstructured/Voronoi meshes as well as novel procedures for the selection of elements to refine and element patches to coarsen. Three procedures for the selection of elements to refine and element patches to coarsen have been proposed for various adaptivity targets. Specifically, procedures were proposed for a target global error, a target number of elements, and a target number of nodes. In this work error was measured through an approximation of the well-known energy error. Additionally, all of the proposed element selection procedures were constructed to meet their respective targets while creating a mesh in which all elements have an approximately equal error. Thus, creating a quasi-even error distribution over the elements corresponding to a quasi-optimal mesh for a specific target. The proposed fully adaptive procedures were studied numerically on a well-known benchmark problem. For each of the target types the mesh evolution during remeshing was analysed along with analysis of the error convergence and the distribution of error over the problem domain. 
In terms of the mesh evolution, the efficacy of the proposed procedures was evident as the adapted meshes had increased refinement in the most critical regions of the domain and remained relatively coarse, or were coarsened further, elsewhere. This efficacy was demonstrated on both structured and unstructured/Voronoi meshes and was shown to be independent of the initial mesh. Furthermore, the efficacy of the proposed fully adaptive remeshing procedures was studied in the energy error norm. Here, the performance of the procedures was investigated on structured and unstructured/Voronoi meshes and was compared to a reference approach comprising meshes of uniform discretization density. Additionally, the influence of the initial mesh density on the performance of the adaptive procedures was investigated. The numerical results demonstrated the high degree of efficacy of all proposed adaptive procedures. The procedures were able to meet their specified targets from all initial meshes and on both mesh types. Furthermore, it was demonstrated that the meshes generated by the adaptive procedures had a quasi-even error distribution and thus represented quasi-optimal meshes for their respective targets. Additionally, the suitability of the proposed fully adaptive procedures for dynamic problems was demonstrated through the presentation of a novel pseudo-dynamic problem in which the challenges faced during a dynamic problem were mimicked. The good performance exhibited by the proposed fully adaptive procedures over a range of target types on both structured and unstructured/Voronoi meshes demonstrates their versatility, efficacy and suitability for application to the analysis of elastic problems using the virtual element method. Future work of interest includes the extension to non-linear problems, higher-order formulations, and problems in three dimensions. § CONFLICT OF INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § ACKNOWLEDGEMENTS This work was carried out with support from the German Science Foundation (DFG) and the National Scientific and Technical Research Council of Argentina (CONICET) through project number DFG 544/68-1 (431843479). The authors gratefully acknowledge this support.
http://arxiv.org/abs/2407.12679v1
20240717155932
Goldfish: Vision-Language Understanding of Arbitrarily Long Videos
[ "Kirolos Ataallah", "Xiaoqian Shen", "Eslam Abdelrahman", "Essam Sleiman", "Mingchen Zhuge", "Jian Ding", "Deyao Zhu", "Jürgen Schmidhuber", "Mohamed Elhoseiny" ]
cs.CV
[ "cs.CV" ]
^1King Abdullah University of Science and Technology, ^2Harvard University, ^3The Swiss AI Lab IDSIA, USI, SUPSI Kirolos Ataallah,^1 Xiaoqian Shen,^1* Eslam Abdelrahman,^1* Essam Sleiman,^2 Mingchen Zhuge,^1 Jian Ding,^1 Deyao Zhu,^1 Jürgen Schmidhuber,^1,3 Mohamed Elhoseiny^1 (^*equal contribution; Essam Sleiman's work was done during an internship at KAUST) Received 04 December 2023 / Accepted 15 July 2024 ==================================================================================================================================================================== § ABSTRACT Most current LLM-based models for video understanding can process videos within minutes. However, they struggle with lengthy videos due to challenges such as “noise and redundancy”, as well as “memory and computation” constraints. In this paper, we present Goldfish, a methodology tailored for comprehending videos of arbitrary lengths. We also introduce the TVQA-long benchmark, specifically designed to evaluate models' capabilities in understanding long videos with questions in both vision and text content. Goldfish approaches these challenges with an efficient retrieval mechanism that initially gathers the top-k video clips relevant to the instruction before proceeding to provide the desired response. This design of the retrieval mechanism enables Goldfish to efficiently process arbitrarily long video sequences, facilitating its application in contexts such as movies or television series. To facilitate the retrieval process, we developed MiniGPT4-Video, which generates detailed descriptions for the video clips. In addressing the scarcity of benchmarks for long video evaluation, we adapted the TVQA short video benchmark for extended content analysis by aggregating questions from entire episodes, thereby shifting the evaluation from partial to full episode comprehension. We attained a 41.78% accuracy rate on the TVQA-long benchmark, surpassing previous methods by 14.94%. Our MiniGPT4-Video also shows exceptional performance in short video comprehension, exceeding existing state-of-the-art methods by 3.23%, 2.03%, 16.5% and 23.59% on the MSVD, MSRVTT, TGIF, and TVQA short video benchmarks, respectively. These results indicate that our models have significant improvements in both long and short video understanding. Our models and code have been made publicly available at https://vision-cair.github.io/Goldfish_website/Goldfish. § INTRODUCTION The complex and detailed nature of videos provides deep insight, making them crucial for understanding and interacting with the visual world. Recent advances in large vision language models (VLMs) have progressed from image-centric to video-centric multimodal dialogue systems <cit.>, enabling these models to process and respond to inputs comprising a video, a user query and, optionally, video subtitles. Despite the progress in adapting VLMs for video, most of the previous works <cit.> focus on understanding short videos (in minutes) and struggle to deal with long videos. Recent approaches attempted to address this limitation. For example, MovieChat <cit.> uses a memory consolidation module and LLaMa-Vid <cit.> compresses image representations into fewer tokens.
These strategies improve the capacity to handle larger context windows, enabling these models to process significantly longer videos. However, this compression results in the loss of spatial and temporal visual details and leads to unsatisfactory performance in the understanding of long videos (see Tab. <ref>). We question: what factors contribute to the increased difficulty in understanding long videos compared to short videos? We approach this question by identifying several challenges: * Noise and Redundancy: As demonstrated in the “needle in a haystack” test <cit.> in the NLP domain, LLMs tend to overlook valuable information within overly extensive contexts. Similarly, long videos often contain irrelevant or redundant information, making it challenging for current video-centric LLMs to extract meaningful content, especially with a collapsed spatial as well as temporal resolution. * Computational and Memory Complexity: The longer the video, the greater the computational and memory costs required for processing. Current video-centric Large Language Models (LLMs) <cit.> inherently have a limitation on the maximum length of videos that they are capable of processing. * Lacking Effective Benchmarks for Long Video Understanding: Existing benchmarks for long videos, such as LLama-Vid <cit.>, primarily generate questions by feeding movie summaries and scripts into a language model, omitting visual data. This approach leads to questions that are text-centric and may be answerable without needing access to the visual content. To address the challenges of noise and redundancy and computational and memory costs, we argue that accurate identification of video clips relevant to queries is a crucial aspect of understanding long videos. We propose Goldfish, a framework for understanding videos of arbitrary lengths; see Fig. <ref>. Goldfish addresses these issues by incorporating a retrieval mechanism that selects the top-k relevant video clips before responding to queries. Specifically, Goldfish segments long videos into shorter clips, applies a Video Descriptor module to each clip to generate a detailed description of each video clip, and then executes a retrieval module by comparing the similarities in the text domain between the query text embeddings and the detailed description text embeddings. Following this, the query and corresponding summaries are forwarded to an answer module to formulate responses. The Video Descriptor module is actually a short video model (MiniGPT4-Video), which extends MiniGPT-v2 <cit.>'s architecture to encode not just a single image, but multiple frames with their aligned subtitles. We map the frame tokens through a linear layer to language tokens. Following this, we tokenize the user query and the video subtitles, then introduce both lists of tokens to the LLM. This model is not applied in a zero-shot, image-level manner but is trained in three stages on video data to enhance its ability to interpret and respond to video content; this is one of our contributions, as we achieve state-of-the-art results on short video benchmarks. In addressing the challenge of lacking effective benchmarks for long video understanding, we adapted the TVQA short video benchmark for extended content analysis by aggregating questions from entire episodes, thereby shifting the evaluation from partial to full episode comprehension. We extensively evaluated the proposed Goldfish on previous video benchmarks and our proposed long video benchmark and demonstrated superiority for long video understanding.
For example, Goldfish surpasses the competitive concurrent LLaMA-VID model <cit.> by about 15% in accuracy. The proposed Goldfish also outperforms existing state-of-the-art methods by 3.23%, 2.03%, 16.5% and 6.43% on the MSVD, MSRVTT, TGIF, and TVQA short video benchmarks. Our contributions can be summarized as follows: * We developed the Goldfish framework for long video understanding, which eases the challenges of long video understanding by introducing a retrieval design: only the top-k relevant video clips are used to answer a question. While most previous works can only handle videos of a couple of minutes, Goldfish can efficiently process arbitrarily long videos. * We proposed a new TVQA-long benchmark for long video understanding. Compared to previous long video benchmarks, the TVQA-long benchmark requires the model to understand both the visual and the textual content. * We developed MiniGPT4-Video, which extends a VLM from processing a single image to processing multiple frames. By converting frame tokens to language tokens and incorporating the user's query, we improved the model's content understanding by training it in three stages on video data. MiniGPT4-Video can function both as the detailed video descriptor within Goldfish and as an independent model for short video tasks. * Our proposed Goldfish is adept at long video understanding, as verified by state-of-the-art experimental results on four long video benchmarks (LLama-Vid, MovieChat, Movie QA, and TVQA) using only the vision content, and by SOTA results with vision and subtitles under zero-shot evaluation on TVQA; TVQA is the only benchmark that allows a zero-shot evaluation, because the other models are trained on the movie datasets. Apart from long video understanding, our Goldfish also outperforms other methods on five short video benchmarks, including the Video ChatGPT benchmark, MSVD, MSRVTT, TGIF, and TVQA. § RELATED WORK §.§ LLM-Based Short Video Understanding Recently, vision-language models such as Video-LLaMA <cit.> and VideoChat <cit.> extend the BLIP-2 <cit.> architecture for video embedding extraction, and both employ two streams for audio and visual signals. Video-LLaMA employs a Video Q-Former and an Audio Q-Former for the two streams, while VideoChat has a video embedder and a perception toolkit for captioning, tags, etc. On the other hand, Video-ChatGPT <cit.> leverages a single stream where the architecture first encodes each frame and then applies a spatial and temporal pooling process that is finally mapped to an LLM with a linear layer. Video LLaVA <cit.> takes advantage of the LanguageBind module to map both image and video inputs to the same embedding space. §.§ LLM-Based Long Video Understanding Understanding long videos, such as movies or TV series that exceed two hours in duration, poses significant challenges (as discussed in Sec. <ref>) for current video-centric multimodal dialogue systems <cit.>. The recent MovieChat <cit.> attempts to address this problem with a memory module containing both long-term and short-term memory. Short-term memory consists of dense frame-wise encodings that are managed in a FIFO (First In, First Out) queue. When short-term memory is full, the contents are sent to a memory consolidation module, which combines adjacent embeddings by merging similar ones, and then stores them in long-term memory. However, the memory mechanism of this work struggles to capture meaningful information relevant to specific tasks.
A concurrent work LLaMA-VID <cit.> builds a more efficient method by representing each frame with only two tokens, namely context token and content token. These two methods compress the input frame embeddings, increasing the number of frames fitting into the model context window. Both MovieChat <cit.> and LLaMA-VID <cit.> have addressed the computation and memory challenge to an extent by compressing visual features. However, their approach of using features from the entire video to predict answers has led them to face issues with noise and redundancy challenge. In , we introduce a retrieval-based framework that utilizes only the top-k relevant video clips for question answering. This retrieval approach mitigates both challenges and enables efficient processing of long videos. §.§ Retrieval Systems LLMs have recently shown promising capabilities in a wide range of different tasks, however, face challenges such as hallucination, when a model outputs a nonsensical or incorrect output typically on queries that extend outside of its training data. Retrieval-Augmented Generation (RAG) is a technique where an LLM leverages an external knowledge base through a retrieval mechanism, mitigating hallucinations while storing long context. There are multiple RAG variations introduced for language retrieval <cit.> and recently have been translated for image retrieval as well  <cit.>. Most recently there has also been some work in video retrieval  <cit.>, however, none of these methods can do robust, long-video retrieval. We draw inspiration from these works and develop a retrieval system in the domain of video-centric LLM for long-video retrieval. § §.§ Retrieval-based Long Video Understanding To understand long videos that exceed the context of a normal video large language model, we introduce a three-part system: (1) Video Descriptor empowered by a  model and a text encoder, (2) similarity-based Retrieval Module, and (3) Answer Module. An overview of our system is demonstrated in Fig. <ref>. Our system works as follows. Firstly, in our Video Descriptor,we break the long video down into smaller clips, with each clip limited by a maximum number of frames that can be supported by our  context length (4K). Then,  provides a concise detailed summary for each clip, which is further encoded to an embedding by a text encoder. Given a user query encoded to an embedding by the same text encoder, our Retrieval Module retrieves the most related k clips from the long video and sends them to the Answer Module to formulate an answer to the query. Video Descriptor. Our Video Descriptor breaks down lengthy videos into multiple non-overlapped short clips, each accompanied by textual descriptions and corresponding embeddings for the Retrieval Module. The input for the Video Descriptor is a sequence of frames, denoted as V = {v_1, v_2, …, v_T}, where v_i∈ℝ^3× H× W represents the i-th frame, and T is the sequence's length. These frames are then grouped into m chunks, with each chunk represented as C_k, k∈ [1,m], comprising at most L consecutive frames v_k,j from the video V, where v_k,j signifies the j-th frame within the k-th chunk. Here, L is determined by the maximum number of frames that can be accommodated within the context window of our  introduced later. Consequently, the video can be represented as a sequence of clips: V = {C_1, C_2, …, C_m}={(v_1,1,...,v_1,L),(v_2,1,...,v_2,L),...,(v_m,1,...,v_m,L)}. We employ our short video model () to handle the processing and generation of descriptions for each video clip. 
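To make this chunking step concrete, the following is a minimal sketch (not the authors' released code) of splitting a long frame sequence into the non-overlapping clips C_1, ..., C_m of at most L frames each; the default of 45 frames per clip is taken from the maximum reported later for the short video model and is otherwise an assumption.

```python
from typing import List, Sequence, Tuple


def chunk_video(frames: Sequence, subtitles: Sequence[str],
                max_frames_per_clip: int = 45) -> List[Tuple[Sequence, Sequence[str]]]:
    """Split a long video into non-overlapping clips C_1 ... C_m of at most
    `max_frames_per_clip` frames, keeping subtitles aligned with their frames."""
    assert len(frames) == len(subtitles), "one subtitle entry per frame is assumed"
    clips = []
    for start in range(0, len(frames), max_frames_per_clip):
        end = start + max_frames_per_clip
        clips.append((frames[start:end], subtitles[start:end]))
    return clips


if __name__ == "__main__":
    # Toy example: 100 dummy frames split into clips of sizes 45, 45, 10.
    dummy_frames = list(range(100))
    dummy_subtitles = [""] * 100
    clips = chunk_video(dummy_frames, dummy_subtitles, max_frames_per_clip=45)
    print([len(c[0]) for c in clips])   # [45, 45, 10]
```

Each resulting clip would then be passed to the Video Descriptor to obtain its textual summary.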
Drawing from existing LLM-based vision-language models <cit.>, we adapt this framework to the video domain, resulting in our  model. The architecture of this model is illustrated in Fig. <ref>. For the video encoding stage, we utilize EVA-CLIP<cit.>, integrating a projection layer to map visual features from the vision space to the text space of the LLM. To optimize the contextual capabilities of the LLM, we condense every four adjacent visual tokens into a single token, effectively reducing the token count per image by 75%, from 256 to 64 similar as <cit.>. Through training, the LLM learns to process these video frame features, generating comprehensive clip descriptions S_1, S_2, S_m for each clip essential for conducting visual question-answering tasks in the vision-language domain. After generating descriptions for the video clips, we proceed to encode them along with their respective subtitles using a text encoder. The set of encoded descriptions is defined as: {T_s_1, T_s_2, ..., T_s_m}, and the encoded corresponding subtitles {u_1,u_2,...u_m} are defined as {T_u_1, T_u_2, ..., T_u_m}, where T_u_i, T_s_i∈ℝ^d, i∈ [1,m], and d is the dimensionality of text encoder space. Specifically, we employ OpenAI's <cit.> model as our chosen text encoder based on table <ref> in section 4.4 . Retrieval Module. The Retrieval Module plays a crucial role in identifying video clips most pertinent to a user query, leveraging the pre-processed clip embeddings from the Video Descriptor. Upon receiving a user query Q, we initially encode it using the text encoder, resulting in the embedding T_Q ∈ℝ^d. Subsequently, we compute its cosine similarities with each candidate key K_i from the embeddings set of the clip descriptions and subtitles with K_i ∈{T_u_1, T_u_2, ..., T_u_m, T_s_1, T_s_2, ..., T_s_m} via K_i· T_Q/| K_i| | T_Q|. Next, we select the Top-k similarity scores and retrieve the corresponding descriptions or subtitle indexes, effectively eliminating irrelevant clips from the long video. Answer module. In the final stage, we provide the original user query along with our retrieved clip descriptions (and subtitles, if available) as a context to our answer module, which generates the ultimate query response. For this purpose, we utilize Llama2-chat <cit.> as our chosen Answer module instead of  in the text tasks. see the supplementary for more details and ablations. §.§ Training Pipeline Large-scale image-text pair pretraining. In the first stage, we train a linear layer, similar as <cit.>, which projects the visual feature encoded by the vision encoder (EVA-CLIP <cit.>) to the LLM's text space with captioning loss. We leverage a combined image captioning dataset that includes images from LAION <cit.>, Conceptual Captions <cit.>, and SBU <cit.> to align the visual feature with LLM's input space. To efficiently utilize the context length of LLM for video, we concatenate every four neighboring visual tokens into a single token, reducing the number of tokens per image by 75% from 256 to 64 same as in <cit.>. Large-scale video-text pair pretraining. In the second stage, we enable the model to understand short videos by taking multiple frames as input. Specifically, we sample a maximum of 45 frames from each short video. During this stage, we use the predefined prompts in the following template: <s>[INST]<Img><FrameFeature_1><Sub><Subtitle text_1>... <Img> <FrameFeature_N><Sub><Subtitle text_N><Instruction></INST> where N≤ 45. 
In this prompt, each <FrameFeature> is replaced by the sampled video frame encoded by the vision backbone. The <Subtitle text> represents the subtitle for the corresponding frame if applicable, and <Instruction> represents a randomly sampled instruction from our predefined instruction set containing variant forms of instruction, such as “Briefly describe this video”. We use combined video captioning data incorporating CMD <cit.> and WebVid <cit.> for large-scale video captioning training. Video question answering instruction finetuning. In this phase, we adopt the same training strategy implemented in the second stage but focus on leveraging high-quality video-question-answering datasets for instruction fine-tuning. This fine-tuning stage helps to enhance the model's ability to interpret the input video and generate precise responses to the corresponding questions. The template is the same as the second stage with <Instruction> replaced by general questions as mentioned in the Video-ChatGPT <cit.> dataset. § EXPERIMENTS §.§ Datasets §.§.§ Training Datasets The Condensed Movies Video Captions dataset (CMD) <cit.> includes around 15,938 videos, with lengths between one to two minutes. However, CMD's captions are of limited quality, featuring an average sentence length of 14 words so we used it in the pre-training stage. The Webvid dataset <cit.> contains two million videos. For our purposes, we've filtered 42K from this dataset to match CMD's video duration range, focusing on videos lasting one to two minutes and also used this dataset in the pre-training dataset. The Video Instruction Dataset <cit.> offers 100K question-answer pairs across 13,224 videos, distinguished by its high-quality annotations. Questions come with detailed answers, averaging 57 words per sentence. This data set spans various types of questions, including video summarization-based and description-based QAs that delve into spatial, temporal, relationships, and reasoning aspects, as well as creative or generative QAs. §.§.§ Short Benchmarks Our  is tested with Video ChatGPT benchmark five skills and with open-ended and MCQ video-question answering benchmarks. The Video ChatGPT benchmark <cit.>, utilizing the ActivityNet-200 dataset <cit.>, is designed to test video-based conversation models on text generation, focusing on five critical dimensions: 1) Correctness of Information: Verifies the generated text's accuracy with video content to avoid errors or misinformation. 2) Detail Orientation: Assesses the responses for thoroughness and detail, ensuring coverage of essential video elements and inclusion of specific, rather than broad, information. 3) Contextual Understanding: Gauges the model's grasp of video context, ensuring responses are contextually appropriate. 4) Temporal Understanding: Checks the model's perception of event sequences within the video. 5) Consistency: Tests output reliability through similar question comparisons. For open-ended questions, model performance is measured using established datasets like MSRVTT-QA <cit.>, MSVDQA <cit.>, TGIF-QA FrameQA <cit.>, and ActivityNet-QA <cit.>. For multi-choice question assessments utilize the TVQA dataset <cit.>, based on popular six TV shows, with a validation set of 15,253 QA pairs for evaluation. §.§.§ Long Benchmarks We have conducted comprehensive evaluations on three extensive and demanding long video benchmarks: Movie-QA <cit.>, LLama-vid <cit.>, and Movie Chat <cit.>. Additionally, we adapted the short video benchmark TVQA for long video analysis. 
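As a rough illustration of how this episode-level adaptation can be constructed, the sketch below regroups clip-level QA annotations by episode so that every question is paired with the full episode rather than its original 1-minute clip; the record field names used here (show, episode_id, clip_id, q, answers, correct_idx) are hypothetical placeholders, not the actual TVQA schema.

```python
from collections import defaultdict
from typing import Dict, List


def build_tvqa_long(clip_annotations: List[Dict]) -> Dict[str, List[Dict]]:
    """Group clip-level QA pairs into episode-level entries (TVQA-Long style).

    Each input record is assumed to carry hypothetical fields:
      show, episode_id, clip_id, q, answers, correct_idx
    The output maps an episode key to all of its questions, so a model must
    search the whole episode to locate the clip that answers each question.
    """
    episodes: Dict[str, List[Dict]] = defaultdict(list)
    for record in clip_annotations:
        episode_key = f"{record['show']}_{record['episode_id']}"
        episodes[episode_key].append({
            "question": record["q"],
            "options": record["answers"],
            "correct_idx": record["correct_idx"],
            # Keep the source clip id so retrieval accuracy can later be
            # measured against the known ground-truth clip.
            "gt_clip_id": record["clip_id"],
        })
    return dict(episodes)


if __name__ == "__main__":
    toy = [
        {"show": "bbt", "episode_id": 1, "clip_id": "c01", "q": "Who enters first?",
         "answers": ["A", "B", "C", "D", "E"], "correct_idx": 2},
        {"show": "bbt", "episode_id": 1, "clip_id": "c07", "q": "What is on the table?",
         "answers": ["A", "B", "C", "D", "E"], "correct_idx": 0},
    ]
    print({k: len(v) for k, v in build_tvqa_long(toy).items()})  # {'bbt_1': 2}
```

Keeping the source clip id for each question is what later allows retrieval accuracy to be measured against a known ground-truth clip.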
For Movie-QA <cit.>, we assessed the movies that overlap between Movie-QA and MovieNet <cit.>, because the Movie-QA videos are not available and only short clips are provided; we ended up with 30 overlapping movies from the validation set, each lasting between 1 and 2 hours. The new validation subset encompasses 1,081 questions, primarily based on the movie plot. The LLama-vid <cit.> dataset features QA pairs focusing on three domains: video summary (1k), movie plot (4k), and detailed reasoning (4k). The absence of category labels prompted us to employ GPT-4 to classify the questions, dividing them into two types: general questions (covering plot and reasoning) and summary questions. Due to the original dataset's training-only designation and lack of a validation set, we created a balanced validation set of 10% of the full data comprising 800 general questions and 100 summary questions, focused solely on textual content. Movie Chat <cit.> includes 1,000 meticulously selected video clips from a variety of movies and TV shows, accompanied by 14,000 manual annotations. These videos span 15 major genres and feature a comprehensive dense caption, three global-mode QA pairs, and ten breakpoint-mode QA pairs with precise timestamps. The collection predominantly consists of videos lasting between 10K and 12K frames, with 14.6% exceeding this range and only 8.6% falling short, categorized exclusively as visual content. We evaluated only on the publicly released training data, which is about 10% of the dataset, because the test data had not been released while this project was being implemented. Furthermore, we introduce an enhanced benchmark based on TVQA, comprising a validation set with 15,253 QA pairs derived from 842 episodes, addressing both textual and visual queries. While TVQA originally focused on short 1-minute clips, we have expanded the scope to incorporate entire episodes into the assessment, regardless of the specific video segment to which the question pertains. This adjustment, termed TVQA-Long, significantly increases the difficulty by requiring the analysis of the complete video content to locate the answers. It also facilitates the measurement of retrieval accuracy, as the ground-truth clip for each question is known. §.§.§ Evaluation Metrics. For open-ended questions it is hard to evaluate the output of the LLM against the ground truth, so following the Video-ChatGPT evaluation protocol <cit.> we employed GPT-3.5 Turbo to compare the generated results with the ground truth. We used the same prompt as Video-ChatGPT <cit.> to allow a fair comparison with their results. §.§ Ablation Studies §.§.§ Retrieval Importance. The retrieval system is one of our core contributions; thus, before ablating its inner design, we conduct a simple experiment to demonstrate its importance. To this end, given a long video as input, we directly feed a sampled version of it: we downsample the input video by sampling 45 frames to fit the context length of our MiniGPT4-Video model and then feed it to our architecture as one clip. This can be seen as a vanilla approach to processing a long video with our short video model. To avoid the huge information loss caused by this vanilla approach, we propose our retrieval module, which, given N clips, automatically retrieves the top-k clips related to the input question Q. The performance of our model without the retrieval module is close to random, with an accuracy of approximately 25.07%.
However, when the retrieval module is incorporated, the accuracy significantly improves, rising to 41.78% on the TVQA-Long benchmark. Notably, the TVQA-Long benchmark has 5 options per question, resulting in a random-accuracy baseline of 20%. §.§.§ Retrieval Inner Design. After demonstrating the importance of retrieval, we ablate each design choice needed to implement an efficient retrieval system. For each clip i, given a question Q, the subtitle and summary embeddings, termed E^i_sub and E^i_sum, respectively, we need to determine the best way to retrieve the clip corresponding to the input question. To this end, we explored four possible approaches: 1) Using only E^i_sub. 2) Using only E^i_sum. 3) Concatenating both embeddings E^i_sub and E^i_sum, namely the “and” approach. 4) Treating each type separately, namely the “or” approach. For instance, if we have 20 clips, we feed 40 embeddings, with 20 representing E^i_sub and 20 representing E^i_sum separately. As shown in Table <ref>, on the TVQA dataset the summaries do not add any value, which could be interpreted as our generated summaries being unrepresentative. Another interpretation, however, is that the questions in the TVQA dataset mainly rely on textual clues rather than visual ones. To support this claim and to truly assess our generated summaries, we exploit the TVR dataset <cit.>, another dataset over the same videos but with different annotations, used for moment retrieval tasks. This dataset has the useful property that its descriptions are labeled as text descriptions, vision descriptions, or text-plus-vision descriptions, i.e., according to whether the description is based on visual clues or on text. As shown in Table <ref>, on TVR-Vision the summary achieves the best performance, which shows the high quality of the summaries generated by our short video model (MiniGPT4-Video). §.§.§ Text Encoder. As shown in Figure <ref>, the input subtitles and the generated summaries are encoded using a text encoder to generate E^i_sub and E^i_sum, respectively. Table <ref> shows the impact of the text encoder on the retrieval accuracy and the overall accuracy, where better retrieval is linearly correlated with the overall accuracy of the long-video model. §.§.§ Answer Module. After obtaining the top-k retrieved clips, the answer module is responsible for fusing the retrieved clips, grounded by the question, to produce the final answer. To this end, several variants are studied, as shown in Figure <ref>: A) Feed the retrieved summaries Sum and subtitles Sub with the question directly to the LLM, which either answers the question or says “I don't know” if the provided information is not sufficient. B) Feed the selected video clips V and the question Q to MiniGPT4-Video to generate new information info_Q grounded in the question. Then feed this new information info_Q, the general input summary Sum, and the question Q to the LLM to produce the final answer. C) Follow the previous option, while also adding the original subtitles to the context. The table in Figure <ref> demonstrates that option A is the best approach: feed the summaries and the subtitles directly to the LLM. In contrast, when we feed the video clips V, the accuracy drops significantly (options B and C). The reason behind this drop is model hallucination, especially when the question is not related to the retrieved clip, which leads to confusing information being added to the context info_Q.
Please refer to the supplementary materials for detailed examples of the model hallucination in options B and C. To evaluate our framework's robustness with extended video lengths, we created three versions of the TVQA dataset by altering the aggregation window. This window compiles long videos from ground-truth short clips that include the answer to a question. Specifically, we combined 5, 10, and 20 clips to produce videos averaging 6, 12, and 24 minutes, respectively. Figure <ref> illustrates that our framework maintains its robustness regardless of video length, with both retrieval performance and overall accuracy remaining consistent even as video duration increases. These outcomes, detailed in Figure <ref>, are based on an analysis of 5% of the TVQA validation set. §.§ Comparison to State-Of-The-Art §.§.§ Long Video Benchmarking We evaluate the efficacy of our proposed framework, , across several well-established benchmarks, specifically the LLama-Vid <cit.>, MovieChat<cit.>, Movie QA <cit.>, and TVQA-Long <cit.> datasets. To thoroughly examine our framework's capabilities, we analyze input modalities in two configurations: vision-only (V) and vision combined with input subtitles (V+T). Our findings, detailed in Table <ref>, indicate that our framework surpasses all existing long video baselines in the vision modality.We establish state-of-the-art (SOTA) performance on these challenging benchmarks. This achievement holds true even under an unfair comparison against LLama-Vid <cit.>, which benefits from using the MovieNet dataset while training and these movies are in both LLama-vid <cit.> benchmark and Movie QA <cit.>. Despite this advantage, our results significantly outperform the competition. Incorporating both video frames and aligned subtitles into our model leads to an average performance boost of 8% across the benchmarks. As highlighted in Table <ref>, this enhanced approach enables us to outperform LLama-Vid <cit.> on the TVQA benchmark, providing a fair comparison since LLama-Vid <cit.> utilizes the other benchmarks during its training phase. §.§.§ Short Video Benchmarking On short-video understanding, we continue to secure state-of-the-art (SOTA) results, outperforming contemporaneous works, including LLama-Vid <cit.>. To validate our framework's proficiency in short-video analysis, we conducted evaluations against current SOTA methodologies across an extensive suite of five benchmarks: Video ChatGPT, MSVD, MSRVTT, TGIF, and TVQA. These benchmarks collectively offer a comprehensive platform for assessing short-video comprehension capabilities, with five focusing on open-ended questions and TVQA featuring multiple-choice questions. Our results, presented in Tables <ref> and <ref>, demonstrate our framework's superiority over competing methods by a significant margin, affirming our considerable advancements across a varied and demanding collection of benchmarks. To thoroughly evaluate our approach, we devised two variations of our framework: one analyzing purely visual elements and another incorporating subtitles. The performance enhancements achieved with these models are noteworthy, registering gains of 3.23%, 2.03%, 16.5% and 23.59% on the MSVD, MSRVTT, TGIF, and TVQA benchmarks respectively. This underscores our framework's ability to achieve SOTA results across the board, markedly elevating performance in the domain of short-video understanding. The visualization results of our method are shown in Fig. <ref>. We will show more visualization results in the appendix. 
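To make the retrieval design behind these results concrete, below is a minimal sketch (an illustrative re-implementation, not the released code) of the cosine-similarity top-k clip selection over summary and subtitle embeddings, following the separate (“or”) key-pool variant discussed in the ablations; the embedding function is a placeholder for whichever text encoder is used.

```python
import numpy as np
from typing import Callable, List, Sequence, Tuple


def topk_clips(query: str,
               clip_summaries: Sequence[str],
               clip_subtitles: Sequence[str],
               embed: Callable[[str], np.ndarray],
               k: int = 3) -> List[int]:
    """Return the indices of the top-k clips whose summary or subtitle embedding
    is most cosine-similar to the query embedding ("or" pooling of the two key sets)."""
    # One candidate key per summary and per subtitle, each remembering its clip index.
    keys: List[Tuple[int, np.ndarray]] = [(i, embed(s)) for i, s in enumerate(clip_summaries)]
    keys += [(i, embed(s)) for i, s in enumerate(clip_subtitles)]

    q = embed(query)
    q = q / (np.linalg.norm(q) + 1e-8)

    scored = []
    for idx, key in keys:
        key = key / (np.linalg.norm(key) + 1e-8)
        scored.append((float(np.dot(q, key)), idx))

    # Rank by similarity and keep the first k distinct clip indices.
    selected: List[int] = []
    for _, idx in sorted(scored, key=lambda s: s[0], reverse=True):
        if idx not in selected:
            selected.append(idx)
        if len(selected) == k:
            break
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_embed = lambda text: rng.standard_normal(16)   # stand-in for a real text encoder
    print(topk_clips("What does the character drink?",
                     ["summary of clip 0", "summary of clip 1", "summary of clip 2"],
                     ["subtitles of clip 0", "subtitles of clip 1", "subtitles of clip 2"],
                     fake_embed, k=2))
```

The selected clip summaries (and subtitles, when available) are then concatenated with the question and passed to the answer module.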
§ CONCLUSION In this paper, we identified the main challenges that current video-centric LLMs face in processing long videos. Based on these analyses, we introduced the Goldfish method, which eases the noise-and-redundancy challenge and the computational-and-memory challenge. Goldfish introduces a retrieval approach that focuses on the top-k relevant clips, allowing efficient processing of videos of any length. In contrast, most previous models can only process minutes-long videos. We developed MiniGPT4-Video, which extends video content interpretation from a single frame to multiple frames, significantly improving video understanding. This model serves both as a part of Goldfish for long video summarization and as a standalone model for short video tasks. Our Goldfish achieves state-of-the-art results in long video understanding across four benchmarks with only the vision content, and SOTA results with vision and subtitles under zero-shot evaluation on TVQA. Notably, on the proposed TVQA-long benchmark, we outperformed the previous method by 14.94%. Our short video model also exceeds performance standards on short video benchmarks. We hope our proposed method and the TVQA-long benchmark can benefit future research on long video understanding. The supplementary material provides: * Section <ref>: Ablation on different values of k in top-k retrieval. * Section <ref>: Model hallucination problem. * Section <ref>: Video LLM in text tasks. * Section <ref>: Video length robustness. * Section <ref>: Prompt details. * Section <ref>: Implementation details. * Section <ref>: Qualitative results. § TOP K EFFECT In this section, we explore how accuracy is affected by the value of k in the top-k retrieval design of Section 3 of the main paper. From Table <ref>, we can see that top-3 achieved the best results in the “Vision + subtitles” experiments. When employing the general model summary, we observed that the accuracy improved when incorporating information from several neighbors. However, when this information was increased excessively, such as including data from five neighbors, the accuracy declined due to the noise introduced by numerous incorrect details unrelated to the question. This phenomenon is evident in the first four rows. From rows 5 to 8 we can see that the accuracy decreases as the number of neighbors increases, because related information from the wrong clips distracts the model. We observe the same behavior in the “Vision Only” and “Subtitle Only” experiments. § MODEL HALLUCINATIONS In our case, the model hallucinates when the VideoLLM is asked questions unrelated to the video: the VideoLLM generates incorrect information, which misguides the answer module away from the right answer. After retrieving the top-k clips, our goal is to filter these clips down to the single correct one. Theoretically, we could prompt each retrieved clip with the query and keep only the clip that produces an answer. However, as is common in generative models, we find that the model hallucinates and outputs an answer instead of stating that it does not have the required information. This issue particularly arises when the clips originate from the same episode. We do see the VideoLLM respond that it lacks the information to answer the question when the clip is entirely unrelated to it.
For instance, for the multiple-choice questions in TVQA, suppose the top three retrieved clips are such that one of them is the correct clip and the other two are incorrect. When the VideoLLM is fed a wrong clip, it chooses a wrong option; when fed the other wrong clip, it chooses another wrong option; and when fed the correct clip, it may choose the correct option based on the video content or may still choose a wrong one. In either case, the answer module sees three candidate answers in its context, and this distracts it from answering correctly even if one of them is the correct answer, as evidenced by Table <ref>: the accuracy dropped by around 14% in the vision-and-subtitles setting and by 2% in the vision-only setting. § MINIGPT4-VIDEO IN TEXT TASKS Here, we examine how the fine-tuned version of Llama 2 (our MiniGPT4-Video) performs compared to the original Llama 2 on text tasks. We used MiniGPT4-Video as the answer module in the Goldfish system. We can tell from Table <ref> that MiniGPT4-Video has lost some text skills during the vision-task fine-tuning, so we decided to use the original Llama to get the best performance. § VIDEO LENGTH ROBUSTNESS Ablation study of the video length impact on 5% of the TVQA validation set:
Video Length    Retrieval Acc.    Overall Acc.
5-6 Min         60.2              40.8
10-12 Min       60.2              41.3
20-30 Min       60.2              40.8
To evaluate our framework's robustness with extended video lengths, we created three versions of the TVQA dataset by altering the aggregation window. This window compiles long videos from ground-truth short clips that include the answer to a question. Specifically, we combined 5, 10, and 20 clips to produce videos that average 6, 12, and 24 minutes, respectively. Table <ref> illustrates that our framework maintains its robustness regardless of video length, with both retrieval performance and overall accuracy remaining consistent even as video duration increases. These results, detailed in Table <ref>, are based on an analysis of 5% of the TVQA validation set. § PROMPTS DETAILS §.§.§ Evaluation prompts. We followed the same evaluation setting as Video-ChatGPT <cit.>. The {question}, {answer}, and {pred} placeholders in the prompt correspond to the question, the ground-truth answer, and the model prediction, respectively. The system prompt is as follows: You are an intelligent chatbot designed for evaluating the correctness of generative outputs for question-answer pairs. Your task is to compare the predicted answer with the correct answer and determine if they match meaningfully. Here's how you can accomplish the task: INSTRUCTIONS: * Focus on the meaningful match between the predicted answer and the correct answer. * Consider synonyms or paraphrases as valid matches. * Evaluate the correctness of the prediction compared to the answer. User prompt: Please evaluate the following video-based question-answer pair: Question: {question} Correct Answer: {answer} Predicted Answer: {pred} Provide your evaluation only as a yes/no and score where the score is an integer value between 0 and 5, with 5 indicating the highest meaningful match. Please generate the response in the form of a Python dictionary string with keys `pred' and `score', where the value of `pred' is a string of `yes' or `no' and the value of `score' is an INTEGER, not STRING. DO NOT PROVIDE ANY OTHER OUTPUT TEXT OR EXPLANATION. Only provide the Python dictionary string. For example, your response should look like this: {`pred': `yes', `score': 4.8}. §.§.§ Summary prompts.
Below is the summary prompt to obtain the vision summary of the clip: Generate a description of this video. Pay close attention to the objects, actions, emotions portrayed in the video, providing a vivid description of key moments. Specify any visual cues or elements that stand out. §.§.§ Extract the related information prompt : In the multi-choice questions, we added the choice “I don't know” as the fifth choice, and the {question} is a placeholder for the question itself in the prompt. The prompt is as follows: From this video extract the related information to This multichioce question and provide an explaination for your answer and If you don't know the answer, say 'I DON'T KNOW' as option 5 because maybe the questoin is not related to the video content. the question is: {question} your answer: § IMPLEMENTATION DETAILS Our models are trained with 4 A100 GPUs. The training process involved three distinct stages, with specific durations allocated to each. The initial stage focused on image-text training and spanned a period of two days. Subsequently, the second stage, dedicated to pre-training with video captions datasets, lasted one day, followed by the third stage, involving instruction tuning, which extended over three days. Throughout these stages, we maintained a batch size of 4 and utilized the AdamW optimizer in conjunction with a cosine learning rate scheduler, setting the learning rate to 1e-4. Our visual backbone consisted of the EVA-CLIP V1 <cit.> architecture, with the frozen weights. Notably, we trained the linear projection layer and performed efficient fine-tuning of the language model using LoRA <cit.> (Low-Rank Adaptation). Specifically, we fine-tuned the W_q and W_v components with a rank (r) of 64 and a LoRA-alpha value equal 16. The entire model was trained with a consistent image resolution of 224×224 pixels, ensuring uniformity across all stages. § QUALITATIVE RESULTS §.§ Long Video Fig <ref> and Fig <ref> shows one example of the goldfish demo. Please refer to this https://1drv.ms/u/s!ApW05sOkCBBda4QP8kNVwa9WbFE?e=XnOdJflink for more qualitative video demos. §.§ Short Video <ref> demonstrate qualitative results of our model MiniGPT4-video on in-the-wild online videos.
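The LoRA setup reported in the implementation details above can be approximated with the Hugging Face peft library as sketched below; this is not the authors' training script, and the 7B chat checkpoint, the target module names (q_proj, v_proj), and the dropout value are assumptions on top of the reported rank 64 and alpha 16.

```python
# pip install torch transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumption: a LLaMA-2 chat checkpoint (gated on the Hub) serves as the language backbone.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# LoRA on the query/value projections with rank 64 and alpha 16,
# mirroring the hyper-parameters reported in the implementation details.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,  # assumption: the dropout value is not reported in the paper
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters (plus, separately, the projection layer) are updated
```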
http://arxiv.org/abs/2407.12371v1
20240717074734
HIMO: A New Benchmark for Full-Body Human Interacting with Multiple Objects
[ "Xintao Lv", "Liang Xu", "Yichao Yan", "Xin Jin", "Congsheng Xu", "Shuwen Wu", "Yifan Liu", "Lincheng Li", "Mengxiao Bi", "Wenjun Zeng", "Xiaokang Yang" ]
cs.CV
[ "cs.CV", "cs.AI" ]
HIMO: Full-Body Human Interacting with Multiple Objects X. Lv et al. MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China NetEase Fuxi AI Lab <https://lvxintao.github.io/himo> HIMO: A New Benchmark for Full-Body Human Interacting with Multiple Objects Xintao Lv1* 0009-0005-9986-4080 Liang Xu1,2*0000-0002-6441-4443 Yichao Yan1†0000-0003-3209-8965 Xin Jin20000-0002-1820-8358 Congsheng Xu10009-0007-2619-8827 Shuwen Wu10009-0002-6601-3253 Yifan Liu10009-0008-5158-9667 Lincheng Li30000-0003-3626-4094 Mengxiao Bi30009-0007-6680-481X Wenjun Zeng20000-0003-2531-3137 Xiaokang Yang10000-0003-4848-2304 July 22, 2024 ==================================================================================================================================================================================================================================================================================================================================================================== ^*Equal contribution ^†Corresponding author § ABSTRACT Generating human-object interactions (HOIs) is critical with the tremendous advances of digital avatars. Existing datasets are typically limited to humans interacting with a single object while neglecting the ubiquitous manipulation of multiple objects. Thus, we propose HIMO, a large-scale MoCap dataset of full-body human interacting with multiple objects, containing 3.3K 4D HOI sequences and 4.08M 3D HOI frames. We also annotate HIMO with detailed textual descriptions and temporal segments, benchmarking two novel tasks of HOI synthesis conditioned on either the whole text prompt or the segmented text prompts as fine-grained timeline control. To address these novel tasks, we propose a dual-branch conditional diffusion model with a mutual interaction module for HOI synthesis. Besides, an auto-regressive generation pipeline is also designed to obtain smooth transitions between HOI segments. Experimental results demonstrate the generalization ability to unseen object geometries and temporal compositions. Our data, codes, and models will be publicly available for research purposes. § INTRODUCTION Humans constantly interact with objects as daily routines. As a key component for human-centric vision tasks, the ability to synthesize human-object interactions (HOIs) is fundamental with numerous applications in video games, AR/VR, robotics, and embodied AI. However, most of the previous datasets and models <cit.> are limited to interacting with a single object, yet neglect the ubiquitous functionality combination of multiple objects. Intuitively, the multiple objects setting is more practical and allows for broader applications, such as manipulating multiple objects for robotics <cit.>. The scarcity of such datasets mainly results in the underdevelopment of the synthesis of human interacting with multiple objects, as listed in <ref>. GRAB <cit.> builds the dataset of 4D full-body grasping of daily objects (“4D” refers to 3D geometry and temporal streams, “full-body” emphasizes the body movements and dexterous finger motion), followed by several 4D HOI datasets with RGB modality <cit.>, interacting with articulated <cit.> or sittable <cit.> objects. 
For text-driven HOI generation, <cit.> manually label textual descriptions for the BEHAVE <cit.> and CHAIRS <cit.> datasets, and Li <cit.> collect a dataset of humans interacting with daily objects with textual annotations. Despite this significant development, all of these datasets focus on single-object interactions. Thus, a dataset of full-body humans interacting with multiple objects, incorporating detailed textual descriptions, is highly desired. Capturing 4D HOIs is challenging due to the subtle finger motions, severe occlusions, and the precise tracking of various objects. Compared with a single object, interacting with multiple objects is more complicated regarding the spatial movements between human and object and between object and object, and the temporal scheduling of several atomic HOI intervals. For example, the process of a person interacting with a teapot and a teacup could be: “Lift the teapot” → “Pour tea to the teacup” → “Drink tea”, as shown in <ref>. To facilitate the synthesis of Human Interacting with Multiple Objects, we build a large-scale dataset called HIMO. We adopt an optical MoCap system to obtain precise body movements and track the motion of objects attached with reflective markers. For the dexterous finger motions, we employ wearable inertial gloves to avoid occlusions. In total, 3.3K 4D HOI sequences with 34 subjects performing combinations of 53 daily objects are presented, resulting in 4.08M 3D HOI frames. Various subjects, object combinations, and interaction patterns also ensure the diversity of HIMO. To facilitate the study of text-driven HOI synthesis <cit.>, we annotate the HOI sequences with fine-grained textual descriptions. Different from existing datasets, we additionally segment the long interaction sequences temporally and align the corresponding texts semantically, which allows for fine-grained timeline control and a flexible scheduling of multiple atomic HOI sequences. Revisiting the example of pouring tea, after training on our dataset, novel HOI compositions such as “Drink tea” → “Lift the teapot” → “Add more tea to the teacup” can be synthesized thanks to our fine-grained temporal segments. Although temporal segmentation is widely adopted in the video action domain <cit.>, we are the first to introduce such annotations for 4D HOIs. Our HIMO dataset enables two novel generative tasks: 1) text-driven HOI synthesis with multiple objects (denoted as HIMO-Gen); 2) HOI synthesis conditioned on segmented texts as timeline control (denoted as HIMO-SegGen). Concretely, instead of a single text prompt as the condition as in HIMO-Gen, HIMO-SegGen requires a series of multiple consecutive texts as guidance. In both tasks, we generate the synchronized human motion and object motions conditioned on the language descriptions and the initial states of the human and the objects. Inspired by previous works on modeling human-human interaction generation <cit.> and human interaction with a single object <cit.>, it is straightforward to design a dual-branch conditional diffusion model to generate the human and object motion, respectively. However, this naive solution suffers from the following challenges. First, the generated human motion and object motions can be spatio-temporally misaligned. Second, implausible contacts between human and object and between object and object are commonly encountered. To guarantee the coordination between the human motion and the object motions, we further design a mutual interaction module implemented as a stack of multiple mutual-attention layers.
We also add an object-pairwise loss based on the relative distance between the geometry of objects to generate plausible object movements. Experiments show that our method can synthesize realistic and coordinated HOIs. Furthermore, for the task of , each segmented HOI clip paired with the corresponding text can be viewed as a sample to train the model, while the key of lies in the smoothness between the generated clips. We introduce a simple yet effective scheme to auto-regressively generate one HOI clip at a time, conditioning the next clip generation with the last few frames of the previous clip. Experiment results show that conditioning on 10 frames achieves the best performance. Besides, we also show the generalization ability of our model to unseen objects and novel HOI compositions. Overall, our contributions can be summarized as follows: * We collect a large-scale dataset of full-body human interacting with multiple daily objects called HIMO, with precise and diverse 4D-HOI sequences. * We annotate the detailed textual descriptions for each long HOI sequence, together with the temporal segmentation labels, facilitating two novel downstream tasks of and . * We propose a dual-branch conditional diffusion model with a mutual interaction module to fuse the human and object features. We also propose a pipeline to auto-regressively generate the composite HOI sequences. § RELATED WORK §.§ Human-Object Interaction Datasets Many human-object interactions together with the hand-object interaction datasets have been proposed to facilitate the development of modeling HOIs. Several works <cit.> focusing on hand-object interaction have been proposed. Though effective, modeling hand-object interactions falls significantly short of comprehending the nature of human-object interaction, since the interaction process involves more than just hand participation. Full-body interactions <cit.> may enhance our understanding of HOIs. GRAB <cit.>, InterCap <cit.> and ARCTIC <cit.> broaden the range of 4D-HOIs to full-body interactions. However, all of these full-body datasets only involve one single object, while the combination of several different objects is common in our daily lives. The KIT Whole-body dataset <cit.> contains motion in which subjects manipulate several objects simultaneously. However, the human body is represented as a humanoid model instead of a more realistic SMPL-X model. We address the deficiency by introducing a full-body 4D-HOI dataset that involves full-body interaction with multiple objects. Comparisons with the existing HOI datasets are listed in <ref>. §.§ Text-driven Motion Generation Text-driven human motion generation aims at generating human motions based on textual descriptions. Benefiting from large-scale motion-text dataset like KIT-ML <cit.>, BABEL <cit.> and HumanML3D <cit.>, numerous works <cit.> have emerged. TEMOS <cit.> proposes a transformer-based VAE to encode motion and an additional DistillBert <cit.> to encode texts. MDM <cit.> adopts the diffusion model <cit.> for motion generation. MLDM <cit.> denoises the low-dimensional latent code of motion instead of the raw motion sequences. T2M-GPT <cit.> utilizes VQ-VAE <cit.> to compress motion sequences into a learned codebook and then generate human motion in an auto-regressive fashion. GMD <cit.> further adds keyframe conditions to synthesize controllable human motion. In this work, we base our model on the widely adopted MDM <cit.> framework. 
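Since the proposed model later builds on this framework, a minimal MDM-style training step is sketched below for orientation: the denoiser is trained to predict the clean motion x_0 from a noised sample under a text condition, with random condition dropout for classifier-free guidance; the toy network, noise schedule, and hyper-parameters are placeholders, not the architecture used in this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T_STEPS = 1000
betas = torch.linspace(1e-4, 0.02, T_STEPS)            # linear noise schedule (assumption)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)      # cumulative product of (1 - beta_t)


def diffusion_training_step(denoiser: nn.Module,
                            x0: torch.Tensor,           # (B, T, D) clean motion features
                            text_emb: torch.Tensor,     # (B, C) text condition
                            cond_drop_prob: float = 0.1) -> torch.Tensor:
    """One MDM-style training step: predict x0 from a noised motion sample."""
    b = x0.shape[0]
    t = torch.randint(0, T_STEPS, (b,), device=x0.device)
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1)

    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # forward diffusion q(x_t | x_0)

    # Randomly drop the text condition to enable classifier-free guidance at sampling time.
    keep = (torch.rand(b, device=x0.device) > cond_drop_prob).float().unsqueeze(-1)
    x0_pred = denoiser(x_t, t, text_emb * keep)

    return F.mse_loss(x0_pred, x0)   # simple reconstruction objective on x0


if __name__ == "__main__":
    class ToyDenoiser(nn.Module):
        def __init__(self, d=64, c=32):
            super().__init__()
            self.proj = nn.Linear(d + c + 1, d)

        def forward(self, x_t, t, cond):
            t_feat = (t.float() / T_STEPS).view(-1, 1, 1).expand(-1, x_t.shape[1], 1)
            cond_feat = cond.unsqueeze(1).expand(-1, x_t.shape[1], -1)
            return self.proj(torch.cat([x_t, cond_feat, t_feat], dim=-1))

    loss = diffusion_training_step(ToyDenoiser(), torch.randn(4, 16, 64), torch.randn(4, 32))
    print(loss.item())
```

At sampling time, the same network is applied iteratively starting from pure noise, optionally mixing the conditional and unconditional predictions for guidance.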
§.§ Human-Object Interaction Generation Generating plausible and realistic human-object interaction sequences has attracted much attention in recent years <cit.>. GRAB <cit.> generates plausible hand poses grasping a given object. GOAL <cit.> and SAGA <cit.> extend it to full-body grasping motion generation. However, these methods convert the task into the generation of the final pose of a human interacting with the static object. Recent works like InterDiff <cit.> tackle the problem of synthesizing comprehensive full-body interactions with dynamic objects, which leverages a diffusion model <cit.> to predict future HOI with realistic contacts. IMoS <cit.> synthesizes vivid HOIs conditioned on an intent label, such as: “use”, “lift” and “pass”. Notice that none of the aforementioned methods can generate full-body HOI involving multiple objects. §.§ Temporal Action Composition Human motion composition <cit.> with text timeline control aims to generate arbitrary long human motion sequences with smooth and realistic transitions, which can be separated as two categories. First, several auto-regressive methods <cit.> are developed relying on the availability of motion transition annotations. These methods iteratively generate the subsequent motion based on the already generated motion. Second, a few diffusion-based methods <cit.> are proposed to modify the sampling process of diffusion models with overlaps of a predefined transition duration. In our setting of , we also need to generate composite HOI sequences with smooth transitions. Thus, we adopt a simple yet effective auto-regressive pipeline to iteratively synthesize the HOI sequences. § THE HIMO DATASET We present HIMO, a large-scale 4D-HOI dataset of full-body human interacting with multiple objects, comprising accurate and diverse full-body human motion, object motions and mutual contacts. Next, we will introduce the data acquisition process in <ref> and then describe the SMPL-X fitting process and the annotation of textual descriptions and the temporal segments in <ref>. §.§ Data Acquisition Hybrid Capture System. To obtain the accurate movement of the human body against occlusions, we follow <cit.> to adopt the optical MoCap scheme instead of multi-view RGB camera system <cit.>, which guarantees much lower error <cit.>. We believe that high-quality moving sequences are more important than natural images for HOI motion data. An overview of our capturing setup can be seen in <ref>(a,c). We deploy Optitrack <cit.> MoCap system with 20 PrimeX-22 infrared cameras to capture the body motion, where each subject wearing MoCap suits with 41 reflective markers. Each camera can capture 2K frames at 120fps. To capture the dexterous finger movements, we select the inertial Noitom Perception Neuron Studio (PNS) gloves <cit.>, which can precisely record the subtle finger poses in real-time. The inertial gloves can still work well under severe occlusions of human-object and object-object. We frequently re-calibrate the PNS gloves to ensure the capture quality. Spatially, the locating boards attached to the back of the hands provide the rotation information of the wrists, thus the human body movement can be integrated with the finger movements. Temporally, we utilize the Tentacle Sync device to provide timecodes for Optitrack and PNS gloves to synchronize them. Apart from the hybrid MoCap system for full-body motion capture, we also set up a Kinect camera to record the RGB videos from the front view with the resolution of 1080×768. Capturing Objects. 
We choose 53 common household objects in various scenes including the dining rooms, kitchens, living rooms and studies from ContactDB <cit.> and Sketchfab <cit.>. We further reshape them into appropriate sizes and 3D print them, so that the 3D object geometry is aligned with the real printed objects. To precisely track the object motion, we attach several (3-6) reflective markers on their surface with strong glue, as shown in <ref>(b), to track the rigid body composed of the attached markers. The coordinate system of the subject and the objects are naturally aligned. One critical rule of attaching the markers to objects is to mitigate the side effects of manipulating the objects. Empirical results show that 12.5 mm diameter spherical markers achieve more robust tracking performance than smaller markers. Since the Optitrack system tracks the 6-DoF poses of the centroid of the attached markers rather than the centroid of the object, thus we employ a post-calibration process to compensate for the bias between them. The object category definition is listed in the supplementary. Recording Process. Each subject is asked to manipulate the pre-defined combinations of 2 or 3 objects, such as “pour tea from a teapot to a teacup" and “cut an apple on a knifeboard with a knife". Initially, the objects are randomly placed on a table with the height of 74 cm and the subject keeps the rest pose. For each sequence, the subject is required to involve all the provided objects into the manipulation and perform 3 times of each interaction task with different operating patterns for variability. Casual operations yet may not be relevant to the prescribed action are welcome to enhance the complexity of each interaction sequence, such as “shaking the goblet before drinking from it” or “cleaning the knifeboard before cutting food on it”. Finally, 34 subjects participated in the collection of HIMO, resulting in 3,376 HOI sequences, 9.44 hours and 4.08M frames, where 2,486 of them interacting with 2 objects and the other 890 sequences interacting with 3 objects. More details of the setting of the interaction categories can be found in the supplementary materials. §.§ Data Postprocessing Fitting SMPL-X Parameters. The expressive SMPL-X <cit.> parametric model is widely adopted by recent works <cit.> for its generality and flexible body part editing ability. We also adopt the SMPL-X model to represent the human body with finger articulations. Formally, the SMPL-X parameters consist of global orient g∈ℝ^3, body pose θ_b ∈ℝ^21× 3, finger poses θ_h ∈ℝ^30× 3, root translation t∈ℝ^3 and shape parameter β∈ℝ^10. An optimization algorithm is adopted to obtain the SMPL-X parameters for each HOI sequence based on our full-body MoCap data. We initialize the subjects' shape β based on their height and weight following <cit.>. The joint energy term 𝔼_j optimizes body joints to our MoCap data as: 𝔼_j=∑_n=0^N ∑_j=0^J P_n^j -P̂_n^j _2^2, where N is the frame number of the sequence, J is the number of human joints, P_n^j and P̂_n^j represent our MoCap joint position and the joint position regressed from the SMPL-X model, respectively. The smoothing term 𝔼_s smooths the motion between frames and mitigates pose jittering as: 𝔼_s=∑_n=0^N-1∑_j=0^JP̂_n+1^j -P̂_n^j _2^2. Finally, a regularization term 𝔼_r is adopted to regularize the SMPL-X pose parameters from deviating as: 𝔼_r=θ_b_2^2+θ_h_2^2. In total, the whole optimization objective can be summed as follows: 𝔼=α𝔼_j+λ𝔼_s+γ𝔼_r, where α=1,λ=0.1,γ=0.01. Textual Annotations. 
Text-driven motion generation thrives greatly thanks to the emergence of text-motion datasets <cit.>. To empower the research on text-driven HOI synthesis with multiple objects, , , we elaborately annotate fine-grained textual descriptions of each motion sequence. Different from simple HOI manipulations with a single object and a single prompt, our HIMO dataset includes complex transitions among objects and temporal arrangements among different objects. Thus, the annotators are asked to describe the entire interaction procedure, emphasizing the detailed manipulation order, , “First…, then…, finally…”, the interaction pattern, , “picking up”, “rotating” and “pouring”, and the involved human body parts, , the left hand or right hand. We exploit an annotation tool based on <cit.> to enable the annotators to rotate and resize the view to observe the subtle HOIs patterns. More visualization results of the text-HOI pairs are presented in <ref>. Temporal Segment Annotations. As aforementioned, the temporal segments of the long HOI sequences allow for fine-grained timeline control of decomposing the complex operation process into several simple ones. The granularity of the sequence splitting is small, as shown in <ref>. For example, the sequence of “pouring tea from the teapot to the teacup” can be split into “lifting the teapot”, “pouring tea into the teacup” and “putting the teacup down”. Similarly, we implement an annotation tool by <cit.> to better view the interaction details. The HOI sequence and its corresponding texts are simultaneously split into equal segments. Statistically, each sequence contains 2.60 segments on average. § TEXT-DRIVEN HOI SYNTHESIS In this section, we benchmark two novel tasks of in <ref> and in <ref>. For , we propose a dual-branch diffusion-based framework to synthesize the motion of human and objects separately, with meticulously designed mutual interaction module and optimization objectives. For , we apply an auto-regressive generation scheme. §.§ The Framework Data Representation. We denote the human motion as H∈ℝ^T× D_h and the multiple object motions as O = {O_i}_i=0^N_o∈ℝ^T× D_o, where T represents the frame number of the HOI sequence, N_o is the number of the objects, D_h and D_o represent the dimension of the human and object motion, respectively. We adopt the SMPL-X <cit.> parametric model to represent the human movements. The pose state of human at frame i is denoted as H^i, which consists of the global joint positions P^i∈ℝ^52×3, global joint rotation Q^i∈ℝ^52×6 represented by the continuous 6D rotation format <cit.> and translation t^i∈ℝ^3. We also denote the object motion at frame i as O^i (We omit the subscript of the i-th object/geometry for better expression), which comprises of the relative rotation 𝐑^i∈ℝ^6 with respect to the input object's frame and its global translation 𝐓^i∈ℝ^3. To encode the object geometries G={G_i}_i=0^N_o, we follow prior works <cit.> to adopt the Basis Point Set (BPS) representation <cit.>. We sample 1,024 points from the surface of the object meshes, and then for each sampled point, we calculate the directional vector with its nearest neighbor from the basis points set, resulting in G∈ℝ^1024×3 for each object. Problem Formulation. Given the textual description L, the initial states of H^0 and O^0 as the first frame human motion and object motions and the object geometries G. Our model aims to generate the spatio-temporally aligned human motion H together with the object motions O with plausible contacts. 
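As an illustration of the BPS object encoding described in the data representation above, the sketch below computes, for each sampled surface point, the directional vector to its nearest basis point; drawing the basis set uniformly from a ball is a common BPS choice and is an assumption here rather than the paper's exact setup.

```python
import numpy as np


def sample_basis_points(n_basis: int = 1024, radius: float = 1.0, seed: int = 0) -> np.ndarray:
    """Draw a fixed basis point set uniformly from a ball (assumption: ball-shaped basis)."""
    rng = np.random.default_rng(seed)
    pts = rng.standard_normal((n_basis, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    r = radius * rng.random((n_basis, 1)) ** (1.0 / 3.0)   # uniform in volume
    return pts * r


def bps_encode(surface_points: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Encode an object as directional vectors from each sampled surface point
    to its nearest basis point, giving a fixed-size (N, 3) feature."""
    # Pairwise offsets between surface points (N, 3) and basis points (B, 3).
    diff = basis[None, :, :] - surface_points[:, None, :]         # (N, B, 3)
    dist = np.linalg.norm(diff, axis=-1)                          # (N, B)
    nearest = dist.argmin(axis=1)                                 # (N,)
    return diff[np.arange(len(surface_points)), nearest]          # (N, 3)


if __name__ == "__main__":
    basis = sample_basis_points()
    # Stand-in for 1,024 points sampled from an object mesh surface.
    surface = np.random.default_rng(1).random((1024, 3)) - 0.5
    feature = bps_encode(surface, basis)
    print(feature.shape)   # (1024, 3)
```

The resulting fixed-size (1024, 3) feature can then be flattened and fed to the object branch together with the noised object motion.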
Dual-branch Diffusion Models. Our proposed framework is demonstrated in <ref>, which consists of two parallel diffusion-based branches for human and object motion generation and a mutual interaction module for feature fusion. Given a textual description L like “drinks tea with his both hands”, we utilize the CLIP <cit.> as text encoder followed by a linear layer to obtain the semantic embeddings and then concatenate them with the features of the sampling timestep t extracted by an MLP process. Take the human motion generation branch as an example, the initial state H^0 including the initial position and the human pose serves as the conditions. Following <cit.>, we adopt the masked representation by zero padding H_0 to T frames. The masked representations are then concatenated with the noised human motion representation H_t, where t is the timestep of the forward diffusion process. For the object motion generation branch, the object geometry G is embedded by a linear layer and concatenated with the masked representation of the initial pose of objects and the noised representation of object motions. We separately feed the obtained human and object representations into a linear layer to obtain the motion embeddings and then pass them into the denoising mutual interaction module with positional encodings. After that, two linear layers are adopted to obtain the denoised motion representations of human and objects, respectively. Mutual Interaction Module. To model the interaction between human and objects, we fuse the features of the two branches via a mutual interaction module as depicted in <ref>(c). The two branches share the same Transformer architecture with ℓ_enc blocks without sharing weights. Each Transformer block is comprised of two attention layers and one feed-forward layer. The first self-attention layer embeds the aggregated human/object features H^(i) and O^(i) into embeddings 𝐞_h^(i) or 𝐞_o^(i). The second attention layer functions as a mutual attention layer, where the key-value pair of the human branch Transformer is provided by the object hidden embedding 𝐞_o^(i), and vice versa, which can be formulated as: H^(i+1) = FF(softmax(𝐐_h𝐊_o^T/√(C)𝐕_o)) ; O^(i+1)=FF(softmax(𝐐_o𝐊_h^T/√(C)𝐕_h)), 𝐐_h=𝐞_h^(i)𝐖_h^Q_h, 𝐊_h=H^(i) 𝐖_h^K_h, 𝐕_h=H^(i)𝐖_h^V_h, 𝐐_o=𝐞_o^(i)𝐖_o^Q_o,𝐊_o=O^(i) 𝐖_o^K_o, 𝐖_o=O^(i)𝐖_o^V_o, where the subscript h and o denote the human branch and the object branch, respectively, 𝐖_𝐡 and 𝐖_𝐨 denote the trainable parameters of the two branches. Losses Formulation. To model the spatial relation between objects, we designed a novel object-pairwise loss. Our insight is that the relative distance between the interacted objects changes constantly with certain patterns, and explicitly adding constraints on the relative distance between the interacted objects is beneficial. For example, when cutting an apple with a knife, the knife gradually approaches and stays on the surface of the apple for a while, then moves away. We adopt the 𝕃2 loss as ℒ_dis to keep the consistency between the generated results and the ground truth. Besides, we also adopt the widely adopted geometric loss in the field of human motion, including joint position loss ℒ_pos and the joint velocity loss ℒ_vel. To ensure the authenticity of our synthesized motion, we additionally utilized the interpenetration loss ℒ_pen between human and objects. Here, we summarize our ultimate optimization objective as: ℒ=λ_velℒ_vel+λ_posℒ_pos+λ_penℒ_pen+λ_disℒ_dis, and we empirically set λ_vel=λ_pos=λ_pen=1, and λ_dis=0.1. 
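As a concrete illustration of how the object-pairwise term ℒ_dis could be computed, the sketch below applies an L2 penalty to the deviation of predicted inter-object distances from the ground truth; using object centroids rather than the full mesh geometry is a simplification of this sketch, not necessarily the paper's exact formulation.

```python
import torch


def object_pairwise_loss(pred_obj_pos: torch.Tensor, gt_obj_pos: torch.Tensor) -> torch.Tensor:
    """L2 loss on the relative distances between every pair of objects over time.

    pred_obj_pos, gt_obj_pos: (B, T, N_o, 3) object translations (centroids as a
    simplification; the released loss may instead operate on object geometry).
    """
    def pairwise_dist(pos: torch.Tensor) -> torch.Tensor:
        b, t, n, _ = pos.shape
        flat = pos.reshape(b * t, n, 3)
        # (B, T, N_o, N_o) matrix of distances between object centroids.
        return torch.cdist(flat, flat).reshape(b, t, n, n)

    return torch.mean((pairwise_dist(pred_obj_pos) - pairwise_dist(gt_obj_pos)) ** 2)


if __name__ == "__main__":
    batch, frames, num_objects = 2, 100, 3
    pred = torch.randn(batch, frames, num_objects, 3)
    gt = torch.randn(batch, frames, num_objects, 3)
    print(object_pairwise_loss(pred, gt).item())
```

In the full objective this term is weighted by λ_dis = 0.1 and combined with the position, velocity, and penetration terms as in the equation above.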
More details of the loss formulations are provided in the supplementary materials. §.§ The HIMO-SegGen Framework Problem Formulation. The only difference from HIMO-Gen lies in the input text prompts: HIMO-Gen takes a single text description L as the condition, whereas HIMO-SegGen is conditioned on a series of atomic text prompts L={l_1, l_2, ⋯, l_m}. The key of HIMO-SegGen is to ensure smooth and realistic transitions between the generated HOI clips. Generation Pipeline. Our generation pipeline, illustrated in <ref>, is simple yet effective. We segment the long HOI sequences into smaller motion-text pairs so that the natural transitions between consecutive HOIs are preserved. We first train the model on these segments so that it can generate finer-granularity HOIs. Different from the vanilla model, which is only conditioned on the initial state of the first frame, we modify it to condition generation on the past few frames. At inference time, we take the last few frames of the previous generation result as the condition for the next generation, iteratively obtaining the composite HOI result. § EXPERIMENTS In this section, we elaborate on the dataset and implementation details, the baseline methods and the evaluation metrics. We then present extensive experimental results with ablation studies to show the effectiveness of our benchmark. Dataset Details. We follow <cit.> to split our dataset into train, test and validation sets with ratios of 0.8, 0.15 and 0.05. The maximum motion length is set to 300 for HIMO-Gen and 100 for HIMO-SegGen. The maximum text length is set to 40 and 15, respectively. To mitigate the influence of object ordering, we randomly shuffle the objects when loading the data. Implementation Details. For both HIMO-Gen and HIMO-SegGen, the denoising network consists of ℓ_enc=8 layers of mutual interaction modules with 4 attention heads. We train the network using the Adam <cit.> optimizer with a learning rate of 0.0001 and a weight decay of 0.99. Training HIMO-Gen and HIMO-SegGen takes about 9 and 12 hours, respectively, on a single A100 GPU with a batch size of 128. Baseline Methods. We adopt the prominent text-to-motion methods MDM <cit.> and priorMDM <cit.> and the recent intent-driven HOI synthesis method IMoS <cit.> as our baselines. To be specific, we re-implement each model to support conditioning on the object geometry and the initial states of the human and objects. More details of the baselines are provided in the supplementary materials. Evaluation Metrics. Following <cit.>, we train a motion feature extractor and a text feature extractor in a contrastive manner for evaluation. Since we are evaluating the generation quality of HOIs, the motion features here include both the human and the object motion. We then evaluate on the set of metrics proposed by <cit.>: R-precision and MM-Dist measure the relevance between the generated motion and the input prompts, FID measures the dissimilarity between the generated and ground-truth motion in latent feature space, Diversity measures the variability of the generated motions, and MultiModality measures the mean variance of motions generated from a single text prompt. §.§ Results and Analysis Quantitative Comparisons. Quantitative results on the 2-objects and 3-objects partitions of HIMO are reported in <ref> and <ref>. In the 2-objects setting, our method HIMO-Gen outperforms the baseline methods on all metrics, which indicates better generation quality than the baselines.
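The auto-regressive pipeline described above, in which each clip is conditioned on the last few frames of the previous one, can be sketched as follows. Here sample_clip stands in for one sampling pass of the segment-level diffusion model; its interface is an assumption for illustration, not the released API.

import torch

def sample_clip(model, text, cond_frames, obj_geom, clip_len=100):
    """Placeholder for one reverse-diffusion sampling pass of the segment model,
    conditioned on a text prompt, the past frames and the object geometries."""
    return model(text=text, past=cond_frames, geom=obj_geom, length=clip_len)

def autoregressive_hoi(model, prompts, init_frames, obj_geom, n_past=10):
    """Generate a long HOI sequence from a list of atomic prompts [l_1, ..., l_m].

    Each clip is conditioned on the last `n_past` frames of the previous clip
    (10 frames performed best in the ablation); the first clip is conditioned
    on the given initial frames.
    """
    past = init_frames
    clips = []
    for text in prompts:
        clip = sample_clip(model, text, past, obj_geom)   # (T_clip, D_h + D_o)
        clips.append(clip)
        past = clip[-n_past:]                             # seed the next clip
    return torch.cat(clips, dim=0)                        # stitched long sequence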
HIMO-SegGen outperforms HIMO-Gen on MM-Dist and MModality, which may be attributed to its finer-grained control over the generated motion, since HIMO-SegGen takes more concise descriptions as input conditions. In the 3-objects setting of HIMO, our method also outperforms the baselines on the R-precision and MM-Dist metrics, which further shows the effectiveness of our method. Qualitative Results. Qualitative results of our method are depicted in <ref>, showing that our method can synthesize plausible human-object interaction sequences. More generation results and visual comparisons are presented in the supplementary materials. Generalization Experiments. We find that our trained model also applies to unseen object meshes. Besides, with our elaborately annotated temporal segments, our trained model can generate novel HOI compositions. Due to space limits, more results are provided in the supplementary materials. §.§ Ablation Study We conduct extensive ablation studies on HIMO-Gen and HIMO-SegGen, as reported in <ref> and <ref>. For HIMO-Gen, we experiment with three settings: 1) replacing the mutual interaction module with the original Transformer layer (w/o IM); 2) removing the object-pairwise loss (w/o ℒ_dis); 3) generating the human motion conditioned on the previously generated object motions, as in <cit.>. From <ref>, we observe that removing the mutual interaction module results in a large drop in R-precision, which verifies the effectiveness of the feature fusion between the human and object branches. The object-pairwise loss also helps substantially in constraining the generated motions. The consecutive generation scheme causes a large performance drop compared with generating the human and object motions simultaneously. For HIMO-SegGen, we experiment with conditioning on the past 1, 5, 10 and 20 frames. As shown in <ref>, “10 frames” achieves the best performance in R-precision, FID and MM-Dist. More past frames (“20 frames”) provide no additional performance gain, which may result from less attention being paid to the frames to be predicted. § CONCLUSION In this paper, we propose HIMO, a dataset of full-body humans interacting with multiple household objects. We collect a large number of 4D HOI sequences with precise and diverse human motions, object motions and mutual contacts. Based on that, we annotate detailed textual descriptions for each HOI sequence to enable text-driven HOI synthesis. We also equip it with elaborate temporal segments to decompose the long sequences into several atomic HOI steps, which facilitates HOI synthesis driven by consecutive text prompts. Extensive experiments show plausible results, verifying that the HIMO dataset is promising for downstream HOI generation tasks. Limitations: Our work has the following limitations: 1) Large objects: our HIMO dataset is built mainly on small household objects and does not include humans interacting with large objects such as chairs and suitcases. 2) RGB modality: though we set up a Kinect camera to capture monocular RGB sequences, the human and object appearances are unnatural, since the performers wear tight MoCap suits and the objects are attached with reflective markers. 3) Facial expressions: we neglect facial expressions since our dataset mainly focuses on the manipulation of multiple objects, i.e., the body movements, finger gestures and object motions, which have little correlation with facial expressions. § ACKNOWLEDGEMENTS This work was supported in part by NSFC (62201342, 62101325) and the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102).
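For completeness, a hedged sketch of the retrieval-style metrics reported in the tables above: MM-Dist averages the distance between matched text and motion features, and R-precision counts how often the true caption ranks in the top k within a candidate pool. The pool size of 32 and the top-3 choice follow common text-to-motion evaluation practice and are assumptions here, not values stated in this paper.

import torch

def mm_dist(text_feats: torch.Tensor, motion_feats: torch.Tensor) -> torch.Tensor:
    # Mean distance between each generated motion feature and its own caption feature.
    return (text_feats - motion_feats).norm(dim=1).mean()

def r_precision(text_feats: torch.Tensor, motion_feats: torch.Tensor,
                top_k: int = 3, pool: int = 32) -> torch.Tensor:
    """For each motion, rank `pool` candidate captions (its own plus pool-1 mismatched
    ones drawn from the batch) by feature distance; count a hit if the true caption
    lands in the top_k."""
    n = motion_feats.shape[0]
    hits = 0
    for i in range(n):
        others = [(i + j) % n for j in range(1, pool)]        # distractor captions
        cand = text_feats[[i] + others]                       # (pool, D), index 0 = true
        d = (cand - motion_feats[i]).norm(dim=1)
        hits += int((d.argsort()[:top_k] == 0).any())
    return torch.tensor(hits / n)

# Usage with random features (batch of 64, feature dimension 512).
t = torch.randn(64, 512)
m = t + 0.1 * torch.randn(64, 512)   # motions roughly aligned with their captions
print(mm_dist(t, m), r_precision(t, m))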
HIMO: A New Benchmark for Full-Body Human Interacting with Multiple Objects (Appendix) § LOSS FORMULATIONS OF HIMO-GEN To train the HIMO-Gen framework, we adopt four types of losses specifically designed for our task of text-driven human-object interaction (HOI) synthesis. To generate authentic human motion, we adopt the geometric losses widely used in recent text-to-motion works <cit.>, including the joint position loss ℒ_pos and the joint velocity loss ℒ_vel. Formally, we denote the ground-truth j-th joint position of the human at the n-th frame as P_n^j, and the generated result as P̂_n^j. The joint position loss ℒ_pos is formulated as: ℒ_pos=1/N∑_n=1^N∑_j=1^J‖P_n^j-P̂_n^j‖_2^2, where N is the length of the human motion and J is the number of joints. Similarly, the joint velocity loss ℒ_vel is formulated as: ℒ_vel=1/N-1∑_n=1^N-1∑_j=1^J‖(P_n+1^j-P_n^j)-(P̂_n+1^j-P̂_n^j)‖_2^2. Following <cit.>, a signed distance field (SDF) based interpenetration loss ℒ_pen is also adopted to prevent collisions between the human and the objects. Let ϕ be a modified SDF of the human body defined as follows: ϕ(x,y,z)=-min(SDF(x,y,z),0). According to this definition, points inside the human body have positive values of ϕ proportional to the distance from the surface, while ϕ equals 0 outside the human body. Here we define ϕ on a voxel grid of dimensions N_h× N_h× N_h, where N_h=32. For an object o, its i-th sampled surface point is denoted as v_o^i. Thus, the interpenetration loss of object o colliding with the human is defined as: P_o=∑_i=1^Sϕ̃(v_o^i), where S is the number of sampled surface points, and ϕ̃(v_o^i) samples the ϕ value for each 3D point v_o^i in a differentiable way from the 3D grid via trilinear interpolation. The interpenetration loss for all objects is then formulated as: ℒ_pen=∑_o∈ OP_o, where O is the set of the involved objects. To model the spatial relation between objects, we further use an object-pairwise loss ℒ_dis. Formally, we denote the sampled point sets of objects i and j over the whole HOI sequence as V_i^1:N and V_j^1:N, and the point sets transformed by the generated motion as V̂_i^1:N and V̂_j^1:N. We then define the distance between the two objects as Δ V_ij=‖V_i^1:N-V_j^1:N‖_2^2. Our insight is that this distance follows a certain pattern, so we adopt an 𝕃2 loss to keep the generated results consistent with the ground truth. The object-pairwise loss is then formulated as: ℒ_dis=∑_i≠ j‖Δ V_ij-ΔV̂_ij‖_2^2. § IMPLEMENTATION DETAILS OF BASELINES We re-implement MDM <cit.>, PriorMDM <cit.> and IMoS <cit.> to support conditioning on the object geometries and the initial states of the human and objects. Below we detail the implementation of each model. MDM. <cit.> We extend the original feature dimensions of the input and output of MDM <cit.> from D_h to D_h+D_o, where D_h denotes the dimension of the human motion representation and D_o that of the object motion representation. To embed the object geometry condition, we feed it into a linear layer and concatenate it with the initial pose of the object. All conditions are then concatenated with the noised input into the motion embedding. PriorMDM. <cit.> The original PriorMDM <cit.> is intended for two-person motion generation, with two branches of MDM and a single ComMDM module to coordinate the two branches. We modify the two human-motion branches into a human-motion branch and an object-motion branch.
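A hedged PyTorch sketch of the two HOI-specific loss terms formulated earlier in this appendix, the trilinearly sampled interpenetration penalty and the object-pairwise distance loss, is given below. Tensor layouts, the normalization of query points to the SDF grid range and equal point counts per object are simplifying assumptions of this sketch.

import torch
import torch.nn.functional as F

def interpenetration_loss(phi_grid: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """phi_grid: (1, 1, N, N, N) modified SDF of the body (positive inside, 0 outside).
    points: (M, 3) object surface points, assumed already normalized to [-1, 1] in the
    grid's (x, y, z) convention. Samples phi by trilinear interpolation and sums it."""
    grid = points.view(1, -1, 1, 1, 3)                        # (1, M, 1, 1, 3)
    phi = F.grid_sample(phi_grid, grid, align_corners=True)   # (1, 1, M, 1, 1)
    return phi.sum()

def object_pairwise_loss(pred_pts: dict, gt_pts: dict) -> torch.Tensor:
    """pred_pts / gt_pts: {object_id: (T, S, 3)} sampled surface points over time,
    with the same number S of points per object. Penalizes deviation of the pairwise
    object distances from the ground truth."""
    loss = 0.0
    ids = sorted(pred_pts)
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d_pred = (pred_pts[ids[a]] - pred_pts[ids[b]]).norm(dim=-1)  # (T, S)
            d_gt = (gt_pts[ids[a]] - gt_pts[ids[b]]).norm(dim=-1)
            loss = loss + ((d_pred - d_gt) ** 2).mean()
    return loss

# Usage with dummy data: a 32^3 SDF grid and two objects over 100 frames.
phi = torch.rand(1, 1, 32, 32, 32)
pts = torch.rand(500, 3) * 2 - 1
print(interpenetration_loss(phi, pts))
pred = {0: torch.rand(100, 64, 3), 1: torch.rand(100, 64, 3)}
gt = {0: torch.rand(100, 64, 3), 1: torch.rand(100, 64, 3)}
print(object_pairwise_loss(pred, gt))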
We also place the ComMDM module after the fourth Transformer layer of each branch to enable communication between the two branches. IMoS. <cit.> IMoS <cit.> is an intent-driven HOI synthesis model with a VAE <cit.> architecture. We replace the input action label with our text prompt and integrate the arm and body synthesis modules into one. Additionally, the movement of the objects is also generated through the VAE instead of the original optimization module. § OBJECT CATEGORIES AND INTERACTION SETTINGS §.§ Object Categories We list all 53 daily-life objects of our HIMO dataset in <ref>. §.§ Interaction Settings We list our interaction settings in <ref>. Here A, B and C denote objects in <ref>. §.§ Object Combinations We visualize the distribution of object combinations in <ref>. § GENERALIZATION EXPERIMENTS Unseen object geometries. We examine the generalization ability of our model to unseen object geometries. We choose several object meshes that belong to the same categories as our dataset but have different geometries. We feed their geometries as input conditions to our model and synthesize the corresponding human and object motions. Visualization results are shown in <ref>. We observe that, to some extent, our model can generalize to unseen object meshes, despite some flaws in human-object contact, which may result from the lack of detailed geometric information about these meshes in our model. Novel HOI compositions. We examine the ability of our method to generate novel HOI compositions. We choose several HOI combinations that never appear in our training set, for instance, “Knocks on the beer with a hammer” → “Grabs the beer to observe” → “Puts them down on the table”. We then feed the texts to our model consecutively to auto-regressively generate HOI clips. Visualization results are depicted in <ref>. § MORE VISUALIZATION RESULTS Visualization of Dataset. We present more samples from our HIMO dataset in <ref> and <ref>. Note that we segment both the motion and the text descriptions into several clips. Additional visualization results are presented in the supplementary video. § SOCIETAL IMPACTS AND RESPONSIBILITY TO HUMAN SUBJECTS Our dataset can be leveraged to generate plausible human interactions with multiple objects, which may lead to the creation and spread of misinformation. Regarding privacy concerns, all performers signed an agreement on the release of their motion data for research purposes. We focus on pure motion rather than RGB; thus no RGB videos are released and the performers' identities will not be leaked. Visualization of HIMO-SegGen. The generated results of HIMO-SegGen are shown in <ref>. Each text prompt is sent to HIMO-SegGen, which generates motion clips auto-regressively conditioned on the last 10 frames of the previous clip. Visualization of HIMO-Gen. We show qualitative comparisons of our methods with the baseline methods in <ref>. From the figure we can see that our method better preserves both consistency and semantics compared with the baseline methods.
http://arxiv.org/abs/2407.12748v1
20240717171000
$\mathfrak{k}$-structure of basic representation of affine algebras
[ "Benedikt König" ]
math.RT
[ "math.RT", "hep-th" ]
-structure of basic representation of affine algebras Benedikt König Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut) Am Mühlenberg 1, DE-14476 Potsdam, Germany This article presents a new relation between the basic representation of split real simply-laced affine Kac-Moody algebras and finite dimensional representations of its maximal compact subalgebra . We provide infinitely many -subrepresentations of the basic representation and we prove that these are all the finite dimensional -subrepresentations of the basic representation such that the quotient of the basic representation by the subrepresentation is a finite dimensional representation of a certain parabolic algebra and of the maximal compact subalgebra. By this result we provide an infinite composition series with a cosocle filtration of the basic representation. Finally, we present examples of the results and applications to supergravity. empty 1.5pt § INTRODUCTION Affine Kac-Moody algebras, which play an important role in mathematics and physics, are by definition complex, but allow for different real slices from real forms <cit.>. One of these real forms is the split real form in which the Chevalley-Serre generators are real, for example the resulting algebra is the real span of the Chevalley-Serre generators and it is called split real Kac-Moody algebra. The affine (split real) Kac-Moody algebras possess standard highest and lowest-weight representations which are fairly well studied <cit.>. Especially, for the basic representation with fundamental weight Λ_0 associated to the affine node 0, there exists a realization in terms of vertex operators <cit.>. Additionally, every split real Kac-Moody algebra is equipped with a Cartan-Chevalley involution that defines the maximal compact subalgebra of the Kac-Moody algebra as its fixed point subalgebra. For finite split real Kac-Moody algebras , the maximal compact subalgebra is a not necessarily indecomposable Kac-Moody algebra and the associated Lie group is compact. These algebras are very well understood. However, the scenario is completely different for affine split real Kac-Moody algebras , for which the infinite dimensional maximal compact subalgebra is not of Kac-Moody type, but admits infinitely many non-trivial ideals and does not have a Borel decomposition, but rather a filtered structure. Nevertheless, in <cit.> defining relations of generators for are developed and in <cit.> certain unfaithful finite dimensional representations, motivated by physics, are constructed, which is quite surprising since infinite dimensional Kac-Moody algebras do not have non-trivial finite dimensional representations. This result is generalized to a comprehensive understanding of finite dimensional representations of in terms of a (double) parabolic algebra and a Lie algebra homomorphism ρ mapping to in <cit.> and Appendix <ref>. The (double) parabolic algebra has a parabolic grading and therefore it contains infinitely many ideals which allow for an algorithmic construction of finite dimensional representations. The pull back of ρ then defines ideals and representations of , which preserve the parabolic grading. Thus, the finite dimensional representations of the maximal compact subalgebra are quite different from the infinite dimensional representations of the affine split real Kac-Moody algebra . 
Despite it, understanding the action of the maximal compact subalgebra on a representation of a split real Kac-Moody algebra reveals fascinating additional structure of the representations and has important applications.[ Lie algebra structure and fineness. For a Lie algebra 𝔩 with a representation V, a 𝔩-structure of V is a family of 𝔩-representations V_i⊂ V. For two 𝔩-structures A and B, we call A finer than B if B⊂ A. If for a 𝔩-structure A holds B⊂ A for all 𝔩-structures B, we call A the finest 𝔩-structure. ] For finite split real Kac-Moody algebras , the complete information about the action of the maximal compact subalgebra on a -representation is contained in the decomposition into irreducible -representations, also known as branching. This is a standard technique of representation theory, because is a finite Kac-Moody algebra and its representations are completely reducible. However, this is not the case for affine split real Kac-Moody algebras. Actually, due to the intriguing structure of the maximal compact subalgebra and its representations, we do not even expect that a -representation can be decomposed in a direct sum of simple or semisimple -representations. Therefore, the best one may achieve is to find all -subrepresentations of a -representation and if possible to provide a (possibly infinite) -composition series of the -representation. But to our knowledge, very little is known about proper -subrepresentation of highest-weight representation of . Nonetheless, from physics applications (in particular supergravity in two dimensions), we expect that there exists proper non-trivial -subrepresentations of the basic representation. In this context, this work is a first step towards a comprehensive understanding of the -subrepresentations of highest weight representations of and towards understanding possible infinite -composition series of the highest weight representations. Our main results is to reveal the complete/finest -structure of the basic representation (level one representation, central charge one) of a split real untwisted simply-laced affine Kac-Moody algebra , which is organized in three parts. First, for each N∈_0 we provide an inequivalent -subrepresentation W_N⊂ with finite co-dimension as the kernel of a surjective -homomorphism _N from the basic representation to the finite dimensional -representation (N), such that commutes with _N. Therefore, _N is a projection onto (N)≃/W_N, where (N) is truncated Verma module of a certain `parabolic Heisenberg algebra' with generators . We provide the representations (N) and the projection _N explicitly. Second, we show that all -subrepresentations of the basic representation, for which the quotient of the basic representation by the subrepresentation is equivalent to a finite dimensional -representation are equivalent to W_N or a subrepresentation of W_N for some N∈_0. Third, we provide the cosocle filtration (definitions are provided in Appendix <ref>) of the basic representation under and we show that the cosocle filtration is an infinite composition series. It implies that as a vector space the basic representation is equivalent to a sum of infinitely many finite dimensional semisimple -representations. For the convenience of the reader, we provide these results in form of a theorem here, but for which it is necessary to refer to concepts in the main text. For a split real untwisted simply-laced affine Kac-Moody algebra with maximal compact subalgebra and basic representation the statements a, b and c hold. 
* For all N∈_0, there exists a finite dimensional -representation (N) given in Proposition <ref> and a surjective -homomorphism _N: →(N) in (<ref>), which commutes with and projects onto (N). The kernel of G_N is a proper non-trivial -subrepresentation W_N=Ker(G_N)⊂. * If projects onto a finite dimensional -representation which is also a -representation by ρ in (<ref>), then there exists an N∈_0 such that this representation is equivalent to (N) or to a quotient of (N) by a subrepresentation. * The family (W_N)_N∈ of -invariant subspaces in is the infinite composition series of with a cosocle filtration (definitions in Appendix <ref>) . These results find fascinating applications in physics. (Gauged) supergravity theories in various dimensions are an important class of theories describing fundamental interaction and their mathematical structure is governed by Kac-Moody algebras <cit.>. While the bosonic fields of the theories take values in representations of the split real Kac-Moody algebra, the fermionic fields take values in representations of its maximal compact subalgebra. Therefore, supersymmetry, a symmetry between fermions and bosons, maps between representations of the split real Kac-Moody algebra and its compact subalgebra. This makes it necessary to understand how representations of the Kac-Moody-algebras project onto representations of the maximal compact subalgebra. Supergravity theories in D≥ 3 have finite Kac-Moody symmetries and therefore the representation theory and the branching rules necessary to understand these theories are known. This makes it also possible to extend these supergravity theories to gauged supergravity theories. However, in D=2 dimensions, the supergravity theory has an (on-shell) affine Kac-Moody symmetry <cit.>, under which the scalar fields transform in the adjoint representation and the gauge fields transform in the basic representation <cit.>. While the adjoint representation is not a highest weight representation and the -subrepresentation are understood, it is necessary to develop the -subrepresentation of the basic representation to understand the supersymmetry of the gauge fields. Therefore, the full gauged supergravity theory in D=2 dimensions has not been constructed yet. Nevertheless a specific example is given in <cit.> and the purely bosonic sector is derived in <cit.>. The result of this work is the next step to derive the full gauged supergravity theory in two dimensions. In a related context, the results find applications in teleparallel gravity in two dimensions <cit.>. This work is a starting point for further considerations in this direction in mathematics and physics. First, using the vertex operator construction for non-simply-laced affine Kac-Moody algebra <cit.>, we expect our construction and results to extend to the basic representation of non-simply-laced affine Kac-Moody algebras. Second, in a more sophisticated development, our construction may be generalized to all highest-weight representations of affine Kac-Moody algebras. This may be achieved using the DDF construction <cit.> for further highest-weight modules of affine algebras, which is similar to the vertex operator realization. Third, understanding the -subrepresentations of highest weight representations of an affine Kac-Moody algebra paves the way to understand hyperbolic Kac-Moody algebras under the action of its maximal compact subalgebra. 
This is because a hyperbolic Kac-Moody algebra decomposes into sums of highest-weight representations of the associate affine Kac-Moody algebra <cit.>. However, this is clearly quite a challenging undertaking since there remains a lot to discover for hyperbolic Kac-Moody algebras <cit.> but the result has important applications in physics. For instance, it is expected that hyperbolic Kac-Moody algebras are the underlying mathematical structure of space-time <cit.> governed by physical theories with hyperbolic symmetry <cit.>. However, to fully describe these theories, the action of the maximal compact subalgebra on the hyperbolic Kac-Moody algebra must be understood. But even before this is fully achieved, our results have applications to these models, because (gauged) supergravity in two dimensions may serve as a toy model to understand the mechanism of how space time emerges from an indefinite Kac-Moody symmetry. Outline. Section <ref> follows <cit.> closely to introduce the affine Kac-Moody algebras and the vertex operator realization for the basic representation. The maximal compact subalgebra of the Kac-Moody algebra is then defined in Section <ref>, which splits in two subsections. While Subsection <ref> provides the definition and notation of the maximal compact subalgebra, in Subsection <ref> we extend the construction of <cit.> to a double parabolic algebra and apply it to prove an identity for ideals and a certain group action on the double parabolic algebra. Prepared with these identities, we use Section <ref> to analyze the action of the maximal compact subalgebra on the basic representation. This includes new notations and an important Proposition, which allows us to provide the finest -structure of the basic representation and the infinite composition series in Section <ref>. In Subsection <ref> we provide the infinite filtered set of -subrepresentations of the basic representation and the projection of the basic representation onto finite dimensional -representations. In Subsection <ref> we show that the infinite set of -subrepresentations is indeed the infinite -composition series of the basic representation with cosocle filtration. In the final section <ref>, we prove an identity to efficiently evaluate the projection of the basic representation onto finite dimensional -representations. This identity allows us to analyze explicit examples of how to project the basic representation onto -representations and about the -subrepresentations of the basic representation. Finally, we emphasis on = and its application to supergravity in two dimensions. In Appendix <ref> we provide a construction of representations of the double parabolic algebra and Appendix <ref> defines a generalizations of sums over polynomials. In Appendix <ref> we provide frequently used definitions and notations as for example the definition of cosocle, cosocle filtration and infinite composition series. In Appendix <ref> we provide concrete expressions for abstract objects used in this work and, for the convenience of the reader, we give supplementary details of certain proofs in Appendix <ref>. Acknowledgment. I would like to thank in particular Hermann Nicolai and Axel Kleinschmidt as well as Niklas Beisert, Martin Cederwall, Mattia Cesaro, Franz Ciceri, Alex Feingold, Matthias Gaberdiel, Jakob Palmkvist, Siddhartha Sahi and Henning Samtleben for useful discussions. Also, I like to thank Bernardo Araneda, Serena Giardino, Axel Kleinschmidt and Hermann Nicolai for helpful comments to and on the manuscript. 
Furthermore, I would like to thank the Konrad-Adenauer-Stiftung for founding and supporting me on this work and the IMPRS for Mathematical and Physical Aspects of Gravitation, Cosmology and Quantum Field Theory for the support. § AFFINE ALGEBRA AND BASIC REPRESENTATION This section introduces the infinite dimensional untwisted affine Kac-Moody algebras and its simplest highest-weight representation, the basic representation. Details of definitions and proofs are in <cit.>. Let A = (a_ij)_0≤ i,j ≤ r be an untwisted symmetric affine generalized indecomposable Cartan matrix of rank r over and let = (a_ij)_1≤ i,j ≤ r be the finite Cartan matrix of the same type and rank r. Let be the split real form of the affine Kac-Moody algebra (A) and the split real form of the finite Kac-Moody algebra (). Then is isomorphic to the loop algebra of the Lie algebra with a central extension K and a derivation . This allows us to parametrize the affine algebra in terms of the generators of , K and . Therefore, we first introduce the finite Lie algebra . Finite Lie algebra . Let (,Π,Π^∨) be are realization of . Then, Π = {α^i}_1≤ i≤ r is the set of simple roots of , Π^∨ is the set of dual simple roots, which is a basis of the Cartan subalgebra and the -span of the simple roots is the root lattice = ∑_1≤ i ≤ rα_i. The root lattice admits a natural bilinear form (α_i|α_j) = _ij. In terms of the bilinear form, the set of roots of is Δ = {α∈ (α|α) = 2}. From now on, we usually use the Greek letters α,β for roots and γ, δ for elements of the root lattice. On the root space exists a cocycle <cit.>, also called asymmetric function, ϵ×→±1, which satisfies ϵ(γ + γ',δ )ϵ(γ,δ ) ϵ(γ',δ ) ϵ(δ, γ+ γ')ϵ(δ ,γ ) ϵ(δ,γ') ϵ(γ,γ) (-1)^(γ|γ) . The Lie bracket can be defined in terms of the cocycle. To carry this out, let _α⊂ be the root space of α and let E^α∈_α be an element of this root space, where we define for convenience E^γ=0 for γ∉Δ. Let H^γ∈ be the dual of γ∈ with respect to the bilinear form (·|·), for example α(H^γ) = (α|γ). Then, in terms of the two roots α,β∈Δ, [H^α,H^β] 0 [H^α,E^β] (α|β)E^β [E^α,E^β] -δ_α,-β H^α + ϵ(α,β)E^α+β defines a Lie bracket on . The Lie algebra is equipped with the Killing form κ which is symmetric, bilinear and invariant under the Lie bracket κ(H^α,H^β) (α|β) κ(H^α,E^β) 0 κ(E^α,E^β) -δ_α,-β . This sets up the split real finite Lie algebra , from which it is convenient to define the affine Kac-Moody algebra . Affine algebra . The split real affine Kac-Moody algebra can be parametrized in terms of the centrally extended loop algebra of with an additional derivation. Henceforth, we use this parametrization for the affine Kac-Moody algebra. Therefore, if [t,t^-1] are the Laurent polynomials in t, K the central element and the derivation, then the loop parametrization of is ⊗[t,t^-1] ⊕ K ⊕ , where the first summand is the loop algebra. For the loop generators we use the notation x_n = x⊗ t^n with x∈ and n∈. The index n of the loop generators is called the loop index. In this parametrization, the Lie bracket of the affine Kac-Moody algebra is [x_m,y_n] [x,y]⊗ t^m+n + mδ_m,-nκ(x,y) K [,x_m] -m x_m [K,x_m] [K,] 0 . The affine algebra has two subalgebras which are important for the rest of the paper. The first subalgebra is spanned by the loop generators of index 0. This subalgebra is isomorphic to the finite Lie algebra , which motivates to use the same symbol for it. The second important subalgebra is the Heisenberg subalgebra. 
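Before turning to the Heisenberg subalgebra, the centrally extended loop bracket above can be made concrete with a minimal sympy sketch for the case of sl(2,ℝ), with the invariant form κ realized as the trace form. The matrix conventions below (in particular realizing the negative root vector as minus the usual lowering matrix, so that κ(E^α,E^{-α}) = -1 and [E^α,E^{-α}] = -H^α as in the conventions of this section) are illustrative choices of this sketch, not part of the paper.

from sympy import Matrix

H = Matrix([[1, 0], [0, -1]])     # H^alpha
Ep = Matrix([[0, 1], [0, 0]])     # E^alpha
Em = Matrix([[0, 0], [-1, 0]])    # E^{-alpha}

def kappa(x, y):
    # Trace form: kappa(H, H) = 2 = (alpha|alpha), kappa(E^alpha, E^{-alpha}) = -1.
    return (x * y).trace()

def bracket(x, m, y, n):
    """[x_m, y_n] = [x, y] at loop index m+n, plus m * delta_{m,-n} * kappa(x, y) * K.
    Returns the loop part as {loop index: matrix} and the coefficient of K."""
    loop = {m + n: x * y - y * x}
    central = m * kappa(x, y) if m == -n else 0
    return loop, central

print(bracket(H, 1, H, -1))    # loop part vanishes, central part 2K
print(bracket(Ep, 1, Em, -1))  # -H^alpha at loop index 0, central part -K
print(bracket(H, 2, Ep, 3))    # 2 E^alpha at loop index 5, no central term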
The Heisenberg subalgebra is the centrally extended loop algebra of the Cartan subalgebra[The Heisenberg subalgebra is not the Cartan subalgebra of .] ⊗[t,t^-1] ⊕ K = ⟨H_n, K n∈,H∈⟩. The affine algebra has no non-trivial finite dimensional representations. However, it has standard highest and lowest-weight representations. The `simplest' highest-weight representation is the basic representation, which is the subject of the next paragraph. Basic representation. The basic representation of is the infinite dimensional highest-weight representation with fundamental weight Λ_0, where 0 is the index of the affine node. A realization of is given in terms of vertex operators which is essential for the rest of this work. Therefore, this paragraph sets up the vertex operator representation and notation, following <cit.>. In the vertex operator realization of the basic representation, the representation space is [ ] ⊗ S(⊕_j<0(⊗ t^j)) , where S is the symmetric algebra and [ ] is the group algebra of the root lattice with the embedding γ↦ e^γ for γ∈. Thus, the elements of the basic representation are parametrized in a tensor product of the root lattice and polynomials in roots with an additional negative subscript = ⟨ e^γ⊗ f γ∈ , f ∈[{α∈Δ}_-] ⟩ . Here and in the following, using the notation in (<ref>), f=f({α∈Δ}) is always a polynomial of the roots with an additional negative index. To introduce the action of in the vertex operator realization of the basic representation we introduce a representation of the indexed roots. For each α∈Δ and n ∈, the negative indexed root α_-n acts by multiplication on f and the positive indexed root α_n acts as a derivative on f with α_n·β_-m nδ_n,-m(α|β) . Using this action, the vertex operator acts on the states in the basic representation by Γ^α (z) (e^γ⊗ f) = z^(α,γ) ϵ(α,γ) e^γ + α⊗(e^∑_k≥ 1z^k/kα_-k e^-∑_k≥ 1z^-k/kα_k) f . This vertex operator action, allows to define the basic representation →End(). For all α∈Δ, γ∈ Q and n∈ the basic representations is given by E^α_n ↦ Γ^α(z)|_z^-n-1, H^α_n(e^γ⊗ f) ↦ (α,γ) (e^γ⊗ f) for n=0 e^γ⊗α_n f for n≠ 0 , K ↦ 1 ↦ ∑_i,j=1^r( H_0^α^i^-1_ijH_0^α^j + ∑_n≥ 1 H_-n^α^i^-1_ijH_n^α^j) where |_z^n is the projection of a polynomial in z, z^-1 on the coefficient of z^n in the monomial basis (z^k)_k∈. The central element multiplies with the central charge one, also known as level one, and the eigenvalue of the derivation on a state is the loop level of this state. This fixes our notation of the basic representation. However, at the end of the section, we add some useful comments and frequently used notation on the vertex operator action. The vertex operator action takes a particular simple form in terms of Schur polynomials on the states e^γ⊗ 1∈. Therefore, let us introduce the notion of these states and the conventions of Schur polynomials in this work. Schur polynomials and maximal states. For each N∈ and the set notation {α_-k k∈}, the Schur polynomial S_N() is defined by the expansion [In most literature, the Schur polynomials S_λ(y_1,y_2, … ) are symmetric polynomials in the variables y_i and are defined in terms of a partition λ of N. This is related to our definition of Schur polynomials. Setting λ = (N) and rewriting the Schur polynomial S_λ(y_1,y_2, … ) in terms of symmetric power polynomials α_-j = ∑_i y_i^j gives the Schur polynomials S_N() .] ∑_N≥ 0 z^N S_N() e^∑_n≥ 1z^n/nα_-n , S_n≤-1({α}) 0 , S_0({α}) 1, S_1({α}) α_-1,… The first few Schur polynomials are provided in (<ref>). 
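The expansion defining the Schur polynomials is easy to reproduce with sympy; the sketch below generates S_0, …, S_6 from the truncated generating series and checks them against the two differentiation identities derived in the next paragraph. The cutoff K is only a truncation for the check and carries no meaning in the construction.

import sympy as sp

K = 6                                   # truncation order for the check
z = sp.symbols('z')
a = sp.symbols(f'a_m1:{K + 1}')         # a[n-1] plays the role of alpha_{-n}

gen = sp.exp(sum(z**n / n * a[n - 1] for n in range(1, K + 1)))
series = sp.expand(sp.series(gen, z, 0, K + 1).removeO())
S = [sp.expand(series.coeff(z, N)) for N in range(K + 1)]

print(S[0], S[1], S[2])   # S_0 = 1, S_1 = alpha_{-1}, S_2 = alpha_{-1}^2/2 + alpha_{-2}/2

# Identity: dS_N / d(alpha_{-n}) = (1/n) S_{N-n}
for N in range(K + 1):
    for n in range(1, N + 1):
        assert sp.simplify(sp.diff(S[N], a[n - 1]) - S[N - n] / n) == 0

# Identity: N * S_N = sum_{n >= 1} alpha_{-n} S_{N-n}
for N in range(1, K + 1):
    rhs = sum(a[n - 1] * S[N - n] for n in range(1, N + 1))
    assert sp.simplify(N * S[N] - rhs) == 0
print("Schur identities verified up to N =", K)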
Two important identities for Schur polynomials derive from (<ref>) by differentiation. Differentiating (<ref>) w.r.t. α_-n and w.r.t. z gives for all n∈, N∈_0 ∂ S_N()/∂α_-n = 1/n S_N-n() , N S_N() = ∑_n≥ 1α_-n S_N-n() . The Schur polynomials substantially simplify the action of the vertex operators on maximal states. Therefore, let us define these states. A state of the basic representation is called maximal state if it vanishes under the action of the positive indexed Heisenberg generators. This set of maximal states is ⟨ e^γ⊗ 1∈γ∈⟩ . By the definition of maximal states, the right exponential in the vertex operator acts trivial and therefore, for m∈, the action of E^α_m on a maximal state e^γ⊗ 1 takes a closed simple form in terms of Schur polynomials E^α_m e^γ⊗ 1 ϵ(α,γ) e^γ+α⊗ S_-m-(γ,α)-1() . On these states it is convenient to evaluate the action of the Heisenberg algebra. For each n∈, two often used relations for the action of the Heisenberg algebra are H^α_n ( e^γ⊗ S_N({α}) ) = 2 e^γ⊗ S_N-n({α}) , e^γ⊗ S_N({α}) = 1/N∑_n=1^N H^α_-n e^γ⊗ S_N-n({α}) , which derive straightforward from (<ref>) and (<ref>). In this section we introduced the affine algebra and the basic representation in terms of the vertex operator realization. We present the maximal compact subalgebra of the affine algebra in the next section. § MAXIMAL COMPACT SUBALGEBRA AND REPRESENTATIONS Here, we introduce the maximal compact subalgebra of an affine Kac-Moody algebra and the parabolic algebra <cit.>. The maximal compact subalgebra of an affine algebra is not of Kac-Moody type, for example it admits infinitely many ideals. These ideals allow for finite dimensional representations of the maximal compact subalgebra, which where developed in <cit.>. This section provides the necessary identities and notation to analyze the action of the maximal compact subalgebra on the basic representations. In Subsection <ref> we define the maximal compact subalgebra as the fixed point set of the Cartan-Chevalley involution. In Subsection <ref> we extend the results of <cit.> to a double parabolic algebra and a Lie algebra homomorphism from the maximal compact subalgebra to the double parabolic algebra. This Lie algebra homomorphism allows us to prove a generating identity for these ideals and derive the action of a certain group element on the parabolic algebra. §.§ Maximal compact subalgebra In this section we introduce the maximal compact subalgebra of an affine Kac-Moody algebra and the notation of the maximal compact subalgebra for this paper. The maximal compact subalgebra of a split real Kac-Moody algebra is the fix point subalgebra of the Cartan-Chevalley Lie algebra involution. For the finite split real Lie algebra and α∈Δ, the Cartan-Chevalley involution acts by (H^α) H^-α -H^α , (E^α) E^-α . The maximal compact subalgebra ⊂ is the +1 eigenspace of and has strictly negative signature of the killing form. The -1 eigenspace is the non-compact orthogonal complement ⊂ with strictly positive signature of the killing form. Let us turn to the maximal compact subalgebra of the affine Kac-Moody algebra . The Cartan-Chevalley involution τ of the affine algebra acts on the loop generators with the Cartan-Chevalley involution of the finite algebra and on the loop index it acts by inversion. On , the involution is given by τ(H^α_n) -H^α_-n , τ(E^α_n) E^-α_-n , τ(K) -K , τ() - . The maximal compact subalgebra ∈ is the +1 eigenspace subalgebra of τ and the -1 eigenspace is called the non-compact orthogonal complement ⊂. 
In particular, the central element K and the derivation are not in the maximal compact subalgebra, but for every α∈Δ and n∈, the generators ^α_n E_n^α+E_-n^-α ,_n^α(H_n^α-H_-n^α) ∈ . are in the maximal compact subalgebra. Not all of these elements are linearly independent, because _n^α = -_-n^α and ^α_n = _-n^-α and for a fixed n there are r independent _n^α. However, it is useful to have introduced the notation of _n^α and _n^α for all combinations of roots and integers, which also span the maximal compact subalgebra ⟨_n^α, _n^αα∈Δ, n∈⟩ The Lie bracket is induced from the Lie bracket of and for α,β∈Δ, n,m∈ it is [^α_m,^β_n ] 0 [^α_m,^β_n] (α|β)(^β_n+m -^β_n-m) [^α_m,^β_n] - 2(δ_α,-β+δ_α,β)^α_m+n + ϵ(α,β)(^α+β_m+n+^α-β_m-n). Two comments on the structure of the Lie bracket are appropriate. First, the projection of the Heisenberg algebra to the maximal compact subalgebra is an abelian subalgebra, called the compact Heisenberg algebra ≡⟨^α_nα∈Δ, n ∈⟩ . By the Lie bracket relation (<ref>) a product of compact Heisenberg generators on the right of _m^α is in the universal enveloping algebra equivalent to a finite sum of products of elements of the compact Heisenberg algebra U_j∈() to the left of different elements _n_j^α _m^α U = ∑_j U_j _n_j^α∈() . The explicit form of U_j is not important for the purpose of the paper. However, we sketch the evaluation of U_j in Appendix <ref>. Second, the Lie bracket of the maximal compact subalgebra is not additive in the loop index of the generators and does not provide a Borel subalgebra, but admits a filtered structure. For example, the maximal compact subalgebra is not of Kac-Moody type but admits infinite many ideals. These ideals are essential to construct finite dimensional representations of the maximal compact subalgebra in terms of the parabolic algebra <cit.>. §.§ Parabolic algebra In this section we extend the (single) parabolic algebra in <cit.> to a double parabolic algebra and a Lie algebra homomorphism from the maximal compact subalgebra to this double parabolic algebra. We prove a proposition on the form of the ideals of the maximal compact subalgebra and elaborate on a certain group action on the parabolic algebra. To construct representations of the maximal compact subalgebra of an affine algebra, <cit.> constructs a (single) parabolic algebra together with two Lie algebra homomorphisms from the maximal compact subalgebra to this parabolic algebra. The parabolic algebra allows for an algorithmic construction of representations. By the pull back of the Lie algebra homomorphism these are also representations of the maximal compact subalgebra. However, these representations do not exhaust the set of all representations of the maximal compact subalgebra. In particular, the basic representation cannot be projected to these representations. To construct a larger set of representations of the maximal compact subalgebra, we generalize the (single) parabolic algebra of <cit.> to a double parabolic algebra together with a Lie algebra homomorphism from the maximal compact subalgebra to the double parabolic algebra. The general construction of representations of the double parabolic algebra is in Appendix <ref>, these representations are pulled back with the Lie algebra homomorphism to representations of the maximal compact subalgebra. For the double parabolic algebra we prove two important propositions. Let us define the double parabolic algebra. 
For the finite Lie algebra with compact subalgebra and non-compact orthogonal complement , for u^2 the formal power series in u^2 and for 𝒞_2={1,r} the group of order 2, the double parabolic algebra is (⊗ u^2⊗𝒞_2 ) ⊕(⊗ u u^2⊗𝒞_2). The Lie bracket is multiplicative in the second and third factor of the tensor product and induced from on the first factor [x⊗ u^k⊗ r^m, y⊗ u^l ⊗ r^n ] [x,y]⊗ u^k+l⊗ r^m+n . For k∈_0, n∈ and α∈Δ, a parametrization of the double parabolic algebra is in terms of the generators[The generators which differ only by an even number in n are equal, however, this definition allows to drop many 2 symbols.] P_k,n^α(E^α+(-1)^k E^-α)⊗ u^k⊗ r^n , ^α_2k+1,n H^α⊗ u^2k+1⊗ r^n with the Lie bracket in (<ref>). The abelian subalgebra of the generators ≡⟨^α_2k+1,nα∈Δ, k∈_0, 0≤ n≤ 1⟩ is called the parabolic Heisenberg algebra, but analogous to the compact Heisenberg algebra it is not a Heisenberg algebra. We give some comments on the single factors of the double parabolic algebra. The 𝒞_2 factor induces a natural _2 grading on the double parabolic algebra. The level 0 subalgebra with respect to this grading has elements x⊗ u^m⊗ 1, which is the single parabolic algebra in <cit.>. It allows to extend most results for the single parabolic algebra in <cit.> to the double parabolic algebra. The second factor u^2, u u^2 has a graded structure in the power of u. This graded structure extends to the parabolic algebra and its universal enveloping algebra ⊕_l=0^∞_l , () ⊕_l=0^∞()_l . The graded structure allows to identify ideals of the double parabolic algebra. For all N∈, the double parabolic algebra and its universal enveloping algebra have the ideals _N ⊕_l=N+1^∞_l , ^_N ⊕_l=N+1^∞()_l . The cosets of the double parabolic algebra and its universal enveloping algebra by these ideals are the quotient parabolic algebras ^N /_N , ()^N ()/^_N . Now, let us produce the Lie algebra homomorphism from the double parabolic algebra to the maximal compact subalgebra. In <cit.> a Lie algebra homomorphism from the (single) parabolic algebra to the maximal compact subalgebra is constructed. We extend this construction to a Lie algebra homomorphism from the maximal compact subalgebra to the double parabolic algebra. Therefore, we use that the generators of the maximal compact subalgebra (<ref>) are a subset of the loop generators of with variable t on which we define a Möbius transformation t(u) = 1-u/1+u∈ u. This Möbius transformation extends to a Lie algebra homomorphism ρ from the maximal compact subalgebra to the parabolic algebra <cit.>.[ In <cit.> Lie algebra homomorphisms ρ_± from the maximal compact subalgebra to the single parabolic algebra are constructed from the Möbius transformation t_± (u) = 1∓ u/1± u. Both homomorphisms map the maximal compact subalgebra differently to the single parabolic algebra. A tensor product of two representation of the single parabolic algebra which are pulled back by the different Lie algebra homomorphisms ρ_+ and ρ_- to -representations, is not a -representation. The double parabolic algebra together with one Lie algebra homomorphism ρ solves this.] For each n∈ and α∈Δ, the map ρ : → ρ(^α_n) ∑_k= 0^∞ a^(n)_k P^α_k,n , ρ(^α_n) ∑_k= 0^∞ a^(n)_2k+1^α_2k+1,n . is an injective Lie algebra homomorphism. The coefficients a_k^(n) are the coefficients of the power series expansion in (<ref>) and are evaluated in <cit.> a_2k^(n)= 2∑_ℓ=0^n2n2lk-ℓ+n-1k-ℓ, a_2k+1^(n)= -2 (n) ∑_ℓ=0^n-12n2l+1k-ℓ+n-1k-ℓ. The homomorphism ρ respects the even/odd grading of the loop level of . 
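The closed formulas for a_k^(n) can be cross-checked against the Möbius substitution t(u) = (1-u)/(1+u): reading a^(n)_{2k} as the coefficient of u^{2k} in t(u)^n + t(u)^{-n} and a^(n)_{2k+1} as the coefficient of u^{2k+1} in t(u)^n - t(u)^{-n} (my reading of the construction above, stated here as an assumption), the binomial expressions reproduce the series expansion. A sympy spot check for n = 1, …, 4:

import sympy as sp

u = sp.symbols('u')
t = (1 - u) / (1 + u)

def a_even(n, k):
    # a^{(n)}_{2k} = 2 * sum_{l=0}^{n} C(2n, 2l) * C(k-l+n-1, k-l)
    return 2 * sum(sp.binomial(2 * n, 2 * l) * sp.binomial(k - l + n - 1, k - l)
                   for l in range(n + 1))

def a_odd(n, k):
    # a^{(n)}_{2k+1} = -2 * sgn(n) * sum_{l=0}^{n-1} C(2n, 2l+1) * C(k-l+n-1, k-l)
    return -2 * sp.sign(n) * sum(sp.binomial(2 * n, 2 * l + 1) * sp.binomial(k - l + n - 1, k - l)
                                 for l in range(n))

K = 8
for n in range(1, 5):
    plus = sp.expand(sp.series(t**n + t**(-n), u, 0, K + 1).removeO())
    minus = sp.expand(sp.series(t**n - t**(-n), u, 0, K + 1).removeO())
    for k in range(K // 2):
        assert plus.coeff(u, 2 * k) == a_even(n, k)
        assert minus.coeff(u, 2 * k + 1) == a_odd(n, k)
print("a_k^(n) matches the Moebius expansion for n = 1..4")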
The coefficients a_k^(n) satisfy two identities. For fixed n ∈, the coefficients a_k^(n) are polynomials in k of degree n-1. For fixed k∈_0, the coefficients a_k^(n) are polynomials in n of degree k. The expansions of the first few elements in _n^α is in (<ref>) For each N∈_0, the codomain of ρ can be quotiented by the ideal _N. This defines the surjective Lie algebra homomorphisms ρ^N: → ^N by (<ref>) but the summation from k=0 to k=N resp. k=⌊N/2⌋ <cit.>. The homomorphism ρ respects the even-odd grading of the loop index of the maximal compact subalgebra because r≠1, but r^2=1. Setting r=± 1 reduces the homomorphism ρ to the injective Lie algebra homomorphisms ρ_± from the maximal compact subalgebra to the single parabolic algebra (see footnote <ref>). Because ρ_± are injective Lie algebra homomorphisms, also ρ is an injective Lie algebra homomorphisms. For each N∈_0, the Lie algebra homomorphism ρ allows to identify ideals in the maximal compact subalgebra as the inverse image of ideals ^N in the parabolic algebra. For these ideals, we use the same symbol _N. While the ideals of the parabolic subalgebra derive immediately from the parabolic grading, the form of the ideals in the maximal compact subalgebra is more subtle. A generating identity for the ideals is subject of Proposition <ref>. In the proof of the proposition and also later in this work, we use an identity for sums of polynomials with binomial factors. Therefore, let us first give this identity globally and then use it to prove the proposition. For p∈[m]_N-1 a polynomial in m of degree less then N, the binomial sum ∑_m=0^N(-1)^m Nm p(m) 0 vanishes. This follows by differentiating k times for 1≤ k≤ N-1 the binomial expansion of (1+x)^N w.r.t. x and then setting x to -1. With this identity, we prove the next proposition about ideals of the maximal compact subalgebra. A different formula for the same ideals is proposed in <cit.>. For all N∈_0, α∈Δ, a∈, b,n∈ the elements ∑_m=0^N+a(-1)^mN+am ^α_n+2bm∈_N , ∑_m=0^N+a(-1)^mN+am ^α_n+2bm∈_N of the filtered algebra are in the ideal ℐ_N. For every N∈_0, a∈, b,n∈ by identity (<ref>) and Remark <ref>, the elements (<ref>) are in the kernel of ρ^N ρ^N(∑_m=0^N+a(-1)^mN+am ^α_n+2bm) ∑_k= 0^N P^α_k,n∑_m=0^N+a(-1)^mN+am a^(n+2bm)_k 0 The same holds for the second element in (<ref>). This identity of ideals in the maximal compact subalgebra is important to investigate finite dimensional -representations. But before we can do so, let us prove a proposition about the action of certain group elements on the parabolic algebra at the end of this section. For all N∈ and α,β∈Δ, the group elements ω_α,Nexp(∑_k≥ 022k+1^α_2k+1,0) ∈ ()^N act on the parabolic algebra ^N by conjugation and satisfy ω_α,Nρ^N(^β_n)ω_α,N^-1ρ^N(^β_n-(α|β)) , ω_α,N ρ^N(^β_n)ω_α,N^-1 ρ^N(^β_n) . First, the exponential series in (<ref>) truncates at parabolic level N. Second, the conjugation group action is the exponentialized adjoint Lie algebra action , which is used in the proof. Third, the Lie algebra homomorphism ρ resp. ρ^N acts group like on the universal enveloping and thus on the group exponentiation of the maximal compact subalgebra. Then, the group element ω_α,∞ (defined properly later) may be interpreted as the Lie algebra homomorphism acting on the group element w_α of which acts by w_α^β_nw_α^-1 ^β_n-(α|β) , w_α^β_n w_α^-1 ^β_n . The relation of ω_α,N to these group elements provides further inside in our construction but will not be used in this work. 
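Two of the computational ingredients here are easy to verify with sympy: the alternating binomial sum of Lemma <ref> annihilates every polynomial of degree less than N, and the series coefficients c_k^{α,β} used in the following proof arise from exp(Σ_{l≥0} 2(α|β)x^{2l+1}/(2l+1)) = ((1+x)/(1-x))^{(α|β)}. The closed form of the exponent is my own simplification, stated as an assumption; the coefficient values in the comments are those quoted in the proof.

import sympy as sp

m, x = sp.symbols('m x')

# Lemma: sum_{k=0}^{N} (-1)^k C(N, k) p(k) = 0 for any polynomial p of degree < N.
for N in range(1, 7):
    cs = sp.symbols(f'c0:{N}')                      # generic degree-(N-1) polynomial
    p = sum(c * m**i for i, c in enumerate(cs))
    total = sp.expand(sum((-1)**k * sp.binomial(N, k) * p.subs(m, k) for k in range(N + 1)))
    assert total == 0

# Coefficients c_k in the conjugation proof: exp(sum_l 2c x^(2l+1)/(2l+1)) = ((1+x)/(1-x))^c,
# and the n = -1 case carries an extra factor (1 + 2 sum_{l>=1} x^l) = (1+x)/(1-x).
def coeffs(c, shift=0, order=6):
    expr = ((1 + x) / (1 - x))**(c + shift)
    s = sp.expand(sp.series(expr, x, 0, order + 1).removeO())
    return [s.coeff(x, i) for i in range(order + 1)]

print(coeffs(2))            # [1, 4, 8, 12, ...]   -> c_k = 4k           (n = 0, (a|b) = +2)
print(coeffs(1))            # [1, 2, 2, 2, ...]    -> c_k = 2            (n = 0, (a|b) = +1)
print(coeffs(-1))           # [1, -2, 2, -2, ...]  -> c_k = (-1)^k 2
print(coeffs(2, shift=1))   # [1, 6, 18, 38, ...]  -> c_k = 2(1 + 2k^2)  (n = -1, (a|b) = +2)
print(coeffs(-1, shift=1))  # [1, 0, 0, 0, ...]    -> c_k = 0 for k >= 1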
We use that conjugation is the group action associated to the adjoint representation of the Lie algebra. The parabolic Heisenberg algebra is abelian which proves the action of ω_α,N on ρ^N(_n^β). If (α|β) = 0, ^α_2k+1,0 commutes with P^β_l,a which proves the action of ω_α,N on ρ^N(^β_n) for (α|β)=0. For (α|β)≠ 0 the adjoint map of the double parabolic algebra acts by (_2k+1,0^α)^m P_l,a^β = (α|β)^m P_l+(2k+1)m,a^β . For n=0 this gives exp(∑_k≥ 022k+1(^α_2k+1,0)) P^β_0,0∑_k=0^∞ c^α,β_k P^j_k,0 where c^α,β_k = 1/k!(/ x)^k e^∑_l≥ 02 (α|β)/2l+1x^2l+1|_x=0 . The coefficients c^α,β_k can be solved for (α|β) = ± 1,± 2. For (α|β) = ± 2 the coefficients are c^α,β_0 =1 c^α,β_k = (± 1)^k 4k . For (α,β) = ± 1 the coefficients c_k^α,β are c_0 = 1 c_k = (± 1)^k 2 . By (<ref>), this proves the Proposition for n=0. For n=-1 and (<ref>), the left hand side evaluates to exp(∑_k≥ 022k+1(^α_2k+1,0))( P^β_0,1 +2∑_k=0^N (-1)^k P^β_k,1) ∑_k=0^∞ c^α,β_k P^β_k,1 with c^α,β_k = 1/k!(/ x)^k(1+2∑_l=1^Nx^l) e^∑_l≥ 02 (α,β)/2l+1x^2l+1|_x=0 . For (α,β) = ± 2, ± 1 the coefficients are (α,β) = + 2 → c^α,β_0 = 1, c^α,β_k = 2(1+2k^2), (α,β) = + 1 → c^α,β_0 = 1, c^α,β_k = 4k, (α,β) = - 1 → c^α,β_0 = 1, c_k = 0, (α,β) = - 2 → c^α,β_0 = 1, c^α,β_k = (- 1)^k 2 . With (<ref>), this proves the Proposition for n = 0 and n=-1. Assuming the Proposition holds for n≤ N∈_0, then by induction exp(∑_k≥ 022k+1(^α_2k+1,0)) ^β_N+1[^β_1, exp(∑_k≥ 022k+1(^α_2k+1,0)) ^β_N] + exp(∑_k≥ 022k+1(^α_2k+1,0)) ^β_N-1 ^β_N+1-(α|β) the proposition holds for all N∈_0. The same induction argument applies to -N∈_0. In this section we introduced the maximal compact subalgebra, the double parabolic algebra and a Lie algebra homomorphism form the maximal compact subalgebra to the double parabolic algebra. Using the double parabolic algebra, we proved Proposition <ref> on the generating structure of the ideals _N and for the double parabolic algebra we provided the action of a certain group element. § COMPACT SUBALGEBRA ON BASIC REPRESENTATION The previous two sections put us in a good place to discuss the action of the maximal compact subalgebra on the basic representation. Therefore, we prove a number of propositions in this section, which are important to analyze the -structure of the basic representation in the next section. The maximal compact subalgebra acts on the basic representation by the induced action from the affine algebra. On maximal weights, the action of _n^α is given by _n^α e^γ⊗ 1 ϵ(α,γ) e^γ+α⊗ S_-n-(γ,α)-1()+ϵ(α,γ) e^γ-α⊗ S_n+(γ,α)-1({-α}) . Proposition <ref> states two important properties for the action of the compact subalgebra on the basic representation. The action of the compact subalgebra on a maximal state generates the entire basic representation. The action of the compact Heisenberg algebra on a maximal state generates all states with the same weight: a.) ∀ X ∈, γ∈ ∃ U∈() : X = U( e^γ⊗ 1) b.) ∀ γ∈, f∈[{α∈Δ}] ∃ ^_f∈() : e^γ⊗ f= ^_f( e^γ⊗ 1). First, we show that every maximal state can be mapped to the highest state by repeated action of ∀ γ∈ ∃ U'∈(): e^0⊗ 1 = U'(e^γ⊗ 1). W.l.o.g. assume γ = ∑_i=1^r k_i α^i with k_i≥ 0. Choose 1≤ j≤ r s.t. k_j≠ 0 and n -(γ|α^j)+1<0. Such j exists because (·|·) is positive definite. Then, by equation (<ref>) _n^α^j e^γ⊗ 1 = ϵ(γ,α^j) e^γ-α^j⊗ 1. The procedure repeats for γ-α^j and iteratively maps e^γ⊗ 1∈ onto the highest-weight e^0⊗ 1. It works analogous for k_i∈. This shows (<ref>). 
By induction in the loop level of a state X∈, we show that the basic representation is generated by the action of on the highest-weight = () e^0⊗ 1. Assume that for all X∈ with X = N'X for N'≤ N∈ holds X∈() e^0⊗ 1. Let Y∈ s.t. Y = (N+1)Y. Because is generated by the action of {T_n∈ n≤ -1} on the highest state it exists an Y'∈, n∈ with Y'=(N-n+1)Y' s.t. Y = T_-n Y'. Then, by assumption Y (T_-n+τ(T_-n)) Y' - (T)_nY' ∈ () e^0⊗ 1 This proves a. To prove b, we use that H_n^α n≤ -1, α∈Δ on a maximal state spans all states with the same _0. Then, the same induction proves b. By Proposition <ref>, for a state e^γ⊗ f({α∈Δ}) exists generating elements ^_f∈(), which generates this state from the action on the maximal state e^γ⊗ 1. By the very definition of the generating elements, they satisfy for each n∈ and α∈Δ _n^α^_f ^__n^α f . Let us also introduce a short notation for the generating element ^_S_n of Schur polynomials, because the Schur polynomials are essential in the vertex operator algebra. The generating elements of the Schur polynomials are called -Schur polynomials and are defined by S^_n({^α}) = ^_S_n({α})∈(). The action of a -Schur polynomial on a maximal state generates the state with the same weight and the respective Schur polynomial. The -Schur polynomials satisfy a recursion identity S^^α_N({^α}) 2/N∑_n=1^N (_N-2n({^α})-_n S_N-n({^α}) ) which derives from adding and subtracting H^α_n to (<ref>) and which is used for proofs by induction. We prove the next lemma by induction on this identity. The lemma provides a closed expression for the -Schur polynomials and will be used to prove the last proposition in this section. For every α∈Δ, the -Schur polynomials are given in the expansion S_n^({^α}) ∑_m=0^⌊n/2⌋exp(∑_l≥ 1 -2l_l^α x^l)|_x^n-2m . Evaluating the generating series in Lemma <ref> gives the explicit form in terms of the sets 𝒦_n (k_1,… k_n) ∈^n_0∑_l=1^n lk_l = n S_n^({^α}) ∑_m=0^⌊n/2⌋∑_k ∈𝒦_n-2m∏_l=1^n-2m1/k_l!(-2/l^α_l)^k_l ∑_M=0^n ∑_m_1=1^n∑_m_2=1^n-m_1…∑_m_M=1^n-m_1-…-m_M-1 r_n(m_1,…,m_M) ∏_i=1^M(-2/m_i_m_i^α) where r_n(m_1,…,m_M) is the factorial factor from the exponential. If k_n is the multiplicity of the integer n appearing in the set {m_1,…, m_M} then, r is given by r_n(m_1,…,m_M)∏_i≥ 11/k_i! if n+∑_i=1^M m_i is even 0 if n+∑_i=1^M m_i is odd . We prove the expansion in (<ref>) by induction using (<ref>). For n = 0,1 the identity holds since for γ∈ S_0^({^α})e^γ⊗ 1 = e^γ⊗ 1 S_1^({^α})e^γ⊗ 1 = -2_1^α e^γ⊗ 1 = e^γ⊗ S_1({α}) . For k∈_M set d_k = ∏_l=1^M1/k_l!(-2l)^k_l and assume (<ref>) is true for N'≤ N∈_0. We find _N+1({^α }) 2/N+1∑_n=1^N+1(_N-2n+1({^α})-^α_n_N-n+1({^α}) ) ∑_p≥ 0∑_k∈_pδ_(p+N+1)2,0∑_n=1^N+12/N+1(Θ(N-2n+1-p) -nk_n/2)d_k∏_l=1^p(^α_l)^k_l ∑_p≥ 0∑_k∈_pδ_(p+N+1)2,0Θ(N+1-p) d_k∏_l=1^p(^α_l)^k_l , where Θ is the Heaviside function. This proves the Lemma. A detailed calculation is in Appendix <ref>. Lemma <ref> provides an explicit formula for the -Schur polynomials. The Lie algebra homomorphism ρ^N maps the -Schur polynomials to elements in the universal enveloping of the parabolic Heisenberg algebra. We call these elements the -Schur polynomials. For the -Schur polynomials we prove an important proposition. However, in order to do so, we first prove a lemma and then define the -Schur polynomials properly. For each N∈_0, the elements p^α_N,0 and p^α_N,1 defined by 2 p^α_N,0(n) 2 → () n ↦ -2/nρ^N(^α_n) , p^α_N,1 2+1 → () n ↦ -2/nρ^N(^α_n) , are polynomials in n taking values in the parabolic Heisenberg algebra. 
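As a quick illustration of Lemma <ref>: since the compact Heisenberg algebra is abelian, the first few 𝔨-Schur polynomials can be computed by treating the compact Heisenberg generators (denoted Y_l in the code) as commuting symbols and reading off coefficients of the truncated exponential series. The sympy sketch below reproduces the values S_0 = 1 and S_1 = -2 Y_1 quoted above; the cutoff K is only a truncation for the check.

import sympy as sp

K = 5
x = sp.symbols('x')
Y = sp.symbols(f'Y1:{K + 1}')        # Y[l-1] stands for the (commuting) generator Y_l

gen = sp.exp(sum(-sp.Rational(2, l) * Y[l - 1] * x**l for l in range(1, K + 1)))
series = sp.expand(sp.series(gen, x, 0, K + 1).removeO())
coeff = [series.coeff(x, i) for i in range(K + 1)]

def k_schur(n):
    # S_n = sum_{m=0}^{floor(n/2)} [x^{n-2m}] exp(sum_l -(2/l) Y_l x^l)
    return sp.expand(sum(coeff[n - 2 * m] for m in range(n // 2 + 1)))

for n in range(K + 1):
    print(n, k_schur(n))
# n = 0: 1
# n = 1: -2*Y1
# n = 2: 2*Y1**2 - Y2 + 1   (the m = 1 term of the sum contributes the constant 1)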
At n=0, the polynomial p^α_N,0(0) satisfies p^α_N,0(0) ∏_l=0^⌊N/2⌋4/2l+1_2l+1,0^α . Therefore, for every N∈_0, the group element (ω_α,N)^2 in (<ref>) is (ω_α,N)^2∑_m≥ 01/m!(p^α_N,0(0))^m . By Remark <ref>, ρ^N(^α_m) is a polynomial on the domain m∈ taking values in the parabolic algebra. Because ^α_0=0, the polynomial ρ^N(^α_2m) has a zero at m=0. Therefore (<ref>) are polynomials as well. To evaluate p^α_N,0 at zero, the Möbius transformation (<ref>) is useful p^α_N,0(0)|__2l+1,0^α= -1/mρ^N(_2m)|__l - 1/m(t(u)^2m-t(u)^-2m) |_m=0|_u^2l+1 = 4/2l+1 . Now, let us define the -Schur polynomials and the generating elements _f^N,. For each N∈_0 and f∈[{α∈Δ}], the generating elements _f^ are mapped by the Lie algebra homomorphism ρ^N group like to the quotient universal enveloping of the parabolic algebra defined analogous to (<ref>) ^N,_f() = ρ^N(^_f()) ∈()^N . These elements are important to understand -subrepresentation of the basic representation which justifies the additional notation. The image of ρ^N on the -Schur polynomials are called -Schur polynomial S^N,_n({^α}) = ρ^N(_n({^α})))∈()^N . By Remark <ref>, Lemma <ref> and Remark <ref>, the coefficients of ()^N in S^N,_2n and S^N,_2n+1 are polynomials in n∈_0 of degree equal to the parabolic level of the product of ^α_l,a. In particular, the maps 2 S̅_α,0^N, 2_0 → ()^N n ↦ S^N,_n({^α}) , S̅_α,1^N, 2_0+1 → ()^N n ↦ S^N,_n({^α}) are polynomials in n of degree N. These polynomials extend naturally to polynomials with the domain extended from _0 to . While for n∈ and a∈{0,1}, the -Schur polynomials with negative index S^N,_-n=0 vanish, the extension S̅^N,_a(-n) can be non zero. We use this definition of -Schur polynomials together with Lemma <ref> and Lemma <ref> to prove the next proposition about the extended -Schur polynomials. For all n,N≥ 0 and α∈Δ, a∈0,1, the extended -Schur polynomials S̅_α,a^N,-∈[n] satisfy S̅_α,a^N,-(-n) = S̅_α,a^N,(n-2) ω^2_α . The statement is independent of α∈Δ, a∈{0,1} and N∈_0. Therefore, for the proof, let us drop these indices on the elements of the parabolic Heisenberg algebra, on the -Schur polynomials and on the group element ω_α,N. Here, let us prove the statement in detail for n even, the proof works the same for n odd and is described at the end. For the proof, we evaluate the coefficients of elements of () in S̅^-(-2n). To specify these elements in () let L,L' be two finite sets of odd natural numbers and let ^L = ∏_l∈ L_l',0 ^L' = ∏_l∈ L'_l',1 . In this notation, we evaluate the coefficient of the product ^L^L' in S̅(-2n). The element S̅(-2n) is the polynomial extension from the -Schur polynomials with n∈ to n∈. Therefore, by the identity for generalized sums in Appendix <ref>, S̅(-2n) has the form of S(2n) but in terms of the generalized sums ∑→ and n→ -n. Therefore, with Lemma <ref> and Lemma <ref> and s = |L|, t=|L'| where t is even since n is even, the coefficient of ^L^L' in S̅(-2n)^- is S̅^-(-2n) |_^L^L' (-1)^sm_1=1-nm_2=1-n-m_1…m_s+j=1-n-m_1-…-m_s+j-1 + ⌊j/2⌋…m_s+t=1-n-m_1-…-m_s+t-1 + t/2 r(2m_1,…,2m_s,2m_s+1+1,…,2m_s+t+1) ∏_i=1^sp(2m_i)∏_i=s+1^s+tp(2m_i-1) . In the summation ⌊j/2⌋ accounts for the j-1 number of sums with s+1≤ i≤ s+t-1. Next, we replace the generalized sum with upper limit higher than the lower limit by (<ref>). Therefore, first we focus on the sums 1≤ i≤ s. The replacement also inverts m_i→ -m_i, but because p(-2m_i) = p(2m_i) it is sufficient to replace the sums by m_i=1-n-m_1-… -m_i-1 → - ∑_m_i=1^n-m_1-… -m_i-1-1 - δ_m_i,0 . 
Second, we replace the sums s+1≤ i≤ t pairwise with the pairs i=s+2j-1 and i=s+2j by (<ref>). Doing this for all sums simultaneously implies also m_i→ -m_i in the summands and provides m_s+2j-1=1-n-m_1-… -m_s+2j-2+j-1 m_s+2j=1-n-m_1-… -m_s+2j-1+j p(2m_s+2j-1-1)p(2m_s+2j-1) → ∑_m_s+2j-1=0^n-m_1-… -m_s+2j-2-j+1 ∑_m_s+2j=0^n-m_1-… -m_s+2j-1-j p(-2m_s+2j-1-1)p(-2m_s+2j-1) . In the first sum in the last expressions, the summand m_s+2j-1 = n-m_1-…-m_2+2j-1-j+1 gives for the second sum the limits 0≤ m_s+2j≤ -1 which vanishes by (<ref>) and this term can therefore be removed from the first sum. Shifting all sums with s+1≤ i≤ s+t by m_i→ m_i -1 to start at m_i=1 and using p(-m) = p(m) results in the replacement of (<ref>) by ∑_m_s+2j-1=1^n-m_1-… -m_s+2j-1+j-2∑_m_s+2j=1^n-m_1-… -m_s+2j+j-1 p(2m_s+2j-1-1)p(2m_s+2j-1) . We insert the replacements in (<ref>) and (<ref>) in (<ref>). The sum in (<ref>) and the sum in (<ref>) will contribute to S̅^(2N-2), the Kronecker delta terms in (<ref>) sum up to ω^2. To see this, we collect all terms with the same number of Kronecker delta terms δ_0,m_i. The terms with s'≤ s number of Kronecker delta terms δ_m_i,0 for i≤ s appear with multiplicity choose s' out of s. Therefore, this rearrangement of sums provides us with S̅^-(-2n)|_^L^L'∑_s'=1^sss'∑_m_s'+1=1^n-1…∑_m_s+j=1^n-1-…-m_s'+j-1+⌊j/2⌋ …∑_m_s+t=1^n-1-…-m_s'+t-1+t/2 r(m_s'+1,…,m_s+t)∏_i=s'+1^sp(2m_i) ∏_i=p+1^s+tp(2m_i-1) 1/s'!∏_i=1^s'p(0)|_^L^L' where a second factor of (-1)^s appears from (<ref>). The factorial 1/s! comes from the s' number of arguments with same m=0 in the argument of r. The overall projection to ^L^L' is the sum of all possible projections of the different factors. Because p(2m) is a series in _l,0 and p(2m+1) is a series in _l,1, the sum over all different projections is the sum over all subset decompositions of L = L(s')∪ L(s-s'), where L(s') has s' number of elements fixed ∏_i=s'+1^sp(2m_i)∏_i=s+1^s+tp(2m_i-1) 1/s'!∏_i=1^s'p(0)|_^L^L' ∑_L(s-s')∪ L(s')(∏_i=s'+1^sp(2m_i))|_^L(s-s')(∏_i=s+1^s+tp(2m_i-1))|_^L'1/s'!(p(0))^s'|_^L(s'). By Lemma <ref> and Remark <ref>, the last product with factors p(0) projected on ^L' is ω^2 projected on ^L'. For the other factors of p(m) with m≠ 0 we use Lemma <ref> and pull ρ in front of the entire expression. Then, by (<ref>) equation (<ref>) becomes S̅^- (-2n)|_^L^L' ∑_s'=1^p ss'∑_L(s-s'), L(s')ρ^N[ ∑_m_s'+1=1^n-1…∑_m_s+j=1^n-1-m_s+j-1-⌊j/2⌋…∑_m_s+t=1^n-1-m_1-… -m_s+t-1+t/2 r(m_s'+1,…,m_s+t)(∏_i=s'+1^p2/2m_i_2m_i∏_i=s+1^s+t2/2m_i-1_2m_i-1) |_^L(s')^L'] ω^2 |_^L(s') ∑_s'=1^pss'∑_L(s-s'), L(s')S̅(2(n-1))|_^L(s-s')^L'ω^2|_^L(s')S̅(2(n-1))ω^2|_^L^L' . The proof works analogously for S̅^-(-2n-1). Then t is odd, and one uses the pair wise replacement of all sums in (<ref>) for s+2≤ i≤ s+t. The sum s+1 is replaced individually by (<ref>) where the upper limit of the sum reduces by 1 because of (<ref>). Then the same steps as for the even case provides S̅^-(-2n-1) =S̅^-(2n-1)ω^2. The extended -Schur polynomials S̅^N,_α(n) vanish at n=-1 S̅^N,_α (-1) = 0 . This follows from (<ref>) for odd n and (<ref>). It is consistent with Proposition <ref>. With the results of this section we are well equipped to analyze the -structure of the basic representation, which is the subject of the next section. § -STRUCTURE OF BASIC REPRESENTATION In this section we prove Theorem <ref>. For instance we give all -subrepresentations in the basic representation such that the quotient of the basic representation by the subrepresentation is a finite dimensional -representation. 
This is the finest -structure of the basic representations and it allows us to provide the cosocle filtration as an infinite composition series of infinite dimensional -subrepresentations of the basic representation. First, we provide an infinite set of finite dimensional -representations (N)[We chose the name since the modules are truncated Verma module of the parabolic Heisenberg algebra with generators .] and surjective projections _N : →(N) ≃/W_N, which commute with the action of and project the basic representation on the finite dimensional -representations (N). This provides an infinite set of -subrepresentation W_N=Ker(G_N) of the basic representation as the kernel of the projections.[Embedding and projection. For two vector spaces V,W and a homomorphism ': V→ W we say that V is embedded in W if ' is injective and if ' is surjective we say that W ≃ V/(Ker(') is a projection of V. If the vector spaces are representations of a Lie algebra 𝔩 and ' respects the action of 𝔩 we call the embedding (resp. projection) also a 𝔩-embedding (resp. 𝔩-projection). If it is clear from the context we also avoid the symbol 𝔩 and simply write embedding (resp. projection).] Second, we prove that if the basic representation projects -covariant on a finite dimensional -representation, there exists an N∈_0 such that the -representations is equivalent to (N) or to a quotient of (N) by a subrepresentation of (N). This is equivalent to the statement that every -subrepresentation W⊂, for which the quotient /W is a finite dimensional -representation by ρ, is W_N for some N∈_0 or a subrepresentation of W_N. Therefore, we may call this set of subrepresentations the finest -structure (and probably the finest -structure). Third, we show that the infinite family of -subrepresentations (W_N)_N∈_0 is an infinite composition series with cosocle filtration (see Appendix <ref>). In particular, the finite dimensional quotients W_N/W_N+1 are maximal semisimple resp. W_N+1 is the radical of W_N and the inverse limit of the chain of -subrepresentations is trivial. In Subsection <ref> we derive the infinite chain of -subrepresentations with finite co-dimension which is the cosocle filtration of and we derive the -projection of the basic representation onto infinitely many finite dimensional -representations. In Subsection <ref> we show that the inverse limit of the -subrepresentations is trivial and therefore the cosocle filtration is an infinite composition series of under . §.§ -subrepresentations in basic representation In this section we provide all infinitely many projections of the basic representation onto finite dimensional -representations (See Footnote <ref> or Appendix <ref>). Equivalently, we provide all infinitely many -subrepresentations of , for which the quotient of by the subrepresentation is a finite dimensional -representation. This set of -subrepresentations also contains the cosocle filtration of the basic representation. First, for all N∈_0 we construct a finite dimensional -representation (N), which is also a representations of by pulling back with the Lie algebra homomorphism ρ^N. Second, we prove that the basic representation can be projected -covariant onto (N) for each N∈_0 and we give the explicit -projection map _N. Third, we show that this infinite set of -representations and the quotient of these representations by subrepresentations are all finite dimensional -representations on which the basic representation projects with the pull back of ρ. 
Fourth, in a corollary we give the infinite descending chain of -subrepresentations in the basic representation with cosocle filtration. For each N∈_0, we define a representation (N) as the tensor product of the universal enveloping algebra of the parabolic Heisenberg algebra () quotiented by the ideal ^_N in (<ref>) and the group algebra of the root lattice with the inclusion α↦ e_α (N) ()/^_N⊗[ ] . This definition is reminiscent to the definition of the basic representation but in parabolic generators. In particular, for a fixed _α∈[ ], the representation (N) restricts to a finite dimensional quotient of a Verma module of the parabolic Heisenberg algebra. The exponential series ω_α,N in Proposition <ref> acts by multiplication on (N). This is well-defined, because first, only finitely many terms in the exponential series in (<ref>) act non-trivially and second, they are given in terms of the parabolic Heisenberg algebra. Next, we consider the group which is generated by ω_α,N^2 for α∈Δ. This induces an equivalence relation on (N) which is generated by (ω_α,N)^2 _β∼_β+2α and extends straightforward to all other elements of (N), because the parabolic Heisenberg algebra commutes with ω_α,N. For each N∈_0, this equivalence relation defines a quotient space (N) from (N) by (N) (N)/ ∼ . From here on, we always work with the quotient space (N). The construction of (N) as a quotient space is particularly fruitful to extend (N) to a representation of the parabolic algebra . This is subject of the next Proposition. For all N∈_0, the representation (N) extends to a representation of the parabolic algebra ^N. The element _n^α∈ acts on _γ for γ∈ Q by ρ^N(_n^α) _γϵ(α,γ) S̅_α^N,(-n-(α|γ)-1) _α+γ and the action extends to (N) by the commutation relations of . By this proposition (N) extends from a representation of to a representation of the parabolic algebra ^N, for which we use the same name (N). The pull back of ρ^N maps this representation to a representation of , for which we still use the same symbol (N) and therefore, we simply write _n^α for ρ^N(_n^α), when _n^α acts on a parabolic representation. The state _0∈(N) is a singlet of the maximal compact subalgebra of the Lie algebra because S̅_α(-1)=0 by Corollary <ref>. The proof contains three parts. The first part shows, that the action (<ref>) is well defined on the coset space. The second part shows that (<ref>) defines a representation of the parabolic algebra. For example, that ρ^N(^α_n) can be written in terms of the parabolic generators and ρ^N. The third part shows that (<ref>) obeys the Lie bracket relation. Then, (N) extends to a representation of the parabolic algebra. We show that (<ref>) is well defined on the quotient space. By Proposition <ref> and β∈Δ ρ^N(_n^α)(ω_β)^2 _γ (ω_β)^2ρ^N (_n+2(α|γ)^α) _γϵ(α,γ)S̅_α^N,(-n-(α|2β+γ)-1)_α+2β+γ S_α^N,(-n-(α|2β+γ)) _α+2β+γρ^N(_n^α) _γ+2β . Therefore, the action respects the equivalence relation. Second, we show that (<ref>) defines an action of the parabolic algebra ^N consistently. For all α∈Δ and 0 ≤ n ≤ 2N+1 the equation (<ref>) defines the action of ^N uniquely. Then, the action of all generators _m^α for m∈ is given in terms of ρ^N and the action of ^N. Therefore the action of _m^α is uniquely given by the action of the elements _n^α with 0 ≤ n≤ 2N+1 and the ideal relations in Proposition <ref>. Thus, it remains to show that the ideal relations hold ∑_k=0^N+1 (-1)^kN+1kρ^N(_m+2k^α) _γ ϵ(α,γ) ∑_k=0^N+1(-1)^kN+1kS̅_α^N,(-m-2k-(α|γ)-1) _α+γ 0 . 
The last equation holds because S_α^N, is a polynomial of degree N up to elements in _N^ which act trivial by definition of (N). This shows that (<ref>) defines an action of the parabolic algebra on the states _β for β∈. Finally, we show that this action of is indeed a representation and respects the Lie bracket. For all m,n∈, α,β∈Δ, γ∈ and () the action of the generator _m^α on U__β is uniquely defined by the action of _m^α on _γ and the commutation relations of ^α_m with the compact Heisenberg algebra. Therefore, the action of the compact Heisenberg algebra and ^α_m respect the commutation relations. It remains to show that ^α_m and ^β_n respect the commutation relations on _γ. Inserting Proposition <ref> in (<ref>) we find ρ^N(_m^α) _γϵ(α,γ)S^N,_-n-(α|β)-1({^α}) _γ+α + ϵ(α,γ)S^N,_n+(α|β)-1({-^α}) _γ-α ϵ(α,γ)S^_-n-(α|β)-1({^α}) _γ+α + ϵ(α,γ)S^_n+(α|β)-1({-^α}) _γ-α . But this action is exactly the action on the maximal states of the basic representation and because _m^α and the compact Heisenberg algebra satisfy the commutation relations, this action necessarily satisfies the commutation relations between _m^α and _n^β. This is illustrated in detail in Appendix <ref>. Now, we are equipped with infinitely many -representations (N) and the quotients of (N) by subrepresentations of (N). In the theorem below we show that the basic representation can be projected onto these finite dimensional -representations. For all N∈_0, there exists a -homomorphism _N:→(N) given by _N(e^γ⊗ f({α∈Δ})) _f^N,() _γ which is surjective and commutes with . By this theorem, for all N∈_0, the map _N projects the basic representation onto (N) and on quotients of (N) by subrepresentations of (N). This is equivalent, to the fact that the kernel of _N is an -invariant subspace W_N = Ker(_N)⊂ . The map _N together with the action _m^α on the basic representation, allows to conveniently evaluate the action of _m^α on the states of (N) by evaluating _m^α on the basic representation and then applying _N. Proposition <ref> gives an efficient formula to evaluate _N on arbitrary states of the basic representation. First, we show commutation of _N with the compact Heisenberg algebra. For all n∈, α∈Δ, γ∈, f∈[{β∈Δ}] holds by (<ref>) _N(_n^α e^γ⊗ f ) __n^α f^() _γ_n^α_f^() _γ_n^α _N( e^β⊗ f) . Therefore, the homomorphism _N commutes with the compact Heisenberg algebra. Next, we show commutation of _N with _n^α for all n∈ on the maximal states. Therefore, let us evaluate _N∘^α_n on a maximal state by using (<ref>) _N(_n^α e^γ⊗ 1) _N(ϵ(α,γ) e^γ+α⊗ S_-n-(α,γ)-1()+ϵ(α,γ) e^γ-α⊗ S_n+(γ,α)-1({-α})) By Proposition <ref> the Schur polynomials S_n({α}) are generated by the -Schur polynomials, which allows to replace S_n({α}) by S^_n({^α}). Because the compact Heisenberg algebra commutes with _N, the -Schur polynomials commute with _N. Then, from (<ref>) follows _N(_m^α e^γ⊗ 1) ϵ(α,γ) S^_-m-(α,γ)-1() _γ+α + ϵ(α,γ) S^_m+(γ,α)-1(-) _γ-α ϵ(α,γ) S̅^N,_α(-m-(α,γ)-1) _γ+α_m^α _γ by Proposition <ref>. This proves commutation of ^α_n with _N on all maximal states. It remains to show commutation of _m^α and _N on all states. Therefore, we write a state e^γ⊗ f for a polynomial f∈[{β∈Δ}] as generated from the maximal state by _f^. For U=_f^ we apply the commutation identity (<ref>), which provides the appropriate sets of U_j and n_j to commute _n^α with _f^. 
It allows us to derive _n^α_N(e^γ⊗ f) _n^α_N(_f^ e^γ⊗ 1) _n^α_f^_N(e^γ⊗ 1) ∑_j=1^JU_j _n_j^r _N( e^γ⊗ 1) ∑_j=1^JU_j _N(_n_j^α e^β⊗ 1) ∑_j=1^J_N(U_j _n_j^r e^γ⊗ 1) _N(_n_j^α_f^ e^γ⊗ 1) _N(_n^α e^γ⊗ f) , which proves commutation of and _N. The homomorphism _N is surjective since ρ^N is surjective. This theorem shows that for every N∈ the surjective homomorphism _N projects the basic representation onto (N) and quotient of (N) by subrepresentation of (N). The theorem also implies the existence of infinitely many filtered invariant subspaces as the kernel of _N, which we give in a corollary after the next theorem. In the next theorem we show that if and only if the basic representation projects onto a finite dimensional -representation, which is also a representation of , then the representation is equivalent to (N) or to the quotient of (N) by a subrepresentation. If for an N∈_0 the basic representation projects by ρ^N onto a finite dimensional representation Ψ of ^N, then Ψ is equivalent to (N) or to the quotient of (N) by a subrepresentation of (N). The theorem also implies that a -invariant subspace in the basic representation for which the quotient of the basic representation by the subspace is a finite dimensional -representation, the subspace is equivalent to W_N or to a subrepresentation of W_N for some N∈_0. All known finite dimensional -representations are also representations of the double parabolic algebra . Yet, it is not proven, but we expect that indeed all finite dimensional -representations are also -representations. Then, for all finite dimensional -representations which embed in the basic representation, there exists an N∈_0 such that the representation is equivalent to W_N or to a subrepresentation of W_N. Then, this is the finest -structure of the basic representation. If Ψ is the trivial representation, it is equivalent to the quotient of (0) by (0). Let us assume Ψ is non-trivial. Because Ψ is a projection of the basic representation, there exists a surjective homomorphism ':→Ψ which commutes with . By Proposition <ref> and because Ψ is non-trivial by assumption and ' is surjective, ' does not vanish on the maximal states. By commutation of ' and the Heisenberg algebra necessarily Ψ and ' satisfy '(e^γ⊗ f)_f^N,'(e^γ⊗ f) . Therefore, Ψ as a representation of the parabolic Heisenberg algebra is equivalent to (N) or to the quotient of (N) with a subrepresentation. To show that Ψ is a quotient space of (N) w.r.t. the equivalence relation in (<ref>) and that '=_N, we perform several derivations explained below '(e^γ+α⊗ 1) ϵ(α,γ)^α_-(α|γ)-1'( e^γ⊗ 1) -ϵ(α,γ)∑_m=1^N+1(-1)^m N+1m'(^α_-(α|γ)-1+2m e^γ⊗ 1) -∑_m=1^N+1(-1)^m N+1m S_2m-2({-^α}) '( e^γ-α⊗ 1) S̅^N,-_α(-2) '( e^γ-α⊗ 1) ω^2_α'( e^γ-α⊗ 1) . In the first line, we used that ' commutes with ^α_-(α|γ)-1. In the second line, we used that Ψ has parabolic level N, then we applied the ideal relation in Proposition <ref> and used again commutation of ' and _m^α. For the third line, we evaluate _m^α on the maximal state with weight γ, then we used commutation of the ' with the compact Heisenberg algebra and applied the Lie algebra homomorphism which provides the -Schur polynomials. In the fourth line, we applied (<ref>) and the fact that S̅^N,_α(n) are polynomials for the first equation. The last equation derives from Proposition <ref> for n=2. Therefore Ψ is a quotient space w.r.t. the equivalence relation in (<ref>). Commutation of ' and _m^α implies by Proposition <ref>, that _m^α acts on Ψ by (<ref>). 
Thus, Ψ is equivalent to (N) or to the quotient of (N) with a subrepresentation. By these two theorems, we can derive the cosocle filtration of the basic representation, but therefore, it is beneficial to add some notation. For k≤ N let (N)_k be the projection of (N) onto the -representation at parabolic level k (N) ⊕_k=0^N (N)_k . Then, due to the parabolic grading, for M∈_0 also ⊕_k=M^N (N)_k are -representations. Using this notation, we provide the cosocle filtration of the basic representation in the next corollary. For all N∈_0, with (N) and _N:→(N) as in Proposition <ref> and Theorem <ref>, the following identities hold. * The kernel of W_N Ker(_N)⊂ is a non-trivial proper -invariant subspace and we set W_-1=. * The -invariant subspaces of W_N, for which the quotient of W_N by the subrepresentation is a finite dimensional -representation are W_M and subrepresentations of W_M for M>N. Equivalently, if W_N projects onto a finite dimensional -representations, which is also a -representation by ρ, then there exists a M∈, M≥ N+1 such that the -representation is equivalent to ⊕_k=N+1^M (M)_k or to a quotient of this representation by a subrepresentation. * For M∈_≥ -1, the quotients of the invariant subspaces W_M/W_M+1≃(M+1)_M+1 are finite-dimensional semisimple -representations, on which the generators with positive parabolic level act trivial. * For M∈_≥ -1, the cosocle and the radical (see Appendix <ref>) of W_M are rad(W_M) W_M+1 , cosoc(W_M) ≃ (M+1)_(M+1) . * The -cosocle filtration (see Appendix <ref>) of is (W_N)_N∈ ⊃ W_0⊃ W_1⊃ W_2⊃… This corollary provides the cosocle filtration of the basic representation under . In the next subsection, we show that the cosocle filtration is indeed an infinite composition series (see Appendix <ref>) of the basic representation. §.§ Infinite composition series of basic representation We prove the quite remarkable fact that the basic representation has an infinite composition series with cosocle filtration under . That means, we prove that the basic representation has a descending chain of infinitely many invariant subspaces such that the quotient of two adjacent subspaces in the chain is finite dimensional semisimple. As vector spaces, the sum of all these finite dimensional quotients is equivalent to the basic representation. First, we derive the infinite completion (∞) of the family ((N))_N and of the cosocle filtration of the basic representation (<ref>). Conversely to (N), which is a projection of the basic representation, the infinite completion is not a projection of the basic representation but the basic representation embeds in the infinite completion. This will be used to show that the cosocle filtration (<ref>) is indeed also an infinite composition series (define in Appendix <ref>). Clearly, these statements must be properly defined, so first let us define the infinite completion (∞) and then show that the basic representation embeds in it. For each N∈_0, the quotient universal enveloping algebra of the parabolic Heisenberg algebra ()^N is defined analogous to ()^N. The inverse limit of the family (()^N)_N∈ and ((N))_N with the morphism given by the natural projection are called ()^∞lim_⟵ N→∞()^N , (∞) lim_⟵ N→∞(N) ∈ ()^∞⊗[] . The map ω_α,∞ on the space (∞) is defined uniquely from ω_α,N and the universal property of the inverse limit ω_α,∞exp(∑_k≥ 022k+1^α_2k+1,0) ∈ ()^∞ . The equivalence relation (<ref>) extends to an equivalence relation of the completions. 
For every α∈Δ, and _γ^∞∈(∞) (ω_α,∞)^2_γ^∞≃_γ+2α^∞∈(∞) , which defines the quotient space (∞) ((∞)/∼) ≡lim_⟵ N→∞(N) . The action of and the generating elements is uniquely given by the universal property of the inverse limit and (<ref>) _f^∞,R = ρ(_f^) ∈ ()^∞ . These elements allow to give the map _∞ explicitly, which embeds the basic representation in (∞). By the universal property of the inverse limit, the map _∞ → (∞) e^γ⊗f({α∈Δ}) ↦ _f^∞,() _γ , is uniquely defined, such that for every N∈_0 the projection of the codomain ()^∞ of _∞ to ()^N is the map _N. The map _∞→(∞) is injective but not surjective. By this proposition the map _N and _∞ between the modules as a -module and (N), (∞) as a -module behave similar to the Lie algebra homomorphism ρ^N and ρ from to ^N resp. . By Proposition 9 and 11 in <cit.>, the map ρ^N is surjective but not injective, while its completion ρ is injective but not surjective. In particular the elements in which contain only finitely many terms in the formal power series are not in the image of ρ. The basic representation does not project onto the infinite completion (∞), but rather embeds in it. First, we show that _∞ is injective. Because the map ρ is injective, the generating element, as a function ^∞,_f: f∈[{α∈Δ}]↦() is injective. Therefore, for γ∈ Q, the map _∞ is injective on the states {e^γ⊗ f f∈[{α∈Δ}]}. The elements _γ and _γ' are independent if γ-γ'∉ 2Δ and therefore it remains to show that the images of _∞ on the states {e^γ+2lα⊗ f f∈[{β∈Δ}, α∈Δ, l∈]} are linearly independent. The image of _∞ on e^γ+2lα⊗ f and on e^γ+2kα⊗ f differ by a factor of (ω_α,∞)^l-k. Therefore, it remains to show that the images of _∞ on the states {e^γ⊗ f f∈[{β∈Δ}]} are linearly independent to the images of _∞ on the states {e^γ+2lα⊗ f f∈[{β∈Δ}, α∈Δ]} for l∈. The remaining independents relations of the image of the states in (<ref>) follow by division with ω_α,∞. To show that the image on the states in (<ref>) and (<ref>) are independent we show that the generating elements _f^∞, are linearly independent from the elements _f'^∞,(ω_α,∞)^2l for l≠ 0. By the series expansion (<ref>), it is sufficient to show that there is no elements in which is mapped to ∑_k≥ 04l2k+1_2k+1,0^α by ρ. It is clear, that this is true for all elements ^β_m∈ with β≠±α or m odd. Suppose now 0ρ(∑_i=1^Iv_i^α_2n_i) + v_0∑_k≥ 04l2k+1_2k+1,0^α ∑_i=1^Iv_i ∑_k≥ 0∑_s=1^2n_i-1c^(2n_i)_s (2k+1)^s _2k+1,0^α + + v_0∑_k≥ 04l2k+1_2k+1,0^α . where c^(n)_s∈ are the polynomial coefficients. While 12k+1 has no finite series expansion, the first term in the equation are polynomials in 2k+1 of finite degree. But since this equation must hold for all k∈_0, the second and first term must cancel independently. We show that _∞ is not surjective. By Proposition 9 in <cit.>, the element _1,0^α is not in the image of ρ. Therefore, there exists no polynomial f∈[{β∈Δ}] and γ∈ Q such that _∞(e^γ⊗ f) _1,0^α_0. The cosocle filtration (W_N)_N∈_≥ -1 in Corollary <ref> is an infinite composition series of the basic representation under the action of lim_⟵ N→∞ W_N {0} . By the definition of an infinite composition series in Appendix <ref> it remains to show (<ref>). For (-1)={0}, the family ((N)_N∈_≥ -1 together with the natural projection (N+1)→(N) provides the family (W_N=Ker(_N))_N∈_≥ -1 with /W_N ≃(N) and implies the natural projective embedding W_N+1→ W_N⊃ W_N+1. 
Then, by the universal property of the inverse limit applied to _N which provides _∞, the inverse limit of the family (W_N=Ker(_N))_N is W_∞ = Ker(_∞) = {0} by Proposition <ref>. In this section we derived infinitely many -subrepresentations of the basic representation as the kernels of infinitely many different -projections of the basic representations on finite dimensional -representations. We proved, that these are all finite dimensional -representations onto which the basic representation can project by the pull back of ρ. This allowed us to provide the cosocle filtration and to show that it is an infinite composition series of the basic representation. In particular, as vector spaces or -representations, the basic representation is isomorphic to a direct sum of infinitely many finite-dimensional semisimple -representations. In the next section we provide examples and applications of the results. § EXAMPLES AND APPLICATIONS TO SUPERGRAVITY In this section we provide some explicit examples of the projection of the basic representation onto finite dimensional -representations and we give applications of our results to supergravity in two dimensions. First, in Subsection <ref> we prove a Proposition which allows to evaluate the generating elements ^ to any state of the basic representation in a closed form and which simplifies the evaluation of ^ significantly. Then, we add some comments on the finite dimensional compact subalgebra to further analyze the -representations as -representations. Second, in Subsection <ref> the results of the previous section are applied to obtain explicit examples of invariant subspaces and about the projection of the basic representation onto -representations of parabolic level zero and parabolic level one. We use the proven proposition to give the projection homomorphism explicitly. Third, in Subsection <ref> we apply the results in the previous section to the basic representation of the split real algebra with maximal compact subalgebra . We find the -representations of parabolic level zero, one and two which are projections of the basic representation and compare these representations with tensor products of representations of the single parabolic algebra constructed in <cit.>. This provides an (16)-covariant formulation of the -representations on which the basic representation projects. At the end, we discuss applications to supergravity in D=2 dimensions. §.§ Evaluation of projection To understand the -subrepresentations of the basic representations explicitly, the homomorphism _N must be evaluated on the states e^γ⊗ f of the basic representation (<ref>). By Theorem <ref> it remains to evaluate the generating elements ^_f = ρ(^_f) for polynomials f∈[{α∈Δ}]. The next Proposition provides an explicit expressions of the generating elements ^_f. Let {v^i ∈𝔥^∨ 1≤ i≤ r} be an orthonormal basis of 𝔥^∨ and let {H^v,i∈𝔥 1≤ i≤ r} be the dual basis with ^v,i_n = H^v,i(t^n-t^-n), then a.) ^_f_1({v^i})f_2({v^j})^_f_1({v^i}^_f({v^j}) for i≠j, b.) ^_(v_-l^i)^n ∑_k=0^n c_l,n,k (^v,i_l)^k e^γ⊗1 c_l,n,k (-1)^n(2l(n-k-1))!_(4l)(2n)!_(2)/k!(2n-2k)!_(2) for n-k even 0 otherwise . where for n,N∈, N!_(n) = N(N-n)!_(n) for N≥ n, N!_(n) = N for 1≤ n≤ N, 0!_(n) = (-N)!_(n) = 1. By (<ref>) it is straightforward to evaluate the generating elements ^_f for any f∈[{α∈Δ}]. The Lie algebra homomorphism ρ^N maps group like the generating elements ^ to the parabolic generating elements ^∈(). 
Because of the parabolic grading, and because every element of the parabolic Heisenberg algebra has at least parabolic level one, the Lie algebra homomorphism ρ^N is easy evaluated for low N. The loop generators of the dual roots H^v,i_n commute and therefore H^v,i_n does not act on e^γ⊗ (v^j_-l)^n for i≠ j and l,n∈, which proves a. We prove b by induction in n. For n=0 the statement is true and for n=1, c_l,1,k = -2δ_k,1 and _v^i_-l = -2_l^v,i , the statement is also true. Assuming b is true for n≤ N and applying the commutation [H^v,i_-l,(^v,i_-l)^k] = -kl(^v,i_-l)^k-1 results in _(v^i_-l)^N+1 H^v,i_-l∑_k=0^n c_l,N,k (_-l^v,i)^k -lc_l,n,1 + ∑_k=1^N -((k+1)lc_l,N,k+1+2c_l,N,k-1) (_-l^v,i)^k - 2c_l,N,N(_-l^v,i)^N+1 . From the explicit form of c follow the identities c_l,N+1,0 -lc_l,N,1 , c_l,N+1,N+1 -2c_l,N,1 c_l,N+1,k -l(k+1)c_l,N,k+1) - 2c_l,n,k-1 , which, inserted in (<ref>) prove b. The Proposition allows to evaluate the homomorphism _N, which projects onto (N). To analyze the -representations (N), they may be decomposed into -representations. Therefore, let us first consider the quotient algebra ^0 which is also a finite dimensional subalgebra of the double parabolic algebra ^0 { P_0,a^αα∈Δ, a∈{0,1}} . By definition these are the only elements of the double parabolic algebra which preserve the parabolic level of states in the parabolic representation. The Lie bracket of these elements is [P^α_0,a,P_0,b^β] ϵ(α,β) (P_0,a+b^α+β+ P^α-β_0,a+b) . Therefore, ^0 is a _2 graded algebra ^0 _0⊕_1, where _0≃ as Lie algebras and _1≃ as vector spaces. In particular _0 is a subalgebra as well and equivalent to the semisimple Lie algebra . This analysis helps to analysis the decomposition of (N) in -representations in Subsection <ref>. However, first, let us provide general examples of the projections _N in the next subsection. §.§ General examples In this section we provide the projection of the basic representation on the parabolic level zero and parabolic level one representations (0) and (1) and give the projections _0 and _1 explicitly. We analyze the representations (0) and (1) in more detail and add some comments on the decomposition into -representations. It is straightforward to generalize this to any parabolic level N∈_0 and representation (N). §.§.§ Parabolic level zero The simplest -representation onto which the basic representation can be projected is of parabolic level zero. By Theorem <ref> this representation is (0) given in Proposition <ref>. Let us analyze this representation in more detail. Because the representation has parabolic level zero, all parabolic generators of higher parabolic level than zero act trivial. In particular ω^2_α -1 ∈_0 acts trivial and implies that for α∈Δ and γ∈, _γ+2α - _γ∼ 0 ∈(0). Therefore, the representation has dimension dim((0)) = 2^r, where r is the rank of the finite algebra . The action of the parabolic generators of level 0 is given in Proposition <ref> in terms of the extended -Schur polynomials S^0,(2n) = 1 and S^0,(2n+1)=0 (<ref>) P^α_0,a_γϵ(α,γ) 0 if (α|γ) + a even _γ+α if (α|γ) + a odd. This action allows us to analyze the decomposition of (0) into -representations. The representation is generated from the action of P_0,1^α and P_0,1^(α P_0,1^β) for different α∈Δ on _0, because for α,β∈Δ, and α+2β∈Δ, then β = -α. Explicitly we obtain _0 ⟶ 1_ P_0,1^α_0 _α ⟶ _ P_0,1^αP_0,1^β_0 {P_0,1^α,P_0,1^β} _0 ⊂ (_⊗_sym _) . The first line is a singlet under . 
The second line is the adjoint representation of and the last line takes values in the symmetric product of two adjoint representations of . The singlet in the last line is proportional to _0 because P_0,1^β P_0,1^α = η^α,β_0. All together, (0)⊂()⊗(). This provides some inside in the decomposition of (0) into -representations. The basic representation is projected onto (0) with the homomorphism _0 in Theorem <ref>. By Proposition <ref> and the Lie algebra homomorphism ρ^0 we can evaluate the projection in terms of the generating elements ^0,_f efficiently. For the case of parabolic level zero, all Heisenberg generators act trivial and one evaluates (<ref>) for k=0. Then the projection _0 acts on the basic representation by _0(e^γ⊗∏_i=1^r (v^i_-l_i)^n_i) (∏_i=1^r(2l_i(n_i-1))!_4l_i)_γ for all n_i even 0 otherwise . By Corollary <ref>, the kernel of _0 is a -invariant subspace. For instance all elements of the basic representation which contain in vertex operator realization an odd power of a indexed simple root are in the kernel of _0. §.§.§ Parabolic level one Let us decompose the parabolic level one representations (1). The states of the representation are (1) = {_γ , _1,a^α^i_γγ∈ Q , α^i∈Π, a∈{0,1}} . Therefore, the representation has dimension dim((1)) = 2^r + 2· r· 2^r. The states _1,a^α^i_γ are strictly of parabolic level one while the states _γ have contributions from parabolic level zero and one. Thus, to parabolic level zero, the representation (1) has 2^r number of states contained in _γ. The remaining states are ^i_1,a_γ, which accounts for 2· r· 2^r number of states at parabolic level one. These are all the states of the representation and therefore, parabolic level one contribution in _γ are necessarily linear combinations of the 2· r· 2^r states ^α^i_1,a_γ. The generators of parabolic level higher than one act trivial and the action of the other generators is given in (<ref>) and in terms of the extended -Schur polynomials S_α^1,(2n) 1 + 4n ^α_1,0, S_α^1,(2n+1) 4(n+1)^α_1,1 . The action of the parabolic generators is then evaluated from (<ref>) P^α_0,a_γϵ(α,γ) 1-2((α|γ)+2)^α_1,0 if (α|γ)+a is odd -2(α|γ)^α_1,1 if (α|γ)+a is even. P_1,a^α_γϵ(α,β)^α_1,a+1+(α,β)_α+γ . On the states at level one, the generators P^α_1,a act trivial while the action of P^α_0,a is given by the commutation relations. The homomorphism _1 from Theorem <ref> can be evaluated by Proposition <ref>. For parabolic level one, products of more than one Heisenberg generators act trivial. Therefore, (<ref>) must be evaluated at k=0 and k=1 and thus, the homomorphism _1 of (1) is _1(e^γ⊗∏_i=1^r (v^i_-l_i)^n_i) (∏_i=1^r(2l_i(n_i-1))!_4l_i)_γ for all n_i even (∏_j≠ i=1^r(2l_i(n_i-1))!_4j_i)4n_j (2l_j(n-2))!_(4l_j)^α^j_1,l_j_γ for n_j odd, n_i even for i≠ j 0 otherwise . It is straightforward to evaluate the projection of the basic representation onto even higher parabolic representations by applying Proposition <ref>, Theorem <ref> and Proposition <ref>. To obtain a better understanding of the -representations in Proposition <ref> on which the basic representation projects, we decompose these representations into -representations. In general, this does not lead to new obstacles but may be achieved with standard techniques of representation theory, but it requires a suitable basis change of the generators in (<ref>) to a -covariant basis. Therefore it seems useful to restrict to a specific affine Kac-Moody algebra which we do in the next subsection. 
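To make the combinatorial data entering the proposition on the coefficients c_{l,n,k} and the level-zero projection formula above concrete, the following short Python sketch (our own illustration, not code accompanying the paper) implements the multifactorial N!_(n), the coefficients c_{l,n,k}, and the scalar produced by the level-zero projection. The reading of the boundary case of the multifactorial as N!_(n) = N for 1 ≤ N ≤ n, and the exact placement of the fraction bar in c_{l,n,k}, are assumptions on our part; with these readings the values c_{l,1,1} = -2 and c_{l,2,0} = 2l used in the induction are reproduced.

```python
import math
from fractions import Fraction

def multifactorial(N, n):
    """N!_(n): equals N * (N - n)!_(n) for N >= n, equals N for 1 <= N <= n,
    and equals 1 for N <= 0 (covering 0!_(n) = (-N)!_(n) = 1).
    The middle case is our reading of the definition in the text."""
    if N <= 0:
        return 1
    if N <= n:
        return N
    return N * multifactorial(N - n, n)

def c(l, n, k):
    """Coefficient c_{l,n,k} of (H^{v,i}_l)^k in the generating element of (v^i_{-l})^n,
    read as (-1)^n (2l(n-k-1))!_(4l) (2n)!_(2) / (k! (2n-2k)!_(2)) for n - k even."""
    if (n - k) % 2 != 0:
        return Fraction(0)
    num = (-1)**n * multifactorial(2 * l * (n - k - 1), 4 * l) * multifactorial(2 * n, 2)
    den = math.factorial(k) * multifactorial(2 * n - 2 * k, 2)
    return Fraction(num, den)

def pi0_coefficient(powers):
    """Scalar produced by the level-zero projection on e^gamma (x) prod_i (v^i_{-l_i})^{n_i};
    `powers` is a list of (l_i, n_i) pairs, and any odd power kills the state."""
    out = 1
    for l, n in powers:
        if n % 2 != 0:
            return 0
        out *= multifactorial(2 * l * (n - 1), 4 * l)
    return out

# Sanity checks with these readings:
assert c(3, 1, 1) == -2                        # only k = 1 survives for n = 1
assert c(1, 2, 0) == 2 and c(1, 2, 2) == 4     # c_{l,2,0} = 2l and c_{l,2,2} = 4 for l = 1
assert pi0_coefficient([(1, 2), (2, 2)]) == c(1, 2, 0) * c(2, 2, 0)  # only k = 0 survives at level zero
print([c(1, 4, k) for k in range(5)])
```

In particular, the level-zero projection keeps only the k = 0 coefficient of each generating element, which is why only even powers of the oscillators survive in the formula for the projection onto (0).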
§.§ Projections of 𝔢_9 basic representation Certainly, one of the most interesting split real simply-laced affine Kac-Moody algebras is with maximal compact subalgebra . In this section we decompose the finite dimensional -representations (0), (1) and (2) on which the basic representation projects under the finite compact subalgebra k() = (16). This is an (16)-covariant form of the -representations which has the additional benefit, that it is significantly easier to obtain subrepresentations of (N). Instead of performing the explicit (16)-decomposition of (N), we rather apply the construction of (16)-covariant -representations in <cit.> to build up (16)-covariant -representations which are equivalent to (0), (1) and (2). This is equivalent to a direct decomposition of these -representations. In this work we restrict the analysis to dimensional arguments, because the explicit analysis to show that the (16)-covariant -representations are equivalent to (N) for some N∈_0 would exceed the scope of this paper. However, the explicit analysis finds important application in supergravity in two dimensions as it describes the yet unknown supersymmetry of vector fields and the decomposition of the embedding tensor into fermion bilinears. Therefore, we present it in a subsequent paper. First, we introduce an (16)-covariant notation of the algebras of interest. Second, we construct (16)-covariant -representation along <cit.> which are equivalent to (N) for N≤ 2. Third, we apply these concrete results to supergravity in two dimensions. §.§.§ (16)-covariant formulation A thorough introduction of and the single parabolic algebra in (16)-covariant formulation is provided in <cit.> section 5.1 and 5.2. Here, only the necessary notation and identities for this work are introduced. The finite dimensional Lie algebra has dimension ()=248, its maximal subalgebra is (16) with 120 generators X^IJ = -X^JI for 1≤ I,J≤ 16. The non-compact orthogonal complement transforms as the 128_(16) dimensional (16)-spinor representation with basis elements Y^A for 1≤ A≤ 128. In this (16)-covariant formulation, the Lie bracket and the Killing form are 2 [ X^IJ , X^KL] = 4 δ^JK X^IL , [ X^IJ , Y^B ] = -Γ^IJ_AB Y^B , [ Y^A, Y^B ] = 14Γ^IJ_AB X^IJ , κ(X^IJ,X^KL) -2δ^IKδ^JL , κ(Y^A,Y^B) δ^AB . For with positive roots α∈Δ_+, we formally write the basic change from E^α and H^α to the (16)-covariant basis by X^IJ∑_α∈Δ_+x_α^IJ(E^α + E^-α) Y^A∑_α∈Δ_+y_α^A(E^α - E^-α) + ∑_i=1^Ry_i^A H^α^i For the inverse basis change one might use the coefficients x^α_IJ and y^α_A, y^i_A. To explicitly branch the finite dimensional -representations (N) under (16) many useful relations of the coefficients of the basis change may be derived. This explicit analysis is performed in a subsequent paper while we restrict here on dimensional arguments. The Lie bracket of the affine algebra is given by (<ref>) and (<ref>). Therefore, this basis change extends straightforward to the affine algebra and the maximal compact subalgebra . On the parabolic algebra () the basis change is A^IJ_2k,a X^IJ⊗u^2k⊗r^a ∑_α∈Δ_+ x_α^IJ P^α_2k,a S^A_2k+1,a Y^A⊗u^2k+1⊗r^a ∑_α∈Δ_+y_α^A P^α_2k+1,a + ∑_i=1^r y_i^A ^α^i_2k+1,a . Setting r=1 the double parabolic algebra restricts to the single parabolic algebra of in <cit.> section 5. This sets up the (16)-covariant notation of , and the parabolic algebra (). 
Next, let us also give the and (16)-decomposition of the first few loop levels of the basic representation (1_)_0 ⊕ ( 248_)_1 ⊕ (1_ ⊕ 248_⊕ 3875_)_2 ⊕ … ( 1_(16))_0 ⊕ (120_(16) ⊕ 128_(16))_1 ⊕ (1_(16) ⊕120_(16) ⊕ 135_(16) ⊕ 1820_(16) ⊕ 128_(16) ⊕ 1920_(16) )_2 ⊕… The decomposition to higher levels is in (<ref>), it provides some intuition about the (16)-representations to expect in the (16)-branching of (N) on which the basic representation projects -covariant. For example, the basic representation does not contain at any level the 16_(16) vector representation nor the conjugate spinor representation 128_(16). More generally, the basic representation only contains (16)-representations which are also in the root lattice of . We use these constraints about possible (16)-representations in the basic representation to construct the (16)-decomposition of (N) for N≤ 2 in the next subsection §.§.§ Embedding of (16)-covariant representations and applications The plan of this subsection is to build up (16)-covariant -representations along section 5.3 in <cit.> which have the same dimensions as (N) for N=0,1,2 at every parabolic level. This is a strong indication that the different (16)-covariant representations are indeed equivalent to the different (N). Therefore, first we argue that the basic representation cannot project on a pure representation of the single parabolic algebra but only on a tensor product. Then, we sketch the construction of representations of the single parabolic algebra in <cit.> and we build up the representations (N) for N=0,1,2 from tensor products of representations of the single parabolic algebra. By the different action of P_0,0^α and P_0,1^α of the double parabolic generators on (N) in (<ref>) it is evident that the basic representation cannot project onto pure representation of the single parabolic algebra. Therefore, the representations onto which the basic representation can be projected must be at least a tensor product of two representations of the single parabolic algebra which are pulled back to representations of by the two different Lie algebra homomorphisms ρ_± (for details see also Footnote <ref>). Henceforth, we consider the tensor product of two representations Φ and Ψ of the single parabolic algebra and it remains to build up Φ and Ψ such that the tensor product is equivalent to (0), (1) and (2). Because the representations decompose in the different parabolic levels, the analysis can be carried out for the different levels individually. Therefore, we branch Φ and Ψ in (16)-representations φ^X_k and ψ^X_k where X is the index of the (16) representation and k is the parabolic level. Now, let us describe how to construct the representations Φ and Ψ of the single parabolic algebra. One start with an (16)-representations φ^X_0 and ψ^Y_0 at parabolic level zero. The next parabolic levels of the representation are constructed as a Verma module of the parabolic generators with positive parabolic level (<ref>) acting on φ^X_0 and ψ^Y_0. From the Verma module, finite dimensional representations are obtained by quotienting with subrepresentations. Here, an (16)-covariant basis of the generators (<ref>) already implies that the full representation will be (16)-covariant. Parabolic level zero. For N∈, the elements of (N) at parabolic level zero are {_γ∈(0)γ∈} . These are 2^8 = 256 inequivalent elements by the equivalence relation (<ref>). 
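As a quick sanity check of the dimension bookkeeping used in this subsection, the following few lines (ours, purely illustrative) verify the counts quoted above: the branching 248 = 120 + 128 under (16), the level-two content of the basic representation counted both in e8 pieces and in (16) pieces, and the 2^8 = 256 inequivalent maximal states at parabolic level zero.

```python
# Quick bookkeeping for the dimensions quoted above (our own illustrative check).
dim_so16_adjoint = 16 * 15 // 2      # adjoint of so(16)
dim_so16_spinor = 2**(16 // 2 - 1)   # a chiral spinor of so(16)
assert dim_so16_adjoint == 120 and dim_so16_spinor == 128
assert dim_so16_adjoint + dim_so16_spinor == 248             # 248 -> 120 + 128
# Level 2 of the basic representation, counted in e8 pieces and in so(16) pieces:
assert 1 + 248 + 3875 == 1 + 120 + 135 + 1820 + 128 + 1920 == 4124
# 2^8 = 256 inequivalent maximal states at parabolic level zero (rank of e8 is 8):
assert 2**8 == 256
print("dimension checks pass")
```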
Thus, the tensor product of the two representation φ_0 and ψ_0 must have dimension 256 and the decomposition of the tensor product into (16)-representations takes weights in the weight lattice. The general pattern of (0) in (<ref>) also predicts the tensor product to decompose into an (16)-singlet, an (16)-adjoint representation and an (16)-representation in the symmetric tensor product of two (16)-adjoint representations. These constraints seem to be uniquely solved by φ_0 and ψ_0 the vector representation of (16) with components ψ_0^I and φ_0^I for I=1,…,16. The action of the generators A^IJ_0,0 is given by (16)-covariance, while the action of the generators A^IJ_0,1 is A^IJ_0,1 φ_0^I A^IJ_0,0 φ_0^I A^IJ_0,1ψ_0^I -A^IJ_0,0ψ_0^I . In fact, the tensor product of these two representation has the right dimension 16· 16 = 256 and decomposes into representations which also appear in (<ref>) 1616⊗1616116⊕12016⊕13516 . The 12016 is the adjoint representation and the 13516 representation appears in the symmetric product of two adjoint representations 12016⊗_sym12016116⊕13516⊕182016⊕540316. The singlet is ψ_0^K⊗φ_0^K ⊗ 1 → 116 and the double parabolic generators (<ref>) map this singlet into the 12016 representation and the 13516 representation A^IJ_0,1( ψ_0^K⊗φ_0^K) 4 ψ_0^[J⊗φ_0^I] → 12016 A^KL_0,1( ψ^[J⊗φ^I] ) 4 δ^LI ψ_0^(K⊗φ_0^J) → 116⊕13516 . as it is predicted in the general analysis in (<ref>). This is the (16)-decomposition of (0). Parabolic level one. Next, for N≥ 1 we analyze (N) at parabolic level one with elements {_γ, ^i_1,0_γ, ^i_1,1_γ∈(1) γ∈, 1≤ i≤ 8 } There are 256 + 2 · 8 · 256 inequivalent elements. However, 256 of these elements are of parabolic level zero and therefore, 2 · 8 · 256 of these elements are of of parabolic level one. This implies restrictions on the possible (16)-representations at level one in Φ and Ψ. By the construction in <cit.>, the level one representations of Φ and Ψ is the Verma module action of S^A_1,0 on φ^I_0 and ψ^I_0 S^A_1,0 φ^I_0 Γ^I_A,Ȧφ_1^Ȧ + φ_1^AI S^A_1,0 ψ^I_0 Γ^I_A,Ȧψ_1^Ȧ + ψ_1^AI , where φ_1^Ȧ and ψ^Ȧ_1 are the conjugate spinor representations 12816, φ_1^IA and ψ^IA_1 are the 192016 representations and Γ^I_AȦ are the (16) gamma matrices. Then, at level one, the tensor product of Φ and Ψ has dimensions 2 · 16 · (128 + 1920)>2 · 8· 256 which is clearly larger than the dimension of (N) at level one. However, the 192016 representations φ_1^IA and ψ^IA_1 can be quotiented from Φ and Ψ and therefore they can be set to 0 <cit.>. Hence, the representations Φ and Ψ are up to parabolic level one Φ^I,Ȧ,…(φ_0^I, φ_1^Ȧ, …) Ψ^I,Ȧ,…(ψ_0^I, ψ_1^Ȧ, …) and the action of the double parabolic algebra is uniquely given by (16)-covariance, (<ref>) and (<ref>). On the tensor product, up to parabolic level one, the double parabolic algebra acts by S^A_1,0( ψ_0^I⊗φ_0^J) Γ_A,Ȧ^J ψ_0^I⊗φ_1^Ȧ + Γ_A,Ȧ^I ψ_1^Ȧ⊗φ_0^J S^A_1,1( ψ_0^I⊗φ_0^J) Γ_A,Ȧ^J ψ_0^I⊗φ_1^Ȧ - Γ_A,Ȧ^I ψ_1^Ȧ⊗φ_0^J. Indeed, these representations on the right hand side are exactly the representations to expect in (N) at parabolic level one because 2·( 1616⊗12816) 2·(12816⊕192016) ≃ 2· 2048 ≃ ((1))-((0)) . This fixes Φ and Ψ up to parabolic level one and together with parabolic level zero determines the (16)-decomposition of (1). Parabolic level two. 
The elements in (N) for N≥ 2 which have contribution to parabolic level 2 are {_γ, ^i_1,0_γ, ^i_1,1_γ, ^i_1,0^j_1,0_γ, ^i_1,1^j_1,0_γ, ^i_1,1^j_1,1_γ, ^i_1,1_γγ∈, 1≤ i,j≤ 8 } These are 256 + 2 · 8 · 256 + (2· 36 + 64)· 256 independent elements, however there are 256 + 2· 8· 256 elements at parabolic level zero and one, such that (2· 36 + 64)· 256 34816 independent elements remain at parabolic level 2. Let us build the representations Φ and Ψ such that the tensor product has at level 2 the same number of elements. In <cit.> Section 5.3, the second parabolic level of the Φ and Ψ is evaluated and it consists of the (16)-vector representation and the three form 56016, such that Φ and Ψ are Φ^I,Ȧ,J,[KLM](φ^I_0,φ^Ȧ_1,φ^J_2,φ^[KLM]_2) , Ψ^I,Ȧ,J,[KLM](ψ^I_0,ψ^Ȧ_1,ψ^J_2,ψ^[KLM]_2) . The tensor product of these two representations at parabolic level 2 has the components φ^I_0 ⊗ψ^J_2 , φ^I_0 ⊗ψ^[KLM]_2 , φ^Ȧ_1⊗ψ^Ȧ_1 , φ^J_2⊗ψ^I_0 , φ^[KLM]_2⊗ψ^I_0 which are indeed the number of elements of (N) at parabolic level 2 16· 16 + 16· 560 + 128· 128 + 16· 16 + 560· 16 34816 . Together with the decomposition of parabolic level zero and one above this is the (16)-covariant form of (2). This is a quite remarkable result and shows how the basic representation projects onto a tensor product of two different `spinor'-representations of the single parabolic algebra. However, this has even more consequences to obtain quotients of subrepresentations of (N) which are also projections of the basic representation. Usually, if these quotients by subrepresentations do not correspond to a representation (M) with M<N, they are very hard to identify since they must the subrepresentations must be subrepresentations of the full double parabolic algebra and not only of the single parabolic algebra. However, since we identified (N) for N≤ 2 with the tensor product of Φ and Ψ, the tensor product of subrepresentations of Φ and Ψ are also subrepresentations of (N). The advantage is that the subrepresentations of Φ and Ψ are much simpler to identify, because these are representations of the single parabolic algebra. Let us discuss possible subrepresentations of interest for applications to supergravity in two dimensions. §.§.§ Application to Supergravity in two dimensions Different subrepresentations of (2) and (4) find applications in maximal supergravity in two dimensions. One application is the supersymmetry of the gauge fields of the theory while another application is the decomposition of the embedding tensor in fermion bilinears. We first introduce the and representations for the fields of interest. For each of the two space time indices, the gauge fields of supergravity in two dimensions transform in the basic representation <cit.>. The fermions of the supergravity theory are Φ⊗𝒞_2 with Φ in (<ref>) but without the three form at parabolic level 2 and 𝒞_2 accounts for the two chiralities of spinors in two dimensions <cit.>. The supersymmetry parameter of the theory is ϵ^I≃ψ_0^I ⊗𝒞_2, which is Ψ⊗𝒞_2 in (<ref>) but keeping only the (16)-vector representation at parabolic level zero. Then, schematically the supersymmetry of the vector fields is associated to the homomorphism _2 in (<ref>) which maps onto the tensor product of ϵ^K ⊗Φ^I,Ȧ, J⊗𝒞_2 . Only one factor of 𝒞_2 remains because the two factors of 𝒞_2 are multiplicative. Another interesting object necessary for gauged maximal supergravity in two dimensions is the embedding tensor. 
On the one hand, the embedding tensor takes values in the basic representation of <cit.> on the other hand it decomposes into fermion bilinears. This decomposition is the homomorphism _4 in (<ref>) mapping on the tensor product of the fermions Ψ^K,Ḃ, L⊗Φ^I,Ȧ, J⊗𝒞_2 but with the same (16)-components ψ_k^X = φ_k^X. This is a subrepresentation of (4). In a subsequent paper, we extend this schematic realization of supersymmetry and the decomposition of the embedding tensor to a full description. § REPRESENTATIONS OF DOUBLE PARABOLIC ALGEBRA The representations of the single parabolic algebra are constructed in <cit.> from a Verma module starting as a -representation at parabolic level zero. By the parabolic grading, every -representation in this Verma module generates a subrepresentation. These subrepresentations can be quotiented from the module to obtain finite dimensional representations. This procedure extends to the double parabolic algebra (⊗[u^2n] ⊕ ⊗ u[u^2n])⊗𝒞_2 but in both parameters u and 1≠ r∈𝒞_2. However, r has only a _2 grading and requires additional considerations. The Lie bracket is multiplicative in u and r∈𝒞_2 and has therefore a graded structure in u and a _2 graded structure in r. However, the universal enveloping algebra is not multiplicative in u and r, which equips the universal enveloping algebra with a natural graded structure in u and in r. Therefore, representations of the double parabolic algebra are graded with respect to u and r. To construct these representations we extend the construction in <cit.> to the double parabolic algebra. Therefore, let us use for the single parabolic algebra, which is obtained from by setting r=1 and the universal enveloping algebra of the single parabolic generators with positive power in u is (_+) in <cit.>. This extends for the double parabolic algebra to the universal enveloping algebra of double parabolic algebra with positive power in u or in r (_+) = ( _+· r ⊕ _+ ⊕ Sym( · r )) . A basis of (_+) is given by products of elements of _+· r to the left of _+, which are to the right of symmetric products in · r. Let 𝔅__+ be a PBW basis of (_+) and let 𝔅̃__+ be the set 𝔅__+ but with each generator of _+ multiplied by r. Let 𝔅̃_ be the PBW basis of but with each generator multiplied by r, then a PBW basis of (_+) is given by the ordered products of 𝔅__+𝔅̃__+×𝔅__+×𝔅̃_ . By the construction in <cit.>, the representations of the double parabolic algebra are constructed from the action of (_+) on a -representation at level u^0 and r^0. The parabolic grading in r and u allows to identify infinite many ideals by restricting to a maximal level in u and r of a representation. These ideals allow for finite dimensional representations of , by setting the action of the ideals to zero. For example, restricting to r^0 reduces the representations of the double parabolic algebra to representations of the parabolic algebra . Finite dimensional representation of the parabolic algebra are obtained for example by considering a maximal level u^N. § GENERALIZED SUMMATION It is useful to have a notation of generalized sums over polynomials p(k), where the upper limit can be lower than the lower limit, but the sum remains finite. For a meaningful definition of generalized sum Faulhaber's formula is useful. 
Let n,N∈ and let p∈[k]_N be a polynomial of degree N, then the sum P(n) ∑_k=1^n p(k) ∈ n [n]_N is a polynomial of degree N+1 on the domain n∈ with a zero at n=0 <cit.>.[The full formula for sums over monomials is called the Faulhaber formula ∑_k=1^n k^p 1/p+1∑_r=0^p p+1rB_r n^p+1-r where B_r are the Bernoulli numbers. It was proven by L. Tits in 1923.] The polynomial P extends uniquely to the domain n∈ as a polynomial P̅∈ n[n]_N of degree N+1. For this polynomial we derive an identity, therefore, let n_0∈, 1≤ n≤ n_0, then in P(n) P(n_0) -∑_k=n+1^n_0 p(k), the right hand side is a polynomial P'∈[n]_N+1 on the domain n∈, n≤ n_0. The unique polynomial extension of P' from the domain n∈, n≤ n_0 to n∈ defines P̅', but because n_0 is arbitrary, P̅'(n) P̅(n) for all n∈ and therefore P̅' = P̅. For all n∈_0 this implies the identity P̅(-n) -∑_k=0^n-1p(-k) . Now, we define the generalized sum over the polynomial p of degree N m=1n p(m) P̅(n)∈ n [n]_N as the polynomial extension of ∑_m=1^n p(m) from the naturals to the integers n∈. By the above argument, the generalized sum satisfies for every n∈ k=1-n p(k) - ∑_k=0^n-1 p(-k) k=10 p(k) - k=0-1 p(-k)=0 § DEFINITIONS AND NOTATIONS We introduce some notations and frequently used definitions in this work. Notation sets and families. We write the set of elements with an additional index by {x}_I {x_n n∈ I} {x∈ X}_I {x_n|x∈ X, n∈ I} , but if I is clear from the context, then we also write {x}{x}_I. Embedding and projection. For two vector spaces V,W and a homomorphism ': V→ W we say that V is embedded in W if ' is injective and if ' is surjective we say that W ≃ V/(Ker(') is a projection of V. If the vector spaces are representations of a Lie algebra 𝔩 and ' respects the action of 𝔩 we also call the embedding (resp. projection) a 𝔩-embedding (resp. 𝔩-projection). If it is clear from the context we also avoid the symbol 𝔩 and simply write embedding (resp. projection). Lie algebra structure and fineness. For a Lie algebra 𝔩 with a representation V, a 𝔩-structure of V is a family of 𝔩-representations V_i⊂ V. For two 𝔩-structures A and B, we call A finer than B if B⊂ A. If for a 𝔩-structure A holds B⊂ A for all 𝔩-structures B, we call A the finest 𝔩-structure. Cosocle Filtration and composition series. Let us introduce the cosocle filtration and the infinite composition series in a series of definition and implications. For further insides see also <cit.>. Therefore, let 𝔩 be a Lie algebra with a representation V. * A subrepresentation V≠ W⊂ V is maximal if for any other subrepresentation U∈ V hold that W ⊂ U implies U=W or U=V. ⇔ If W⊂ V is maximal then, V/W is simple. * The radical rad(V) of V is the intersection of all maximal subrepresentations of V. ⇔ The quotient V/rad(V) is maximal semisimple. * The cosocle of V is the quotient by the radical cosoc(V) = V/rad(V). ⇔ The cosocle is the maximal semisimple quotient of V. * The cosocle filtration resp. (radical filtration) of V is inductively defined. V_-1 = V and for i∈_0, V_i is the kernel of the projection of V_i-1 to its cosocle. ⇔ V_0 is the radical of V and for i∈_0, V_i+1 is the radical of V_i. The filtration has finite length if J∈ exists, such that V_j=0 for j>J. Then V_J is semisimple. * A composition series of V is a family of subrepresentations (V_i)_0≤ i≤ n for an n∈ such that V=V_0⊃ V_1⊃…⊃ V_n={0} and V_i/V_i+1 is semisimple. If V_n-1≠{0}, then n is the length of the composition series. 
* An infinite composition series of V are subrepresentations (V_i)_i∈ such that * V_0 = V , V_i⊃ V_i+1 , V_i≠{0}. * The inverse limit of (V_j)_j∈ with the natural embedding V_i+1⊂ V_i is properly defined and lim_⟵ j→∞ V_j 0 * The quotients V_i/V_i+1 are semisimple. This is the proper generalization of <ref>. The cosocle filtration is an infinite composition series if i. and ii. hold. § EXPANSIONS IN FIRST ORDERS We break down commonly used expressions into explicit expansions to make further calculations easier and to gain some intuition about the underlying concepts. Lie algebra homomorphism. The Lie algebra homomorphism from the maximal compact subalgebra to the double parabolic algebra is explicitly on the first 3 loop generators ρ(^α_0) P_0,0^α ρ(^α_± 1) P_0,1^α+2∑_k≥ 1 (∓)^k P_k,1^α ρ(^α_± 2) P_0,0^α+4∑_k≥ 1 (∓)^k k P_k,1^α ρ(^α_± 3) P_0,1^α+2∑_k≥ 1 (∓)^k(1+2k^2) P_k,1^α … ρ(^α_± 1) -2∑_k≥ 1 (±)^k _2k+1,1^α ρ(^α_± 2) -4∑_k≥ 1 (±)^k (2k+1) _2k+1,1^α ρ(^α_± 3) -2∑_k≥ 1 (±)^k(8k^2+8k+3) _k,1^α … Schur Polynomials. Up to index 4, the Schur polynomials are S_n≤-1({x}) 0 S_0({x}) 1 S_1({x}) x_-1 S_2({x}) x_-2 + x_-1^2 S_3({x}) 13 x_-3 + x_-1x_-2 +16x_-1^3 S_4({x}) 14 x_-4 + 13 x_-1x_-3 +18x_-2^2+ 14 x_-1^2x_-2+ 124 x_-1^4 … H-Schur Polynomials. The -Schur polynomials generate the Schur polynomials by action on 1. They are given by S_0^({^α}) 1 , S_1^({^α}) -2^α_1 , S_2^({^α}) 1+2(^α_1)^2 - ^α_2 , S_3^({^α}) -2^α_1-43(^α_1)^3 +2 ^α_1^α_2- 23^α_3 , … Extended -Schur Polynomials. The extended -Schur polynomials S̅^_α,a(n) are satisfy for n≥ 0 the identity S̅^_α,0(2n) = ρ(S^_2n({^α}) and S̅^_α,1(2n+1) = ρ(S^_2n+1({^α})), they are given up to parabolic level 2 by S̅_α,0^(n) 1 + 2n^α_1,0 + (2n+n^2)^α_1,1^α_1,1 + (-2n+n^2) ^α_1,0^α_1,0 + … S̅_α,1^(n) 2(n+1)^α_1,1 + (-2+2n^2)^α_1,1^α_1,0 + … Decomposition of basic representation of . The basic representation of -decomposes into -representations at the different levels. For the first five levels this decomposition is given by (1_)_0 ⊕ ( 248_)_1 ⊕ (1_ ⊕ 248_⊕ 3875_)_2 ⊕ (1_⊕ 2·248_⊕3875_⊕30380_)_3 ⊕ ( 2·1_⊕ 3·248_⊕ 2·3875_⊕30380_⊕27000_⊕147250_)_4 ⊕ … The individual -representation can be further decomposed into the maximal compact subalgebra (16) of 1_ → 1_(16) 248_ → 120_(16)⊕128_(16) 3875_ → 135_(16)⊕1820_(16)⊕1920_(16) 30380_ → 120_(16)⊕1920_(16)⊕7020_(16)⊕8008_(16)⊕13312_(16) … § DETAILS IN PROOFS For the convenience of the interested reader, this appendix provides more details and explicit calculations of various proofs, which are not given in full detail in the main text. Commutation of U∈() with _n^r. The commutator of ^i_n with ∏_l=1^N ^j_l_n_l is the group like action of ^i_n on the universal enveloping of the compact Heisenberg algebra. Therefore, the commutator of ^i_n with a product is the product of the commutators. For A_ij=0, ^i_n and ^j_m commute. It remains to repeatedly evaluate the commutator of _N^i and products of _m^j for A_ij = -1,2. Here, we evaluate the commutator for A_ij = 2. Therefore, the set _N p = (p_1,… ,p_N) ∈_2^N p_l∈{0, 1} , |p| ∑_l=1^N p_l . is useful. In this notation, the commutator is [^i_n , ∏_i=1^N ^i_n_l] (-1)^N (∑_p ∈ P_N(-1)^|p|(^i_n+∑_i=1^N(-1)^p_ln_l) + (-1)^N-1∑_k_1=1^N^i_n_k_1(∑_p ∈ P_N-1 (-1)^|p|^i_n+∑_k_1≠ i=1^n(-1)^p_ln_l) + (-1)^N-2∑_k_1=1^N∑_k_2=k_1+1^N^i_n_k_1^i_n_k_2(∑_p ∈ P_N-1(-1)^|p|^i_n+∑_k_1,2≠ i=1^n(-1)^p_ln_l) + … + (-1)^1∑_k_1=1^N∑_k_2=k_1+1^N…∑_k_N-1=k_N-2+1^N^i_n_k_1^i_n_k_2…_n_k_N-1 (∑_p ∈ P_1 (-1)^|p|^i_n+∑_k_1,2,… N-1≠ i=1^n(-1)^p_ln_l) , where k_1,2,…≠ i means i ≠ k_1, i≠ k_2, ... 
. The commutator for A_ij=-1 has additional factors of -. Proof Lemma <ref>. The evaluation of (<ref>) in the proof of Lemma <ref> is in detail _N (^i) 2/N+1∑_n=1^N+1(_N-2n+1-_nS_N-n+1) ∑_p≥ 0∑_k∈_pδ_(p+N+1)mod2,0∑_n=1^N+12/N+1(∑_n=1^N+1Θ(N-2n+1-deg(k)) -(nk_n/2))d_k(^i)^k ∑_p≥ 0∑_k∈_pδ_(p+N+1)mod2,0Θ(N+1-p) 2/N+1(⌊N+1-p/2⌋ +∑_n=1^N+1(nk_n/2))d_k(^i)^k ∑_p≥ 0∑_k∈_pδ_(p+N+1)mod2,0Θ(N+1-p) 2/N+1(N+1-p/2 +p/2)d_k(^i)^k ∑_p≥ 0∑_k∈_pδ_(p+N+1)mod2,0Θ(N+1-p) d_k(^i)^k . Proof Proposition <ref>. To prove the commutation of _m^α with _n^β on (N), we argued with the commutation relations of the basic representation this translates to the commutation relations on (N). For the readers convenience, we illustrate this in more detail here. Let us define a map _N from the basic representation onto the (N) by _N(e^γ⊗ f({α})) _f^N,() _γ for γ∈ and f a polynomial. By the definition of the generating elements _f^ the homomorphism _N commutes with the compact Heisenberg algebra. By the action of _m^α on _γ (<ref>) and by its action on the maximal states (<ref>), _N commutes with _n^α on maximal states. Additionally, by construction _m^α and the parabolic Heisenberg algebra satisfy the commutation relations. This is enough to prove the commutation relations of _m^α and _n^β on _γ _m^α_n^β _γϵ(β,γ)(_m^α S^_-n-(β|γ)-1_N(e^γ+β⊗ 1)+_m^α S^_n+(β|γ)-1_N(e^γ-β⊗ 1)) Here, let us just consider the first term in the sum, the other term works the same. We use the commutation relations of _m^β and the compact Heisenberg algebra in (<ref>) with J' = J(S^_-n-(β|γ)-1), U' = U(S^_-n-(β|γ)-1). Then, the first term becomes ∑_j=1^J'U'^α_j _m^α_N(e^γ+β⊗ 1) _N(∑_j=1^J'U'^α_j _m^α e^γ+β⊗ 1) _N(_m^α_n^β e^γ⊗ 1) . The same argument holds for the second term in (<ref>) as well. Therefore, for all γ∈ (_n^α_m^β - _m^β_n^α - [_n^α , _m^β])_γ_N((_n^α_m^β - _m^β_n^α - [_n^α , _m^β])e^γ⊗ 1) 0 . This proves the third step of the proof of Proposition <ref> in detail.
http://arxiv.org/abs/2407.13421v1
20240718114326
CycleMix: Mixing Source Domains for Domain Generalization in Style-Dependent Data
[ "Aristotelis Ballas", "Christos Diou" ]
cs.CV
[ "cs.CV" ]
Aristotelis Ballas (aballas@hua.gr) and Christos Diou (cdiou@hua.gr), Department of Informatics and Telematics, Harokopio University, Omirou 9, Tavros, Athens, Greece

§ ABSTRACT As deep learning-based systems have become an integral part of everyday life, limitations in their generalization ability have begun to emerge. Machine learning algorithms typically rely on the i.i.d. assumption, meaning that their training and validation data are expected to follow the same distribution, which does not necessarily hold in practice. In the case of image classification, one frequent reason that algorithms fail to generalize is that they rely on spurious correlations present in training data, such as associating image styles with target classes. These associations may not be present in the unseen test data, leading to significant degradation of their effectiveness. In this work, we attempt to mitigate this Domain Generalization (DG) problem by training a robust feature extractor which disregards features attributed to image-style but infers based on style-invariant image representations. To achieve this, we train CycleGAN models to learn the different styles present in the training data and randomly mix them together to create samples with novel style attributes to improve generalization. Experimental results on the PACS DG benchmark validate the proposed method[Code available at: <https://github.com/aristotelisballas/cyclemix>.].

CycleMix: Mixing Source Domains for Domain Generalization in Style-Dependent Data Aristotelis Ballas, Christos Diou July 22, 2024 =================================================================================

§ INTRODUCTION The past few years have been marked by an explosion in the use of Artificial Intelligence systems. Spanning from industry <cit.>, medicine <cit.>, academia <cit.> and even general public use <cit.>, AI systems seem to have established themselves in our daily lives. However, despite their success, widely used and state-of-the-art models still fail to exhibit generalizable attributes when evaluated on data that do not adhere to the i.i.d. assumption. During their training, prominent neural network architectures, such as deep convolutional neural networks (CNNs), often learn to infer based on spurious correlations present in the data (e.g. backgrounds, features attributed to the image style, etc.) and not truly class-representative properties <cit.>. Domain Generalization <cit.> attempts to provide insight into the above issues by building models which are trained on multiple source data domains but are able to generalize to previously unseen data (target data domains).
In our work, we aim to produce a model that maintains its performance on test image data distributions with different styles than the ones present in the training distribution. We therefore propose augmenting the styles of a model's training images and creating novel style image domains which could push a CNN to extract meaningful and domain-invariant representations. Our initial findings are validated on PACS <cit.>, a widely-used DG benchmark, which contains images from 4 different style domains. To this end, we: * Train domain translational Generative Adversarial Networks (GANs) for capturing the style attributes of each source domain, * Randomly mix the styles present in the source domains and produce images from novel style domains, and * Validate our method on a widely-used publicly available DG dataset. In the sections to come, we briefly: introduce the DG problem setup along with all relevant notations, reference the most important works in DG, present the experimental setup and results and, finally, conclude our paper.

§.§ Domain Generalization Let 𝒟 := {𝒟_i}_i=1^S be a set of S Source training domains over an input space 𝒳. We then observe n_i training data points from domain 𝒟_i, each consisting of an input 𝐱^(i)_j and label y^(i)_j, i.e. (𝐱^(i)_j, y^(i)_j) ∼𝒟_i. Similarly, let T := {T_i}_i=1^T be a set of T unknown Target domains, while we assume that there exists a single global labeling function h(x) that maps input observations to their labels, y_j^(i) = h(𝐱_j^(i)), for all domains i and samples j. The aim of Domain Generalization (DG) is to produce a model with parameters θ∈Θ which generalizes to both source domains 𝒟 and unseen target domains 𝒯.

§ RELATED WORK Domain Generalization has emerged as one of the most difficult problems in ML today, finding applications in multiple, varying fields. DG methods can be broadly categorized into the following groups <cit.>: * Data manipulation: applied algorithms focus on producing generalizable models by increasing the diversity of existing training data <cit.>. * Representation learning: given a predictive function h, which can be decomposed as h = f ∘ g into a representation learning function g and f, methods in this group focus on improving the feature extraction capabilities of g. The most common approach is regularizing the loss functions <cit.> or manipulating existing neural network architectures <cit.>. * Learning strategy: the problem of DG has also been studied under numerous machine learning paradigms, such as self-supervised learning, meta-learning, gradient operations, ensemble-learning, etc <cit.>. Our proposed method falls under the first category of data manipulation methods. The use of data augmentation methods to improve model generalizability has certainly been explored in the past. Most notably, the authors of <cit.> explore the benefits of applying random convolutions on training images and using them as new data points during model training. Several works have also employed adaptive instance normalization (AdaIN) <cit.> for transferring styles between data samples. For example, SagNets <cit.> aim to make predictions on the content of an image and disregard features attributed to image-style by training style-biased networks, while <cit.> trains a model to learn robust colorization techniques for improved model robustness.
In another interesting work, the authors of <cit.> propose MixStyle, an algorithm that mixes the styles of training instances in each mini-batch to increase the sample diversity of source domains. Another method that produces surprisingly good results is Mixup <cit.>, which improves model robustness by training the network on convex combinations of pairs of examples and their labels. Further common augmentation methods or regularization strategies include CutMix <cit.> and Cutout <cit.>, where patches of images are, respectively, either cut and pasted among training samples or dropped entirely. In our work, we propose capturing the style attributes of each domain by employing translational Generative Adversarial Networks rather than relying on the features present in each separate sample. By mixing the style attributes of each domain we are able to create completely novel samples in each mini-batch and improve the robustness of a vanilla feature extractor.

§ METHODOLOGY In this section we provide a brief overview of the CycleGAN algorithm, which was utilized in our research for translating images between domains. Once defined, we present the proposed methodology of CycleMix for synthesizing images with novel styles.

§.§ Cycle-Consistent Adversarial Networks CycleGANs <cit.> were initially proposed for learning translational image mappings between two domains X and Y, in the absence of paired examples. Formally, let D_1 and D_2 be source domains for which we aim to learn a mapping G: D_1 → D_2, such that the distribution of images drawn from G(D_1) is identical to the distribution of images from D_2. The novelty of CycleGANs lies in the addition of an inverse mapping F: D_2 → D_1 and a Cycle Consistency Loss, on top of the already established Adversarial Loss of GANs. The Cycle Consistency Loss forces the model to reconstruct a translated image back into its original domain.

§.§ CycleMix In our work, we leverage CycleGANs to learn style-mappings between source domains, in the context of Domain Generalization. Our intuition is that by randomly mixing the styles present in source domains and thus synthesizing novel samples, a feature extractor can derive robust representations and learn the features which remain invariant across styles. Specifically, given S source domains we train S(S-1)/2 CycleGANs for learning all the possible domain mappings (a single CycleGAN also contains the trained model yielding the inverse mapping between the two domains). An indicative example of translated images between domains is presented in Fig. <ref>. Having captured the styles of each source domain with the above mappings, we use them as augmentation functions on training data. Specifically, given an image 𝐱^(i) drawn from a source domain D_i, we translate the image to the remaining source domains and then randomly blend the translations into a single final image. The entire augmentation operation is as follows: 𝐱^(i)' = 𝐱^(i) + ∑_{j=1, j ≠ i}^S a_j · G_ij(𝐱^(i)) where G_ij is the GAN for translating images from domain i to domain j and a_j a random parameter corresponding to the magnitude of style mixing for each source domain[The parameters a_j add up to 1 and are randomly sampled in each minibatch iteration.]. After this mixing operation, each image is normalized (i.e., the same preprocessing is applied as for the vanilla model input). In our experiments we only randomly augment half of the images present in a mini-batch to preserve the information provided by the initial source domains.
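To make the mixing operation above concrete, the following minimal PyTorch-style sketch implements the equation as written. The generator objects, the per-domain bookkeeping, and the function names are illustrative assumptions rather than the authors' actual implementation; the frozen CycleGAN generators are assumed to accept image tensors of matching shape.

import torch

def cyclemix_augment(x, generators, alphas=None):
    # x          : image tensor drawn from source domain i
    # generators : frozen CycleGAN generators G_ij, one per remaining domain j != i
    # alphas     : mixing weights a_j; if None, sampled so that they sum to 1,
    #              as stated in the footnote above
    if alphas is None:
        alphas = torch.rand(len(generators))
        alphas = alphas / alphas.sum()
    mixed = x.clone()
    with torch.no_grad():  # generators are only used for augmentation
        for a_j, g_ij in zip(alphas, generators):
            mixed = mixed + a_j * g_ij(x)  # x' = x + sum_j a_j * G_ij(x)
    return mixed  # normalization is applied afterwards, as in the vanilla pipeline

def augment_half_batch(batch, generators_per_domain, domain_ids):
    # Apply CycleMix to roughly half of the mini-batch, keeping the rest unchanged.
    keep = torch.rand(batch.size(0)) < 0.5
    out = batch.clone()
    for idx in torch.nonzero(~keep).flatten():
        d = int(domain_ids[idx])
        out[idx] = cyclemix_augment(batch[idx], generators_per_domain[d])
    return out

In a real training loop the mixed images would then be normalized and passed to the feature extractor together with the untouched half of the mini-batch.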
An example of style mixed images, along with an illustration of our proposed method, is provided in Fig. <ref>. After augmentation, we pass the mixed images through a feature extractor and train a classifier to predict their final labels.

§ EXPERIMENTS §.§ Experimental Setup To validate our approach, we use the publicly available and widely-adopted PACS <cit.> dataset. PACS contains images from 4 separate style domains. As its name suggests, samples can originate from the Photo, Art Painting, Cartoon or Sketch domain and belong to one of 7 classes. As in standard DG experimental setups, we follow the leave-one-domain-out cross-validation protocol <cit.>, meaning that in each training iteration we train a model on 3 source domains and evaluate on the single remaining target domain. For our experiments, we use the DomainBed <cit.> codebase and train a ResNet-50 feature extractor on a single NVIDIA A100 GPU card.

§.§ Results To evaluate the effectiveness of CycleMix as an augmentation approach, we compare its performance against established techniques such as CUTOUT, CUTMIX and MIXUP. We also use SagNets as a baseline to demonstrate the efficacy of our method, along with a vanilla ResNet-50[Standard image augmentations are used during data loading for each method, such as random resized crop, random horizontal flip, color jitter and normalization.]. The results of our experiments are presented in Table <ref>. It is apparent that the proposed method has a clear advantage over the baselines, as it surpasses the second best performing model by an average of around 2%. With the exception of Photo, CycleMix yields the best results in every other target domain.

§ CONCLUSION In this work we propose CycleMix, a method aimed at alleviating the problems posed by style-biased predictions in the DG setting. We argue that by mixing the different styles present in a convolutional neural network's training data, the model can be pushed to focus on invariant features and extract robust representations. The above claim is supported by results on PACS, a dataset containing images from 4 distinct style distributions, where our method surpasses previously proposed algorithms. However, one of the key limitations is that an increase in the number of source domains corresponds to an increased number of trained CycleGANs, which can prove computationally infeasible. In future work, we intend to train StarGAN models, which were proposed for multi-domain image-to-image translation, and to explore our methodology on additional datasets.

The work leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 965231, project REBECCA.
http://arxiv.org/abs/2407.13040v1
20240717221342
Turkish Delights: a Dataset on Turkish Euphemisms
[ "Hasan Can Biyik", "Patrick Lee", "Anna Feldman" ]
cs.CL
[ "cs.CL" ]
Turkish Delights: a Dataset on Turkish Euphemisms Hasan Can Biyik, Patrick Lee, Anna Feldman July 22, 2024 ============================================================

§ ABSTRACT Euphemisms are a form of figurative language relatively understudied in natural language processing. This research extends the current computational work on potentially euphemistic terms (PETs) to Turkish. We introduce the Turkish PET dataset, the first available of its kind in the field.[The dataset is available at <https://github.com/hasancanbiyik/Turkish_PETs>] By creating a list of euphemisms in Turkish, collecting example contexts, and annotating them, we provide both euphemistic and non-euphemistic examples of PETs in Turkish. We describe the dataset and methodologies, and also experiment with transformer-based models on Turkish euphemism detection by using our dataset for binary classification. We compare performances across models using F1, accuracy, and precision as evaluation metrics.

§ INTRODUCTION Euphemisms are polite or indirect words or expressions used in substitution of unpleasant or more offensive ones. They can be used to show kindness while discussing sensitive or taboo topics <cit.>, such as saying between jobs instead of unemployed, or as a way to make unpleasant or unappealing things sound less harsh <cit.>, such as saying passed away instead of died. Similar to the word died in English, Turkish makes use of many substitutions for the word öl-mek/öl-dü (to die/died), which is considered unpleasant. The substitutions for this word include vefat etmek (to pass away), öbür dünyaya göçmek (to migrate to the other world), and hakkın rahmetine kavuşmak (to go to kingdom come). Euphemisms can be used to conceal the truth <cit.>; for instance, if one were to use the expression enhanced interrogation techniques, one would mean torture <cit.>. Furthermore, humans may not agree on what a euphemism is <cit.>. There are various challenges regarding euphemisms. For instance, in some cases, words or expressions might develop or lose euphemistic meanings over time <cit.>. Due to the aforementioned reasons, the words and phrases in this research will be referred to as potentially euphemistic terms (PETs) <cit.>.

Euphemisms pose a challenge to Natural Language Processing (NLP) due to this figurative behavior, as they might also have a non-euphemistic interpretation in certain contexts. For example, while the Turkish PET mercimeği fırına vermek literally means to put the lentils in the oven, it could euphemistically mean to have sex/to get someone pregnant. In the following sentence, this PET is used literally: “Günümüzde hem <mercimeği fırına vermek> daha kolay, hem de fırında makarna yemek...” which can be translated as “Nowadays, it's easier to <put the lentils in the oven> and to eat mac and cheese…” However, it is used euphemistically in the following sentence: “Gel gör ki kasabanın yegane doktoru ile pişiren bu kadın, zaman zaman <mercimeği fırına veriyorlarmış>” which can be translated as “However, it turns out that this woman, who is having an affair with the town's only doctor, sometimes <puts the lentils in the oven>”, meaning that the doctor and the woman are secretly involved in a sexual relationship. Euphemism detection in Turkish poses several challenges. Firstly, as far as we are aware, there are no available datasets for the automatic euphemism detection task in Turkish.
Academic research, published books, articles, and other resources on this topic are very limited, making the collection of PETs difficult. In this research, we aim to identify PETs in Turkish and create a dataset of Turkish PETs by making use of native-speaking Turkish annotators who have a linguistics background. We aim to fine-tune language models (LMs) such as BERTurk <cit.> and ELECTRA <cit.>, and large language models (LLMs) such as XLM-RoBERTa <cit.> and mBERT <cit.>, for euphemism detection in Turkish. Therefore, the significant contributions of this paper are as follows: * Introduction of the Turkish PETs dataset, which we plan to make publicly available later, * Overview of the Turkish PETs and how they were collected and annotated, * Comparison of the performances of XLM-RoBERTa, mBERT, BERTurk, and ELECTRA in detecting PETs in Turkish, using F1, accuracy, and precision as evaluation metrics, * Comparison of PETs in Turkish with those in other languages, analyzing potentially interesting patterns. Additionally, through extending the euphemism detection task to a new language, we contribute to a better understanding of how euphemisms are utilized and interpreted across different linguistic and cultural contexts.

§ TURKISH LANGUAGE Agglutinative languages, such as Turkish, form words by adding multiple affixes to a stem, with each affix representing a distinct morphological feature <cit.>. This morphological productivity creates a vast number of possible word forms, making it difficult to develop comprehensive dictionaries or rule-based systems for tasks like euphemism detection. For instance, the PET hayata gözlerini yummak (to close one's eyes to life) can appear as yum-du, yum-muş, yum-duğunda, and many other variations. See Table <ref> for more examples regarding morphological variations. The free word order in Turkish, where the position of words in a sentence can vary without significantly changing the meaning <cit.>, poses another challenge for euphemism detection. This flexibility makes it difficult to rely on fixed patterns or word sequences to identify euphemisms. For example, the PET uyutmak (to put to sleep) can appear in various positions within a sentence, making it harder to detect reliably. Similar to euphemisms in other languages, the meanings of words and expressions are context-dependent in Turkish. While one word can be used euphemistically in one sentence, it might not have a euphemistic meaning in another. For instance, the PET engelli might be used euphemistically to indicate that the person is disabled, but it might also have its non-euphemistic meaning of blocked. Moreover, Turkish is considered to be a low-resource language because of the limited availability of annotated datasets. It was also stated by various researchers that collecting data from various sources and labeling them was a challenging process <cit.>. Since there was no available dataset that contained euphemisms in Turkish with examples, it was necessary for us to build a dataset and get it annotated by native Turkish annotators.

§ AUTOMATIC EUPHEMISM DETECTION Euphemism detection can be viewed as a classification task in which an input text is classified as containing a euphemism or not. While this can theoretically be done at the phrase or sentence level, previous work has focused on classifying examples containing specific multi-word expressions, which may or may not be used euphemistically depending on the context <cit.>.
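To make this framing concrete, the following minimal sketch shows how a PET context could be represented as a labeled example and paired with a pre-trained encoder for binary classification. The field names, hub identifiers, and helper code are illustrative assumptions rather than details taken from the released dataset or the authors' pipeline.

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# One euphemistic and one literal context for the same PET, quoted from the
# introduction; labels follow the annotation convention used later
# (1 = euphemistic, 0 = non-euphemistic).
examples = [
    {"pet": "mercimeği fırına vermek",
     "text": ("Gel gör ki kasabanın yegane doktoru ile pişiren bu kadın, "
              "zaman zaman <mercimeği fırına veriyorlarmış>"),
     "label": 1},
    {"pet": "mercimeği fırına vermek",
     "text": ("Günümüzde hem <mercimeği fırına vermek> daha kolay, "
              "hem de fırında makarna yemek..."),
     "label": 0},
]

# Any of the encoders compared in this work could back the classifier; the
# hub name below (intended to point at BERTurk) is an assumed identifier.
MODEL_NAME = "dbmdz/bert-base-turkish-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

batch = tokenizer([e["text"] for e in examples],
                  padding=True, truncation=True, return_tensors="pt")
logits = model(**batch).logits  # shape (2, 2); fine-tuning is required before
                                # these predictions are meaningful

XLM-RoBERTa, mBERT, or the Turkish ELECTRA discriminator discussed below could be swapped in by changing MODEL_NAME.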
A number of approaches have performed decently at the task using language models such as transformers, improving upon baselines using various techniques. For example, <cit.> use an ensemble of models each utilizing a combination of data and contextual augmentations to improve performance by 5 Macro-F1 points. <cit.> achieve similar improvements by incorporating non-euphemistic meanings and image embeddings associated with PETs. <cit.> propose a prompt-based approach for euphemism detection utilizing the language model RoBERTa, achieving an F1 score of 85.2%, demonstrating the effectiveness of prompt-based learning. Similar to our initial dataset, which contained more than 6,000 examples, the dataset they used was imbalanced and had more euphemistic examples than non-euphemistic. They noted the model's superior performance on euphemistic sentences compared to non-euphemistic ones due to this imbalance. Given the nuanced nature of these expressions in the Turkish language and the lack of previous work on figurative language processing in Turkish, this study aims to investigate how well different language models identify and categorize PETs in Turkish. We fine-tuned two large multilingual models, XLM-RoBERTa and mBERT, along with language models specifically trained on extensive corpora of Turkish text data: bert-base-turkish-cased and electra-base-turkish-cased-discriminator. These models were chosen to examine the impact of model size, training data, and architecture on euphemism detection performance. We hypothesized that XLM-RoBERTa and mBERT would provide strong general language understanding capabilities, as large multilingual models are trained on vast amounts of diverse data. On the other hand, bert-base-turkish-cased and electra-base-turkish-cased-discriminator, being specifically trained on Turkish text, were hypothesized to capture more nuanced aspects of euphemistic language in Turkish due to their exposure to a wider range of Turkish expressions and linguistic patterns. Our focus on the Turkish language addresses a gap in existing research, as most previous studies have primarily concentrated on English euphemisms <cit.>. By extending the euphemism detection task to a new language, we contribute to a better understanding of how euphemisms are utilized and interpreted across different linguistic and cultural contexts. The recent Multilingual Euphemism Detection Shared Task by <cit.> has encouraged researchers to explore multilingual and cross-lingual methods for identifying euphemisms. This research emphasizes the importance of understanding euphemisms in different languages. § DATA COLLECTION AND ANNOTATION §.§ Data Collection To find PETs in Turkish, we analyzed the PETs in other languages described in previous work <cit.>, such as American English, Mandarin Chinese, Yorùbá, and a mix of Spanish dialects to see whether there were overlapping words or expressions used euphemistically (see Table <ref>). As a result, we were able to compile an initial list of Turkish PETs. Through reviewing published articles and papers related to euphemisms in Turkish, such as those by <cit.>, we expanded our list of PETs. Another method we used to collect PETs was by posting polls on social media. Initially, we explained the concept of "PETs" and provided examples. We then utilized social media to share these polls, where Turkish native speakers could share their ideas for new PETs. As a result, our Turkish PETs list now comprises a total of 122 entries. 
We also included detailed information for each PET, such as euphemistic category (e.g. bodily functions), meaning, non-euphemistic meaning, literal translation, and the source it was from. The list is categorized into 10 groups with varying frequencies, which can be seen in Table <ref>. These categories were created based on the characteristics of the PETs. For example, the PET "görme engelli" (visually impaired) is related to physical attributes, and therefore it was added to the "physical/mental attributes" category. Once the PETs list was finalized, we utilized a Turkish corpus known as the TS Corpus Project <cit.>. We selected TS Corpus v2 and the TS Timeline Corpus. TS Corpus v2 drew from the BOUN Web Corpus and included 491,360,398 tokens and 4,950,407 word types. The TS Timeline Corpus contained more than 700 million tokens and over 2.2 million news items and articles. To search for texts containing PETs for binary classification purposes, we utilized regular expressions, accounting for the agglutinative nature of the Turkish language. This approach allowed us to capture various word forms effectively. For instance, for the PET hamileliği sonlandırmak (to terminate pregnancy), we designed a regular expression to detect all variations of hamile-lik (pregnancy), hamile-liğini (her pregnancy), hamile-liğimi (my pregnancy), sonlan-dırdı (terminated/has terminated), sonlan-dıracakmış (I heard that she will terminate), sonlan-dıramadı (she could not terminate), etc.: r"(hamileli\w+ sonlan\w+)". As a result, variations of each PET were successfully captured (a minimal sketch of this matching step is given at the end of this passage). These captured PETs were extracted and highlighted within their sentences using brackets, as shown: “Duyduğuma göre arkadaşı <hamileliğini sonlandırmış>.” (I heard that her/his friend has <terminated her pregnancy>.) Additionally, we included preceding and succeeding sentence(s), if available, to form the entire example context for that PET. These contexts usually consisted of four sentences at most. Not all PETs on the initial list were found in the corpus; of the 122, only 58 were found and have at least one example. These examples were then compiled for the annotation phase.

§.§ Annotation Annotators were provided text examples (∼1-4 sentences) of PETs in context, as can be seen in Table <ref>. To recruit Turkish annotators, we utilized social media platforms to find volunteers with a background in linguistics or an interest in the field. After several informational meetings, the annotators were briefed about the research purpose, the annotation process, and the concept of PETs. These meetings were recorded with the consent of the annotators. They were instructed to label the examples as "1" if the highlighted word or expression was used euphemistically, and as "0" if it was not. Following the completion of all annotations, an additional meeting was held to address any disagreements. During this discussion, some labels were revised. Notably, examples that received conflicting labels from the annotators (euphemistic by two and non-euphemistic by the other two) had to be excluded from the dataset. This underscored the inherent challenges humans face in consistently interpreting whether a word or expression is used euphemistically. For the annotation task, we divided the volunteers into five groups, with each group comprising three annotators.
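As referenced in the corpus search description above, the following is a minimal sketch of the regular-expression matching step. The corpus is assumed to be an iterable of raw text documents, the sentence splitting is deliberately naive, and only the pattern from the hamileliği sonlandırmak example is shown.

import re

# Pattern from the example above; \w+ absorbs the agglutinative suffixes.
PATTERN = re.compile(r"(hamileli\w+ sonlan\w+)")

def find_pet_contexts(documents, pattern=PATTERN):
    """Yield sentences in which the PET occurs, with the match bracketed."""
    for doc in documents:
        for sentence in doc.split("."):  # naive sentence split, for illustration only
            match = pattern.search(sentence)
            if match:
                highlighted = sentence.replace(match.group(1), f"<{match.group(1)}>")
                yield highlighted.strip() + "."

# In the actual pipeline, neighbouring sentences (up to roughly four in total)
# are also kept to form the full example context handed to the annotators.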
The first group annotated 975 examples, the second group annotated 1200 examples, the third group annotated 1300 examples, the fourth group annotated 1099 examples, and the fifth group annotated 1500 examples. As a result, there were 6,074 annotated examples at the end of the annotation task. Subsequently, each group's examples were annotated by one annotator from another group; for instance, an annotator from the first group annotated the second group's examples, and so on, ensuring each example was annotated by four different people. Throughout this process, examples with discrepancies were highlighted for further discussion during a recorded meeting with the available annotators. Disagreements were resolved by majority vote to finalize the labels. However, examples receiving split decisions (two annotators labeling euphemistic and two labeling non-euphemistic) were removed from the dataset. Sample examples and their final annotated labels can be found in Table <ref>. While each example ultimately had four separate annotations, the annotators were allowed to collaborate and influence each other's opinions, nullifying potential inter-rater agreement analyses. We instead conducted an inter-rater agreement analysis on a subset of 396 examples, labeled by two annotators who primarily worked separately. Cohen's kappa for these two raters was 0.696, which is rated as moderate to substantial agreement <cit.>. Interestingly, Krippendorff's alpha was 0.693, which is higher than, but still largely comparable to, the degrees of agreement reported for euphemism datasets in <cit.>.

§.§ Balanced Dataset For our text classification experiments, we sampled a portion of the main dataset. This was because some PETs had a disproportionately high number of examples compared to others, or a very skewed label imbalance (e.g., 100 euphemistic instances and 1 non-euphemistic). These factors were not ideal for text classification, and we wanted to assess models' abilities to classify texts for a variety of different PETs with different labels. Therefore, we randomly sampled a maximum of 40 euphemistic and 40 non-euphemistic examples for each PET. In addition, some PETs, such as apartman görevlisi (apartment attendant), inme (landing), and toplu (bulk), were never used euphemistically in their annotated examples, so we chose not to select those. The final result was a subset of 908 instances (521 euphemistic and 387 non-euphemistic) used for the euphemism detection task.

§.§ Dataset Statistics We conducted a detailed statistical analysis of both the main and balanced datasets to better understand their differences and characteristics. Firstly, we provide the distribution of sensitive topics in Table <ref>. This table categorizes PETs into various groups, such as bodily functions, death, employment/finances, illness, miscellaneous, physical/mental attributes, politics, sexual activity, substances, and social topics. Each category is accompanied by the count of entries and examples of PETs within that category. Table <ref> further highlights key metrics such as average sentences per example, number of tokens, and lexical density. Notably, we also compute a "PET ambiguity" score, which measures the degree of ambiguity, or class balance, for the examples of a particular PET. For each PET, this was computed as follows: 1 - |N_euph - N_noneuph| / (N_euph + N_noneuph), where N_euph and N_noneuph are the numbers of euphemistic and non-euphemistic examples for that PET, respectively. Higher values indicate a higher degree of ambiguity.
For example, if there were 5 euphemistic and 5 non-euphemistic examples of a particular PET, then it is maximally ambiguous (score = 1); if there were 10 euphemistic examples and 0 non-euphemistic, then the PET is not ambiguous at all (score = 0). We compute the average ambiguity score across all PETs in the main and balanced datasets for comparison. As expected, the main dataset has a significantly lower ambiguity score (0.076) compared to the balanced dataset (0.46), suggesting more consistent usage of terms in either euphemistic or non-euphemistic contexts and confirming that the balanced dataset is better suited for the euphemism detection task.

§ METHODOLOGY §.§ Experiments Since one of our goals was to extend the euphemism detection task to Turkish, classification experiments were conducted. Therefore, transformer-based models whose pre-training data covers Turkish, such as XLM-RoBERTa and mBERT, were chosen due to their capability of capturing and understanding linguistic nuances. The balanced dataset described in the previous section was then randomly split into training (80%), testing (10%), and validation (10%) sets, resulting in 726 examples for training and 91 examples each for testing and validation. The 80-10-10 split is a common practice in machine learning for dividing a dataset into training, validation, and testing sets. The fine-tuning process involved training each model on our prepared dataset for a maximum of 30 epochs with a learning rate of 1e-5 and a batch size of 4. We employed early stopping with a patience of 5 to prevent overfitting. No layers were frozen during fine-tuning, allowing the models to adapt fully to the euphemism detection task. Hyperparameter optimization was not explicitly performed in this initial exploration; however, the chosen hyperparameters are common for fine-tuning BERT-based models. The primary metric for evaluating model performance during training and validation was the macro-averaged F1 score, a balanced measure of precision and recall that is suitable for binary classification tasks with potentially imbalanced classes. The fine-tuned models were then evaluated on the held-out test sets, and their performance was assessed using various metrics, including accuracy, precision, recall, and F1 score.

§.§ Results We gathered the results on the test sets of each model and calculated the average of 20 trials (different train-validation-test splits). The findings demonstrated that the monolingual models (bert-base-turkish-cased and electra-base-turkish-cased-discriminator) outperformed the multilingual models (BERT-Base-Multilingual-Cased and XLM-RoBERTa). This suggests that for automatic euphemism detection in Turkish, models specifically pre-trained on Turkish text data have an advantage due to their familiarity with the nuances of the language. Additionally, the ELECTRA architecture appears to be slightly more effective for this task than the BERT architecture, as evidenced by the higher scores of electra-base-turkish-cased-discriminator compared to bert-base-turkish-cased. This could be attributed to the discriminator's ability to distinguish between original and replaced input tokens during pre-training, which might be beneficial in identifying the subtle differences between euphemistic and non-euphemistic expressions. The results obtained from the models can be seen in Table <ref>. The findings of this research have several potential real-world applications.
The developed models could be integrated into NLP tools for automatic euphemism detection in various types of text data, including social media posts, news articles, and other online content. This could be particularly valuable in fields such as social media monitoring, providing insight into public sentiment, opinions, and attitudes towards sensitive topics. For content moderation, flagging potentially harmful or offensive content that uses euphemisms to disguise its true intent could be beneficial for online platforms and communities seeking to maintain a respectful and safe environment. Moreover, the cross-lingual capabilities of the models demonstrated in this study open up possibilities for developing euphemism detection systems for low-resource languages, where labeled data might be limited. This could contribute to a more inclusive and equitable representation of different languages and cultures in NLP research and applications.

§ CONCLUSION AND FUTURE WORK In this study, we created a Turkish PETs dataset from scratch and, utilizing this dataset, investigated the effectiveness of various language models in identifying and categorizing euphemisms in Turkish. Our findings indicate that large multilingual models, particularly XLM-RoBERTa, provide strong general language understanding, suggesting the potential of cross-lingual transfer learning for capturing euphemistic nuances. However, for the Turkish language specifically, models trained on Turkish text data, such as bert-base-turkish-cased and electra-base-turkish-cased-discriminator, demonstrated superior performance, emphasizing the importance of language-specific training for this task. Future research could investigate the impact of model size, architecture, and training data on euphemism detection performance. Additionally, exploring the use of explainability techniques could provide valuable insights into the decision-making processes of these models, helping us better comprehend the specific linguistic features they rely on for euphemism detection. Experimenting with different model architectures or training techniques might also further improve the performance of euphemism detection systems in Turkish. Furthermore, expanding the dataset to include a wider range of euphemisms and exploring their application in downstream tasks like sentiment analysis and content moderation could be useful for future work. It is important to acknowledge that the results are based on a limited dataset and may not generalize to all types of euphemisms in Turkish. Future work could involve testing the models on a larger and more diverse dataset to confirm these findings. Lastly, exploring the cross-lingual transferability of euphemism detection models trained on Turkish data to other languages, similar to the work done in <cit.>, would provide valuable insights. This could involve fine-tuning multilingual models on Turkish euphemisms and evaluating their performance on other languages. As highlighted in <cit.>, the ambiguity of potentially euphemistic terms (PETs) is a major challenge; therefore, future work could focus on developing methods to disambiguate PETs and distinguish between their euphemistic and non-euphemistic usages more effectively.
§ LIMITATIONS While this study highlights the potential of language models in euphemism detection in Turkish, the results are based on a limited dataset that may not encompass the full spectrum of euphemistic language usage in Turkish, potentially affecting the generalizability of our findings.

§ ETHICS STATEMENT The authors foresee no ethical concerns with the work presented in this paper.

§ ACKNOWLEDGMENTS Thanks to the annotators (Kader Teke, Devran Sarısu, Sümeyye Sena Şahin, Fitnat Filiz Bal, Kübra Aksoy, Ecem Küçükler, Azra Almira Kılıç, Özge Bilik, Mihriban Kandemir, Nazan Demir, Şüheda Nur Ünal, Özlem Özer, Salih Hamza Küpeli), it was possible for us to create this dataset quickly. This material is based upon work supported by the National Science Foundation under Grant No. 2226006.