entry_id: http://arxiv.org/abs/2407.12884v1
published: 2024-07-16 19:08:49
title: SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification
authors: [ "Jingyi Shen", "Yuhan Duan", "Han-Wei Shen" ]
primary_category: cs.LG
categories: [ "cs.LG", "cs.AI", "cs.CV", "cs.GR", "cs.HC" ]
SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification
Jingyi Shen, Yuhan Duan, Han-Wei Shen
=========================================================================

§ INTRODUCTION

In fields like fluid dynamics, climate, and weather research, ensemble simulations have become critical for estimating uncertainty, improving decision-making, and enhancing the scientific understanding of complex systems. To fully study scientific phenomena and analyze the sensitivity of simulation parameters, scientists often need to conduct a sequence of simulations and compare the results generated from different configurations. However, scientific simulations can be computationally expensive and time-consuming. To speed up the simulation, analysis, and knowledge discovery cycle, a major research effort has been put into developing simulation and visualization surrogates that approximate the results of complex and computationally expensive simulations at much reduced cost. Many deep learning-based surrogate models have been proposed in scientific visualization to assist parameter space exploration for ensemble simulations <cit.>. Although they have begun to gain traction, several limitations remain. The first limitation is the lack of uncertainty quantification for surrogate models. Because surrogate models are approximations of the actual simulations, they inherently contain uncertainties; it is important to convey the uncertainties associated with a surrogate model's outputs so that scientists know how much they can trust its predictions. Uncertainty quantification also provides insight into how sensitive the surrogate model's predictions are to changes in the simulation parameters. This sensitivity information is crucial because it guides where to focus further exploration, for example, by running additional simulations in regions with high data variance. The second challenge is the lack of efficient exploration across the parameter space. Current works only support limited functionality, such as predicting the data for a given simulation parameter set, but do not assist in discovering potentially optimal simulation configurations across the large parameter space. Parameter space exploration based on heuristic or brute-force approaches can be computationally intensive and inefficient; a systematic and efficient approach that accounts for scientists' interests is needed. The third challenge is reverse prediction, i.e., the ability to predict the simulation parameters that would produce a specific simulation result. This ability is crucial when scientists explore optimal outcomes by directly manipulating the simulation outputs but do not know which simulation parameters could generate the manipulated data. Thus, there is a pressing need for effective approaches that infer simulation parameters from observed outcomes.

In this work, we propose SurroFlow, a surrogate model based on normalizing flows <cit.>. A normalizing flow is a generative model for complex probability distributions: it learns an invertible transformation between a simple base distribution (e.g., a Gaussian) and a complex target distribution. With this model, we address the above challenges as follows. First, SurroFlow is a conditional normalizing flow combined with an autoencoder, trained on pairs of simulation parameters and corresponding simulation outcomes.
Because it models the distribution of simulation data conditioned on the simulation parameters, SurroFlow can quantify the uncertainty in the surrogate modeling process: given simulation parameters as the conditional input, sampling the simple base distribution multiple times and mapping the samples into the complex conditional distribution yields reconstructed outputs whose variation reflects the uncertainty of the surrogate prediction. Second, the proposed uncertainty-aware surrogate model supports interactive exploration of simulation parameters guided by scientists' interests and goals. This is done by integrating the trained SurroFlow with a genetic algorithm, empowered by an interactive visual interface for user-guided automatic parameter space exploration. Scientists specify their preferences and optimization objectives via the visual interface; the genetic algorithm then iteratively explores generations of simulation parameters based on these inputs, while SurroFlow operates in the backend to efficiently predict data and uncertainties for the candidate simulation parameters. Once scientists set up the objectives and trade-offs among data similarity, diversity, and uncertainty on the visual interface, parameter space exploration proceeds automatically. Third, another essential benefit of flow-based models is that they can perform prediction in both the forward and reverse directions. In the forward direction, SurroFlow predicts data conditioned on the input simulation parameters, allowing efficient generation of high-quality synthetic data. Conversely, in the backward direction, SurroFlow predicts the underlying simulation parameters that produce given simulation data. This bidirectional prediction provides an effective framework for both surrogate modeling and reverse prediction, enhancing the flexibility of parameter space exploration and post-hoc analysis.

In contrast to other surrogate models in the visualization field <cit.>, SurroFlow advances the state of the art by combining conditional normalizing flows with autoencoders for efficient latent-space modeling, enabling uncertainty quantification of the surrogate model's outputs. SurroFlow also features an improved architecture for bidirectional prediction. Additionally, it integrates a genetic algorithm and an interactive visual interface for efficient simulation parameter space exploration. Through qualitative and quantitative evaluations, we demonstrate the effectiveness of the proposed approach for automatic, user-guided simulation parameter recommendation and exploration.

In summary, the contributions of our work are:
* First, we propose an invertible flow-based uncertainty-aware surrogate model.
* Second, we utilize a genetic algorithm that considers the similarity, diversity, and uncertainty of data during the automatic parameter space exploration process.
* Third, we employ an interactive visual interface for uncertainty analysis and provide scientists with effective control over simulation parameter space exploration.

§ RELATED WORKS

We employ an invertible normalizing flow for surrogate modeling with uncertainty estimation. In this section, we provide an overview of related research on visualization surrogate models for scientific data, global sensitivity analysis techniques, and uncertainty estimation for deep neural networks. Surrogate Models for Parameter Space Exploration. In the fields of oceanography and climate science, scientists frequently run computational simulations <cit.> to explore parameter spaces.
Replacing these costly and time-consuming simulation processes, numerous surrogate models <cit.> have been proposed. Among these, Gramacy et al. <cit.> construct non-stationary Gaussian process trees that adaptively sample within the input space, enabling the selection of more efficient designs. He et al. <cit.> introduce InSituNet, an image-based surrogate model designed for real-time parameter space exploration. However, its reliance on regular grid mappings limits its application to unstructured data. To overcome this limitation, Shi et al. <cit.> propose GNN-Surrogate, specifically designed for navigating simulation outputs on irregular grids. Shi et al. <cit.> also introduce the VDL-Surrogate model based on view-dependent neural-network latent representations. It employs ray casting from varied viewpoints to aggregate samples into compact latent representations, thereby optimizing computational resource use and enhancing high-resolution explorations. Similarly, Danhaive et al. <cit.> propose a surrogate model for performance-driven design exploration within parametric spaces using conditional variational autoencoders. They employ a sampling algorithm to distill a dataset that captures valuable design insights and employ the variational autoencoders as surrogates for the simulation process. However, existing learning-based works do not account for uncertainties within the surrogate model. To address this deficiency, we propose a flow-based surrogate to enable uncertainty quantification of the surrogate model's prediction process. Global Sensitivity Analysis. Global sensitivity analysis methods include variance-based, differential-based, and regression-based approaches. They are crucial for understanding how parameters influence model outcomes. Variance-based methods, also known as ANalysis Of VAriance (ANOVA), decompose the output's variance into a sum of contributions from inputs and their interactions. Based on the application, Sobol indices <cit.> or Design of Experiment <cit.> are used to evaluate inputs' effects on the output. Differential-based methods, like the Morris <cit.>, calculate input effects by varying each factor individually while keeping others fixed. This method is computationally efficient and easy to implement. Regression-based methods utilize linear approximations of the function to compute sensitivity based on metrics like Correlation Coefficients and Standardized Regression coefficients <cit.>. There are traditional surrogate models with global sensitivity analysis, such as Ballester et al. <cit.>'s Sobol tensor train decomposition. While efficient, these surrogate methods are outperformed by deep neural networks. Uncertainty in Deep Neural Networks. Deep neural networks are often viewed as black boxes due to unclear decision-making rules. For scientific visualization and analysis, uncertainty quantification helps measure the reliability of the model, offering scientists a level of confidence about the model's outputs. A large number of uncertainty quantification methods have been proposed, including probabilistic models (e.g., Bayesian neural networks <cit.>, Deep Gaussian process <cit.>), ensemble methods (e.g., DeepEnsemble <cit.>), and deep generative model-based methods <cit.>. In our work, we employ a deep generative model called normalizing flow <cit.> as the surrogate model to quantify uncertainties. § BACKGROUND: NORMALIZING FLOW is based on the normalizing flow for the conditional generation of simulation data conditioned on the simulation parameters. 
Normalizing flow models are a type of generative models that aim to learn a mapping from a simple probability distribution to a more complex one, allowing for the generation of high-quality data samples in the target distribution. The key idea behind normalizing flow is to learn a series of invertible transformation functions that can gradually map samples from a simple distribution into a complex target distribution of interest, as shown in <ref>. Since the mapping is invertible, one can perform the transformation in the opposite direction, i.e., encode samples in the complex distribution into latent vectors with a simple and tractable distribution. Formally, denote the observed variables as 𝐱 and latent variables as 𝐳, the mapping in the normalizing flow model, given by f_θ: ℝ^n →ℝ^n, is invertible such that 𝐱=f_θ(𝐳) and 𝐳=f_θ^-1(𝐱). The invertible mapping function f_θ consists of a sequence of bijective functions f_θ = f_K∘ f_K-1∘···∘ f_1, so that 𝐱 = 𝐳_K = f_K∘ f_K-1∘···∘ f_1(𝐳_0), where each f_i (for i=1, ..., K) is learnable and invertible. With the stacked K invertible functions, a normalizing flow can transform a latent variable 𝐳_0, which follows the simple distribution (e.g., Gaussian distribution), into the target variable 𝐳_K with a more complex target distribution p(𝐱)=p_K(𝐳_K). Using the change of variables, the log-likelihood of 𝐳_K can be computed as log p_K(𝐳_K) = log p_0(𝐳_0) - ∑_i=1^Klog| ∂ f_i(𝐳_i-1)/∂𝐳_i-1|. Given the density of 𝐳_0 and the learnable transformation functions f_θ, one can compute the log-likelihood of any 𝐱 through <ref>. The optimization goal for normalizing flows is to find the θ that maximizes the log-likelihood of training samples, computed by <ref>. The invertibility of normalizing flows is theoretically constrained by design. Various learnable and invertible functions for f_i (i=1, ..., K) have been proposed, but the most efficient and simplest approach is affine coupling layers <cit.>. The input variable 𝐳 of this coupling layer is split into two parts 𝐳_1:j and 𝐳_j+1:d at index j, where the first part is unchanged and is used to parameterize the affine transformation for the second part. The output variable 𝐳^' can be formulated as 𝐳^'_1:j = 𝐳_1:j, 𝐳^'_j+1:d = T(𝐳_1:j) + S(𝐳_1:j) ⊙𝐳_j+1:d. The function S(·) and T(·) are neural networks and are not invertible. The sum and multiplication are element-wise operations. The complexity of S(·) and T(·) is essential for the modeling capability of coupling layers. Inverting the coupling layer can be performed by element-wise subtraction and division: 𝐳_1:j = 𝐳^'_1:j, 𝐳_j+1:d = (𝐳^'_j+1:d - T(𝐳_1:j)) / S(𝐳_1:j). By using multiple invertible masking <cit.>, normalization <cit.>, and coupling layers, we can ensure the input variables are fully processed and the model has enough capacity to learn the invertible transformations, enabling the model to transform between complex and simple distributions. Normalizing flow models inherently provide a measure of uncertainty, as they can capture the complex posterior distribution of data. In our work, we employ normalizing flow models to capture the conditional distribution of simulation data conditioned on the simulation parameters, facilitating uncertainty quantification and efficient parameter space exploration for ensemble data. § METHOD Existing surrogate models lack uncertainty estimation in the prediction process and they only support predicting simulation results based on the simulation parameters specified by scientists. 
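As a concrete illustration of the coupling layers reviewed above, a single affine coupling layer might be implemented as follows in PyTorch (the framework reported later in the paper). The hidden width and the exponential parameterization of the scale S(·) are assumptions made for numerical stability, not necessarily the authors' exact design.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Affine coupling layer: the first half of the input parameterizes an
    element-wise affine transform of the second half, so the layer is invertible."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.split = dim // 2
        # S(.) and T(.) are ordinary (non-invertible) networks, as in the text.
        self.scale_net = nn.Sequential(
            nn.Linear(self.split, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - self.split), nn.Tanh())
        self.shift_net = nn.Sequential(
            nn.Linear(self.split, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - self.split))

    def forward(self, z):
        z1, z2 = z[:, :self.split], z[:, self.split:]
        s, t = self.scale_net(z1), self.shift_net(z1)
        z2 = t + torch.exp(s) * z2          # element-wise affine transform
        log_det = s.sum(dim=1)              # log |det Jacobian| of this layer
        return torch.cat([z1, z2], dim=1), log_det

    def inverse(self, z):
        z1, z2 = z[:, :self.split], z[:, self.split:]
        s, t = self.scale_net(z1), self.shift_net(z1)
        z2 = (z2 - t) * torch.exp(-s)       # element-wise inverse
        return torch.cat([z1, z2], dim=1)
```

Stacking several such layers, interleaved with the masking and normalization layers mentioned above, yields the invertible mapping f_θ.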
However, the critical issue often faced by scientists is the lack of knowledge about which simulation parameters are of interest. The exploration process can be computationally intensive and inefficient. This underscores the need for efficient user-guided exploration of the simulation parameter space. To address this gap, we propose an uncertainty-aware surrogate model and integrate it with a genetic algorithm to support efficient user-driven exploration of the simulation parameter space. §.§ Overview <Ref> shows an overview of our framework, which contains three major components. First, we propose , a normalizing flow-based surrogate model with uncertainty quantification ability. The model is trained on pairs of simulation parameters and simulation data. Once well-trained, scientists can utilize to generate data by taking simulation parameters as the conditional input. Second, to reduce the computational cost of intensive parameter space exploration for ensemble simulations, we cast exploration as a constrained optimization problem solved using a genetic algorithm. The constraints we consider are based on the preferences provided by scientists interactively. Third, we integrate and the genetic algorithm as the backend for an interactive visual exploration system. With the visual interface, scientists can specify their interests and give indications about which exploration direction they are more interested in, allowing for efficient user-guided parameter space exploration considering simulation data similarity, diversity, and uncertainty. §.§ Uncertainty-Aware Surrogate Model In this section, we discuss details about the proposed surrogate model , including (1) simulation data representation extraction, (2) conditional data generation and uncertainty quantification of the surrogate model, and (3) reverse prediction of simulation parameters for given simulation data. §.§.§ Data Representation Extraction contains two parts that contribute to efficient data generation. The first part is an autoencoder model for 3D data reduction. Flow-based models require constant data dimensionality to maintain invertibility, which ensures that there exists a one-to-one correspondence between the input and output of each transformation. Despite normalizing flows' success in density modeling, training and testing these models on high-dimensional data is computationally expensive. To reduce the computational costs, we pre-train a 3D autoencoder to project the original high-dimensional data onto a lower-dimensional latent space. Given a simulation data 𝐱∈ℝ^D × H× W, with dimensions D, H, and W representing depth, height, and width, respectively, we train an autoencoder consisting of an encoder g_e and a decoder g_d, both designed based on convolutional neural networks with residual connections <cit.>, to extract a compact latent representation 𝐳. This encoding ensures the latent representation 𝐳 captures the essential characteristics of the data 𝐱 with reduced dimensionality, allowing the decoder to accurately reconstruct 𝐱^' from latent representation 𝐳 with low loss compared to the original data 𝐱. This process can be formulated as 𝐳=g_e(𝐱), and𝐱^'=g_d(𝐳). In the later stage, the normalizing flow model will be trained on the latent representations of data instead of their original high-dimensional counterparts. During inference, the flow-generated results will be further processed by the decoder of the autoencoder so that scientists can get the reconstructed data in the original data space. 
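To make the data-reduction stage concrete, the following is a minimal sketch of a 3D convolutional autoencoder with residual connections in PyTorch; the channel counts, downsampling factors, and block layout are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))   # residual connection

class Autoencoder3D(nn.Module):
    """Encoder g_e compresses a D x H x W volume into a compact latent z;
    decoder g_d reconstructs x' from z."""
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.ReLU(),
            ResBlock3D(ch),
            nn.Conv3d(ch, 2 * ch, 4, stride=2, padding=1), nn.ReLU(),
            ResBlock3D(2 * ch))
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            ResBlock3D(ch),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1))
    def forward(self, x):
        z = self.encoder(x)         # latent representation z = g_e(x)
        return self.decoder(z), z   # reconstruction x' = g_d(z)
```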
§.§.§ Conditional Modeling for Uncertainty Quantification In our work, we model the relationship between simulation parameters and simulation data as probabilistic distributions instead of deterministic mappings. This decision is motivated by two main factors. First, the neural network parameterized mapping between simulation parameters and data is an approximation, which inherently contains uncertainties. These uncertainties come from the limitations of the neural network in capturing the true complexity of the simulation relationships. Second, a probabilistic model is preferred since it is in general more robust. In most cases, since we do not have a large enough training dataset to approximate the complex but unknown mapping function, probabilistic modeling can better account for the range of possible outcomes. As shown in <ref>, is a surrogate model based on a conditional normalizing flow to approximate the complex and time-consuming simulation process. learns a probabilistic mapping function of the simulation data conditioned on the simulation parameter. The learned probabilistic mapping will facilitate uncertainty quantification of the surrogate model. To reduce the computational cost, we train the normalizing flow in the latent space. This means the simulation data 𝐱 used for training are first encoded into latent representations 𝐳 by the trained encoder g_e, as described in <ref>. Given pairs of simulation parameters and the latent representations of the corresponding simulation output, our goal is to model the conditional distribution p(𝐳|𝐜), where 𝐜∈ℝ^n denotes the condition (i.e., an n-dimensional vector representing a set of simulation parameters) and 𝐳 denotes the latent representation of the corresponding simulation result. However, this conditional distribution p(𝐳|𝐜) is complex and intractable. To model this distribution, we introduce a latent variable 𝐳_0 with a well-defined Gaussian distribution p_0(𝐳_0 |𝐜), and model the unknown conditional distribution p(𝐳|𝐜) by learning a transformation function f. This function f transforms a sample 𝐳_0 from the simple, known distribution p_0(𝐳_0|𝐜) to a sample 𝐳 within the complex, target distribution p(𝐳|𝐜), formulated as 𝐳 = 𝐳_K = f(𝐳_0, 𝐜), where f is a normalizing flow, consisting of K invertible transformation layers. 𝐳_K, equal to the target complex data sample 𝐳, is the resulting output after K transformations of input 𝐳_0. This architecture is bidirectional so it can process data in both forward and reverse directions. For simplicity, we discuss how to transform data from a simple Gaussian distribution to a complex one. The latent representation 𝐳_0 follows a Gaussian distribution conditioned on 𝐜, described by p_0(𝐳_0|𝐜) = 𝒩(𝐳_0; μ_c, Σ_c), where mean μ_c and variance Σ_c of the Gaussian are predicted based on the condition 𝐜 using two neural networks h_μ and h_Σ, i.e., μ_c = h_μ(𝐜) and Σ_c = h_Σ(𝐜). Given the Gaussian distribution p_0(𝐳_0|𝐜) and the flow model f, based on the previous discussion in <ref> and <ref>, we can derive the log-likelihood of the target distribution p(𝐳|𝐜) as log p(𝐳|𝐜) = log p_K(𝐳_K|𝐜) = log p_0(𝐳_0|𝐜) - ∑_i=1^Klog| ∂ f_i(𝐳_i-1|𝐜)/∂𝐳_i-1|. This log-likelihood will be used to optimize the normalizing flow. The K invertible transformation layers in the flow f are divided into two groups, i.e., K_1 unconditional layers and K_2 conditional layers (K=K_1 + K_2). 
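A minimal sketch of the condition-dependent base distribution just described: two small networks h_μ and h_Σ predict the Gaussian parameters from the condition vector 𝐜 (a log-variance parameterization is assumed here), and the conditional log-likelihood follows the change-of-variables formula. The flow interface in the helper function is an assumption.

```python
import torch
import torch.nn as nn

class ConditionalBase(nn.Module):
    """Gaussian base distribution p_0(z_0 | c) whose mean and variance
    are predicted from the simulation parameters c."""
    def __init__(self, cond_dim, latent_dim, hidden=128):
        super().__init__()
        self.h_mu = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, latent_dim))
        self.h_logvar = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, latent_dim))

    def log_prob(self, z0, c):
        mu, logvar = self.h_mu(c), self.h_logvar(c)
        dist = torch.distributions.Normal(mu, torch.exp(0.5 * logvar))
        return dist.log_prob(z0).sum(dim=-1)

    def sample(self, c, n):
        """Draw n base samples for a single condition vector c of shape (1, cond_dim)."""
        mu, logvar = self.h_mu(c), self.h_logvar(c)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn(n, mu.shape[-1], device=mu.device)

def conditional_log_likelihood(base, flow_inverse, z_K, c):
    # log p(z_K | c) = log p_0(z_0 | c) plus the inverse flow's log-det terms
    # (assuming flow_inverse returns its accumulated log-determinant).
    z0, sum_log_det = flow_inverse(z_K, c)
    return base.log_prob(z0, c) + sum_log_det
```

Maximizing this log-likelihood over training pairs is the objective used later to fit the flow.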
As illustrated by the blue dashed arrows in <ref>, the latent representation 𝐳_0 will first go through a conditional normalizing flow f_c with K_1 layers whose transformations depend on the conditional variable 𝐜, enabling the modeling of the distribution externally conditioned on the simulation parameter. After the conditional part, the second component is the unconditional normalizing flow f_u with K_2 invertible transformation layers. These layers take no conditional input and simply transform the input into the final complex target distribution. We formulate this process as 𝐦 = f_c(𝐳_0, 𝐜), 𝐳_K = f_u(𝐦), where 𝐦 is the transformation result of 𝐳_0 after going through the conditional flow f_c. The output of the unconditional flow f_u is 𝐳_K. Once well-trained, 𝐳_K should match the latent representation of the corresponding simulation data for a given parameter 𝐜. In other words, can transform a base Gaussian distribution into a target one that represents the actual simulation output. The explicit conditional distribution modeling allows not only efficient sampling but also uncertainty quantification of the data generation process. Uncertainty quantification is crucial for scientific visualization and analysis of surrogate models since it will provide scientists with information about the quality of results. Given a conditional input, scientists can efficiently sample the Gaussian latent space for high-quality surrogate data generation. The Gaussian distribution's variance reflects the uncertainty of the surrogate model, stemming from its approximation error of the complex simulation process. By sampling the Gaussian latent space and assessing variations in the reconstruction, scientists can quantify the uncertainties associated with the data produced by the surrogate model. We summarize this process in <ref>. §.§.§ Reverse Prediction To enable the reverse prediction of simulation parameters from simulation data, we introduce a constraint in the latent space of the flow, ensuring it can accurately predict the conditional information. Specifically, we impose a loss during training to ensure a sub-vector 𝐳_c in 𝐦 matches the simulation parameters 𝐜, where 𝐦 = <..., 𝐳_c> is the output of the conditional normalizing flow, as illustrated in <ref>. Once the model is well-trained, 𝐳_c will accurately approximate 𝐜. During inference, given a simulation outcome 𝐱, scientists first utilize encoder g_e to obtain the latent representation 𝐳_K=g_e(𝐱). Then, as illustrated by the black solid arrows in <ref>, the inverse of the unconditional normalizing flow f_u can predict the simulation parameters 𝐜 associated with encoded data as follows: <..., 𝐳_c> = f_u^-1(g_e(𝐱)), 𝐜 = 𝐳_c. In summary, enables an uncertainty-aware bidirectional mapping between simulation parameters and simulation data. Given simulation parameters, can produce the corresponding simulation data and quantify the uncertainties in the data generation process. In the reverse direction, can predict the simulation parameters from the given data. This bidirectional prediction ability is highly beneficial, especially when scientists prefer a robust model for both simulation data generation and simulation parameter prediction, a challenging task for other generative models such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs). §.§.§ Loss Functions We first train the autoencoder model as discussed in <ref>, and then train in <ref> and <ref>. 
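Returning for a moment to the uncertainty-quantification procedure summarized above, a sketch of the sampling loop might look as follows; the interfaces for f_c, f_u, and the decoder g_d are assumptions, not the authors' exact API.

```python
import torch

@torch.no_grad()
def predict_with_uncertainty(base, flow_c, flow_u, decoder, c, n_samples=20):
    """Sample the conditional base distribution N times for one parameter
    vector c of shape (1, cond_dim) and measure the spread of the
    reconstructions as the surrogate's uncertainty."""
    z0 = base.sample(c, n_samples)              # N samples from p_0(z_0 | c)
    m = flow_c(z0, c.expand(n_samples, -1))     # conditional flow f_c
    z_K = flow_u(m)                             # unconditional flow f_u
    x = decoder(z_K)                            # decode back to the data space
    return x.mean(dim=0), x.var(dim=0)          # prediction and per-voxel uncertainty
```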
Our training data are pairs of simulation parameters 𝐜 and corresponding simulation data 𝐱. The autoencoder is trained on the simulation data with a Mean Squared Error (MSE) loss between the original data 𝐱 and the reconstructed data 𝐱^': ℒ_ae = 𝔼_𝐱[𝐱-𝐱^'^2_2]. Training data for are pairs of simulation parameters and latent representations of the corresponding simulation output. Given data 𝐱, we first extract its latent representation 𝐳 via the trained encoder g_e and utilize 𝐳 for training. is trained based on a combination of two losses: ℒ_flow = ℒ_f + αℒ_c. The hyperparameter α balances these two losses. First, we aim to find model parameters that accurately describe the conditional distribution p(𝐳|𝐜), as detailed in <ref>. The first loss ℒ_f is the standard log-likelihood loss for normalizing flow training: ℒ_f = 𝔼_𝐳,𝐜[-log p(𝐳|𝐜)]. This loss minimizes the negative log-likelihood, which is mathematically equivalent to the Kullback-Leibler (KL) divergence between the distribution parameterized by the flow model and the ground truth distribution. The second loss ℒ_c minimizes the discrepancy between flow-predicted 𝐳_c and the simulation parameter 𝐜. This loss is used to ensure the simulation parameter conditions are accurately captured within the latent space of the normalizing flow: ℒ_c = 𝔼_𝐱,𝐜 [𝐳_c - 𝐜_1]. ℒ_c enables the reverse prediction of simulation parameters for a given simulation output. §.§ Genetic Algorithm for Parameter Candidate Generation offers accurate data prediction for a given parameter configuration and also the ability to quantify the uncertainties in the data generation process. In this section, we discuss how we leverage the strengths of for efficient simulation parameter space exploration for ensemble data. Specifically, we introduce a Genetic Algorithm (GA) for simulation parameter candidate generation, in which helps guide the evolutionary algorithm to identify the simulation parameters that align with scientists' interests. §.§.§ Fitness Function Design Genetic algorithms are widely used to solve complex optimization problems by mimicking the process of natural selection and genetics. The key idea behind the genetic algorithm is to operate on a set of candidates in the current generation and improve the population gradually for an optimization goal. This approach is effective for new high-quality candidate discovery. In this optimization process, a fitness function is crucial in evaluating the “fitness” of candidates in each generation. It directly relates to the objective functions of the problem and influences the direction of the evolutionary process. Initially, scientists provide several simulation parameters that they might be interested in. The genetic algorithm then identifies new simulation parameters that meet scientists' interests based on quantitative measurements such as data similarity and diversity. In this case, the simulation outcome for each candidate parameter configuration is required. However, running the complex simulation for all candidates can be prohibitively slow. To speed up the optimization process, we utilize to quickly produce the simulation outputs for candidate parameters. We also incorporate the uncertainty information produced by the surrogate model into the genetic algorithm. Incorporating uncertainties can increase the robustness and reliability of the parameter recommendation and exploration process. 
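Stepping back briefly to the training objectives defined above, a sketch of the combined flow loss ℒ_flow = ℒ_f + αℒ_c might look like this; the location of the sub-vector 𝐳_c inside 𝐦 and the flow interfaces are assumptions.

```python
import torch.nn.functional as F

def flow_loss(base, flow_c, flow_u, z, c, alpha=1.0):
    """L_flow = L_f + alpha * L_c: negative log-likelihood plus parameter recovery."""
    cond_dim = c.shape[1]
    m, log_det_u = flow_u.inverse(z)         # reverse direction: z_K -> m
    z0, log_det_c = flow_c.inverse(m, c)     # m -> z_0
    # L_f: negative conditional log-likelihood via the change of variables,
    # assuming each inverse() returns its own accumulated log-determinant.
    nll = -(base.log_prob(z0, c) + log_det_u + log_det_c).mean()
    # L_c: the sub-vector z_c of m should reproduce the simulation parameters c.
    z_c = m[:, -cond_dim:]                   # assumed location of z_c inside m
    return nll + alpha * F.l1_loss(z_c, c)
```

With the surrogate trained this way, its uncertainty estimates feed directly into the exploration loop described next.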
For example, candidates with high uncertainty will have low fitness scores, despite their data being similar to scientists' preferences. Our fitness function considers the similarity, diversity, and uncertainty of candidate solutions (i.e., parameter sets). For a candidate simulation parameter configuration 𝐜, we utilize to predict data 𝐱̅ and associated uncertainty 𝐱_var, as described in <ref>. Similarly, for a collection of simulation parameters C_usr selected by scientists, predicts the data list X_usr. For the set of selected parameters C_usr, scientists will assign preference scores S_usr ∈ [-1, 1] to each of them. A higher value indicates scientists are more interested in this parameter configuration. Our fitness function is defined as F(𝐜, C_usr, S_usr) = w_1 ·sim(𝐱̅, X_usr, S_usr) + w_2 ·div(𝐜, C_usr) + w_3 ·unc(𝐱_var), where w_1, w_2, and w_3 are weights in range [-1, 1] to balance these metrics. By adjusting these weights appropriately, the fitness function in <ref> can favor solutions with different objectives. For example, find simulation parameters that are diverse but have simulation outcomes similar to the user's preferences, while also encouraging low uncertainty in predicted data. This function can also be customized to identify simulation parameters with high uncertainty, helping training data augmentation for . The three components of the fitness function are: * Similarity Score measures how closely the predicted data 𝐱̅ of a candidate parameter 𝐜, matches the user preferred outcomes X_usr: sim(𝐱̅, X_usr, S_usr) = ∑_𝐱̅_usr∈ X_usr, s_usr∈ S_usr1/distance(𝐱̅, 𝐱̅_usr)· s_usr. Similar to content-based filtering widely used in modern recommendation systems, this metric recommends simulation parameters that have similar outputs to the user's preferences. The inverse distance is used to measure similarity. Candidates whose data are similar to the user's selection with high user-specified preference scores in S_usr will have high similarity scores. * Diversity Score evaluates how distinct 𝐜 is compared to the user preferred set of parameters C_usr: div(𝐜, C_usr) = ∑_𝐜_usr∈ N_k(𝐜)distance(𝐜, 𝐜_usr). This metric encourages the variety among the recommended simulation parameters. Formally, the diversity is measured as the total distance to k nearest neighbors of 𝐜 in the user's selection set C_usr, represented as N_k(𝐜). We set k=5. * Uncertainty Score quantifies the variance in the 's output 𝐱̅: unc(𝐱_var) = 1/n∑_i=1^n(𝐱_var)_i. It is measured as the mean of the estimated uncertainty field, where (𝐱_var)_i is the uncertainty value at the i-th index of 𝐱_var and n is the dimensionality of 𝐱_var. High variance indicates low confidence in the model's output. To speed up the similarity and uncertainty computation during optimization, the calculation is based on the latent representations of data generated by , which can be done much more efficiently than using the reconstructed data generated by the decoder. This avoids the costly and unnecessary full reconstruction of data for similarity comparison. Hence, distance(𝐱̅, 𝐱̅_usr) in <ref> is evaluated using the inverse of cosine similarity between latent representations, while distance(𝐜, 𝐜_usr) in <ref> is based on L1 differences between candidate and user-selected parameters. The associated uncertainty is assessed through the variance of these latent representations. §.§.§ Candidates Optimization Process As outlined in <ref>, the genetic algorithm begins with a set of randomly initialized candidates for the first generation. 
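Before walking through the generational loop, here is a compact sketch of the fitness computation just described, operating on latent representations; the cosine-based data distance and L1 parameter distance follow the text, while the exact weighting and numerical handling are assumptions.

```python
import numpy as np

def fitness(z_cand, z_var, c_cand, Z_usr, C_usr, S_usr,
            w=(0.8, 0.6, -0.8), k=5):
    """F = w1 * sim + w2 * div + w3 * unc for one candidate parameter set."""
    def cos_dist(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    # Similarity: inverse distance to each user-selected latent, weighted by
    # the user's preference score.
    sim = sum(s / (cos_dist(z_cand, z_u) + 1e-8)
              for z_u, s in zip(Z_usr, S_usr))
    # Diversity: total L1 distance to the k nearest user-selected parameter sets.
    d = np.sort([np.abs(c_cand - c_u).sum() for c_u in C_usr])
    div = d[:k].sum()
    # Uncertainty: mean of the latent-space variance estimate.
    unc = float(np.mean(z_var))
    return w[0] * sim + w[1] * div + w[2] * unc
```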
Subsequent generations proceed as follows: all current candidates are evaluated using the fitness function in <ref>, where higher scores indicate better solutions. Then, selection is performed to choose parent candidates for reproduction. Samples with higher fitness scores will have a high probability of being chosen. After selection, crossover is applied, i.e., candidate parameter vectors are divided into chunks and recombined to create new offspring. This crossover ensures that the offspring inherits characteristics from both parents. To ensure diversity and prevent local optima, the mutation is then performed at a rate of r_m by adding Gaussian noise to the candidate parameter. The Gaussian noise has a mean of zero and a standard deviation of r_σ. We set r_m = 0.2 and r_σ = 0.1. This mutation introduces controlled variability which is essential for effective exploration of the simulation parameter space. These new candidates form the subsequent generation. The optimization process terminates when it reaches the maximum number of generations. After generations of optimization, we can identify simulation parameters that align with scientists' preferences. §.§ Visual Interface for Parameter Space Exploration We integrate the trained with the genetic algorithm as the backend and develop a visual interface to assist exploration of the vast parameter space. This combination improves the efficiency of the exploration process by leveraging scientists' knowledge to identify simulation parameters of interest. The visual interface facilitates the parameter exploration process in three ways. First, it displays volume rendering images of the 's prediction results and the quantified uncertainties for user-selected simulation parameters, enabling informed decision-making. Second, the visual system enables scientists to assign preference scores to candidates after carefully inspecting the visualization results. Then, the genetic algorithm in the backend computes fitness scores for candidate simulation parameters according to scientists' preferences, guiding the optimization toward scientists' preferred directions. Third, the interface provides an overview of the genetic algorithm's optimization process across generations, offering insights into its progression. An overview of the visual exploration and optimization process is shown in <ref>. It contains three major steps. Step 1: Specify the scientist's preference. In the simulation parameter selection view of the visual interface (<ref> (1)), scientists choose multiple simulation parameters to define their interests. The trained surrogate model in the backend can efficiently generate data for the selected simulation parameters. The reconstruction and uncertainty quantification results of the selections are visualized in <ref> (2). After carefully analyzing the visualization results, scientists can assign preference scores to these selected parameters. A higher score indicates a higher interest. The user-selected parameters and corresponding preference scores are recorded in the table in <ref> (1). Step 2: Optimize through the genetic algorithm. The second step is to optimize the randomly initialized candidate parameters through the genetic algorithm running in the backend of the visual system. The optimization process is controlled and visualized in <ref> (3). In this view, scientists can interactively assign different weights for similarity, diversity, and uncertainty for different optimization objectives, as discussed in <ref>. 
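Concretely, the evolutionary loop running in the backend could be sketched as below (fitness-proportional selection, single-point crossover, and Gaussian mutation with r_m = 0.2 and σ = 0.1, as stated above); the population size, parameter bounds, and selection details are assumptions.

```python
import numpy as np

def evolve(pop, fitness_fn, bounds, n_gen=5, r_m=0.2, r_sigma=0.1, rng=None):
    """pop: (N, P) array of candidate parameter sets; bounds: (P, 2) array of [min, max]."""
    rng = rng or np.random.default_rng()
    n, p = pop.shape
    for _ in range(n_gen):
        fit = np.array([fitness_fn(ind) for ind in pop])
        prob = fit - fit.min() + 1e-8
        prob /= prob.sum()                       # fitness-proportional selection
        children = []
        for _ in range(n):
            pa, pb = pop[rng.choice(n, 2, p=prob)]
            cut = rng.integers(1, p)             # single-point crossover
            child = np.concatenate([pa[:cut], pb[cut:]])
            mask = rng.random(p) < r_m           # Gaussian mutation
            child[mask] += rng.normal(0.0, r_sigma, mask.sum())
            children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.array(children)
    return pop
```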
The initialized simulation parameters will then evolve through selection, crossover, and mutation based on the calculated fitness scores in each generation. The average fitness scores over generations are displayed in the line graph, where the x-axis represents the generation index and the y-axis represents the fitness score value. By brushing on the line graph, the Sankey diagram will be updated. Nodes in the Sankey represent candidate parameters and links depict the parent-child relationship. Node color is set via a drop-down menu, allowing color by similarity, diversity, or uncertainty. Each step in the Sankey diagram represents one generation. Hovering over a link highlights its ancestors and descendants from the first to the last generation within the brushed area. The iterative optimization ultimately presents scientists with simulation parameters that closely match their interests. Moreover, if scientists identify highly interesting new parameters during optimization, they can make further selections by clicking on nodes in the Sankey diagram to refine their selected parameter set for future optimization rounds. Step 3: Cluster and reverse prediction of representative simulation parameters. Given the potentially large set of recommended simulation parameters, running all recommendations through and visually inspecting all rendering results is impractical. To address this, we extract representative simulation parameters as our final recommendation. For all candidate simulation parameters in the last generation of the genetic algorithm, we first retrieve latent representations for data samples corresponding to these parameters. K-means clustering is then applied to take cluster centers as representative latent representations. We utilize the reverse prediction ability of  to determine the simulation parameters for these cluster centers. We also project all latent representations onto a 2D plane using t-SNE (t-distributed Stochastic Neighbor Embedding) and visualize the clustering and reverse prediction results in <ref> (4). § RESULTS We evaluate 's ability as a surrogate model for data generation, uncertainty quantification, and reverse prediction. We also conduct a case study for simulation parameter recommendation and exploration with our visual interface. §.§ Dataset and Implementation The proposed model is evaluated using two scientific ensemble simulation datasets in  <ref>, which also includes the simulation parameter ranges for both datasets. MPAS-Ocean <cit.> dataset contains the simulation results from the MPAS-Ocean model for ocean system simulation, developed by the Los Alamos National Laboratory. According to domain scientists' interests, we study four input parameters: Bulk wind stress Amplification (BwsA), Gent-McWilliams Mesoscale eddy transport parameterization (GM), Critical Bulk Richardson Number (CbrN), and Horizontal Viscosity (HV). During training, 128 parameter settings and corresponding data samples with a resolution of 192 × 96 × 12 are used. For testing, 20 parameter settings are utilized. Nyx <cit.> is a cosmological simulation dataset from compressible cosmological hydrodynamics simulations by Lawrence Berkeley National Laboratory. Based on domain scientists' suggestions, we study three input parameters: the Omega Matter parameter (OmM) representing the total matter density, the Omega Baryon parameter (OmB) representing the density of baryonic matter, and the Hubble parameter (h) quantifying expansion rate of the universe. 
We utilize 70 parameter configurations for training and 20 for testing. The log density field of resolution 128 × 128 × 128 is used for experiments. is developed based on PyTorch[https://pytorch.org] and is trained on a single NVIDIA A100 GPU with a learning rate of 10^-4. The visual system is implemented with Vue.js[https://vuejs.org/] for the frontend and Flask[https://flask.palletsprojects.com/] for the backend server. VTK.js[https://kitware.github.io/vtk-js/] is used for volume rendering of data. §.§ Quantitative Evaluation In this section, we quantitatively evaluate 's performance in terms of surrogate modeling and reverse prediction. §.§.§ Surrogate Prediction We use two metrics to assess the data generation quality of for a given simulation parameter input. For data-level evaluation, we employ peak signal-to-noise ratio (PSNR) to quantify the voxel-level differences between surrogate-generated data and actual simulation results. For image-level evaluation, we utilize the structural similarity index measure (SSIM) to evaluate the quality of volume rendering images from generated data against the simulation outcomes. Higher values of PSNR and SSIM indicate better quality. As discussed in <ref>, contains two components: an autoencoder (AE) for data reduction and a flow-based model for distribution modeling. The flow-based model is trained on the AE's latent space. The quality of the AE's reconstructions is critical as it directly impacts the effectiveness of the flow-based modeling. Therefore, we first evaluate AE's reconstruction quality using the above metrics. As shown in <ref>, the AE model achieves high PSNR and SSIM scores for test data of all datasets, indicating minimal information loss in latent representations. This enables the to train on latent representations, reducing computational costs compared to using raw data. Then we compare 's prediction ability with one state-of-the-art baseline, i.e., VDL-Surrogate <cit.> (shorted for VDL), a view-dependent surrogate model that utilizes latent representations from selected viewpoints for fast training and interpolation during inference. We use the original implementation of VDL. The model sizes and training time for VDL and are detailed in <ref>. The quantitative results in <ref> show that can achieve comparable performance with the baseline. performs slightly better on the MPAS-Ocean dataset. For the Nyx dataset, since VDL utilizes an importance-driven loss to ensure high-density values are preserved, it performs better than . However, offers additional benefits including quantifying uncertainties in the surrogate model and predicting simulation parameters for given simulation outcomes. §.§.§ Reverse Prediction Different from other surrogate models, has the bidirectional prediction ability. In this section, we measure the ability of to predict simulation parameters for a given simulation outcome. To assess the accuracy of predicted simulation parameters against the ground truth, we employ mean absolute error (MAE) and cosine similarity as our evaluation metrics. MAE is lower the better and cosine similarity is higher the better. Given that the parameters span diverse ranges, before we evaluate them with the metrics, we apply min-max normalization to every dimension of the predicted parameter vector based on the range detailed in <ref>. This evaluation for simulation parameter prediction is conducted on 20 randomly selected simulation data for each dataset. 
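For reference, the evaluation metrics described above can be computed as in the sketch below (min-max normalization of each parameter dimension before MAE and cosine similarity); the per-volume data range used for PSNR is an assumption.

```python
import numpy as np

def psnr(pred, gt):
    mse = np.mean((pred - gt) ** 2)
    data_range = gt.max() - gt.min()            # assumed per-volume range
    return 10.0 * np.log10(data_range ** 2 / (mse + 1e-12))

def parameter_errors(pred_params, gt_params, ranges):
    """ranges: (P, 2) array of [min, max] for each simulation parameter."""
    lo, hi = ranges[:, 0], ranges[:, 1]
    p = (pred_params - lo) / (hi - lo)          # min-max normalization
    g = (gt_params - lo) / (hi - lo)
    mae = np.mean(np.abs(p - g))
    cos = np.dot(p, g) / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-12)
    return mae, cos
```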
<Ref> displays the average quantitative evaluation results on all test data of each dataset. Given the low MAE and high similarity of our reverse prediction results to the ground truth simulation parameters, it's evident that our model achieves significant accuracy in reconstructing the original simulation conditions. In <ref>, we also represent some examples of the parameter prediction results. Comparing the predicted parameters with the ground truth, we observe a close alignment, demonstrating the effectiveness of our model in simulation parameter prediction through the reverse direction of the surrogate model. §.§ Qualitative Evaluation In this section, we qualitatively compare the 's results with the baseline model, VDL-Surrogate, via the volume rendering images. The volume rendering settings are kept the same for the same dataset. <Ref> compares the volume rendering images of the ground truth data with the predicted ones on the MPAS-Ocean dataset. There are three neural network models in comparison, including an autoencoder (AE) for data reduction and two surrogate models, i.e., the baseline VDL and the proposed model . The two columns show results from two different simulation parameter configurations. In each result, we provide a zoom-in view at the bottom of volume rendering images for better comparison across models. The zoom-in region is near the region of interest for domain scientists called the Pacific Cold Tongue where the temperatures drop to about 24^∘C in this area. On the left side of the zoom-in figures, we also show the corresponding squared error map where we set the white color to low error regions. By comparing the rendering results of all models with the ground truth, we found that overall the AE used for latent representation extraction achieves the best results, with low errors and a clearer boundary of the feature in the zoom-in region. Comparing with the baseline model VDL, we found on average has a low error, and VDL can have higher errors in some feature boundaries. <Ref> shows volume rendering images of Nyx data, comparing the ground truth with images generated by the baseline model VDL, the autoencoder (AE), and . Each row corresponds to the results from one simulation parameter setting. The AE model exhibits superior reconstruction quality due to its relatively straightforward task. Although both the baseline model VDL and successfully capture crucial features, VDL shows more clear high-density details than . However, VDL tends to exaggerate high-density outputs, as shown in the zoomed-in views for both rows. Regions expected to have low-density values now have relatively high values. The high-density feature preservation ability of VDL comes from their adoption of importance-driven loss, which may amplify the density values into a higher range. The proposed does not incorporate such important information, so the results are less biased, and with uncertainty quantification ability, is more reliable. In summary, offers performance on par with the baseline model VDL when serving as a surrogate model. However, provides features like uncertainty quantification in the surrogate modeling process and the capability for reverse prediction of simulation parameters for the given simulation outcomes, functions that are not available for the baseline. §.§ Uncertainty Quantification Due to the explicit distribution modeling, captures the variations of predicted simulation outputs for the conditional input simulation parameters. 
In this section, we assess the uncertainties in the surrogate model using the method outlined in <ref>. The number of samples for uncertainty quantification N is set to 20. In <ref>, we show the volume rendering images of the estimated uncertainty field for the MPAS-Ocean dataset. These images are generated from three different simulation parameter configurations. We set parameters GM, CbrN, and HV to 900, 0.625, and 200, and explore the uncertainty introduced by the surrogate model when we change the parameter BwsA. The BwsA values are set to 0.5, 2.5, and 4.5 from left to right, respectively. As the value of BwsA increases, the uncertainty values in the eastern Pacific region also rise, as detailed in the zoom-in images in <ref>. Other regions maintain relatively low uncertainty levels. The uncertainty comes from the model's sensitivity to BwsA values, highlighting areas where can be less reliable. Despite this, the overall unreliability range remains at a lower level, underscoring the model's overall robustness. §.§ Case study: Simulation Parameter Space Exploration This section demonstrates the effectiveness of our visual system for efficient simulation parameter recommendation and exploration. The case study is to explore the simulation parameter space for the MPAS-Ocean dataset, which includes four simulation parameters detailed in <ref>. The case study involves a scientist with over 15 years of experience in ocean science simulation analysis and model development. The scientist aims to analyze the cold tongue in the eastern equatorial Pacific Ocean. The cold tongue phenomenon is caused by the upwelling of deep, cold waters to the sea surface, forming a strip of cold water stretching along the equator. In the exploration process, with the interactive visual system, the scientist starts by selecting several simulation parameters randomly within their valid ranges. This provides a starting point for exploration, after which domain knowledge is applied to refine and guide the parameter exploration based on the scientist's expertise and the visualization results. After selection, running in the backend will predict the corresponding simulation outputs for the input simulation parameters. Based on the volume-rendered images of 's outputs (as shown in <ref>), the scientist interactively assigns high preference scores to those with a relatively lower temperature in the east Pacific region, a feature he is interested in. The visual system uses a “Cool to Warm” colormap for the MPAS-Ocean dataset, allowing for intuitive temperature visualization. Parameters predicting higher temperatures in the target area are assigned with low preference scores to refine the focus on relevant simulations. The weights in the genetic algorithm's fitness function were determined iteratively, guided by the scientist's domain knowledge and practical evaluation. Starting with initial guesses based on the optimization goal, the scientist gets the desired weights for similarity, diversity, and uncertainty (0.8, 0.6, and -0.8, respectively) through repeated testing. This configuration aims to discover new parameters that closely match the preferred simulation outcomes while ensuring diversity and reliability. The negative weight for uncertainty is strategically chosen to penalize parameters with high uncertainty. The quantified uncertainty stems from surrogate model approximation errors. Focusing on low-uncertainty parameters ensures reliable predictions for fitness function computation and exploration. 
However, high-uncertainty regions can also offer valuable insights, highlighting areas that need improvement and guiding future simulations and data augmentation to enhance model quality and reliability. With the genetic algorithm optimized for 5 generations, the fitness scores plateau. As illustrated in the line chart in <ref>, there is a clear increasing trend in the fitness score over generations. By brushing from generation 1 to 5 on the line chart, the Sankey diagram will be updated, illustrating the evolution of candidates across these generations. Nodes in the Sankey diagram are color-coded based on the fitness scores, with darker colors indicating higher fitness scores. The overall fitness scores increase from left to right across generations. Nodes with higher fitness scores are more likely to be chosen as parents for the next generation. Nodes with larger lengths are those with a higher number of offspring. By hovering over a link in the Sankey diagram, the connected paths are highlighted in purple, tracing to the node's ancestors and descendants. Scientists can interact with the Sankey diagram by clicking on nodes to include the associated parameter configurations in the preferred selections and assign preference scores. By altering the colors of the nodes in the Sankey diagram, scientists can gain insights into the simulation parameter optimization process. As depicted in <ref>, for the brushed region from generation 3 to 5, nodes in the Sankey diagram are color-coded according to each candidate's fitness score, similarity, diversity, and uncertainty, respectively. It is clear that, as the evolutionary algorithm proceeds, the similarity scores as well as the diversity scores for candidates are increased across generations. Furthermore, given the algorithm's design to minimize uncertainty (indicated by a negative weight on uncertainty), there is a clear trend toward reduced uncertainty. The Sankey results demonstrate the parameters optimization process aligns with the predefined scientists' objectives. Given that each generation typically contains a lot of candidates, visualizing and analyzing results for all candidates may have duplicates and be inefficient. To address this, our visual system employs the K-means clustering algorithm to group the optimized candidates into K clusters based on their latent representations, as shown in <ref>. Then, the system leverages 's reverse prediction ability to derive K representative parameter configurations from the K cluster centers. We set K=3 in the experiment. As shown in <ref>, the recommended parameters settings are (1) BwsA=3.914, GM=993.87, CbrN=0.678, HV=235.086, (2) BwsA=4.059, GM=1030.204, CbrN=0.688, HV=239.752, and (3) BwsA=4.084, GM=1031.02, CbrN=0.707, HV=243.876. Through simulation runs, the scientist verifies that these identified configurations produce simulation outcomes that match the expected behavior of the cold tongue, a phenomenon he is interested in within the MPAS-Ocean dataset. Analyzing the cold tongue phenomenon is crucial because it is closely linked to significant climate patterns like El Niño and La Niña, which greatly impact global weather and climate variability. Accurate modeling of the cold tongue is essential for climate simulations. Our system's recommended parameters, such as higher BwsA values enhancing the cold tongue effect, provide a deep understanding of the contributing factors. 
Additionally, the system can identify less obvious configurations that influence the cold tongue, potentially guiding future research. This case study shows the and genetic algorithm-assisted visual system helps efficient simulation parameter exploration and recommendation. § DISCUSSION AND FUTURE WORK Compared to existing surrogate models, the proposed has extra benefits such as uncertainty quantification and bidirectional prediction. Using and the genetic algorithm as the backend, we have demonstrated the effectiveness of the visual system for parameter space exploration driven by scientists' preferences. However, there are still several future directions we can explore. First, although the normalizing flow excels at learning complex distributions and has shown outstanding performance in , its learning ability is constrained by the invertible requirement. Other more powerful generative models, such as denoising diffusion models <cit.>, can serve as potential alternatives for our task. However, they lack the bidirectional prediction ability. Second, in our case study, scientists prioritize data samples with low uncertainty to ensure reliability. However, samples with high uncertainty are also valuable. They reflect the areas where the network does not model well. By incorporating those highly uncertain data samples into the training dataset, we can enhance the robustness of the surrogate model. Third, applications for the scientist-guided simulation parameter recommendation can be broader. The proposed simulation parameter exploration system can be useful for a variety of real-world problems. In the future, we would like to explore the possibility of applying across different domains for various tasks such as optimizing engineering design and scientific discovery. § CONCLUSION In this paper, we introduce , a novel surrogate model for efficient simulation parameter recommendation and exploration. effectively captures the conditional distribution of simulation outcomes conditioned on the simulation parameters, facilitating efficient data generation for different simulation parameters. The complex conditional distribution is modeled by applying a series of invertible and learnable transformations on a base Gaussian distribution. The variance in the Gaussian distribution reflects the uncertainty in the surrogate modeling process. Remarkably, these transformations also enable reverse computation, allowing for the prediction of simulation parameters from simulation data. We integrate with a genetic algorithm and utilize them as the backend for a visual interface, where scientists can interactively specify their optimization objectives through the visual system. After that, the optimization process starts automatically for scientists' preference-guided parameter exploration. Our quantitative and qualitative results demonstrate 's ability to predict high-fidelity simulation outputs, navigate simulation parameter spaces efficiently, and quantify uncertainties. This work is supported in part by the US Department of Energy SciDAC program DE-SC0021360 and DE-SC0023193, National Science Foundation Division of Information and Intelligent Systems IIS-1955764, and Los Alamos National Laboratory Contract C3435. abbrv-doi-hyperref
entry_id: http://arxiv.org/abs/2407.13640v1
published: 2024-07-18 16:18:58
title: Beyond Augmentation: Empowering Model Robustness under Extreme Capture Environments
authors: [ "Yunpeng Gong", "Yongjie Hou", "Chuangliang Zhang", "Min Jiang" ]
primary_category: cs.CV
categories: [ "cs.CV" ]
Beyond Augmentation: Empowering Model Robustness under Extreme Capture Environments
Yunpeng Gong^2, Yongjie Hou^1, Chuangliang Zhang^2, Min Jiang^2
^2 School of Informatics, Xiamen University, Xiamen, China
^1 School of Electronic Science and Engineering, Xiamen University, Xiamen, China
Email: fmonkey625@gmail.com, 23120231150268@stu.xmu.edu.cn, 31520231154325@stu.xmu.edu.cn, minjiang@xmu.edu.cn
Min Jiang is the corresponding author.
=========================================================================

§ ABSTRACT

Person Re-identification (re-ID) in computer vision aims to recognize and track individuals across different cameras. While previous research has mainly focused on challenges like pose variations and lighting changes, the impact of extreme capture conditions is often not adequately addressed. These extreme conditions, including varied lighting, camera styles, angles, and image distortions, can significantly affect data distribution and re-ID accuracy. Current research typically improves model generalization under normal shooting conditions through data augmentation techniques such as adjusting brightness and contrast, but pays less attention to model robustness under extreme shooting conditions. To tackle this, we propose a multi-mode synchronization learning (MMSL) strategy. This approach involves dividing images into grids, randomly selecting grid blocks, and applying data augmentation methods such as contrast and brightness adjustments. This process introduces diverse transformations without altering the original image structure, helping the model adapt to extreme variations. The method improves the model's generalization under extreme conditions and enables it to learn diverse features, thus better addressing the challenges in re-ID. Extensive experiments on a simulated test set under extreme conditions demonstrate the effectiveness of our method. This approach is crucial for enhancing model robustness and adaptability in real-world scenarios, supporting the future development of person re-identification technology.

Data Augmentation, Person Re-identification

§ INTRODUCTION

With the rapid development of computer vision and deep learning technologies, person-centric wide-area surveillance systems play a crucial role in public safety and industrial monitoring <cit.>.
Wide-area surveillance involves monitoring and analyzing large-scale environments through different camera setups, providing essential information for decision-making and security. However, the models in these application scenarios face increased demands for robustness due to extreme shooting conditions. In industrial settings, especially in construction sites, mining areas, and manufacturing plants, challenges such as dust or smoke, extreme lighting, temperature variations, vibrations, and pollution pose severe obstacles for machine vision systems <cit.>. These conditions place special requirements on the accuracy and robustness of visual models, particularly in tasks like quality control and safety monitoring, where precise identification of target objects under diverse lighting and background conditions is crucial to avoid potential major accidents. Consequently, researchers need to develop algorithms that can adapt to these challenging conditions and train and test these models with real-world industrial data to ensure reliability under various circumstances. In applications like urban surveillance, traffic monitoring, and border security, where different camera settings, complex lighting, and extensive scene coverage are common, models are susceptible to color domain deviations. Additionally, surveillance systems often operate under extreme conditions such as adverse weather, low lighting, or long-distance shooting. The impact of these factors often leads to unstable model performance, as they impose high demands on the visual robustness of the models. Therefore, the key research question becomes how to enhance the performance of these surveillance systems under various extreme conditions using advanced image processing techniques and deep learning. Currently, to improve model performance under different conditions, researchers typically employ various data augmentation <cit.> techniques, such as adjusting brightness and contrast and adding noise, to simulate extreme conditions. This approach aims to train models to better generalize to various input conditions. In some cases, research focuses on specific types of extreme conditions, such as nighttime imaging or recognition under adverse weather conditions. These studies often involve developing or adjusting models to adapt specifically to these particular contexts. In fields like autonomous vehicles and drone navigation, models need to maintain accuracy and reliability in various extreme environments. Therefore, research in these application areas poses special requirements for the robustness of models under actual physical conditions. Person Re-identification (re-ID) <cit.>, as a crucial security monitoring technology, faces the aforementioned challenges. Therefore, this paper focuses on studying the robustness of models under extreme shooting conditions, with re-ID as the target task. Current research in re-ID often improves model generalization under normal shooting conditions through data augmentation techniques like adjusting brightness and contrast. However, these methods pay less attention to the robustness of models under extreme shooting conditions <cit.>. To address this issue, we propose a multi-mode synchronization learning strategy based on the idea of data augmentation <cit.>. This method involves dividing images into a grid, randomly selecting grid blocks, and applying data augmentation methods such as contrast and brightness adjustments.
This process introduces diverse transformations without altering the original image structure, helping the model adapt to extreme variations. The proposed method enhances the model's generalization under extreme conditions, enabling it to learn diverse features to better address the challenges in re-ID. Extensive experiments conducted under simulated extreme conditions have demonstrated the effectiveness of our approach. This method is crucial for enhancing the robustness and adaptability of models in real-world scenarios, supporting the future development of person re-identification technology. The main contributions of this paper are summarized as follows:
∙ We propose a novel learning strategy termed Multi-Mode Synchronization Learning (MMSL) to enhance the robustness of person re-identification (re-ID) models under extreme shooting conditions. This strategy involves partitioning images into grids, randomly selecting grid blocks, and applying data augmentation techniques such as contrast and brightness adjustments. It introduces diverse transformations without altering the original image structure, aiding the model in adapting to extreme variations.
∙ Addressing the challenges posed by extreme shooting conditions in wide-area surveillance systems, our proposed strategy contributes to improving the robustness and adaptability of models in real-world scenarios. Wide-area surveillance plays a crucial role in industrial production and safety monitoring, and extreme shooting conditions can lead to model performance instability. Our method provides an effective approach to mitigate this issue.
∙ We conduct extensive experiments to validate the effectiveness of the proposed strategy. Particularly in simulated extreme conditions, our method demonstrates improved model generalization, enabling the learning of diverse features to better address the challenges in re-ID tasks. This empirical evidence supports the practical applicability of our approach.
§ RELATED WORK In the context of varied visual scenarios, the intrinsic complexity of tasks introduces a comparable complexity in their data requirements. Failing to adequately address these intricate data needs can result in issues such as overfitting to the training data and insufficient generalization beyond it. Enhancing generalization capabilities has consistently remained a central focus of convolutional neural network (CNN) research, and data augmentation has proven notably effective in improving a model's generalization ability. §.§.§ Person Re-identification Person Re-identification (re-ID) stands as a pivotal task in computer vision, focusing on the recognition and tracking of individuals across different time points and camera views within video sequences or images. This task has significant applications in surveillance, video analysis, and intelligent transportation systems. However, re-ID faces formidable challenges, encompassing pose variations, changes in lighting conditions, occlusions, and low resolutions <cit.>. The subtle discrepancies in pedestrian appearances further compound the difficulty of distinguishing between distinct individuals. Addressing the intricacies of re-ID necessitates the extraction of robust features from images or video frames. Deep learning, particularly CNNs, is widely employed to learn high-level features from images within the domain of re-ID. Feature extraction networks tailored for this task often integrate pre-trained CNN architectures, such as ResNet <cit.> and PCB.
Metric learning is a key technology in re-ID, crucial for quantifying the similarity between two pedestrian images. Commonly used metric learning methods include Euclidean distance and cosine similarity. Learning a suitable metric ensures that the feature representations of the same individual are closely aligned, while those of different individuals are distinctly separated. This emphasis on metric learning contributes to the efficacy of re-ID systems. The importance of re-ID extends beyond mere technological advancements, finding practical applications in surveillance, video analytics, and the optimization of intelligent transportation systems. By overcoming challenges related to pose variations, lighting changes, occlusions, and low resolutions through robust deep feature extraction and effective metric learning, re-ID systems play a critical role in enhancing the security and efficiency of various real-world scenarios. §.§.§ Data Augmentation Data augmentation is a widely employed technique, especially in the field of computer vision. It enhances the diversity of a dataset by applying a series of transformations to the original data. This not only aids the model in learning more robust and generalized features but also contributes to improving model performance on limited datasets. Commonly utilized data augmentation techniques include the following. Random Cropping <cit.>: by randomly cropping a part of an image, the model is made to focus on different regions, enhancing its ability to recognize local features. This is particularly beneficial for tasks like object detection and classification. Horizontal Flipping and Rotation <cit.>: flipping images horizontally or vertically and rotating them helps the model learn orientation invariance, crucial for understanding the direction and shape of objects within images. Scale Transformation: scaling images aids the model in recognizing objects of various sizes, especially important in tasks like object detection. Noise Injection <cit.>: adding random noise to images improves the model's robustness, enabling it to handle imperfections in real-world images more effectively. Geometric Transformations <cit.>: introducing geometric transformations, such as skewing or distorting, allows the model to learn more complex geometric deformations. Shearing: moving a part of an image through shearing creates new perspectives and compositions, assisting the model in understanding different combinations of objects. CutMix <cit.>, an advanced data augmentation method, goes beyond simple cropping and pasting of image blocks; it replaces a region of one image with that of another. Such mixing aids the model in better understanding relationships between different regions. Introducing noise blocks or randomly deleting pixels in the image helps regularize the network, preventing the model from overly relying on specific parts and addressing occlusion issues. The application of these data augmentation techniques enhances the dataset's variability, leading to a more robust and versatile model capable of handling a wide range of scenarios in computer vision tasks. To enhance the model's robustness under extreme shooting conditions, Color Jittering <cit.> alters the image's brightness, contrast, and saturation so that the model better adapts to varying lighting and color conditions.
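As a concrete illustration of such photometric augmentations, the snippet below shows how color jittering of this kind is commonly configured in a training pipeline. This is a minimal, hedged sketch using torchvision, not the configuration used in this paper; the parameter values and the 256x128 input size are illustrative placeholders.
```python
# Illustrative photometric augmentation (not the authors' code).
from torchvision import transforms

color_jitter = transforms.ColorJitter(
    brightness=0.4,   # brightness factor drawn from [0.6, 1.4]
    contrast=0.4,     # contrast factor drawn from [0.6, 1.4]
    saturation=0.4,   # saturation factor drawn from [0.6, 1.4]
    hue=0.1,          # hue shift drawn from [-0.1, 0.1]
)

train_transform = transforms.Compose([
    transforms.Resize((256, 128)),       # a common person re-ID input size
    transforms.RandomHorizontalFlip(),   # orientation invariance
    color_jitter,                        # photometric jitter as described above
    transforms.ToTensor(),
])
```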
Random Erasing <cit.> achieves regularization by introducing noise blocks or randomly deleting pixels in the image, preventing the model from excessive dependence on specific regions and aiding in overcoming occlusion challenges. AutoAugment <cit.> is an automated data augmentation strategy that integrates a series of augmentation methods. By searching for suitable augmentation strategies, such as image cropping transformations, color adjustments, and brightness/contrast adjustments, it aims to enhance model performance. It is widely applied to address issues related to significant dataset domain disparities. AugMix <cit.> aims to enhance the performance and robustness of deep learning models by introducing diversity and complexity. The method involves applying multiple data augmentation operations to the input image and blending these distorted images in a mixed form, creating diverse versions of the original training images. Please note that AugMix involves the blending of images from different samples, disrupting the original structure of the images. Consequently, as demonstrated in our subsequent experiments, its performance under extreme shooting conditions is less than ideal. In contrast, our proposed method avoids compromising the structural integrity of the original images. Simultaneously, it enables the model to learn from various extreme variations, enhancing the robustness of the model to extreme shooting conditions. In future work, we will also consider integrating evolutionary computation techniques <cit.> to extend our method for adaptive optimization. § PROPOSED METHOD Drawing inspiration from AutoAugment <cit.>, this study introduces a novel augmentation strategy called multi-mode synchronization learning (MMSL). In comparison to its predecessor, it enables the model to learn and adapt to extreme variations in data distribution more efficiently. This approach enhances the model's generalization ability under extreme conditions, allowing it to capture invariant features in data with drastic variations and thereby improving robustness to visual challenges. Our MMSL strategy consists of two components, namely global differentiation learning and multi-grid differentiation learning. Global differentiation learning is a specific case of multi-grid differentiation learning. §.§.§ Global differentiation learning AutoAugment integrates a series of data augmentation operations, including ShearX and ShearY for horizontal or vertical shear transformations, introducing variations in the shapes of objects within images. This aids the model in learning to handle images from different angles and perspectives, thereby improving the model's robustness. TranslateX and TranslateY perform horizontal or vertical translation transformations, shifting the positions of objects in the image. This enhances the model's adaptability to changes in object positions and improves robustness to spatial transformations. Additionally, rotating the image by a certain angle helps the model adapt to rotational transformations, addressing issues related to rotational invariance. In addition to these operations, AutoAugment includes color-related operations such as Color, Posterize, Solarize, Contrast, Sharpness, and Brightness. These operations enrich the color features of images, strengthening the model's resilience to changes in lighting conditions and color.
Introducing AutoContrast, Equalize, and Invert further achieves automatic adjustments to contrast, histogram equalization, and color inversion, helping enhance the recognizability of images. The overall goal of these data augmentation operations is to introduce diversity into the training data, thereby enhancing the model's generalization ability. This enables the model to better adapt to different visual scenarios, lighting conditions, angles, and positional transformations, ultimately improving performance in practical applications. Furthermore, the enhanced diversity of data aids in mitigating the model's sensitivity to overfitting. During data loading, we randomly sample Q images per person for M identities to constitute a training batch, whose size is B=Q× M. The set is denoted as I={I_k|k=1,2,...,Q× M}. The proposed method randomly performs a global transformation on the training batch with a given probability, and then inputs the processed images into the model for training. This transformation process can be defined as: I'_k = t(I_k), where t(∙) represents the augmentation function, which randomly selects one augmentation method from the AutoAugment library to transform the image. Here y_k is the label of the sample; the label of the converted image remains unchanged, so (I_k|y_k) = (I'_k|y_k). §.§.§ Multi-grid differentiation learning In this methodology, we divide the images in the training dataset into a grid of rows × cols. Subsequently, a random number N is generated within the range set by the maximum number of tiles n, determining how many grid regions in the image will undergo transformations. For these selected N image tiles, we randomly extract the corresponding number of data augmentations from the AutoAugment library and apply them to the chosen image regions. The entire process can be expressed through the following equations: P = RandPatch(Grid(I_k, rows, cols), N). Here, Grid(I_k, rows, cols) partitions the image I_k into a grid of rows × cols, and RandPatch(∙, N) randomly selects N blocks from all image grid blocks for image transformation. Next, P'_i = RandAugment(I_k(P_i)), with P' = [P'_i | i=1,2,...,N]; that is, random data augmentation is applied to the i-th selected image block P_i of the image I_k, where the augmentation methods are drawn from the AutoAugment data augmentation library. The transformed image block is denoted as P'_i. The overall image transformation process for our MMSL strategy can be expressed as follows: I'_k = I_k - P + P'. Again, y_k is the label of the sample; the label of the converted image remains unchanged, so (I_k|y_k) = (I'_k|y_k). During the model training phase, our Multi-Mode Synchronization Learning (MMSL) strategy is stochastically applied to the training batch, introducing diverse modes within the same image while preserving the structural integrity of objects (a minimal code-level sketch of this procedure is given below). § EXPERIMENTAL ANALYSIS This section will demonstrate the effectiveness of the proposed method through a series of qualitative experiments, comparative experiments, and black-box attack experiments. §.§ Datasets and Evaluation Criteria The proposed method undergoes evaluation on two person re-identification (ReID) datasets: Market-1501 <cit.> and DukeMTMC <cit.>. These datasets are widely acknowledged as the most representative and extensively employed in ReID research. The Market-1501 dataset comprises 12,936 images with 751 identities for training, 19,732 images with 750 identities, and 3,368 query images for testing.
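For concreteness, the multi-grid MMSL transform referred to above can be summarized by the following sketch. This is a hedged, minimal illustration rather than the authors' implementation: the function name mmsl_transform and the operation set RAND_AUG_OPS are placeholders (in practice an AutoAugment-style library would be used), a PIL-based pipeline is assumed, and the default grid size, block count, and probability simply mirror the settings reported in the experiments below.
```python
# Minimal sketch of the multi-grid MMSL augmentation (illustrative, not the authors' code).
# Each selected grid cell receives an independent, randomly chosen augmentation;
# unselected cells are untouched, so the global image structure is preserved.
import random
from PIL import ImageEnhance, ImageOps

# Placeholder operation set; an AutoAugment-style library would be used in practice.
RAND_AUG_OPS = [
    lambda im: ImageEnhance.Brightness(im).enhance(random.uniform(0.3, 1.7)),
    lambda im: ImageEnhance.Contrast(im).enhance(random.uniform(0.3, 1.7)),
    lambda im: ImageEnhance.Color(im).enhance(random.uniform(0.3, 1.7)),
    lambda im: ImageOps.autocontrast(im),
    lambda im: ImageOps.equalize(im),
]

def mmsl_transform(img, rows=5, cols=5, n_blocks=8, p=0.3):
    """Apply random augmentations to n_blocks randomly chosen grid cells of a PIL image."""
    if random.random() > p:           # the transform is applied stochastically
        return img
    img = img.copy()
    w, h = img.size
    cell_w, cell_h = w // cols, h // rows
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    for r, c in random.sample(cells, n_blocks):
        box = (c * cell_w, r * cell_h, (c + 1) * cell_w, (r + 1) * cell_h)
        patch = img.crop(box)
        patch = random.choice(RAND_AUG_OPS)(patch)   # one random op per selected cell
        img.paste(patch, box)
    return img
```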
DukeMTMC-reID includes 16,522 training images of 702 identities, 2,228 query images of the other 702 identities, and 17,661 gallery images. Consistent with prior research <cit.>, the evaluation utilizes Rank-k precision, Cumulative Matching Characteristics (CMC), and mean Average Precision (mAP) as standard metrics. Rank-1 precision represents the average accuracy of the top-ranked result corresponding to each query image. mAP denotes the mean average precision, calculated by sorting query results based on similarity; the closer the correct results are to the top of the list, the higher the precision. These metrics collectively offer a comprehensive assessment of the proposed method's performance in comparison to existing works. To simulate extreme shooting conditions and showcase the superiority of our proposed method over existing approaches, we directly apply AutoAugment to transform the gallery set of the test set in the original dataset. It is worth noting that, intuitively, evaluating performance on a dataset configured in this way would obviously favor models trained with AutoAugment. However, subsequent experiments demonstrate that even in such an unfair test, our proposed method still outperforms AutoAugment. This substantiates that our method can effectively enhance existing data augmentation techniques, exhibiting superior gains. §.§ Parameter Configuration and Ablation Experiments There are several parameters to be determined in our strategy, including the probability p_g of global augmentation, the size of the grid, the number of blocks k for local transformations, and the probability of local augmentation. In our experiments, we use <cit.> as the baseline. In order to determine at which ratio models trained under different augmentations exhibit better robustness in extreme shooting conditions, we experimented with various augmentation proportions. Regarding the probability of global augmentation, we conducted quantitative analysis experiments. We performed experiments with p_g set to 5%, 10%, 15%, ..., 50%. The experimental results are shown in Fig. <ref>(a). From the graph, it can be observed that the performance is optimal when the probability of global augmentation is p_g=0.2. Therefore, unless otherwise specified, we default to using p_g=0.2 in our subsequent experiments. Regarding the settings for the grid size and the number of local enhancement blocks, we conducted quantitative analysis experiments. We performed experiments with grid sizes 2×2, 3×3, ..., 6×6 to quantitatively analyze the optimal ratio of local enhancement blocks to the entire image for each corresponding grid size. We determined the optimal number of blocks for the respective sizes through a quantitative analysis of the 3×3 case. The experimental results are shown in Fig. <ref>. From the graph, it can be observed that the performance is optimal when the grid size is 5×5. Additionally, from Fig. <ref>(c), it can be observed that the performance is optimal when the ratio of local enhancement blocks to the entire image is 1/3 (the number of grid cells is 3). Hence, unless stated otherwise, we adopt the number of blocks corresponding to a ratio of 1/3 as the default configuration in our subsequent experiments. Once the optimal values of the global augmentation probability p_g and the number of blocks for local transformations are determined, we fix these parameters to evaluate the probability of local transformations. As shown in Fig.
<ref>(b), the model achieves optimal performance when the probability of local transformations is 0.3. Therefore, unless otherwise specified, we default to using p_t=p_g+0.3 in our subsequent experiments. §.§ Comparison experiment We compared our method with two relevant approaches, AutoAugment <cit.> and AugMix <cit.>. As shown in Tab. <ref>, on the Market-1501 dataset, our proposed method not only does not degrade the model's performance on the original dataset but also improves it, while the other methods affect the model's performance to varying degrees. Under 'Extreme Capture,' it can be observed that although the extreme testing is simulated using AutoAugment, AutoAugment does not have a significant advantage in this test. Additionally, from Tab. <ref>, consistent behavior is observed on the Duke dataset. In addition to the aforementioned tests, we conducted cross-domain testing to better evaluate the model's generalization performance. Cross-domain person re-identification aims to adapt a model trained on a labeled source-domain dataset to another target-domain dataset without any annotation. It is pointed out by <cit.> that higher accuracy of a model does not necessarily mean better generalization capacity. In response to this potential problem, we use cross-domain tests to verify the robustness of the model. 'Market->Duke' indicates training on the Market dataset and testing on the Duke dataset, while 'Duke->Market' represents the reverse scenario. As observed in Tab. <ref>, in the 'Market->Duke' scenario, our proposed method outperforms AutoAugment by 3.26% in mAP. In the 'Duke->Market' scenario, our proposed method surpasses AutoAugment by 0.48% in mAP. These experimental results indicate that the proposed method is more beneficial for improving the model's generalization performance. Similar experiments were conducted on the simulated extreme test set, and the results demonstrated a high level of consistency, as illustrated in Tab. <ref>. Through the aforementioned series of experiments, it is evident that augmenting only local regions of an image outperforms augmenting the entire image. Our proposed approach therefore represents a more effective way of applying existing data augmentation methods, embodying an optimized form of augmentation and demonstrating strong performance. § CONCLUSION In this paper, we proposed a novel approach, the Multi-Mode Synchronization Learning (MMSL) strategy, to enhance the robustness of person re-identification (re-ID) models under extreme shooting conditions. Unlike conventional data augmentation methods that focus on normal shooting conditions, our strategy introduces diverse transformations to adapt models to extreme variations in data distribution. The MMSL strategy incorporates two key components, namely Global Differentiation Learning and Multi-Grid Differentiation Learning. In Global Differentiation Learning, we drew inspiration from AutoAugment to perform random global transformations on the training batch, addressing challenges such as shear, translation, rotation, and color variations. This component contributes to the model's ability to generalize under varying perspectives, angles, and lighting conditions, ultimately improving its robustness. The Multi-Grid Differentiation Learning component involves dividing images into a grid, randomly selecting grid blocks, and applying data augmentation methods.
This method introduces diverse transformations without altering the original image structure, aiding the model in adapting to extreme variations in data distribution. Through extensive experiments under simulated extreme conditions, we demonstrated the effectiveness of our approach in improving model generalization and addressing challenges in re-ID tasks. Our proposed MMSL strategy contributes to the field of person re-identification by providing an effective method to enhance model robustness and adaptability in real-world scenarios. The ability to handle extreme shooting conditions is crucial for the practical deployment of re-ID models in wide-area surveillance systems, industrial monitoring, and security applications. The empirical evidence from our experiments supports the practical applicability of the proposed approach, showcasing its potential to significantly improve the performance of person re-identification models in challenging environments. In future work, we plan to further explore the integration of additional data augmentation techniques and investigate the adaptability of the MMSL strategy to different fields. Additionally, we aim to conduct experiments with real-world data under extreme conditions to validate the robustness and practical utility of our approach in diverse application scenarios. The ongoing development of person re-identification technology demands innovative strategies to address emerging challenges, and we believe that the proposed MMSL strategy represents a step forward in this direction. § ACKNOWLEDGMENT This work was partly supported by the National Natural Science Foundation of China under Grant No. 62276222.
§ REFERENCES
[6] Yang Z, Zhong X, Zhong Z, et al. Win-win by competition: Auxiliary-free cloth-changing person re-identification[J]. IEEE Transactions on Image Processing, 2023.
[5] Zheng Z, Wang X, Zheng N, et al. Parameter-efficient person re-identification in the 3d space[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[3] Gong Y, Huang L, Chen L. Person re-identification method based on color attack and joint defence[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 4313-4322.
[4] Gong Y. Cross-Modality Perturbation Synergy Attack for Person Re-identification[J]. arXiv preprint arXiv:2401.10090, 2024.
[1] Gong Y, Li J, Chen L, et al. Exploring Color Invariance through Image-Level Ensemble Learning[J]. arXiv preprint arXiv:2401.10512, 2024.
[2] Gong Y, Huang L, Chen L. Eliminate deviation with deviation for data augmentation and a general multi-modal data learning method[J]. arXiv preprint arXiv:2101.08533, 2021.
[7] Zhong Z, Zheng L, Zheng Z, et al. Camstyle: A novel data augmentation method for person re-identification[J]. IEEE Transactions on Image Processing, 2018, 28(3): 1176-1190.
[8] Zhong Z, Zheng L, Luo Z, et al. Invariance matters: Exemplar memory for domain adaptive person re-identification[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 598-607.
[9] Zheng L, Shen L, Tian L, et al. Scalable person re-identification: A benchmark[C]//Proceedings of the IEEE international conference on computer vision. 2015: 1116-1124.
[10] Zheng Z, Zheng L, Yang Y. Unlabeled samples generated by gan improve the person re-identification baseline in vitro[C]//Proceedings of the IEEE international conference on computer vision. 2017: 3754-3762.
[11] Zhong Z, Zheng L, Cao D, et al. Re-ranking person re-identification with k-reciprocal encoding[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 1318-1327.
[12] Somers V, De Vleeschouwer C, Alahi A. Body part-based representation learning for occluded person Re-Identification[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023: 1613-1623.
[13] Josi A, Alehdaghi M, Cruz R M O, et al. Multimodal data augmentation for visual-infrared person ReID with corrupted data[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023: 32-41.
[14] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[15] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[16] Yun S, Han D, Oh S J, et al. CutMix: Regularization strategy to train strong classifiers with localizable features[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019.
[17] Zhong Z, Zheng L, Kang G, et al. Random erasing data augmentation[C]//Proceedings of the AAAI conference on artificial intelligence. 2020, 34(07): 13001-13008.
[18] Cubuk E D, Zoph B, Mane D, et al. Autoaugment: Learning augmentation strategies from data[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 113-123.
[19] Hendrycks D, Mu N, Cubuk E D, et al. AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty[C]//International Conference on Learning Representations. 2019.
[20] Feng R, Zhao D, Zha Z J. Understanding noise injection in gans[C]//International conference on machine learning. PMLR, 2021: 3284-3293.
[21] Hou Y, Zheng L, Gould S. Learning to structure an image with few colors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 10116-10125.
[22] Mustafa A, Mantiuk R K. Transformation consistency regularization: a semi-supervised paradigm for image-to-image translation[C]//Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVIII. Springer International Publishing, 2020: 599-615.
[23] Zhong Z, Zheng L, Zheng Z, et al. Camera style adaptation for person re-identification[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 5157-5166.
[24] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.
[25] Zheng Z, Ruan T, Wei Y, et al. VehicleNet: Learning robust visual representation for vehicle re-identification[J]. IEEE Transactions on Multimedia, 2020, 23: 2683-2693.
[26] Luo H, Gu Y, Liao X, et al. Bag of tricks and a strong baseline for deep person re-identification[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops. 2019.
[27] Jiang M, Huang W, Huang Z, Yen G G. Integration of global and local metrics for domain adaptation learning via dimensionality reduction[J]. IEEE Transactions on Cybernetics, 2015, 47(1): 38-51.
[28] Jiang M, Wang Z, Guo S, Gao X, Tan K C. Individual-based transfer learning for dynamic multiobjective optimization[J]. IEEE Transactions on Cybernetics, 2020, 51(10): 4968-4981.
[29] Jiang M, Wang Z, Hong H, Yen G G. Knee point-based imbalanced transfer learning for dynamic multiobjective optimization[J]. IEEE Transactions on Evolutionary Computation, 2020, 25(1): 117-129.
[30] Wang Z, Cao L, Feng L, Jiang M, Tan K C. Evolutionary multitask optimization with lower confidence bound-based solution selection strategy[J]. IEEE Transactions on Evolutionary Computation, 2024.
[31] Jiang M, Huang Z, Qiu L, Huang W, Yen G G. Transfer learning-based dynamic multiobjective optimization algorithms[J]. IEEE Transactions on Evolutionary Computation, 2017, 22(4): 501-514.
[32] Shi J, Zhang Y, Yin X, et al. Dual pseudo-labels interactive self-training for semi-supervised visible-infrared person re-identification[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 11218-11228.
[33] Shi J, Yin X, Chen Y, et al. Multi-Memory Matching for Unsupervised Visible-Infrared Person Re-Identification[J]. arXiv preprint arXiv:2401.06825, 2024.
[34] Shi J, Yin X, Zhang D, et al. Visible Embraces Infrared: Cross-Modality Person Re-Identification with Single-Modality Supervision[C]//2023 China Automation Congress (CAC). IEEE, 2023: 4781-4787.
[35] Shi J, Yin X, Wang Y, et al. Progressive Contrastive Learning with Multi-Prototype for Unsupervised Visible-Infrared Person Re-identification[J]. arXiv preprint arXiv:2402.19026, 2024.
[36] Gong Y, Zeng Z, Chen L, et al. A person re-identification data augmentation method with adversarial defense effect[J]. arXiv preprint arXiv:2101.08783, 2021.
http://arxiv.org/abs/2407.13373v1
20240718102829
Detection of maser emission at 183 and 380 GHz with ALMA in the gigamaser galaxy TXS2226-184
[ "A. Tarchi", "P. Castangia", "G. Surcis", "V. Impellizzeri", "E. Ladu", "E. Yu Bannikova" ]
astro-ph.GA
[ "astro-ph.GA" ]
INAF-Osservatorio Astronomico di Cagliari, via della Scienza 5, 09047, Selargius (CA), Italy andrea.tarchi@inaf.it Leiden Observatory, Leiden University, Post Office Box 9513, 2300 RA Leiden, Netherlands Department of Physics, University of Cagliari, S.P.Monserrato-Sestu km 0,700, I-09042 Monserrato (CA), Italy INAF - Astronomical Observatory of Capodimonte, Salita Moiariello 16, Naples I-80131, Italy Institute of Radio Astronomy, National Academy of Sciences of Ukraine, Mystetstv 4, Kharkiv UA-61002, Ukraine V.N.Karazin Kharkiv National University, Svobody Sq.4, Kharkiv UA-61022, Ukraine The LINER galaxy TXS 2226-184 is known to host a very luminous 22 GHz water maser, labeled a 'gigamaser' at the time of its discovery. The nature of this maser is still debated, in particular whether it is associated with a nuclear accretion disk or with an ejection component, namely a jet or an outflow originating in the AGN. We have obtained multi-band (band 5, 6, and 7) ALMA observations during Cycle 9, with the purpose of investigating the maser nature and the nuclear molecular material in the innermost region of the galaxy. While the full data sets are still under study, the preliminary data reduction and analysis of the band 5 and 7 spectral line cubes, presented in this Letter, already offer a significant outcome. We observed bright, possibly maser, emission from the 183 GHz and 380 GHz transitions in TXS 2226-184. This represents the first confident detection of 380 GHz (maser) emission in an extragalactic object. Emission features at both frequencies show two-peaked line profiles resembling those of the 22 GHz maser features. The mm/sub-mm emission originates from a region coincident, within the errors, with that of the 22 GHz emission. The similarities in profile and position indicate that the emission at the three frequencies is likely produced by the same nuclear structure, although differences in line strengths and feature peak positions may hint at slightly different physical conditions of the emitting gas. A comparison with the few megamaser sources studied in sufficient detail and sharing similarities with the water lines in TXS 2226-184 favors a nature associated with amplification of a bright nuclear continuum (from a jet/outflow) by dense and hot gas in front of the nucleus (e.g., a disk or torus); however, a more comprehensive analysis of the available data is necessary to better assess this scenario. ALMA observations of TXS 2226-184 Tarchi A. et al. Detection of maser emission at 183 and 380 GHz with ALMA in the gigamaser galaxy TXS 2226-184 A. Tarchi1 P. Castangia1 G. Surcis1 V. Impellizzeri2 E. Ladu1,3 E. Yu Bannikova4,5,6 Received gg mm yyyy; accepted gg mm yyyy ======================================================================================================================================================== § INTRODUCTION Water masers associated with active galactic nuclei (AGN, the ‘megamasers’) have been related to three distinct phenomena: (i) with nuclear accretion disks, where they can be used to derive the disk geometry, enclosed nuclear mass, and distance to the host galaxy (see, e.g., <cit.> for NGC 4258 and, more recently, <cit.> for UGC 3789), or sometimes with the innermost boundary of a dusty torus in the region of the interaction with the outflows (e.g., <cit.>); (ii) with radio jets, where they can provide important information about the evolution of jets and their hotspots (e.
g., <cit.> for Mrk 348 and, more recently, <cit.> for IRAS 15480); (iii) with nuclear outflows, tracing the velocity and geometry of nuclear winds at < 1 pc from the nucleus, as in the case of Circinus (<cit.>) and NGC 3079 (<cit.>), where they offer a promising means to probe the structure and motion of the clouds in the Toroidal Obscuring Region predicted by clumpy torus models (e.g., <cit.>), and to help studies of AGN tori in general. In all these cases, water megamasers provide vital information on the structure, dynamics, and composition of the molecular gas in close proximity to the AGN. In addition, thermal lines can provide information complementary to the megamasers. Recent ALMA observations of the molecular interstellar medium (ISM), and, in particular, of the CO(6-5), HCN(J=3-2), and HCO^+(J=3-2) transitions, in the central few pc of nearby AGN have revealed complex kinematics (<cit.>). Most previous observational work on water vapor megamasers has focused on the 6_{1,6}-5_{2,3} transition at a rest frequency of 22.235079 GHz[Transitions and frequencies have been taken from Splatalogue: 'https://splatalogue.online/#/home'] (hereafter referred to as the 22 GHz line). It is through this transition that the relation with the three aforementioned phenomena has been studied so far. However, radiative transfer models and observations have found that similar conditions also give rise to additional water maser lines in the millimeter/submillimeter band. Indeed, megamaser detections have been obtained in a growing number of galaxies in the 183, 321, and 325 GHz water maser transitions (e.g., <cit.>). These studies suggest that (sub)mm water masers may be common in AGN, associated with nuclear activity like the 22 GHz megamaser sources. Because of the matching velocities, the masers seem to occur in broadly the same regions as the 22 GHz masers, originating in the same thin, edge-on disk delineated by the 22 GHz lines, with sub-mm emission lines detected at both systemic and high rotation velocities. Line ratios of two or more maser transitions originating from the same gas would constrain radiative transfer models far better than is currently possible (e.g., <cit.>). The 3_{1,3}-2_{2,0} transition at a rest frequency of 183.310087 GHz (hereafter, the 183-GHz line) and the 4_{1,4}-3_{2,1} transition at a rest frequency of 380.1973598 GHz (hereafter, the 380-GHz line), particularly together with that at 22 GHz, are known as the ”back-bone” masers and are among the most promising water maser lines predicted by, e.g., <cit.>. TXS 2226-184 (hereafter TXS 2226) is located at a distance of 107 Mpc (z = 0.025; <cit.>), is optically classified as an elliptical/S0 galaxy, and is spectroscopically identified as a LINER, a signature of a (relatively) low-luminosity active nucleus. In this galaxy, a bright 22 GHz water maser was detected in 1995 by <cit.>, and the reported emission was so luminous (L_iso = 6100 L_⊙), given its redshift, that the term ”gigamaser” was coined. The detection of such a bright maser source in an ’early-type’ galaxy was quite unexpected. The bright nuclear radio emission, although heavily obscured, shows a symmetric and tight structure, seemingly produced by jets oriented at a large angle to the line-of-sight <cit.>.
A very recent, detailed very long baseline interferometry (VLBI) study of the 22 GHz water gigamaser in the nucleus of TXS 2226-184, performed with the Very Long Baseline Array (VLBA) and the European VLBI Network (EVN), has associated, for the first time, the water maser features with the most luminous radio continuum clump reported in the literature, and a speculative model for the maser origin has also been proposed <cit.>. During Cycle 9, we observed the nuclear region of TXS 2226 with ALMA at Bands 5, 6, and 7. In the following, we present the first results from the band 5 and 7 measurements and discuss their impact in the framework of our understanding of the (complex) scenario present in the nucleus of this gigamaser galaxy. § OBSERVATIONS AND DATA REDUCTION The observations whose outcome is reported in this paper were taken as part of the ALMA project 2022.1.01591.S (PI: Tarchi). ALMA was used to observe the innermost region of TXS 2226 with dual polarization at Band 5 on May 24 and Band 7 on June 9, 2023. Band 6 was also observed, on April 28, 2023, but is not included in this work. Observations of the target were interleaved with observations of calibrators used to determine bandpass, flux density, and station gain calibration. The spectral setup for each band consisted of four spectral windows. The spectral windows were configured to have a frequency resolution of ∼ 1.13 MHz and a width in frequency of ∼ 2 GHz (corresponding to 1.6 and 0.8 km s^-1 and 3000 and 1500 km s^-1 for band 5 and 7, respectively). In particular, one spectral window was centered around the expected locations of the maser lines. All data from this ALMA project have corresponding quality assurance 2 (QA2) data processing. We have carried out the imaging of the continuum-subtracted spectral line data sets with CASA[https://casa.nrao.edu/] version 6.5.4.9, using the task 'tclean'. The restoring beam sizes of the images are 0.13 and 0.06 arcsec for the 183 and 380 GHz cubes, respectively. We extracted the spectra from the image cubes using the Spectral Profile Tool. In addition, we retrieved a dataset from the GBT archive. This was observed on December 9, 2010, under project name AGBT10C_013. The details of this observation are thoroughly reported in <cit.> and will not be repeated here. The spectra obtained have subsequently been imported into the CLASS package, part of the GILDAS software[https://www.iram.fr/IRAMFR/GILDAS], for follow-up investigation (Gaussian fitting, comparison, etc.). § RESULTS The outcome of our data analysis yielded two important results. We detected spectral line emission at 183 GHz in TXS 2226. Fig. <ref> shows the spectra obtained from the ALMA cube. In particular, the emission has a clear double-peak structure with a very broad profile. The line profile displays two main spectral features (labeled A1 and B1) with peak flux densities of 125 and 43 mJy and widths of 94 and 110 km s^-1, respectively. By using the standard equation (see, e.g., <cit.>; their Eq. 1) to derive the isotropic luminosity of the (maser) emission, L_iso, we derive values of ∼ 25000 and 10000 L_⊙ for A1 and B1, respectively. These values fall in ranges typical for sources classified as gigamasers. Results of a Gaussian fit to the individual spectral line features are reported in Table <ref>. We also detected, for the first time, spectral line emission at 380 GHz in TXS 2226 (Fig. <ref>). Also in this case, the emission has a double-peak structure with a very large width.
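(The isotropic luminosities quoted in this section follow from the standard relation L_iso = 4π D^2 (ν_rest/c) ∫S dv. The short sketch below shows the conversion; the function name and input values are illustrative approximations of the brightest features rather than the fitted parameters of Table <ref>.)
```python
# Hedged sketch: isotropic line luminosity from an integrated line flux.
# With D in Mpc, the integrated flux in Jy km/s, and the result in solar units,
# L_iso = 4*pi*D^2 * (nu_rest/c) * Int(S dv) reduces to the familiar
# L_iso ~ 0.023 * (nu_rest / 22.235 GHz) * Int(S dv) * D^2.

def l_iso_solar(flux_int_jy_kms, distance_mpc, nu_rest_ghz):
    """Isotropic maser line luminosity in solar luminosities."""
    return 0.023 * (nu_rest_ghz / 22.235) * flux_int_jy_kms * distance_mpc**2

D = 107.0  # Mpc, the distance adopted above for TXS 2226-184

# e.g., a ~125 mJy feature ~100 km/s wide gives an integrated flux of ~12.5 Jy km/s
print(l_iso_solar(12.5, D, 183.310))  # ~2.7e4 L_sun, of the order quoted for A1
print(l_iso_solar(23.0, D, 380.197))  # ~1.0e5 L_sun, of the order quoted for A2
```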
The two features displayed at 380 GHz (labeled A2 and B2) have similar widths (both of order 100 km s^-1) to those at 183 GHz, but higher peak flux densities (217 and 119 mJy, respectively). Their isotropic luminosities are ∼ 100000 and 60000 L_⊙ for A2 and B2, respectively. Indeed, these values indicate extremely bright sources, seemingly classifiable as gigamasers. Fit values for the individual features are reported in Table <ref>. § DISCUSSION The actual angular resolution of our observations at 183 GHz is 0.13 arcsec. The lower limit to the brightness temperature of our spectral line emission is then of order 130 K, and thus it is impossible to discriminate on a T_b basis whether the emission is thermal or non-thermal (maser). However, for the emission to be thermal with a brightness temperature ≤ 2000 K, such that the water molecules are not dissociated, the emitting region has to be > 0.03 arcsec (corresponding to 15 pc), which would be significantly more extended than expected from previous similar cases (see, e.g., <cit.>). Also with the resolution of our 380 GHz measurements (0.06 arcsec), the lower limit to the brightness temperature of our spectral line emission (∼ 400 K) is still too low to rule out a thermal nature of the emission. In this case, the emitting region would have to be > 0.05 arcsec (corresponding to 25 pc), once again not as compact as possibly expected. In addition, considering the similar profiles of all the emitting lines (see next Section), and that the 22 GHz emission in TXS 2226 has been confidently associated with the maser phenomenon <cit.>, it is natural to assume that the 183 and 380 GHz features also have a maser nature. In addition, the presence of strong 22 GHz, and relatively strong 183 and 380 GHz, emission indicates gas conditions of relatively high density (n_H > 10^4-6 cm^-3) and/or high temperature (> 50-100 K), different from those found, for example, in Arp220 <cit.>. This fact and the high luminosity of the (maser) features seemingly rule out a star formation origin (see also <cit.>), thus confirming a nature for the detected features associated with AGN activity in the target. In the following, we will then discuss the spectral line features according to the aforementioned scenario, although we are aware that a different one cannot be 'a priori' ruled out. §.§ Comparison of the (maser) features Fig. <ref> shows a plot where all the maser emission detected at the three different frequencies, 22 (with archival GBT measurements; see Sect. <ref>), 183, and 380 GHz (with ALMA, present work), is displayed. We note that, given that megamaser sources may be variable and the spectra were taken at different epochs (in particular, more than ten years have elapsed since the GBT observations), only a qualitative comparison will be performed. However, together with the unique chance of having three (maser) transitions observed in one target, the relative stability of the line profile and intensity of the 22 GHz maser emission in TXS 2226 over a 20-year period, indicated by multi-epoch (though not frequent and/or regular) GBT spectra (e.g., <cit.>, and references therein), makes such a comparison potentially relevant. The broad velocity range covered by the emission is consistent between the different transitions, indicating that they arise from the same region. Some differences are, however, visible.
In particular, the flux densities differ between the 22 GHz and 183 GHz lines, with the former being more intense for feature A by almost a factor of three, although comparable in strength for feature B. A 183-GHz transition weaker than the 22 GHz one is expected, since this is also found in other extragalactic sources by, e.g., <cit.>, although in those cases differences of up to an order of magnitude are found. The strength of the 380-GHz line is instead more difficult to interpret, since this is the very first confident detection of this maser transition in an extragalactic object. Indeed, the 380-GHz line falls outside the ALMA band 7 frequency range and is detected in TXS 2226 thanks to the Doppler shift that relocates it to within the observable range. The first and, so far, only previous detection of this maser was obtained by <cit.> with the Kuiper Airborne Observatory in the Becklin-Neugebauer Kleinmann–Low Nebula of Orion. A detection, conservatively indicated as tentative, of a broad (possibly maser) line feature was also reported by <cit.> in the high-z lensed QSO MG J0414+0534 but has so far remained unconfirmed. The 380-GHz line belongs, together with the 22 and 183 GHz lines, to the maser transitions involving the back-bone levels. The strength of the 380-GHz line emission is therefore expected to be large and, in principle, depending on the degree of maser saturation, even to exceed that of the 22 GHz line <cit.>. This seems to be the case for feature B, while for feature A the 380-GHz flux density is still below that at 22 GHz. The 380-GHz emission is, however, noticeably brighter than the 183-GHz line. The majority of the features detected at 22 GHz are also broadly present at the other two frequencies (Fig. <ref>). In particular, the presence of the feature labeled B3 (B1 and B2 at 183 and 380 GHz, respectively) and of the feature A3 (namely, A1 and A2) is confirmed. Interestingly, the 22 GHz feature A3a seems to be absent or greatly reduced in flux density in the ALMA spectra. Overall, the main mm/sub-mm maser features seem to peak at velocities slightly closer to the systemic velocity of the galaxy than their 22 GHz counterparts (Fig. <ref>). Differences in the peak velocities of different maser transitions may be due to diverse pump mechanisms and/or geometries associated with each maser source (see, e.g., <cit.>). A slightly different location of the higher-frequency masers w.r.t. the 22 GHz ones, e.g., at smaller/larger radius/distance from a putative center of activity, with different gas density/temperature conditions, would also lead to this comparative spectral profile scenario (see, e.g., <cit.> for the Circinus case). In addition, given the gap of more than ten years between when the 22 GHz and ALMA spectra were taken, another potential explanation could be the motion of the emitting molecular gas during the intervening time. Unfortunately, the spatial resolution is too coarse to yield detailed position estimates of the different maser emitting regions. §.§ On the nature of the maser emission In <cit.> the possible nature of the 22 GHz maser was investigated through VLBI observations. Their conclusion clearly indicated a nuclear origin of the emission, associated with the AGN activity of TXS 2226. It also, however, left open the possibility that the maser is associated either with a jet/outflow component or, although with lower confidence, with an accretion disk.
Indeed, more recently, <cit.> reported preliminary results of a multi-band radio continuum campaign carried out with the EVN that indicate that the 22 GHz maser spots seem to be associated with a region for which the radio spectrum is inverted and the opacity is high, thus pinpointing the nucleus of the galaxy. The assessment of the nature of the maser requires a more thorough analysis of these VLBI observations and their combination with the present ALMA results, which is beyond the scope of this Letter and will be included, together with a theoretical model of the AGN components, in a subsequent publication (Tarchi et al., in preparation). Some details of the present mm/sub-mm detections may, however, already provide interesting clues to delineate the maser origin. First of all, the positions derived from centroid measurements of both the 183 and 380 GHz masers are coincident, within the statistical uncertainty of 1 mas, at RA_J2000=22^h:29^m:12.4942^s and Dec._J2000=-18^∘:10':47.235”. Considering that the absolute astrometric precision of our ALMA observations is of the order of 10 mas (see, e.g., <cit.>, and Sect. A.9.5 of the ALMA Cycle 9 Proposer’s Guide), the location of the (sub)mm masers appears to be coincident with that of the 22 GHz maser spots derived by Surcis, Tarchi & Castangia (2020; Fig. <ref>). This co-spatiality of the emitting regions and the similar overall spectral structure (see previous section) suggest that all features are tracing roughly the same gas and support the maser nature of the emission. When comparing the case of TXS 2226 with other megamaser sources, the most striking characteristic is the variety of line widths. In particular, for masers associated with AGN activity, those produced in accretion disks usually show relatively narrow features bundled in three groups, at velocities close to, redshifted from, and blueshifted from the recessional velocity of the galaxy. For masers related to AGN ejection activity (e.g., jets and/or outflows), the situation is less ordered, but often, in these cases, broad and (usually) redshifted features are observed (see, e.g., <cit.>). While an unambiguous interpretation of the maser nature based only on the maser profiles is not possible (composite and peculiar maser profiles are, in fact, not rare), and usually a detailed map of the maser spot distribution is necessary, it is reasonable to compare and discuss the cases that show the greatest similarities between the mm/sub-mm maser profiles. Indeed, so far, only one source showing broad 22 GHz maser lines similar to those displayed by TXS 2226, and lacking high-velocity components, has been reported to host maser emission also at mm wavelengths (although at 321 GHz): the radio galaxy NGC 1052 <cit.>. For the 321-GHz spectrum, the authors report a two-peaked emission line profile that is, however, quite different (much weaker and displaced in velocity) from that at 22 GHz. Nevertheless, their conclusions hint at a nature for the 321-GHz maser emission associated with amplification of a bright nuclear continuum (from a jet/outflow) through dense and hot gas in front of the nucleus (e.g., a disk or torus). The presence of a relatively strong nuclear radio jet also in TXS 2226 makes a similar scenario for the mm maser emission in this galaxy reasonable as well. This would also be consistent with the interpretation presently invoked for the 22 GHz maser in TXS 2226 by <cit.> and <cit.>.
However, a more detailed study of the nuclear components of TXS 2226, at both cm and mm wavelengths, is indeed necessary to reach a definite answer. We thank the anonymous referee for his/her insightful suggestions. The authors would like to acknowledge the European ARC and, in particular, Kazi Rygl for their help during several phases of the project. AT is grateful to his daughter Elena for her constant encouragement. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2022.1.01591.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
http://arxiv.org/abs/2407.12521v1
20240717130610
Marginality from Leading Soft Gluons
[ "Sruthi A. Narayanan" ]
hep-th
[ "hep-th" ]
Marginality from Leading Soft Gluons Sruthi A. Narayanan Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, Ontario, Canada N2L 2Y5 § ABSTRACT Marginal operators in a d-dimensional conformal field theory (CFT), those with conformal dimension Δ=d, give us information about the space of related theories. This can be incredibly useful when trying to develop an intrinsic definition of a holographic CFT. Recent work has established that in the case of celestial CFTs, those dual to asymptotically flat spacetimes, marginal operators can be constructed as shadow transforms of soft scalars. In this work, rather than starting from a bulk theory of scalars, we start with Yang-Mills and construct marginal operators from soft gluons via light transforms and a new set of transformations called half-shadow transforms. We obtain the local metric and connections for the associated conformal manifold by studying the two- and three-point functions of these marginal operators. We then discuss the implications of this for broadening our understanding of operator product expansions of soft operators in celestial CFTs. § INTRODUCTION The defining ingredients of a conformal field theory (CFT) are the spectrum of operators, i.e., the operators 𝒪_i and their conformal dimensions Δ_i, as well as the operator product expansion (OPE) coefficients. Given this information, one can completely specify the theory, since higher-point correlators can be built up from lower-point correlators by way of the OPE <cit.>. However, this information is generally difficult to collect for a given CFT, especially when there is not an intrinsic Lagrangian construction. Celestial CFTs, those that are dual to asymptotically flat spacetimes, possess the advantage that their fundamental information is derived from bulk scattering amplitudes <cit.>. Since bulk Lorentz symmetry is manifest as two-dimensional (2D) conformal symmetry on the boundary two-sphere (the celestial sphere), one can transform S-matrix elements to 2D objects that have the behavior of conformal correlators. These are often referred to as celestial amplitudes and are the fundamental building blocks of celestial holography. Likewise, the collinear behavior of scattering amplitudes gives rise to the OPE data in celestial CFTs by way of collinear splitting functions <cit.>. However, deriving properties of celestial CFTs from bulk scattering amplitudes is naturally constrained by the knowledge we have about the amplitudes themselves. Therefore, it seems that we will only know as much about celestial CFTs as we know about our bulk theory. Since one of the goals of the celestial holography program is to use this duality to learn about a putative bulk theory of quantum gravity, one is motivated to develop a more intrinsic understanding of what a celestial CFT is. One way to do this is to study deformations of the boundary theory. In <cit.>, the authors studied deformations arising from loop corrections and also, at tree level, from adding a background field in the bulk theory. However, this still involved transforming the modified bulk amplitudes and observing that effect on celestial CFT correlators. An approach that is relatively agnostic to the bulk is the one taken in <cit.>, where the authors studied the addition of marginal operators ℳ_I in the CFT. Marginal operators are operators with conformal dimension Δ = 2 and 2D spin J=0, i.e., they transform like scalars.
As opposed to relevant (Δ<2) and irrelevant (Δ>2) operators, the addition of a marginal operator into the spectrum preserves the conformal symmetry of the theory and therefore gives rise to a distinct but equally well-defined CFT. The standard approach to understanding deformations by a marginal operator is to start with an action S and then deform it by a term of the form δ S = λ^I∫ d^2 x ℳ_I. For each marginal operator ℳ_I, there is an associated parameter λ^I and each distinct set of λ^I's will give rise to a new CFT. We can think of this as a manifold where each point corresponds to a distinct CFT. The marginal operators serve as directions one can travel on this manifold and the λ^I's determine how much one travels in each direction.<cit.> The beauty of this conformal manifold, is that it comes equipped with an intrinsic geometry that can be used to understand the full space of possibly related CFTs. In <cit.> the authors explored the connections between a bulk non-linear sigma model (NLSM) and the geometry of the manifold of associated dual celestial CFTs. Due to the relationship between the manifold and the geometric soft theorems given in <cit.> there was a nice interpretation in terms of the target manifold of the NLSM. The marginal operators generating movement on the conformal manifold were given by shadow transformations of soft scalars. In <cit.> a somewhat analogous connection was made in the case of bulk gauge and gravity theories. However, the standard shadow transform shifts the left and right conformal weights by (h,h̅)→(1-h,1-h̅) which is equivalent to saying that the dimension and 2D spin are shifted according to (Δ,J)→ (2-Δ,-J). Therefore, while shifting a dimension Δ=0 soft operator will give an operator of dimension Δ=2, that operator will only be marginal if it is a scalar since a marginal operator must have 2D spin J=0. As such, it seems like the operators considered in <cit.> are not exactly marginal operators but rather some kind of “quasi-marginal" operators that can be nicely related to the soft theorems. The OPE of these “quasi-marginal" operators will not, in the traditional [For the shadowed soft operators in gauge and gravity, there probably is a manifold where each point consists of a CFT with a different copy of the conformal group. There should be a method to obtain the geometric aspects, however it is most not from reading off terms in the OPE.] sense, encode aspects of the geometry of the conformal manifold. To that end, to find the geometry from the OPE one needs to construct operators that have the right conformal weights. If the spectrum of operators contains those of integer conformal dimension <cit.>, when the operators have 2D spin J=± 1,± 2, none of these can be marginal and we must consider some appropriate transformations of these operators. Since it does not seem plausible to do so equipped with just the shadow transformation, in this paper we will explore alternative integral transforms that generate operators that have the right conformal weights to be marginal. One allowed transformation that is already familiar is the light transform. The light transform has appeared in much of the recent literature on celestial holography <cit.> as it has allowed for a clean understanding of the boundary soft algebras. This paper is organized as follows. In section <ref> we will review the necessary conventions for the remainder of the paper including the definition of celestial amplitudes, soft operators and some aspects of marginal operators. 
In section <ref> we explicitly define the eight integral transformations that preserve the conformal casimir and form the group D_8. We find that in addition to the shadow and light transforms, there is another transform that has not been previously studied in the context of celestial CFT which we denote the half-shadow transform. In section <ref> we use the light and half-shadow transforms to construct four marginal operators from the leading soft gluons in Yang-Mills. In sections <ref> and <ref> we use the two- and three-point functions of these marginal operators to construct the metric and connections on the associated conformal manifold. We are then able to deduce the form of the marginal OPEs and in section <ref> we speculate on how we could use this data to learn more about the intrinsic structure of celestial CFTs. § CONVENTIONS In this section we will outline the definitions and background necessary for our results. We will work in flat Minkowski space which has the following metric in four dimensions and (3,1) signature ds^2 = -dx_0^2+dx_1^2+dx_2^2+dx_3^2. A null vector q^μ can be parametrized by q^μ(ω,z,z̅) = ηωq̂^μ(z,z̅) = ηω (1+zz̅,z+z̅,-i(z-z̅),1-zz̅) where ω is the energy, η = +1 (-1) for outgoing (incoming) particles, and (z,z̅) labels a coordinate on the celestial sphere. Since we are working in (3,1) signature, z and z̅ are complex conjugates of one another however, we can generally analytically continue to (2,2) signature Klein space <cit.> where z,z̅ are real and independent. Scattering amplitudes in 4D asymptotically flat spacetimes can be recast as correlation functions on the 2D boundary that transform appropriately under conformal transformations. These boundary conformal correlators are often referred to as celestial amplitudes. Celestial amplitudes are found by computing scattering amplitudes in a basis that diagonalizes boosts rather than translations. For massless external particles, one transforms the scattering amplitude by Mellin transforming the momentum space scattering amplitude with respect to the energies of each particle 𝒜(1^η_1,a_1_Δ_1⋯ n^η_n,a_n_Δ_n) = ∏_j=1^n[∫_0^∞ dω_jω_j^Δ_j-1]A_a_1… a_n(η_1ω_1q̂(z_1,z̅_1),…,η_nω_nq̂(z_n,z̅_n)) where Δ_n are the conformal weights of the external particles and a_n is any other label, like helicity or color. We will write these amplitudes as correlation functions 𝒜(1^η_1,a_1_Δ_1,… n^η_n,a_n_Δ_n,a_n) = ⟨𝒪^η_1,a_1_Δ_1⋯𝒪^η_n,a_n_Δ_n⟩ . There has been significant recent work to try to understand the “correct" basis of operators to work with in celestial CFTs. It is widely accepted that the set of operators with integer dimensions is a good basis to work in. This includes, as a subset, the soft operators, the insertions of which generate the standard soft theorems. As such, one can define the soft gluon operators <cit.> R^k,a,±(z,z̅) = lim_ε→ 0ε𝒪^±, a_k+ε(z,z̅), k = 1, 0, -1,… The correlators of these soft operators will be finite, as the vanishing ε cancels a divergence arising from the small ω region of the Mellin transform. Alternatively, one can understand the right hand side of this equation as the residue <cit.> of the operator 𝒪_Δ^±, a when Δ is a negative integer. We will be primarily concerned with the leading soft gluon operator which is the k=1 operator above. In what follows, whenever we refer to a soft gluon operator we implicitly mean the leading soft operator above and will omit the k index to write it as R^±,a≡ R^k=1,a,±. 
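As a quick mechanical check of the parametrization above (an illustration added here, not part of the original text), one can verify symbolically that q^μ(ω,z,z̅) is null with respect to the metric given at the start of this section, and that the Minkowski inner product of two reference momenta reduces to the celestial-sphere separation, q̂_1·q̂_2 = -2 z_12z̅_12; the variable names below are arbitrary:

```python
# Illustrative check (not from the paper): q^mu(omega, z, zbar) is null with
# respect to ds^2 = -dx_0^2 + dx_1^2 + dx_2^2 + dx_3^2, and the inner product of
# two reference momenta localizes on the celestial sphere: qhat_1.qhat_2 = -2 z_12 zbar_12.
import sympy as sp

z1, zb1, z2, zb2, om = sp.symbols('z1 zb1 z2 zb2 omega')
eta = sp.diag(-1, 1, 1, 1)

def qhat(z, zb):
    return sp.Matrix([1 + z*zb, z + zb, -sp.I*(z - zb), 1 - z*zb])

def dot(p, q):
    return sp.expand((p.T * eta * q)[0, 0])

q1, q2 = qhat(z1, zb1), qhat(z2, zb2)

assert sp.simplify(dot(om*q1, om*q1)) == 0                      # q^2 = 0
assert sp.simplify(dot(q1, q2) + 2*(z1 - z2)*(zb1 - zb2)) == 0  # qhat1.qhat2 = -2 z12 zb12
print("null parametrization and qhat1.qhat2 = -2 z12 zb12 verified")
```

The second identity is what allows bulk collinear limits to translate into boundary OPE limits of the corresponding celestial operators.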
Lastly, it will be useful to review some important aspects of marginal operators in relation to the geometry of the conformal manifold. In two dimensions the OPE of two marginal operators has the following form <cit.> ℳ_I(x)ℳ_J(y)∼g_IJ/|x-y|^4 + Γ_IJ^K ℳ_K(y)δ^(2)(x-y)+⋯ The more regular terms will be proportional to the curvature tensor and its derivatives. Additionally, one can determine the connections by integrating the three point function of marginal operators ∂ g_IJ/∂λ^K = ∫ d^2 z⟨ℳ_K(z)ℳ_I(1)ℳ_J(0)⟩ and then taking the usual combination of these to obtain Γ^K_IJ = 1/2g^KL(∂ g_LI/∂λ^J+∂ g_LJ/∂λ^I - ∂ g_IJ/∂λ^L). It should be noted that the metric and connections are functions of the couplings λ^i. However, when we extract this information, it is the local data and so it should be understood as g_IJ(0) and Γ_IJ^K(0). In what follows we will not explicitly write that the argument is 0. Additionally, when considering exactly marginal operators, it is always possible to choose local coordinates <cit.> such that the connections vanish. In this work we do not pick these coordinates because the connections, while coming from distributional terms in the OPE, tell us information about the OPE of non-marginal operators in the celestial CFT which we do care about. We will discuss this in more detail later on. Additionally, the integrand in (<ref>) should vanish when z≠ 0,1 if the operators in question are truly marginal. This is basically equivalent to saying that the three point function should evaluate to a contact term. In what follows we will determine the set of possible marginal operators that can be constructed from the Yang-Mills soft currents. All of the non-zero two-point functions will tell us the metric components from which we can try to understand the structure of the conformal manifold. § CASIMIR PRESERVING INTEGRAL TRANSFORMS Since marginal operators are scalars (2D spin-0), as we mentioned previously, unless we start with a soft scalar the soft operators discussed above cannot directly be identified as marginal operators in the CFT. In the general case, we need to transform the soft operators into single particle operators that can be identified as marginal. Thus far, the only integral transforms encountered in the celestial CFT literature are the shadow and the light transforms.<cit.> Therefore, in this section we will discuss the complete set of transformations that preserve the quadratic Casimir. In two dimensions, the allowed integral transforms are those that preserve the quadratic Casimir[In higher dimensions, one would need to preserve the higher Casimirs as well.] C_2(Δ,J) = Δ(Δ-2)+J^2 = 2(h(h-1)+h̅(h̅-1)). Just by looking at this expression, we can deduce transformations in the weights that will leave it invariant (h,h̅)→{𝒯_1(h,h̅), 𝒯_2(h̅,h), 𝒯_3(1-h,h̅), 𝒯_4(h,1-h̅), 𝒯_5(1-h̅,h), 𝒯_6(h̅,1-h), 𝒯_7(1-h,1-h̅), 𝒯_8(1-h̅,1-h)}. We have labeled each of the eight transformations by 𝒯_i. In what follows, we will write down the explicit expressions for these integral transformations in two dimensions. The normalizations for each of these transformations is derived along with their forms in Appendix <ref>. We notice that 𝒯_1 is obviously the identity, 𝒯_3, 𝒯_4 are the light transforms and 𝒯_7 is the shadow, all of which are given by[Note that the normalizations for the light transforms are different from previous literature <cit.>. 
Previously they were given with unit normalization however that was not consistent with the statement that the product of left and right light transforms is the shadow since that would imply a unit normalization for the shadow as well.] 𝒯_1 = 𝕀[𝒪](w,w̅) = ∫ d^2z δ^(2)(z-w)𝒪(z,z̅)𝒯_3 ≡ ℒ^+[𝒪](w,w̅) = Γ(2-2h)/√(π)∫ dz1/(w-z)^2-2h𝒪_h,h̅(z,w̅)𝒯_4 ≡ ℒ^-[𝒪](w,w̅) = 1/√(π)Γ(2h̅-1)∫ dz̅1/(w̅-z̅)^2-2h̅𝒪_h,h̅(w,z̅)𝒯_7 ≡ 𝒮[𝒪](w,w̅) = Γ(2-2h)/πΓ(2h̅-1)∫ d^2z1/(w-z)^2-2h(w̅-z̅)^2-2h̅𝒪_h,h̅(z,z̅). In principal, we should be able to deduce the forms of the other four 𝒯_2,𝒯_5, 𝒯_6, 𝒯_8. Noting that 𝒯_5,𝒯_6 each square to the shadow and that they are inverses of each other allows us to determine that 𝒯_5 ≡ ℛ[𝒪](w,w̅) = Γ(2-h-h̅)/πΓ(h̅-h)∫ d^2z1/(w-z)^2-h-h̅(w̅-z̅)^1+h-h̅𝒪_h,h̅(z,z̅)𝒯_6 ≡ ℛ[𝒪](w,w̅) = Γ(1-h+h̅)/πΓ(h+h̅-1)∫ d^2z1/(w-z)^1-h+h̅(w̅-z̅)^2-h-h̅𝒪_h,h̅(z,z̅). Henceforth we will refer to these two transformations as left and right half-shadow transforms. Noting that 𝒯_2^2 =𝒯_8^2 = 𝕀 and that 𝒯_2𝒯_8 = 𝒮 we find that 𝒯_2 ≡ 𝒮_J[𝒪](w,w̅) = Γ(h̅-h+1)/πΓ(h̅-h)∫ d^2 z1/(w-z)^h̅-h+1(w̅-z̅)^h-h̅+1𝒪_h,h̅(z,z̅)𝒯_8 ≡ 𝒮_Δ[𝒪](w,w̅) = Γ(2-h-h̅)/πΓ(h+h̅-1)∫ d^2 z1/(w-z)^2-h-h̅(w̅-z̅)^2-h-h̅𝒪_h,h̅(z,z̅). The first of these is usually referred to as the spin-shadow while the second one is the analogous transformation in terms of the dimension rather than the spin. The properties of these eight transformations are enumerated in Table <ref>. These transformations were, in fact, enumerated in <cit.> for CFTs in arbitrary dimensions. We could not find the explicit expressions for all these transformations for a 2D CFT in the literature which is partly why we have listed them here. These eight transformations form the dihedral group D_8, also denoted D_4 in geometry, which are the symmetries of a square. The elements of this group are written as D_4 = ⟨ a,b: a^4=b^2=𝕀,ab=ba^-1⟩. Comparing to the above table, we can identify the two generators a,b as ℛ, 𝒮_J. All the other transformations can be written as compositions of these. § YANG MILLS MARGINAL OPERATORS In order to obtain an operator that is exactly marginal using one of the above transformations, we need to start with an operator of spin J=0,± 1. While there are a multitude of reasons to consider scalars[In addition to the shadow transform, marginal operators could have been constructed from scalars by performing either 𝒮_Δ or 𝒮_J. While 𝒮_Δ is indistinguishable from 𝒮 in this case, 𝒮_J is distinct and it might be interesting to see what the implications of that marginal operator is along with those already considered.], for the sake of this work, we construct marginal operators starting from positive and negative helicity soft gluons. Starting with a positive helicity soft gluon R^+,a, we can either perform ℒ^- or ℛ to obtain a marginal operator. We will denote these as ℳ_ℒ^-(z,z̅) ≡ -1/√(2)T^aℒ^-[R^+,a] = T^a/√(2π)∫dw̅/(z̅-w̅)^2R^+,a(z,w̅)ℳ_ℛ(z,z̅) ≡ 1/2π i√(π/2)T^aℛ[R^+,a] = iT^a/(2π)^3/2∫d^2w/(z-w)(z̅-w̅)^2R^+,a(w,w̅). Likewise, starting with a negative helicity soft gluon R^-,a we can either perform ℒ^+ or ℛ to obtain a marginal operator. We will denote these as ℳ_ℒ^+(z,z̅) ≡ 1/√(2)T^aℒ^+[R^-,a] = T^a/√(2π)∫dw/(z-w)^2R^-,a(w,z̅)ℳ_ℛ(z,z̅) ≡ -1/2π i√(π/2)T^aℛ[R^-,a] = iT^a/(2π)^3/2∫d^2w/(z-w)^2(z̅-w̅)R^-,a(w,w̅). The normalizations in these definitions have been chosen in order to fix the overall constant that appears in the two point functions. 
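Because the eight transformations are characterized entirely by their action on the weights, the claimed structure can be checked mechanically. The following sketch (added for illustration; the helper names are ours) verifies that each weight map preserves the quadratic Casimir and that the compositions quoted above (𝒯_3𝒯_4 = 𝒯_5^2 = 𝒯_2𝒯_8 = 𝒮 and 𝒯_2^2 = 𝒯_5𝒯_6 = 𝕀) hold at the level of the weights:

```python
# Illustrative check (added here, not part of the original derivation): the eight
# weight maps T_i preserve the quadratic Casimir C_2 = 2(h(h-1) + hb(hb-1)),
# and their compositions close as quoted in the text.
import sympy as sp

h, hb = sp.symbols('h hb')

def casimir(weights):
    a, b = weights
    return 2*(a*(a - 1) + b*(b - 1))

T = {
    1: lambda h, hb: (h, hb),          # identity
    2: lambda h, hb: (hb, h),          # spin shadow S_J
    3: lambda h, hb: (1 - h, hb),      # left light transform L^+
    4: lambda h, hb: (h, 1 - hb),      # right light transform L^-
    5: lambda h, hb: (1 - hb, h),      # half-shadow R
    6: lambda h, hb: (hb, 1 - h),      # other half-shadow
    7: lambda h, hb: (1 - h, 1 - hb),  # shadow S
    8: lambda h, hb: (1 - hb, 1 - h),  # dimension shadow S_Delta
}

# Casimir invariance of every T_i
for f in T.values():
    assert sp.simplify(casimir(f(h, hb)) - casimir((h, hb))) == 0

def compose(i, j):
    """Weights of T_i T_j [O], i.e. apply T_j first, then T_i."""
    return T[i](*T[j](h, hb))

def same(w1, w2):
    return all(sp.simplify(a - b) == 0 for a, b in zip(w1, w2))

shadow, identity = T[7](h, hb), (h, hb)
assert same(compose(3, 4), shadow)    # L^+ L^- = S
assert same(compose(5, 5), shadow)    # R^2 = S
assert same(compose(2, 8), shadow)    # S_J S_Delta = S
assert same(compose(2, 2), identity)  # S_J^2 = 1
assert same(compose(5, 6), identity)  # the two half-shadows are inverses
print("Casimir invariance and D_8 composition rules verified")
```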
Scaling the operators arbitrarily will not affect the conclusions but will rather affect the actual explicit values of the quantities we will calculate. While light transformations have been studied in detail from the perspective of celestial CFTs, the half-shadow transformations have not. Naively, it does not look like the half-shadow transforms will result in operators that are isomorphic to the light transformed operators, namely because one is a 1D integral and the other is a 2D integral, so it is interesting to see what the implications of this extra set of transformations is on the boundary theory. One can do the explicit transformation of the conformal primary wavefunctions for gluons to try and understand these operators from the bulk perspective. The results are akin to those in <cit.> and the functional form is not particular enlightening. However, it might be relevant that the gluon we begin with is the Δ=1 soft mode. In <cit.> this was denoted as the Goldstone mode and is identified as a pure gauge primary. This may be important for understanding this story from the bulk however we do not explore that further in this paper. Our goal is to extract any information that we can about the geometry of the space of CFTs. In <cit.> the authors looked at the four point function of marginal operators which encoded some non-singular data from the OPE, mainly the curvature of the manifold. However, for a given space the most basic data is the metric which is given by the two-point functions which we calculate first. Our starting point will be the color-ordered amplitudes in 4D Yang-Mills. We will denote celestial amplitudes as 𝒜^a_1⋯ a_n_Δ_1⋯Δ_n(z⃗_1,⋯,z⃗_n) where it is understood that the first two operators are the positive helicity operators and the rest are negative helicity. Correlators of marginal operators will be found by transforming the celestial amplitudes in accordance with (<ref>) and (<ref>). It should be noted that one could have also considered the MHV amplitudes where there are two negative helicity operators and the rest positive. The calculations for those will proceed analogously and we will combine the results from both sets of amplitudes. § METRIC FROM TWO POINT FUNCTION From (<ref>), we see that the metric of the conformal manifold can be extracted from the two point function of marginal operators. Therefore, first we will look at the possible two point functions that we can construct using both the light transform and the half-shadow transforms. The celestial two point function of a positive and negative helicity gluon is <cit.> 𝒜_Δ_1,Δ_2^ab(z⃗_1,z⃗_2) = 2πδ(Δ_1+Δ_2-2)δ^abδ(z_12)δ(z̅_12). Taking the limit as the dimension of both operators goes to Δ=1 gives the following soft two point function ⟨ R^+,aR^-,b⟩ = lim_ε→ 02πδ^abεδ(ε)δ(z_12)δ(z̅_12) = 2πδ^abδ^(2)(z_12). Before moving on, it is important to outline how the soft limit was taken. The definition of soft operators in (<ref>) suggests that in taking the soft limit of this two point function, we would get two factors of ε, one for each operator which would render this two point function zero. We take the approach where we first take one operator to be soft, just by setting the dimension to 1 and then the second operator comes with a factor of ε which cancels out the singularity from the delta function. This is equivalent to defining the operators as residues around negative conformal dimension as in  <cit.>. In what follows, this is how we will take the soft limits of two and three point functions. 
Additionally, as in <cit.>, we will take the residues of the coefficients using the relation _Δ=-nΓ(Δ) = (-1)^n/n!. There are four possible two point functions that we need to consider, given that there are two ways in which we can transform each helicity operator. We list them below ⟨ℳ_ℒ^-ℳ_ℒ^+⟩ = T^aT^b/2π∫dw̅_1dw_2⟨ R^+,a(z_1,w̅_1)R^-,b(w_2,z̅_2)⟩/(z̅_1-w̅_1)^2(z_2-w_2)^2 = (N^2-1)/2z_12^2z̅_12^2⟨ℳ_ℒ^-ℳ_ℛ⟩ = iT^aT^b/(2π)^2∫dw̅_1d^2w_2⟨ R^+,a(z_1,w̅_1)R^-,b(w_2,w̅_2)⟩/(z̅_1-w̅_1)^2(z_2-w_2)^2(z̅_2-w̅_2)= (N^2-1)/2z_12^2z̅_12^2⟨ℳ_ℛℳ_ℒ^+⟩ = iT^aT^b/(2π)^2∫d^2w_1dw_2⟨ R^+,a(w_1,w̅_1)R^-,b(w_2,z̅_2)⟩/(z_1-w_1)(z̅_1-w̅_1)^2(z_2-w_2)^2 = (N^2-1)/2z_12^2z̅_12^2⟨ℳ_ℛℳ_ℛ⟩ = -T^aT^b/(2π)^3∫d^2w_1d^2w_2⟨ R^+,a(w_1,w̅_1)R^-,b(w_2,w̅_2)⟩/(z_1-w_1)(z̅_1-w̅_1)^2(z_2-w_2)^2(z̅_2-w̅_2)= -(N^2-1)/2z_12^2z̅_12^2 where we have used the convention for SU(N) generators in the defining representation, that T^aT^a = N^2-1/2N𝕀 and 𝕀 = N. Some pertinent details of these computations are done in Appendix <ref>. Comparing to (<ref>) we can identify the metric on the conformal manifold as g_IJ = N^2-1/2[ 0 1 1 0 1 0 0 1 1 0 0 -1 0 1 -1 0 ], {I,J}∈{ℒ^-,ℒ^+,ℛ,ℛ}. Again, the metric should be understood as g_IJ(λ)|_λ=0. The first aspect to notice is that the metric is inherently not diagonal. Given that the two point function pairs opposite helicity gluons, that is not surprising. It is, however, possible to diagonalize the metric and we see that the following linear combinations of operators will give rise to a diagonal metric 𝒪 = ℳ_ℒ^-±√(2)ℳ_ℒ^++ℳ_ℛ, ±√(2)ℳ_ℒ^-+ℳ_ℒ^+ + ℳ_ℛ. With the exception of the presence of the half-shadow transformed fields, linear combinations of this type are reminiscent of <cit.>. Regardless of the choice of linear combination we see that the conformal manifold is four dimensional. In the next section, we will discuss the three point functions which give rise to the leading order term in the OPE of marginal operators and whose coefficients correspond to the connections on this manifold. § CONNECTION FROM THREE-POINT FUNCTION The connections can be obtained from the leading term in the OPE which comes from the three point functions. The celestial three point function of two positive helicity gluons and one negative helicity gluon is <cit.> 𝒜_Δ_1,Δ_2,Δ_3^abc(z⃗_1,z⃗_2,z⃗_3) = 2iπ f^abcδ(Δ_1+Δ_2+Δ_3-3)Θ(z̅_13/z̅_12)Θ(z̅_32/z̅_12)δ(z_13)δ(z_23)/z̅_12^-Δ_3z̅_32^2-Δ_1z̅_13^2-Δ_2. For each helicity there are two possible transformations that give a marginal operator. Therefore there are 2^3=8 possible three point functions of marginal operators that originate from this three point function. They are as follows ⟨ℳ_ℒ^-(z_1,z̅_1)ℳ_ℒ^-(z_2,z̅_2)ℳ_ℒ^+(z_3,z̅_3)⟩ = -N(N^2-1)/4(2π)^1/2δ(z_12)δ(z̅_12)/z_32^2z̅_32^2⟨ℳ_ℛ(z_1,z̅_1)ℳ_ℒ^-(z_2,z̅_2)ℳ_ℒ^+(z_3,z̅_3)⟩ = -iN(N^2-1)/4(2π)^3/2δ(z̅_12)/z_121/z_32^2z̅_32^2⟨ℳ_ℒ^-(z_1,z̅_1)ℳ_ℛ(z_2,z̅_2)ℳ_ℒ^+(z_3,z̅_3)⟩ = -iN(N^2-1)/4(2π)^3/2δ(z̅_12)/z_211/z_31^2z̅_32^2⟨ℳ_ℛ(z_1,z̅_1)ℳ_ℛ(z_2,z̅_2)ℳ_ℒ^+(z_3,z̅_3)⟩ = -iN(N^2-1)/4(2π)^3/2δ(z_12)δ(z̅_12)/z̅_32^2z_32^2⟨ℳ_ℒ^-(z_1,z̅_1)ℳ_ℒ^-(z_2,z̅_2)ℳ_ℛ(z_3,z̅_3)⟩ = -N(N^2-1)/4(2π)^1/2δ(z_12)δ(z̅_12)/z_32^2z̅_32^2⟨ℳ_ℛ(z_1,z̅_1)ℳ_ℒ^-(z_2,z̅_2)ℳ_ℛ(z_3,z̅_3)⟩ = - iN(N^2-1)/4(2π)^3/2δ(z̅_12)/z_121/z_32^2z̅_32^2⟨ℳ_ℒ^-(z_1,z̅_1) ℳ_ℛ(z_2,z̅_2)ℳ_ℛ(z_3,z̅_3)⟩ = - iN(N^2-1)/4(2π)^3/2δ(z̅_12)/z_211/z_31^2z̅_32^2⟨ℳ_ℛ(z_1,z̅_1)ℳ_ℛ(z_2,z̅_2)ℳ_ℛ(z_3,z̅_3)⟩ = - iN(N^2-1)/4(2π)^3/2δ(z_21)δ(z̅_21)/z_32^2z̅_32^2 where we have used the convention (f^abcT^aT^bT^c) = iN/4(N^2-1). Pertinent details of these computations are contained in Appendix <ref>. 
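The color factors entering these normalizations (with traces over color indices understood, i.e. Tr 𝟙 = N and Tr(f^abcT^aT^bT^c) = iN(N^2-1)/4) can be checked explicitly in a small example. The snippet below is an added illustration (not from the paper) for N=2, where T^a = σ^a/2 and f^abc = ε^abc:

```python
# Added illustration (not from the paper): check the SU(N) color conventions for
# N = 2, taking T^a = sigma^a/2 and f^{abc} = epsilon^{abc}.
import sympy as sp
from itertools import product

N = 2
sigma = [sp.Matrix([[0, 1], [1, 0]]),
         sp.Matrix([[0, -sp.I], [sp.I, 0]]),
         sp.Matrix([[1, 0], [0, -1]])]
T = [s / 2 for s in sigma]

def eps(a, b, c):
    # Levi-Civita symbol on indices 1, 2, 3
    return sp.Rational((a - b) * (b - c) * (c - a), 2)

# T^a T^a = (N^2 - 1)/(2N) * identity, with Tr(identity) = N
quad = sum((T[a] * T[a] for a in range(3)), sp.zeros(2, 2))
assert quad == sp.Rational(N**2 - 1, 2 * N) * sp.eye(2)
assert sp.eye(2).trace() == N

# Tr(f^{abc} T^a T^b T^c) = i N (N^2 - 1)/4
cubic = sum(eps(a + 1, b + 1, c + 1) * (T[a] * T[b] * T[c]).trace()
            for a, b, c in product(range(3), repeat=3))
assert sp.simplify(cubic - sp.I * N * (N**2 - 1) / 4) == 0
print("SU(2) color factors match the quoted conventions")
```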
We observe that while all of these three point functions are distributional, which was expected given the properties of marginal operators we previously discussed, some of them look like δ^(2)(z_12) while others look like δ(z̅_12)/z_12. The first type is consistent with the usual discussion in Euclidean CFTs <cit.> while the second type is an artifact of celestial CFT three-point functions being Lorentzian.[We thank Sabrina Pasterski for pointing this out.] One way to obtain the connections Γ_IJ^K is to determine the leading term in the OPE by comparing the three point functions to the two point functions. This would be consistent with the method traditionally used to compute OPE coefficients in celestial CFT <cit.>. However, the argument for the possible operators that appear in the OPE comes from knowing the possible three point interactions in the bulk theory. In this case, all that tells us is that a gluon has to appear but we have no way of determining from the bulk, which marginal operators should appear since they are rather indistinguishable from the perspective of the 4D theory. Therefore there will be an ambiguity if we look at the three point functions and try to deduce what the terms in the OPE are since we cannot determine whether ℳ_ℒ^-ℳ_ℒ^-→ℳ_ℒ^- or if ℳ_ℒ^-ℳ_ℒ^-→ℳ_ℛ since the two point functions are exactly alike. Instead, we can find the connections by integrating the possible three point functions as per (<ref>) and (<ref>). Therefore, from the three point functions we obtain the following derivatives of the metric components ∂ g_ℛℛ/∂λ^ℛ = ∂ g_ℛℒ^+/∂λ^ℛ= -iN(N^2-1)/4(2π)^3/2∂ g_ℒ^-ℒ^+/∂λ^ℒ^- = ∂ g_ℒ^-ℛ/∂λ^ℒ^-=∂ g_ℛℒ^+/∂λ^ℒ^- =∂ g_ℒ^-ℒ^+/∂λ^ℛ =∂ g_ℛℛ/∂λ^ℒ^- =∂ g_ℒ^-ℛ/∂λ^ℛ = - N(N^2-1)/4√(2π). It is important to note that since the metric is symmetric, i.e we can exchange the two operators in the two point function and it does not change, these derivatives are also symmetric in the exchange of the metric index labels. Substituting these into the expression for the connections will give us the following non-zero connections Γ^ℒ^-_ℒ^-ℒ^- = Γ^ℒ^-_ℒ^-ℛ = -N/2√(2π), Γ^ℒ^-_ℛℛ = -iN/4√(2)π^3/2. As we noted before, we could have chosen to work with the MHV amplitudes instead, which have basically the same structure as the amplitudes. In fact, in order to get the full set of connections we should include those as well which can be deduced from the above to be Γ^ℒ^+_ℒ^+ℒ^+ = Γ^ℒ^+_ℒ^+ℛ = -N/2√(2π), Γ^ℒ^+_ℛℛ = -iN/4√(2)π^3/2. It is evident that we do not get any connections that mix opposite helicity operators. This is not surprising because the two-point function of same-helicity gluons vanishes. We will discuss this point in more detail in the next section. Now that we have the connections and the metric components, we can write out the operator product expansion of the marginal operators without being concerned about ambiguities. This gives us the following putative OPEs of the marginal operators ℳ_ℒ^-(z,z̅)ℳ_J(w,w̅) ∼ (N^2-1)(δ_Jℒ^++δ_Jℛ)/2|z-w|^4 -N/2√(2π)δ_Jℒ^-ℳ_ℒ^-(w,w̅)δ^(2)(z-w) - N/2√(2π)δ_Jℛℳ_ℛ(w,w̅)δ^(2)(z-w)ℳ_ℒ^+(z,z̅)ℳ_J(w,w̅) ∼ (N^2-1)(δ_Jℒ^-+δ_Jℛ)/2|z-w|^4 -N/2√(2π)δ_Jℒ^+ℳ_ℒ^+(w,w̅)δ^(2)(z-w) - N/2√(2π)δ_Jℛℳ_ℛ(w,w̅)δ^(2)(z-w)ℳ_ℛ(z,z̅)ℳ_J(w,w̅) ∼ (N^2-1)(δ_Jℒ^--δ_Jℛ)/2|z-w|^4-N/2√(2π)δ_Jℒ^+ℳ_ℒ^+(w,w̅)δ^(2)(z-w) - iN/4π√(2π)δ_Jℛℳ_ℛ(w,w̅)δ^(2)(z-w)ℳ_ℛ(z,z̅)ℳ_J(w,w̅) ∼ (N^2-1)(δ_Jℒ^+-δ_Jℛ)/2|z-w|^4 -N/2√(2π)δ_Jℒ^-ℳ_ℒ^-(w,w̅)δ^(2)(z-w) - iN/4π√(2π)δ_Jℛℳ_ℛ(w,w̅)δ^(2)(z-w). 
As we mentioned before, we have specifically chosen not to work in local coordinates where the connections vanish. If we had, all the distributional terms in the above OPEs would vanish. While that would be perfectly fine for understanding the OPEs of the marginal operators, the presence of the distributional terms above are important for extrapolating the operators appearing in the soft OPEs. Since the marginal operators are transformed soft gluons, in principle one can backtrack and deduce what soft operators we expect to find in their OPEs. Unsurprisingly what we find is that these terms are consistent with the OPEs that we already have of soft gluons because they say that ++→ + and –→ -. § DISCUSSION In <cit.> the authors focused on extracting the curvature of the manifold from the four point function of marginal operators. However, the most fundamental description of the geometry comes from the metric which one can deduce from the two point function. In the case of Yang-Mills, we find that there exists a set of marginal operators giving us four independent directions that we can travel on the conformal manifold. While we are fairly certain that there are no more than four directions, since that would imply being able to construct some new set of marginal operators, it is not clear if the above analysis gives us the full metric. The components of the metric we determined were based solely on the existence of non-zero two point functions. While it is true that the only non-vanishing two point function is between opposite helicity gluons, it is not clear whether that necessarily implies the non-existence of a two point function between ℒ^- and ℒ^-, for instance. In fact, starting from the three point functions one could ask if the metric derivatives we wrote down are the only ones that exist. Since the definition of ∂ g_IJ/∂λ^K, takes the three point function where ℳ_K is a function of the variables (z,z̅) we integrate over, if we take the same three point function and set a different operator to have the argument (z,z̅), that will lead to a different combination of indices like ∂ g_IK/∂λ^J, for instance. However, if we look at the three point functions they all contain δ(z̅_12), which means that if we set the argument of the third operator to be the one integrated over, the delta function will be δ(1) which is 0. Therefore, given the set of three point functions, we have derived the only non-zero metric derivatives that exist. It is additionally interesting to speculate on the connections to the statements made in <cit.> where they mention how to understand the geometric soft theorems in gauge theory and gravity by relating them to a sigma model. The authors propose a way to tack on the necessary kinematic structure to the nonlinear sigma model field that is consistent with gauge theory and gravity. While the authors have expressions for the metric, connections and curvature it is not clear how those explicitly relate to what we have found in this work. There are small ingredients that are similar, like the existence of δ^ab in the metric and f^abc in the connection, but those could just be a result of working with quantities in a theory of Yang-Mills. Additionally, the authors of <cit.> work specifically with Yang-Mills in Feynman gauge and the expressions they propose for the metric and connections are gauge dependent while in this paper we have made statements that are all agnostic to the gauge. 
It would be incredibly useful if we could completely decouple from the bulk theory but at the present moment, that seems rather difficult. Simultaneously, it is useful to point out a subtlety when comparing this work to that done in <cit.>. In both of the previous papers, there was a known set of vacua in the bulk spacetime so that moving along the conformal manifold was understood as moving from one vacuum state to another. In <cit.> since the un-shadowed operators were scalars, the shadowed operators were exactly marginal, the boundary symmetries were preserved and marginal deformations corresponded to moving around in the space of vacua of the bulk sigma model. In <cit.> the operators were not marginal in the traditional sense and deforming by these operators actually shifted from one copy of the boundary conformal group to another. Therefore, strictly speaking, the conformal symmetry was broken but then restored which allowed for a deformation of this type that was not quite marginal.[We thank Daniel Kapec for highlighting this point.] In this work, the operators are marginal but do not have an interpretation as moving around in the space of vacua corresponding to large gauge transformations. However, a discussion of this type involving the geometry of a conformal manifold suggests that there is such an interpretation. At this time, we do not understand what space of vacua this could describe but it is definitely a very interesting question left for future work. Lastly, we should discuss the case of gravity since that is what we endeavor to understand. Ideally, we would like to understand marginal operators in the case where the bulk theory was a theory of quantum gravity, however it seems like if you begin with an operator of spin J=± 2, then its dimension must be complex in order to transform it to a marginal operator. Therefore, this particular logic does not work for the graviton case but it is possible that there is some way to shift the spin or dimension, maybe via the use of a descendent rather than a primary or by considering composite operators, in order to discuss marginal operators in that case. We also leave this for future exploration. § ACKNOWLEDGEMENTS I am grateful to Erin Crawley, Dan Kapec, Rajamani Narayanan, Sabrina Pasterski and Ana-Maria Raclariu for useful discussions. Additional thanks to Sabrina Pasterski and Dan Kapec for looking at a preliminary draft. The author acknowledges support by the Celestial Holography Initiative at the Perimeter Institute for Theoretical Physics and by the Simons Collaboration on Celestial Holography. The author's research at the Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Industry Canada and by the Province of Ontario through the Ministry of Colleges and Universities. § ACTION OF COMPOSITE INTEGRAL TRANSFORMS In this appendix we will compute the action of a composite integral transforms on an operator. We will consider arbitrary transformations so that we can use this to solve for the forms of explicit transformations. Let us take integral transforms of the following form 𝒯_i[𝒪](w,w̅) = 𝒞^i_h,h̅∫d^2 z/(w-z)^f_i(h,h̅)(w̅-z̅)^g_i(h,h̅)𝒪_h,h̅(z,z̅) where the initial operator has weights (h,h̅) and the transformed operator has weights (h_i,h̅_i). The action of a composite operator is 𝒯_i𝒯_j[𝒪](w,w̅) = 𝒞^i_h_j,h̅_j𝒞^j_h_,h̅∫d^2z_2d^2z_1𝒪_h,h̅(z_1,z̅_1)/(w-z_2)^f_i(h_j,h̅_j)(w̅-z̅_2)^g_i(h_j,h̅_j)(z_2-z_1)^f_j(h,h̅)(z̅_2-z̅_1)^g_j(h,h̅). 
In order to determine whether this composite transformation is equivalent to another transformation, we need to perform the intermediary integral which is the one over z_2,z̅_2. We can first rewrite it as ℐ = ∫d^2z_2(-1)^f_j(h,h̅)+g_j(h,h̅)/(w-z_2)^f_i(h_j,h̅_j)(w̅-z̅_2)^g_i(h_j,h̅_j)(z_1-z_2)^f_j(h,h̅)(z̅_1-z̅_2)^g_j(h,h̅) = (-1)^f_j(h,h̅)+g_j(h,h̅)∫d^2z_2 (w-z_2)^g_i(h_j,h̅_j)-f_i(h_j,h̅_j)(z_1-z_2)^g_j(h,h̅)-f_j(h,h̅)/|w-z_2|^2g_i(h_j,h̅_j)|z_1-z_2|^2g_j(h,h̅). Then we can inverse Fourier transform twice to put it in the following form ℐ = (-1)^f_j(h,h̅)+g_j(h,h̅)∫ d^2 z_2∫d^2k_1/(2π)^2d^2k_2/(2π)^2∫ d^2x_1 d^2 x_2 x_1^g_i(h_j,h̅_j)-f_i(h_j,h̅_j)x_2^g_j(h,h̅)-f_j(h,h̅)/(x_1x̅_1)^g_i(h_j,h̅_j)(x_2x̅_2)^g_j(h,h̅) × exp[ik_1·(x_1-w+z_2)+ik_2·(x_2-z_1+z_2)]. The integral over z_2,z̅_2 yields a delta function which can be used to do one of the momentum integrals to give ℐ = 2(-1)^f_j(h,h̅)+g_j(h,h̅)∫d^2k_1/(2π)^2e^-ik_1·(w-z_1)∫ d^2x_1 d^2 x_2 x_1^g_i(h_j,h̅_j)-f_i(h_j,h̅_j)x_2^g_j(h,h̅)-f_j(h,h̅)e^ik_1· x_12/(x_1x̅_1)^g_i(h_j,h̅_j)(x_2x̅_2)^g_j(h,h̅). Next we can use Schwinger parametrization to rewrite the denominator and then perform the x_i integrals ℐ = 2(-1)^f_j(h,h̅)+g_j(h,h̅)/Γ(g_i(h_j,h̅_j))Γ(g_j(h,h̅))∫d^2k_1/(2π)^2e^-ik_1·(w-z_1)∫_0^∞ dβ_1 dβ_2 β_1^g_i(h_j,h̅_j)-1β_2^g_j(h,h̅)-1 × ∫ d^2x_1 d^2 x_2x_1^g_i(h_j,h̅_j)-f_i(h_j,h̅_j)x_2^g_j(h,h̅)-f_j(h,h̅)e^-β_1x_1x̅_1-β_2x_2x̅_2+ik_1·(x_1-x_2) = 2π^2i^g_i(h_j,h̅_j)-f_i(h_j,h̅_j)+g_j(h,h̅)-f_j(h,h̅)/Γ(g_i(h_j,h̅_j))Γ(g_j(h,h̅))∫d^2k_1/(2π)^2e^-ik_1·(w-z_1) × k̅_1^g_i(h_j,h̅_j)-f_i(h_j,h̅_j)+g_j(h,h̅)-f_j(h,h̅)∫_0^∞ dβ_1 dβ_2 β_1^f_i(h_j,h̅_j)-2β_2^f_j(h,h̅)-2e^-k_1k̅_1/β_1-k_1k̅_1/β_2 = π^2Γ(1-f_i(h_j,h̅_j))Γ(1-f_j(h,h̅))i^g_i(h_j,h̅_j)-f_i(h_j,h̅_j)+g_j(h,h̅)-f_j(h,h̅)/2Γ(g_i(h_j,h̅_j))Γ(g_j(h,h̅)) × ∫d^2k_1/(2π)^2e^-ik_1·(w-z_1)k_1^f_i(h_j,h̅_j)+f_j(h,h̅)-g_i(h_j,h̅_j)-g_j(h,h̅)/(k_1k̅_1)^2-g_i(h_j,h̅_j)-g_j(h,h̅) = π(-1)^f_i(h_j,h̅_j)+f_j(h,h̅)-g_i(h_j,h̅_j)-g_j(h,h̅)Γ(1-f_i(h_j,h̅_j))Γ(1-f_j(h,h̅))/Γ(g_i(h_j,h̅_j))Γ(g_j(h,h̅))Γ(2-g_i(h_j,h̅_j)-g_j(h,h̅)) × ∫_0^∞ dββ^-f_i(h_j,h̅_j)-f_j(h,h̅)(w̅-z̅_1)^f_i(h_j,h̅_j)+f_j(h,h̅)-g_i(h_j,h̅_j)-g_j(h,h̅)e^-|w-z_1|^2/β = π(-1)^f_i(h_j,h̅_j)+f_j(h,h̅)-g_i(h_j,h̅_j)-g_j(h,h̅)Γ(1-f_i(h_j,h̅_j))Γ(1-f_j(h,h̅))/Γ(g_i(h_j,h̅_j))Γ(g_j(h,h̅)) × Γ(f_i(h_j,h̅_j)+f_j(h,h̅)-1)/Γ(2-g_i(h_j,h̅_j)-g_j(h,h̅)) (w-z_1)^1-f_i(h_j,h̅_j)-f_j(h,h̅)(w̅-z̅_1)^1-g_i(h_j,h̅_j)-g_j(h,h̅) One also has to fix the normalization of the operators. For the operators 𝒯_i, they have to satisfy the following relations 𝒯_2^2 = 𝒯_3^2 = 𝒯_4^2 = 𝒯_5𝒯_6 = 𝒯_7^2 = 𝒯_8^2 = 𝕀𝒯_3𝒯_4 = 𝒯_5^2 = 𝒯_6^2 = 𝒯_2𝒯_8 = 𝒯_7 ≡𝒮. We see that a product of operators is constrained either to be the shadow or the identity depending on the operators in question. If the product of two operators gives the identity then the functions must satisfy f_i(h_j,h̅_j)+f_j(h,h̅) = g_i(h_j,h̅_j) + g_j(h,h̅) = 2. Then we have [From <cit.> there are actually two possible choices for this integral product but we have chosen the one that allows us to fulfill the other constraints simultaneously.] 𝒯_i𝒯_j[𝒪](w,w̅) = 𝒞^i_h_j,h̅_j𝒞^j_h_,h̅∫d^2z_2d^2z_1𝒪_h,h̅(z_1,z̅_1)/(w-z_2)^f_i(h_j,h̅_j)(w̅-z̅_2)^g_i(h_j,h̅_j)(z_2-z_1)^f_j(h,h̅)(z̅_2-z̅_1)^g_j(h,h̅) = 𝒞^i_h_j,h̅_j𝒞^j_h_,h̅π^2 Γ(1-g_i(h_j,h̅_j))Γ(1-g_j(h,h̅)/Γ(f_i(h_j,h̅_j))Γ(f_j(h,h̅))∫ d^2z_1δ^(2)(w-z_1)𝒪_h,h̅(z_1,z̅_1) = 𝒞^i_h_j,h̅_j𝒞^j_h_,h̅π^2 Γ(1-g_i(h_j,h̅_j))Γ(1-g_j(h,h̅)/Γ(f_i(h_j,h̅_j))Γ(f_j(h,h̅))𝒪_h,h̅(w,w̅). 
This implies that the normalizations should satisfy the following equation 𝒞^i_h_j,h̅_j𝒞^j_h,h̅ = Γ(f_i(h_j,h̅_j))Γ(f_j(h,h̅))/π^2Γ(1-g_i(h_j,h̅_j))Γ(1-g_j(h,h̅)). It is easy to see that in the case of the identity the normalizations are 𝒞^i_h,h̅ = Γ(f_i(h,h̅))/πΓ(1-g_i(h,h̅)) and this is true for all i∈{2,⋯,8}. If the product of two operators is the shadow, the powers have to satisfy f_i(h_j,h̅_j)+f_j(h,h̅)-1 = 2-2h g_i(h_j,h̅_j)+g_j(h,h̅)-1 = 2-2h̅. Therefore, we know that the normalizations should satisfy the following equation 𝒞^i_h_j,h̅_j𝒞^j_h,h̅πΓ(1-f_i(h_j,h̅_j))Γ(1-f_j(h,h̅))Γ(2-2h)/Γ(g_i(h_j,h̅_j))Γ(g_j(h,h̅))Γ(2h̅-1) = Γ(2-2h)/πΓ(2h̅-1). which we can see is satisfied using (<ref>). §.§ Solving for 𝒯_5 In addition to the normalizations we need to solve for the powers when we do not know them. We do one example here to illustrate. The transformation 𝒯_5 squares to the shadow. We can solve for it in the following way. We know that 𝒯_5 takes the weights to (1-h̅,h). Which means 𝒯_5^2[𝒪](w,w̅) = 𝒞^5_1-h̅,h𝒞^5_h,h̅(-1)^f_5(1-h̅,h)+f_5(h,h̅)-g_5(1-h̅,h)-g_5(h,h̅)Γ(1-f_5(1-h̅,h))Γ(1-f_5(h,h̅))/2^3(2π)^2Γ(g_5(1-h̅,h))Γ(g_5(h,h̅)) × Γ(f_5(1-h̅,h)+f_5(h,h̅)-1)/Γ(2-g_5(1-h̅,h)-g_5(h,h̅)) × ∫d^2z_1𝒪_h,h̅(z_1,z̅_1)/(w-z_1)^f_5(1-h̅,h)+f_5(h,h̅)-1(w̅-z̅_1)^g_5(1-h̅,h)+g_5(h,h̅)-1 ≡ Γ(2-2h̅)/πΓ(2h-1)∫d^2z_1𝒪_h,h̅(z_1,z̅_1)/(w-z_1)^2-2h(w̅-z̅_1)^2-2h̅. Letting f_5(h,h̅) = a_1h+a_2h̅+a_3 and g_5(h,h̅) = b_1h+b_2h̅+b_3 we get the following equations a_1(1-h̅)+a_2h+a_3 + a_1h+a_2h̅+a_3 -1 = (a_1+a_2)h + (a_2-a_1)h̅+2a_3+a_1-1 =2-2h → a_1=-1,a_2=-1,a_3 = 2 b_1(1-h̅)+b_2h+b_3+b_1h+b_2h̅+b_3-1 = (b_1+b_2)h + (b_2-b_1)h̅ + 2b_3+b_1-1 =2-2h̅ → b_1=1, b_2=-1, b_3=1. Therefore f_5(h,h̅) = -h-h̅+2, g_5(h,h̅) = h-h̅+1. § CALCULATION OF CORRELATORS In this appendix we explicitly calculate the two and three point functions of marginal operators. An important integral relation that we will use is <cit.> ∫_ℝdz z^-a(1-z)^-b = 2π i/(1-a-b)B(a,b). The two point functions are ⟨ℳ_ℒ^-ℳ_ℛ⟩ = ∫dw̅_1d^2w_2δ(z_1-w_2)δ(w̅_1-w̅_2)/(z̅_1-w̅_1)^2(z_2-w_2)^2(z̅_2-w̅_2) = 1/z_12^2z̅_12^2∫ dw̅_1w̅_1^-2(1-w̅_1)^-1 = -2π i/z_12^2z̅_12^2 ⟨ℳ_ℛℳ_ℒ^+⟩ = ∫d^2w_1dw_2δ(w_1-w_2)δ(w̅_1-z̅_2)/(z_1-w_1)(z̅_1-w̅_1)^2(z_2-w_2)^2 = 1/z_12^2z̅_12^2∫ dw_2w_2^-2(1-w_2)^-1 = -2π i/z_12^2z̅_12^2 ⟨ℳ_ℛℳ_ℛ⟩ = ∫d^2w_1d^2w_2δ(w_1-w_2)δ(w̅_1-w̅_2)/(z_1-w_1)(z̅_1-w̅_1)^2(z_2-w_2)^2(z̅_2-w̅_2) = -1/z_21^2z̅_21^2∫ dw_1 w_1^-1(1-w_1)^-2∫ dw̅_1w̅_1^-2(1-w̅_1)^-1= -(2π i)^2/z_21^2z̅_21^2. For the three point functions we will need this general integral ℐ_αβγ = ∫dw̅_1/(z̅_1-w̅_1)^αdw̅_2/(z̅_2-w̅_2)^βdw_3/(z_3-w_3)^γΘ(w̅_13/w̅_12)Θ(w̅_32/w̅_12)δ(w_13)δ(w_23)/w̅_12^-Δ_3w̅_32^2-Δ_1w̅_13^2-Δ_2 = δ(w_21)/(z_3-w_1)^γ∫dw̅_1/(z̅_1-w̅_1)^αdw̅_2/(z̅_2-w̅_2)^βΘ(w̅_1-w̅_3)Θ(w̅_3-w̅_2)/(w̅_1-w̅_2)^-Δ_3(w̅_3-w̅_2)^2-Δ_1(w̅_1-w̅_3)^2-Δ_2 = (-1)^αδ(w_21)/(z_3-w_1)^γ∫dw̅_1/(w̅_1+w̅_3-z̅_1)^αdw̅_2/(w̅_2-w̅_3+z̅_2)^βΘ(w̅_1)Θ(w̅_2)/(w̅_1+w̅_2)^-Δ_3w̅_2^2-Δ_1w̅_1^2-Δ_2 = (-1)^αδ(w_21)/(z_3-w_1)^γ∑_k=-∞^∞Γ(1+Δ_3)/Γ(k+1)Γ(1+Δ_3-k)∫_0^∞dw̅_1w̅_1^k-2+Δ_2/(w̅_1+w̅_3-z̅_1)^α∫_0^∞dw̅_2w̅_2^Δ_3-k-2+Δ_1/(w̅_2-w̅_3+z̅_2)^β = (-1)^αδ(w_21)/(z_3-w_1)^γ∑_k=-∞^∞Γ(1+Δ_3)/Γ(k+1)Γ(1+Δ_3-k) × (w̅_3-z̅_1)^k-1+Δ_2-αΓ(k-1+Δ_2)Γ(-k+1-Δ_2+α)/Γ(α) × (z̅_2-w̅_3)^Δ_3-k-1+Δ_1-βΓ(Δ_3-k-1+Δ_1)Γ(1-Δ_3+k-Δ_1+β)/Γ(β). Where we have used the generalized binomial theorem to perform the integrals. We only care about this when Δ_1+Δ_2+Δ_3=3 and α=β=γ=2 ℐ_2,2,2(w_1,z̅_1;w_2,z̅_2;z_3,w̅_3) = δ(w_21)δ(z̅_21)/(z_3-w_1)^2(w̅_3-z̅_1)^2. 
The three point functions are then ⟨ℳ_ℒ^-ℳ_ℒ^-ℳ_ℒ^+⟩ = if^abcT^aT^bT^c/(2π)^1/2ℐ_2,2,2(z_1,z̅_1;z_2,z̅_2;z_3,z̅_3)= if^abcT^aT^bT^c/(2π)^1/2δ(z_12)δ(z̅_12)/z_31^2z̅_31^2⟨ℳ_ℛℳ_ℒ^-ℳ_ℒ^+⟩ = -f^abcT^aT^bT^c/(2π)^3/2∫dw_1/(z_1-w_1)ℐ_2,2,2(w_1,z̅_1;z_2,z̅_2;z_3,z̅_3) = -f^abcT^aT^bT^c/(2π)^3/2δ(z̅_12)/z_121/z_32^2z̅_32^2⟨ℳ_ℒ^-ℳ_ℛℳ_ℒ^+⟩ = -f^abcT^aT^bT^c/(2π)^3/2∫dw_2/(z_2-w_2)ℐ_2,2,2(z_1,z̅_1;w_2,z̅_2;z_3,z̅_3) = -f^abcT^aT^bT^c/(2π)^3/2δ(z̅_12)/z_211/z_31^2z̅_31^2⟨ℳ_ℛℳ_ℛℳ_ℒ^+⟩ = -i f^abcT^aT^bT^c/(2π)^5/2∫dw_1dw_2/(z_1-w_1)(z_2-w_2)ℐ_2,2,2(w_1,z̅_1;w_2,z̅_2;z_3,z̅_3) = -i f^abcT^aT^bT^c/(2π)^5/2δ(z̅_12)1/z̅_31^2∫dw_1/(w_1-z_13)(w_1-z_23)1/w_1^2 = -f^abcT^aT^bT^c/(2π)^3/2δ(z̅_12)1/z̅_31^2z_32^21/z_31∑_k_1=-∞^∞ z_31^-k_1z_32^k_1= -f^abcT^aT^bT^c/(2π)^3/2δ(z_12)δ(z̅_12)/z̅_32^2z_32^2⟨ℳ_ℒ^-ℳ_ℒ^-ℳ_ℛ⟩ = - f^abcT^aT^bT^c/(2π)^3/2∫dw̅_3/(z̅_3-w̅_3)ℐ_2,2,2(z_1,z̅_1;z_2,z̅_2;z_3,w̅_3) = - f^abcT^aT^bT^c/(2π)^3/2δ(z_12)δ(z̅_12)/z_31^2z̅_31^2∫ dw̅_3(1-w̅_3)^-1w̅_3^-2 = if^abcT^aT^bT^c/(2π)^1/2δ(z_12)δ(z̅_12)/z_31^2z̅_31^2⟨ℳ_ℛℳ_ℒ^-ℳ_ℛ⟩ = -i f^abc T^aT^bT^c/(2π)^5/2∫dw_1dw̅_3/(z_1-w_1)(z̅_3-w̅_3)ℐ_2,2,2(w_1,z̅_1;z_2,z̅_2;z_3,w̅_3) = -i f^abc T^aT^bT^c/(2π)^5/2δ(z̅_12)/z_121/z_32^2z̅_31^2∫ dw̅_3(1-w̅_3)^-1w̅_3^-2 = - f^abc T^aT^bT^c/(2π)^3/2δ(z̅_12)/z_121/z_32^2z̅_31^2⟨ℳ_ℒ^-ℳ_ℛℳ_ℛ⟩ = -i f^abcT^aT^bT^c/(2π)^5/2∫dw_2dw̅_3/(z_2-w_2)(z̅_3-w̅_3)ℐ_2,2,2(z_1,z̅_1;w_2,z̅_2;z_3,w̅_3) = -i f^abcT^aT^bT^c/(2π)^5/2δ(z̅_12)/z_211/z_31^2z̅_31^2∫ dw̅_3(1-w̅_3)^-1w̅_3^-2 = - f^abcT^aT^bT^c/(2π)^3/2δ(z̅_12)/z_211/z_31^2z̅_31^2⟨ℳ_ℛℳ_ℛℳ_ℛ⟩ = f^abcT^aT^bT^c/(2π)^7/2∫dw_1dw_2dw̅_3/(z_1-w_1)(z_2-w_2)(z̅_3-w̅_3)ℐ_2,2,2(w_1,z̅_1;w_2,z̅_2;z_3,w̅_3) = - i f^abcT^aT^bT^c/(2π)^5/2δ(z̅_21)/z̅_31^2∫dw_1/(w_1-z_13)(w_1-z_23)w_1^2 = - f^abcT^aT^bT^c/(2π)^3/2δ(z̅_21)/z_32^2z̅_31^21/z_31∑_k_1=-∞^∞ z_31^-k_1z_32^k_1= - f^abcT^aT^bT^c/(2π)^3/2δ(z_21)δ(z̅_21)/z_32^2z̅_31^2 § CHECK OF LIGHT-TRANSFORMED THREE POINT In this section we calculate the light transformed three point function as a check of the computations used in the main text. 
First we calculate the stripped three point function, which does not have the normalizations of the light transform ℒ_3^++-(z⃗_1,z⃗_2,z⃗_3) = 2πδ(Δ_1+Δ_2+Δ_3-3)∫dw̅_1/(z̅_1-w̅_1)^3-Δ_1dw̅_2/(z̅_2-w̅_2)^3-Δ_2dw_3/(z_3-w_3)^3-Δ_3 × Θ(w̅_1-z̅_3/w̅_1-w̅_2)Θ(z̅_3-w̅_2/w̅_1-w̅_2)δ(z_1-w_3)δ(z_2-w_3)/w̅_12^-Δ_3(z̅_3-w̅_2)^2-Δ_1(w̅_1-z̅_3)^2-Δ_2 = 2πδ(Δ_1+Δ_2+Δ_3-3)δ(z_12)/z_32^3-Δ_3∫dw̅_1/(z̅_1-w̅_1)^3-Δ_1dw̅_2/(z̅_2-w̅_2)^3-Δ_2 × Θ(w̅_1-z̅_3/w̅_1-w̅_2)Θ(z̅_3-w̅_2/w̅_1-w̅_2)1/w̅_12^-Δ_3(z̅_3-w̅_2)^2-Δ_1(w̅_1-z̅_3)^2-Δ_2 If w̅_1>w̅_2, then w̅_1>z̅_3 and w̅_2<z̅_3 ℒ_3^++-(z⃗_1,z⃗_2,z⃗_3) = 2πδ(Δ_1+Δ_2+Δ_3-3)δ(z_12)/z_32^3-Δ_3∫dw̅_1/(z̅_1-w̅_1)^3-Δ_1∫dw̅_2/(z̅_2-w̅_2)^3-Δ_2 × 1/w̅_12^-Δ_3(z̅_3-w̅_2)^2-Δ_1(w̅_1-z̅_3)^2-Δ_2Θ(w̅_1-z̅_3)Θ(z̅_3-w̅_2) = 2πδ(Δ_1+Δ_2+Δ_3-3)δ(z_12)/z_32^3-Δ_3∑_k=-∞^∞Γ(Δ_3+1)(-1)^Δ_1-3/Γ(Δ_3-k+1)Γ(k+1) × ∫_0^∞ dw̅_1 w̅_1^Δ_3+Δ_2-2-k(w̅_1-z̅_13)^Δ_1-3∫_0^∞ dw̅_2w̅_2^Δ_1+k-2(z̅_23+w̅_2)^Δ_2-3 = 2πδ(Δ_1+Δ_2+Δ_3-3)δ(z_12)/z_32^3-Δ_3∑_k=-∞^∞Γ(Δ_3+1)(-1)^Δ_1-3-1-k/Γ(Δ_3-k+1)Γ(k+1) × z̅_23^Δ_1+k+Δ_2-4z̅_13^-1-kB(k+1,2-Δ_1-k) B(1+Δ_3-k,Δ_1+k-1) = 2π(-1)^-Δ_2δ(Δ_1+Δ_2+Δ_3-3)Γ(Δ_3+1)δ(z_12)/Γ(Δ_3+Δ_1)Γ(3-Δ_1)z_32^3-Δ_3z̅_32^1+Δ_3z̅_13 × ∑_k=-∞^∞(z̅_32/z̅_13)^kΓ(2-Δ_1-k) Γ(Δ_1+k-1) = 2π(-1)^1-Δ_2δ(Δ_1+Δ_2+Δ_3-3)Γ(Δ_3+1)Γ(Δ_1)Γ(1-Δ_1)δ^2(z_12)/Γ(3-Δ_2)Γ(3-Δ_1)z_32^3-Δ_3z̅_32^1+Δ_3 In the case where w̅_1<w̅_2, the signs in the theta functions have to change and we have ℒ_3^++-(z⃗_1,z⃗_2,z⃗_3) = 2πδ(Δ_1+Δ_2+Δ_3-3)δ(z_12)/z_32^3-Δ_3∫dw̅_1/(z̅_1-w̅_1)^3-Δ_1∫dw̅_2/(z̅_2-w̅_2)^3-Δ_2 × 1/w̅_12^-Δ_3(z̅_3-w̅_2)^2-Δ_1(w̅_1-z̅_3)^2-Δ_2Θ(z̅_3-w̅_1)Θ(w̅_2-z̅_3) = -2πδ(Δ_1+Δ_2+Δ_3-3)δ(z_12)/z_32^3-Δ_3∑_k=-∞^∞Γ(Δ_3+1)(-1)^3-Δ_2/Γ(Δ_3-k+1)Γ(k+1) × ∫_0^∞ dw̅_1(w̅_1+z̅_13)^Δ_1-3w̅_1^1-Δ_1-k∫_0^∞ dw̅_2(w̅_2-z̅_23)^Δ_2-3w̅_2^Δ_1+k-2 = -2πδ(Δ_1+Δ_2+Δ_3-3)δ(z_12)/z_32^3-Δ_3∑_k=-∞^∞Γ(Δ_3+1)(-1)^3-Δ_2/Γ(Δ_3-k+1)Γ(k+1) × z̅_32^-Δ_3-1+kz̅_13^-k-1B(1+k,2-Δ_1-k) B(1+Δ_3-k,Δ_1+k-1) = -2πδ(Δ_1+Δ_2+Δ_3-3)Γ(Δ_3+1)(-1)^3-Δ_2δ(z_12)/Γ(3-Δ_1)Γ(Δ_1+Δ_3)z_32^3-Δ_3z̅_32^1+Δ_3z̅_13 × ∑_k=-∞^∞(z̅_32/z̅_13)^kΓ(2-Δ_1-k)Γ(Δ_1+k-1) = 2π(-1)^1-Δ_2δ(Δ_1+Δ_2+Δ_3-3)Γ(Δ_3+1)Γ(1-Δ_1)Γ(Δ_1) δ^2(z_12)/Γ(Δ_1+Δ_3)Γ(3-Δ_1)z_32^3-Δ_3z̅_32^1+Δ_3. We will be concerned with this three point function for the case where Δ_1=Δ_2=Δ_3=1. This gives us ℒ_3^++-(z⃗_1,z⃗_2,z⃗_3) = 2πδ^2(z_12)/z_32^2z̅_32^2 which is in agreement with the previous appendix up to prefactors. utphys
http://arxiv.org/abs/2407.12165v1
20240716204043
Building AI Agents for Autonomous Clouds: Challenges and Design Principles
[ "Manish Shetty", "Yinfang Chen", "Gagan Somashekar", "Minghua Ma", "Yogesh Simmhan", "Xuchao Zhang", "Jonathan Mace", "Dax Vandevoorde", "Pedro Las-Casas", "Shachee Mishra Gupta", "Suman Nath", "Chetan Bansal", "Saravan Rajmohan" ]
cs.SE
[ "cs.SE", "cs.AI", "cs.DC" ]
Vision Paper ^1Microsoft, ^2University of California, Berkeley, ^3University of Illinois Urbana-Champaign, ^4Indian Institute of Science, ^5Microsoft Research, ^6Agnes Scott College § ABSTRACT The rapid growth in the use of Large Language Models (LLMs) and AI Agents as part of software development and deployment is revolutionizing the information technology landscape. While code generation receives significant attention, a higher-impact application lies in using AI agents for the operational resilience of cloud services, which currently requires significant human effort and domain knowledge. There is a growing interest in AI for IT Operations (AIOps), which aims to automate complex operational tasks, like fault localization and root cause analysis, thereby reducing human intervention and customer impact. However, achieving the vision of autonomous and self-healing clouds through AIOps is hampered by the lack of standardized frameworks for building, evaluating, and improving AIOps agents. This vision paper lays the groundwork for such a framework by first framing the requirements and then discussing design decisions that satisfy them. We also propose , a prototype implementation leveraging an agent-cloud interface that orchestrates an application, injects real-time faults using chaos engineering, and interfaces with an agent to localize and resolve the faults. We report promising results and lay the groundwork to build a modular and robust framework for building, evaluating, and improving agents for autonomous clouds. Building AI Agents for Autonomous Clouds: Challenges and Design Principles Manish Shetty^1,2, Yinfang Chen^1,3, Gagan Somashekar^1, Minghua Ma^1, Yogesh Simmhan^1,4, Xuchao Zhang^1, Jonathan Mace^5, Dax Vandevoorde^5,6, Pedro Las-Casas^1, Shachee Mishra Gupta^1, Suman Nath^5, Chetan Bansal^1, Saravan Rajmohan^1 July 22, 2024 § INTRODUCTION IT applications and services for enterprises and web-scale companies are becoming more complex to develop, deploy, and maintain. The broad adoption of the microservices pattern to compose complex applications and of serverless deployment models hosted on the cloud has arguably simplified application development and scaling. But it has also introduced more moving parts that make operations, reliability, fault diagnosis, and recovery even harder <cit.>. Cloud services have set the expectation of five-9s of availability, and missing it can affect customer satisfaction. At the same time, failures in the cloud are becoming more frequent and more impactful <cit.>. For instance, for one major recent outage, the estimated cost of one hour of service downtime for Amazon was approximately $100 million <cit.>. Site Reliability Engineers (SREs) and DevOps engineers are tasked with the deployment and operational maintenance of cloud services <cit.>. They typically follow four steps when rectifying faults: fault detection, triaging, localization, and mitigation. This scope can be broader when we include proactive fault predictions and other preemptive maintenance. Many tools help with monitoring, anomaly detection, incident triage, root cause analysis, etc. <cit.>. However, SREs face information overload and complex decision-making in operating large-scale cloud environments.
E.g., a large-scale study of outages in Microsoft Teams found that root-causing and mitigation require significant manual effort, context awareness and technical expertise <cit.>. Hence, the need for AIOps Agents, which can automatically detect, localize and mitigate faults with minimal human intervention, is becoming critical. These AIOps agents are key to achieve the vision of Autonomous Clouds. While the concept of self-healing clouds has been proposed earlier <cit.>, the emerging adoption of AIOps, i.e., the use of AI to support IT Operations, is making this a reality <cit.>. Here, AI algorithms and agents leverage the monitoring infrastructure to rapidly and efficiently track the health, identify failures and mitigate them within the operational environment <cit.>. More recently, agents are able to converse with Large Language Models (LLMs) using an observe–thought–action pattern to localize problems, explore possible causes, and enact solutions to achieve recovery in a semi-autonomous manner <cit.>. We are at the cusp of AIOps agents being able to independently carry out these tasks in a matter of minutes within production environments, compared to several hours taken even by experts <cit.>. The design, development, evaluation and iterative improvement of AIOps agents are challenging. While the adoption of AI has seen rapid progress for coding and programming <cit.> due to the availability of platforms like WebArena <cit.>, R2E <cit.> and benchmarks like HumanEval<cit.>, LiveCodeBench <cit.>, SWE-bench <cit.>, the same is lacking for cloud operations. Existing tools address individual aspects of the AIOps lifecycle: observability and introspection tools <cit.>, application suites <cit.>, chaos engineering and fault injection tools <cit.>, agent-computer interfaces <cit.>, etc., but do not cohesively integrate them. Moreover, recent prior works on leveraging AIOps agents for cloud operations like RCAgent <cit.>, and Zhang et al. <cit.> use proprietary services and datasets. Other prior works use frameworks specific to the solutions that they are building <cit.>, or ad hoc and static benchmarks and metrics <cit.> that fail to capture the dynamic nature of real-world cloud services. Furthermore, current approaches do not agree on standard metrics or a standard taxonomy for operational tasks. This calls for a standardised and principled framework for building, testing, comparing and improving AIOps agents. The framework should allow agents to interact with realistic service operation tasks in a reproducible manner. It must be flexible in extending to new applications, workloads, and faults. Importantly, it should go beyond just evaluating the AI agents and also enable users to improve the agents themselves, e.g., by providing sufficient observability and even serving as a training environment (“gym”) to generate samples to learn on <cit.>. In this vision paper, we draw upon our experiences from developing AIOps agents at Microsoft to make these contributions: * We envision the requirements for a holistic framework to enable the design, development, evaluation, and enhancement of AIOps agents that, additionally, serves the purpose of reproducible, standardized, interoperable and scalable benchmarks. We also suggest key design decisions to build it ( <ref>). * We describe a prototype framework, , that adopts this design to combine workload and fault generators to mimic production incidents and an agent-cloud interface for orchestrating the service operation lifecycle ( <ref>). 
This is the foundation for our ongoing work on a more comprehensive framework. * We report a case study on using this preliminary framework to evaluate an LLM agent for two critical operations tasks: Fault Detection and Mitigation ( <ref>). § BACKGROUND AND RELATED WORK Next, we identify gaps in existing work for evaluating AIOps tools and agents in a standardized, realistic and reliable manner. Fault Mitigation Lifecycle A typical incident goes through four stages: (1) Detection <cit.>: When an anomalous system behaviour is observed, an alert is raised by monitors or users of the service (internal engineers or external customers) and reported to the Incident Management System (IcM). (2) Triaging <cit.>: After the detection, the incident is notified to the On-Call Engineers (OCEs) to begin investigation and then assigned to the most appropriate engineering team. (3) Diagnosis <cit.>: The assigned engineers inspect different aspects of the incident and have several rounds of back-and-forth communication to identify the root cause. (4) Mitigation <cit.>: Several actions are taken by the team to mitigate the incident and to restore service health. The outcome is also updated postmortem in the IcM. AIOps Benchmarks Several benchmarks have been proposed to evaluate AIOps at various stages of the software lifecycle. For instance, ADBench <cit.> and AnomalyBench <cit.> evaluate anomaly detection, and LogParser <cit.> evaluates log parsing. However, these benchmarks focus only on single aspects of incident management. More recently, LLMs have powered a new wave of AIOps tools and autonomous agents, forcing evaluations to evolve. General benchmarks <cit.> like AgentBench task LLMs to operate as autonomous agents in diverse environments, solving problems across domains like operating systems and databases. That said, specialized AIOps evaluations have received less attention. OpsEval <cit.> attempts to address this with a question-answer-based evaluation for LLMs, but it disconnects from real-world operational challenges that require complex debugging, code understanding, and multi-step fault resolution. Our proposed work on bridges this gap. We believe using real services, workloads, and faults to create problems and enable executing concrete actions (e.g., run commands) is necessary for the reliable evaluation of state-of-the-art AIOps solutions, and to enhance them. Application Benchmark Suites Relevant to this work are benchmark suites for cloud applications <cit.>. <cit.> presented to study the architectural implications, and TailBench <cit.> proposes a methodology to analyze the performance of web servers and database services. Further, the emergence of microservices has prompted recent work to study their characteristics and requirements <cit.>. The popular DeathstarBench <cit.> differentiates from these studies by focusing on diverse large-scale applications with tens of unique microservices, allowing studying effects that only emerge at scale, such as network contention and cascading Quality of Service (QoS) violations due to dependencies between tiers. Beyond static application suites <cit.>, BluePrint <cit.> provides the ability to reconfigure applications and iteratively generate variants. We integrate with such application workloads to enable benchmarking of AIOps solutions. Chaos Engineering and Fault Injection Prior work developed fault injection techniques aimed at applications and distributed backend systems, including storage and data processing <cit.>. 
However, these existing techniques fall short in providing a generic, one-click fault generator that can be universally applied across various microservices. The limitations are multifaceted: many rely on application or domain-specific knowledge to create policies and oracles, making them unsuitable for the diverse requirements of AIOps <cit.>. Others offer mechanisms without automated policies or oracles beyond simple crashes, requiring developers to manually implement complex functionalities <cit.>. Additionally, fault injections at a single level (e.g., HTTP) do not adequately expose root causes of failures due to: 1) their coarse-grained nature, 2) the challenge of constructing meaningful error objects, and 3) a lack of consideration for dependencies between microservices <cit.>. § : A PRINCIPLED VISION In this section, we first discuss the principles that we believe should guide the envisioned framework (<ref>), followed by a discussion of the design choices that can lead to such a framework (<ref>). §.§ Requirements and Principles * Modular Design for applications, workloads, and agents. An effective evaluation system should seamlessly incorporate existing and evolving resources on application workloads, fault injection models, and resilience techniques for fault detection and resolution. It must be flexible, supporting easy integration of new components through standard interfaces. This allows for varied use cases, including enterprise and end-user assessments of fault and recovery strategies, AIOps research on new agents using established error models and workloads, and red-team simulations of complex faults on synthetic applications to test mitigation effectiveness. * Flexible Interface for human, digital, and AI agents. The framework must provide diverse interfaces to agents, spanning the range from human operators to LLM-based agents. Humans might use a web interface for log review and command execution, digital agents need APIs for integration, and conversational LLM agents require prompt-based interaction for requests and responses. * Scalability at various operational and temporal scales. Operating at diverse spatial (single VM to clusters) and temporal scales (minutes to days) is paramount to make the framework amenable to different use cases and resource availability, e.g., from large enterprises with complex deployments and a wide fault surface to a software engineering course assignment with a tiny scenario. Operating across time scales also allows faults that gradually emerge (e.g., memory leaks) or occur periodically to be detected, predicted, and preemptively mitigated. * Reproducible Setup for reliable measurements. An evaluation framework needs consistent and, ideally, automated deployments for reproducible and standardized assessment of mitigation strategies. Challenges may arise from non-deterministic elements in applications or faults, which should stem from external models, not the framework itself. While evaluation metrics can differ depending on the user or context, the framework should offer default objective metrics (e.g., accuracy, time to mitigate) and support the creation of more intricate measures with the data it provides. * Versatility in operating environments. The operating environments may vary based on the needs of the user. Chaos engineering <cit.>, for instance, advocates for fault injections directly in production environments, necessitating a framework capable of integrating with live applications.
Alternatively, a sandboxed deployment with a simpler variant of production applications may be preferred. Synthetic and emulated systems <cit.> can also help evaluate large-scale what-if scenarios. * Comprehensive and Cross-Layer fault support. The framework should enable the introduction of faults across the entire stack, including hardware, network, OS, middleware, application, and external services. It must offer capabilities to simulate realistic faults inspired by real-world production incidents, which can cause cascading effects across distributed components, as well as synthetic faults for assessing potential future scenarios. * Diverse and realistic workload conditions. Workloads across domains have different burstiness, interarrival times, and parallelism <cit.>. An effective benchmarking framework should allow for generating workloads that reflect these characteristics rather than a `one-size-fits-all' approach. Current benchmarking often overfits well-known publicly available workload traces due to the scarcity of realistic workloads available <cit.>. Realism is essential for effectively testing and improving AI agents, as evidenced by practitioners at CompanyX who faced delays in AI agent deployment due to the difficulty in obtaining genuine user interaction and traffic patterns. * Coverage of the operations lifecycle. The incident management process can have diverse goals and scopes, and the framework should support the different stages that correspond to them, like fault detection, root-cause analysis, and mitigation. It should allow proactive strategies to predict and preempt failures and reactive strategies to detect and correct errors. * Sufficient Observability into applications. Adequate visibility into the different aspects of the system and application environment should be available to detect faults and their impact. It includes interfaces to access logs, telemetry, traces, KPIs, and documentation such as incident reports, standard procedures, etc. Tools to explore configuration files and source code may be beneficial in acquiring the necessary context for localization and mitigation. * Adequate controls for agent operations. Fault correction may require modifying configuration files or restarting VMs or services. Even fault detection may require the agent to run test suites to gain more visibility into the possible cause. The framework should enable such actions by having hooks into different layers. §.§ Design Decisions To address the requirements outlined in  <ref> (annotated below), we recommend several key design choices to build , a new standardized evaluation framework for AIOps: §.§.§ Bring your Services Alive We propose evaluating with live services, that are designed and implemented using different architectures at various scales <ref>, and capable of operating in various contexts, from sandbox to production <ref>. Real-world cloud services present complex behaviours, challenging AIOps agents with variability in workload patterns, diversity of faults, and intricacies of service dependencies. This approach closely mirrors the operational and reliability challenges faced in production systems, making the benchmark directly applicable to practitioners' needs. Further, we choose automation tools like Helm <cit.> and Blueprint <cit.> for repeatable and consistent setups <ref> and interfaces to extend to new services, preserving its applicability and rigour <ref>. 
This approach provides a comprehensive, realistic, and relevant evaluation platform to advance AIOps research and practice. §.§.§ Real Faults, Real Challenges We recommend incorporating dynamic workload and fault generators to simulate real-world conditions accurately. These generators are designed to produce a wide range of realistic traffic patterns and operational challenges <ref>, from typical user behaviour to peak loads and various fault scenarios, including kernel failures, network issues, and configuration bugs <ref>. This approach not only tests adaptability and robustness but also mitigates the risk of “training data contamination” for LLM-powered tools. A key implication of this choice is that the state of a faulty service (say telemetry or logs) is only relevant and observed in the context of an injected fault – making it publicly unavailable to seep into training data. §.§.§ Evaluate Across the Board The combination of live services ( <ref>) and real faults ( <ref>) allow us to reliably evaluate the impact of agents on various AIOps tasks and performance dimensions. Here, one can use generated faults independently or in conjunction to create benchmark problems for agents. Notably, one can use a single fault (e.g., network misconfiguration) to evaluate tasks across the board – from detection to resolution <ref>. Evaluation involves quantitative dimensions, such as performance, time, resource usage, dollar cost, and other metrics beyond accuracy <cit.>. Furthermore, to reliably compare agents, we emphasize qualitative evaluation of their traces by a human or LLM-as-a-Judge <cit.>. §.§.§ Orchestrate the Agent-Cloud-Interface Typically, service engineers operate cloud environments with various programming (e.g. APIs, database queries) and user interfaces (incident portals). Existing UI to the cloud are designed only for a human, and not amenable for LLMs and agents. E.g., humans reliably ignore extra information while the same can distract the context and harm performance for agents <cit.>. Inspired by human-computer interaction (HCI), <cit.> introduce agent-computer-interface, finding that agents can similarly benefit from better-designed interfaces for coding tasks. We posit the same for AIOps and envision an Agent-Cloud-Interface (ACI). The ACI is an Orchestrator between the agent and the cloud (<ref>) which specifies both the actions available and how the service's state is conveyed back to the agent as the observation of its actions <ref>. It simplifies the action space into a concise list of APIs, each documented to ensure that agents can make meaningful progress towards objectives <ref>. Also, the Orchestrator takes actions on behalf of the agent and returns high-quality feedback (e.g., outputs, error messages, etc.). §.§.§ Abstract Environments, not agents Two sides to a real-time evaluation are the agent and the environment. For AIOps, the agent can be a DevOps engineer or an AIOps tool (e.g., LLM-powered agent). The environment is a deployed application (e.g., SocialNetwork) that the user interacts with. Here, we suggest providing abstractions for the environment, not the agent. This choice maximizes flexibility in implementing and evaluating various kinds of tools and agents <ref>. Consequently, provides sufficient information (task description, available APIs/actions, and additional instructions) to solve a problem in the benchmark. §.§.§ Observe Everything, Everywhere, All at Once. Observability captures a system's internal states from its external outputs. 
It traditionally includes (1) traces detailing the end-to-end request paths through distributed entities; (2) application logs as textual records of runtime operations; and (3) metrics monitoring component health. We suggest an observability layer to collect not only canonical telemetry data but also other system indicators, such as cluster information, including logs, running status, and configurations <ref>. But increased observability can overwhelm AIOps tools with data volume and complex data types. Therefore, we offer flexible APIs for users to select the specific information they need, ensuring tailored and comprehensive observability. Summary In summary, these design decisions prioritize creating a framework that is: * Realistic: By using live services, workloads, and faults that mirror real-world operational challenges. * Scalable: Through dynamic workload and fault generators that can create new problem scenarios at varying scales. * Reliable: Evaluating tasks and performance dimension across the operations lifecycle. * Observable: Through rich telemetry data and actual service interactions, ensuring reliable evaluation. * Flexible: With the Agent-Cloud-Interface (ACI) that supports plugging a diverse range of tools. * Extensible: With the ability to easily incorporate new services, workloads, and faults. § SYSTEM ARCHITECTURE This section expands on how the decisions discussed in  <ref> lead to an prototype system architecture for . <ref> shows our system architecture consisting of 5 key pieces. §.§ Orchestrator strictly separates the Agent and the Application Service using an intermediate Orchestrator. It provides several interfaces for other system parts to integrate and extend. First, it establishes a session with an Agent to share information about benchmark problems: (1) the problem description, (2) instructions (e.g., response format), and (3) available APIs to call as actions. As shown in <ref>, the APIs are a set of documented tools, e.g., get_logs, get_metrics, exec_shell, designed to help the Agent solve a task. There are no restrictions on the Agent's implementation; the Orchestrator poses problems and polls it for the next action to perform given the previous result. Each action must be a valid API call, which the Orchestrator validates and carries out. The Orchestrator has privileged access to the deployment and can take arbitrary actions (e.g., scale-up, redeploy) using appropriate tools (e.g., helm, kubectl) to resolve problems on behalf of the Agent. Lastly, the Orchestrator calls workload and fault generators to create service disruptions, which serve as live benchmark problems. provides additional APIs to extend to new services and generators. §.§ Service abstracts a diverse set of services to reflect the variance in production environments. This includes live, running services implemented using various architectural principles, including microservices, serverless and monolithic. We also leverage open-sourced application suites such as DeathStarBench <cit.> as they provide artifacts, like source code and commit history, along with run-time telemetry. Adding tools like BluePrint <cit.> can help scale to other academic <cit.> and production services <cit.> and also seamlessly deploy new variants of these services. §.§ Workload Generator The workload generator in plays a crucial role by creating simulations of both faulty and normal scenarios. It receives specifications from the Orchestrator, such as the task, desired effects, scale, and duration. 
The generator can utilize a model trained on real production traces, to generate workloads that align with these specifications. Faulty scenarios may simulate conditions like resource exhaustion, exploit edge cases, or trigger cascading failures, inspired by real incidents. Normal scenarios mimic typical production patterns, such as daily activity cycles and multi-user interactions. When various characteristics (e.g., service calls, user distribution, arrival times) can lead to the desired effect, multiple workloads can be stored in the problem cache for use by the Orchestrator. In coordination with the Fault Generator ( <ref>), the workload generator can also create complex fault scenarios with workloads. §.§ Fault Generator has a novel push-button fault generator designed for generic applicability across various cloud scenarios. Our approach integrates application and domain knowledge to create adaptable policies and “oracles” compatible with AIOps scenarios. This includes fine-grained fault injection capable of simulating complex failures inspired by production incidents. Additionally, it can inject faults at various system levels, exposing root causes while maintaining semantic integrity and considering interdependencies between cloud microservices. The fault injector's versatility can enhance the reliability and robustness of cloud systems by enabling thorough testing and evaluation of AIOps capabilities. §.§ Observability is equipped with an extensible observability layer designed to provide comprehensive monitoring capabilities across various system layers for any AIOps tool. collects a wide array of telemetry data, including (1) traces from Jaeger detailing the end-to-end paths of requests through distributed systems, (2) application logs formatted and recorded by Filebeat and Logstash, and (3) system metrics monitored by Prometheus. Additionally, also captures lower-level system information such as syscall logs and cluster information. As mentioned, we handle the potential data overload through flexible APIs to tune the telemetry data relevant to the AIOps tools. § CASE STUDY AND INSIGHTS As a proof-of-concept, we implement a prototype of and evaluate an LLM agent on an AIOps incident mitigation task. Here, we aim to demonstrate the potential of to standardize evaluation for AIOps tools and uncover novel insights. Setup We deploy the SocialNetwork application from DeathStarBench <cit.> on a Kubernetes cluster. We instantiate the fault generator to induce a realistic misconfiguration fault at the virtualization layer: the target port of a microservice is misconfigured, causing connectivity issues with other microservices. We generate traffic by using an exponential workload pattern from the wrk tool. We study a ReAct <cit.> agent with a GPT-4 <cit.> backend. ReAct is a popular LLM-agent paradigm that leverages interleaved Thought-Action traces, utilizing LLMs' reasoning and planning abilities. Evaluation Goals This case study aims to showcase several of the requirements in <ref>. We address <ref> by employing quantitative metrics (Success, Time-to-Detect (TTD), Time-to-Mitigate (TTM), and Efficiency) and qualitative analysis of the agent's actions. We demonstrate <ref> by covering multiple stages: detection, root-cause analysis, and mitigation. <ref> is ensured through application observability (logs, metrics, traces), and the well-defined APIs for the agent, including a , fulfill <ref>. 
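Before turning to the insights, the following minimal Python sketch makes the Orchestrator-agent contract used in this setup concrete. The object and method names (agent, env, problem, init_context, next_action, execute, is_mitigated) are ours and purely illustrative of the poll loop described above; only the listed APIs (get_logs, get_metrics, exec_shell) are taken from the documented action set.

```python
# Illustrative sketch of the Orchestrator <-> agent poll loop; the actual
# framework interface is not specified in this paper.
ALLOWED_APIS = {"get_logs", "get_metrics", "exec_shell"}

def run_episode(agent, env, problem, max_steps=20):
    # 1. Pose the problem: description, instructions, and available APIs.
    agent.init_context(problem.description, problem.instructions, sorted(ALLOWED_APIS))
    observation = "session started"
    for _ in range(max_steps):
        # 2. Poll the agent for its next action given the previous result.
        api, args = agent.next_action(observation)
        # 3. Validate the action before executing it on the live service.
        if api not in ALLOWED_APIS:
            observation = f"error: unknown API '{api}'"  # feedback on invalid actions
            continue
        # 4. The Orchestrator carries out the call with privileged access
        #    (e.g., via kubectl/helm under the hood) and returns the output.
        observation = env.execute(api, **args)
        if problem.is_mitigated(env):
            return "success"
    return "failed"
```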
Key Insights <ref> illustrates the agent's trace while attempting to first detect a service anomaly. It then takes a series of actions, as shown in <ref>, to identify the root cause and mitigate the fault. Overall, it successfully detected the problem in 14 secs (TTD) and mitigated it in 36 secs (TTM). We also measured the average resource efficiency and found that a run takes 10–12 interactions, costing around $0.25. Below, we distill key insights from this case study: 1 Importance of Observability The agent's ability to detect the misconfiguration fault hinged on the detailed telemetry data provided by . For example, the agent identified repeated connection refusals in the logs, which made it hypothesize a potential misconfiguration. This highlights the critical role observability will play in AIOps in the future. 2 AIOps needs Efficient Actions We found that the efficiency of the available actions influenced the agent's performance. For instance, too few APIs limited its ability to explore solutions, while too many arguments for each API hindered its performance. Also, flexible APIs (like exec_shell) were pivotal in helping the agent balance explore and exploit actions. This reiterates the importance of a well-designed Agent-Cloud-Interface (ACI). 3 Fault Injection Reflects Real-World Complexity Interestingly, injecting even simple faults (like a K8s misconfiguration) into a real service poses challenging problems for advanced models like GPT-4 <cit.>, requiring 10+ interaction rounds to reach a solution. This demonstrates how automated fault generators could accelerate the creation of rigorous testbeds for reliable AIOps evaluation. 4 Incorporating Service Dependency Tools The trace in <ref> demonstrates the agent's implicit understanding of service dependencies gained from logs alone. While promising, for complex applications the agent could spend significant time traversing the call graph with only partial views of the system <cit.>. Concurrent efforts in coding agents have shown the power of code dependency analysis for repository-level tasks <cit.>. We believe future research can similarly look at augmenting AIOps agents with tools for explicit service dependency and impact analysis. 5 Error Handling goes a long way In <ref>, when the agent encountered a syntax error, it quickly corrected the mistake and retried the command. We further find that poor error messages hinder the agent's performance. This insight emphasizes the need for robust error propagation for actions throughout the operations and reliability lifecycle of systems. § CONCLUSION In this vision paper, we have framed the requirements and design principles for a framework to build, test, compare, and improve agents used for the operational management of cloud services. We also discuss our prototype implementation, , and report preliminary results. This forms the kernel of a more comprehensive framework that will address the key gap of a standardized framework for building agents to help achieve autonomous clouds.
http://arxiv.org/abs/2407.13023v1
20240717212627
A Three-Stage Algorithm for the Closest String Problem on Artificial and Real Gene Sequences
[ "Alireza Abdi", "Marko Djukanovic", "Hesam Tahmasebi Boldaji", "Hadis Salehi", "Aleksandar Kartelj" ]
cs.AI
[ "cs.AI" ]
⌈⌉ ⌊⌋ status=draft, theme=color Elsavier Journal of Templates .bib @inproceedingsliu2005, title=Parallel genetic algorithm and parallel simulated annealing algorithm for the closest string problem, author=Liu, Xuan and He, Hongmei and Sỳkora, Ondrej, booktitle=International Conference on Advanced Data Mining and Applications, pages=591–597, year=2005, organization=Springer @inproceedingsgomes2005, title=Parallel algorithm for the closest string problem, author=Gomes, Fernando C and Meneses, Cláudio N and Pardalos, Panos M and Viana, GV, booktitle=Proceedings of the Fourth Brazilian Symposium on Mathematical and Computational Biology, volume=2, pages=326–332, year=2005 @articlegomes2008, title=A parallel multistart algorithm for the closest string problem, author=Gomes, Fernando C and Meneses, Cláudio N and Pardalos, Panos M and Viana, Gerardo Valdisio R, journal=Computers & Operations Research, volume=35, number=11, pages=3636–3643, year=2008, publisher=Elsevier @inproceedingsliu2008, title=A compounded genetic and simulated annealing algorithm for the closest string problem, author=Liu, Xiaolan and Holger, Mauch and Hao, Zhifeng and Wu, Guangchao, booktitle=2008 2nd International Conference on Bioinformatics and Biomedical Engineering, pages=702–705, year=2008, organization=IEEE @inproceedingsfaro2010, title=Ant-CSP: An ant colony optimization algorithm for the closest string problem, author=Faro, Simone and Pappalardo, Elisa, booktitle=SOFSEM 2010: Theory and Practice of Computer Science: 36th Conference on Current Trends in Theory and Practice of Computer Science, Spinderuv Mlyn, Czech Republic, January 23-29, 2010. Proceedings 36, pages=370–381, year=2010, organization=Springer @articleliu2011, title=Exact algorithm and heuristic for the closest string problem, author=Liu, Xiaolan and Liu, Shenghan and Hao, Zhifeng and Mauch, Holger, journal=Computers & operations research, volume=38, number=11, pages=1513–1520, year=2011, publisher=Elsevier @articleliu2004, title=Largest distance decreasing algorithm for the closest string problem, author=Liu, Xiaolan and Fu, Keqiang and Shao, Renxiang, journal=J. Inf. Comput. 
Sci., volume=1, pages=287–292, year=2004 @articlexu2022, title=A Heuristic Solution to the Closest String Problem Using Wave Function Collapse Techniques, author=Xu, Shirley and Perkins, David, journal=IEEE Access, volume=10, pages=115869–115883, year=2022, publisher=IEEE @articleli2002, title=On the closest string and substring problems, author=Li, Ming and Ma, Bin and Wang, Lusheng, journal=Journal of the ACM (JACM), volume=49, number=2, pages=157–171, year=2002, publisher=ACM New York, NY, USA @bookroman1992, title=Coding and information theory, author=Roman, Steven, volume=134, year=1992, publisher=Springer Science & Business Media @articlemeneses2004, title=Optimal solutions for the closest-string problem via integer programming, author=Meneses, Cláudio N and Lu, Zhaosong and Oliveira, Carlos AS and Pardalos, Panos M, journal=INFORMS Journal on Computing, volume=16, number=4, pages=419–429, year=2004, publisher=INFORMS @articlegramm2003, title=Fixed-parameter algorithms for closest string and related problems, author=Gramm and Niedermeier and Rossmanith, journal=Algorithmica, volume=37, pages=25–42, year=2003, publisher=Springer @inproceedingsdor1997, title=Banishing bias from consensus sequences, author=Ben-Dor, Amir and Lancia, Giuseppe and Ravi, R and Perone, Jennifer, booktitle=Combinatorial Pattern Matching: 8th Annual Symposium, CPM 97 Aarhus, Denmark, June 30–July 2, 1997 Proceedings 8, pages=247–261, year=1997, organization=Springer @articlepappalardo2014, title=A combined greedy-walk heuristic and simulated annealing approach for the closest string problem, author=Pappalardo, Elisa and Cantone, Domenico and Pardalos, Panos M, journal=Optimization Methods and Software, volume=29, number=4, pages=673–702, year=2014, publisher=Taylor & Francis @articlemousavi2012, title=A GRASP algorithm for the Closest String Problem using a probability-based heuristic, author=Mousavi, Sayyed Rasoul and Esfahani, Navid Nasr, journal=Computers & operations research, volume=39, number=2, pages=238–248, year=2012, publisher=Elsevier @inproceedingsgkasieniec1999, title=Efficient approximation algorithms for the Hamming center problem, author=Gkasieniec, Leszek and Jansson, Jesper and Lingas, Andrzej, booktitle=Proceedings of the tenth annual ACM-SIAM symposium on Discrete algorithms, pages=905–906, year=1999 @articlelanctot2003, title=Distinguishing string selection problems, author=Lanctot, J Kevin and Li, Ming and Ma, Bin and Wang, Shaojiu and Zhang, Louxin, journal=Information and Computation, volume=185, number=1, pages=41–55, year=2003, publisher=Elsevier @mischayes1976, title=Speech understanding systems: Summary of results of the five-year research effort, author=Hayes-Roth, P and Fox, M and Gill, G and Mostow, DJ and Reddy, R, year=1976, publisher=Carnegie Mellon University @articleow1988, title=Filtered beam search in scheduling, author=Ow, Peng Si and Morton, Thomas E, journal=The International Journal Of Production Research, volume=26, number=1, pages=35–62, year=1988, publisher=Taylor & Francis @articlefreitag2017, title=Beam search strategies for neural machine translation, author=Freitag, Markus and Al-Onaizan, Yaser, journal=arXiv preprint arXiv:1702.01806, year=2017 @articleblum2009, title=Beam search for the longest common subsequence problem, author=Blum, Christian and Blesa, Maria J and Lopez-Ibanez, Manuel, journal=Computers & Operations Research, volume=36, number=12, pages=3178–3186, year=2009, publisher=Elsevier @articlemousavi2010, title=A hybridization of constructive beam 
search with local search for far from most strings problem, author=Mousavi, Sayyed R, journal=International Journal of Computer and Information Engineering, volume=4, number=8, pages=1200–1208, year=2010 @articlesilver2017, title=Mastering the game of go without human knowledge, author=Silver, David and Schrittwieser, Julian and Simonyan, Karen and Antonoglou, Ioannis and Huang, Aja and Guez, Arthur and Hubert, Thomas and Baker, Lucas and Lai, Matthew and Bolton, Adrian and others, journal=nature, volume=550, number=7676, pages=354–359, year=2017, publisher=Nature Publishing Group @articleswiechowski2023, title=Monte Carlo tree search: A review of recent modifications and applications, author=Świechowski, Maciej and Godlewski, Konrad and Sawicki, Bartosz and Mańdziuk, Jacek, journal=Artificial Intelligence Review, volume=56, number=3, pages=2497–2562, year=2023, publisher=Springer @articleabe2019, title=Solving np-hard problems on graphs with extended alphago zero, author=Abe, Kenshin and Xu, Zijian and Sato, Issei and Sugiyama, Masashi, journal=arXiv preprint arXiv:1905.11623, year=2019 @articlefayed2019, title=Speed up grid-search for parameter selection of support vector machines, author=Fayed, Hatem A and Atiya, Amir F, journal=Applied Soft Computing, volume=80, pages=202–210, year=2019, publisher=Elsevier @articlenair2022, title=Cross-species identification of cancer resistance–associated genes that may mediate human cancer risk, author=Nair, Nishanth Ulhas and Cheng, Kuoyuan and Naddaf, Lamis and Sharon, Elad and Pal, Lipika R and Rajagopal, Padma S and Unterman, Irene and Aldape, Kenneth and Hannenhalli, Sridhar and Day, Chi-Ping and others, journal=Science Advances, volume=8, number=31, pages=eabj7176, year=2022, publisher=American Association for the Advancement of Science @articlenikolic2021, title=Solving the longest common subsequence problem concerning non-uniform distributions of letters in input strings, author=Nikolic, Bojan and Kartelj, Aleksandar and Djukanovic, Marko and Grbic, Milana and Blum, Christian and Raidl, Günther, journal=Mathematics, volume=9, number=13, pages=1515, year=2021, publisher=MDPI @articletabataba2012, title=A hyper-heuristic for the longest common subsequence problem, author=Tabataba, Farzaneh Sadat and Mousavi, Sayyed Rasoul, journal=Computational biology and chemistry, volume=36, pages=42–54, year=2012, publisher=Elsevier @bookmacario2012, title=Gene probes for bacteria, author=Macario, Alberto, year=2012, publisher=Elsevier @inproceedingsli1999, title=Finding similar regions in many strings, author=Li, Ming and Ma, Bin and Wang, Lusheng, booktitle=Proceedings of the thirty-first annual ACM symposium on Theory of computing, pages=473–482, year=1999 @articlelanctot2000, title=Some string problems in computational biology, author=Lanctot, J Kevin, year=2000, publisher=University of Waterloo @articleniedermeier2002, title=Closest Strings, Primer Design, and Motif Search, author=Niedermeier, Jens Gramm Falk Huffner Rolf @articlehufsky2011, title=Swiftly computing center strings, author=Hufsky, Franziska and Kuchenbecker, Léon and Jahn, Katharina and Stoye, Jens and Böcker, Sebastian, journal=BMC bioinformatics, volume=12, pages=1–12, year=2011, publisher=Springer @articlefriedman1940, title=A comparison of alternative tests of significance for the problem of m rankings, author=Friedman, Milton, journal=The annals of mathematical statistics, volume=11, number=1, pages=86–92, year=1940, publisher=JSTOR @articledemvsar2006, title=Statistical comparisons 
of classifiers over multiple data sets, author=Demšar, Janez, journal=The Journal of Machine learning research, volume=7, pages=1–30, year=2006, publisher=JMLR. org @articleGreninger2010, doi = 10.1371/journal.pone.0013381, author = Greninger, Alexander L. AND Chen, Eunice C. AND Sittler, Taylor AND Scheinerman, Alex AND Roubinian, Nareg AND Yu, Guixia AND Kim, Edward AND Pillai, Dylan R. AND Guyard, Cyril AND Mazzulli, Tony AND Isa, Pavel AND Arias, Carlos F. AND Hackett, Jr, John AND Schochetman, Gerald AND Miller, Steve AND Tang, Patrick AND Chiu, Charles Y., journal = PLOS ONE, publisher = Public Library of Science, title = A Metagenomic Analysis of Pandemic Influenza A (2009 H1N1) Infection in Patients from North America, year = 2010, month = 10, volume = 5, url = https://doi.org/10.1371/journal.pone.0013381, pages = 1-16, abstract = Although metagenomics has been previously employed for pathogen discovery, its cost and complexity have prevented its use as a practical front-line diagnostic for unknown infectious diseases. Here we demonstrate the utility of two metagenomics-based strategies, a pan-viral microarray (Virochip) and deep sequencing, for the identification and characterization of 2009 pandemic H1N1 influenza A virus. Using nasopharyngeal swabs collected during the earliest stages of the pandemic in Mexico, Canada, and the United States (n = 17), the Virochip was able to detect a novel virus most closely related to swine influenza viruses without a priori information. Deep sequencing yielded reads corresponding to 2009 H1N1 influenza in each sample (percentage of aligned sequences corresponding to 2009 H1N1 ranging from 0.0011 number = 10, @articlepohlert2014pairwise, title=The pairwise multiple comparison of mean ranks package (PMCMR), author=Pohlert, Thorsten, journal=R package, volume=27, number=2019, pages=9, year=2014 @inproceedingslameski2015svm, title=SVM parameter tuning with grid search and its impact on reduction of model over-fitting, author=Lameski, Petre and Zdravevski, Eftim and Mingov, Riste and Kulakov, Andrea, booktitle=Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing: 15th International Conference, RSFDGrC 2015, Tianjin, China, November 20-23, 2015, Proceedings, pages=464–474, year=2015, organization=Springer @inproceedingsreixach2024neural, title=A Neural Network Based Guidance for a BRKGA: An Application to the Longest Common Square Subsequence Problem, author=Reixach, Jaume and Blum, Christian and Djukanović, Marko and Raidl, Günther R, booktitle=European Conference on Evolutionary Computation in Combinatorial Optimization (Part of EvoStar), pages=1–15, year=2024, organization=Springer @inproceedingsma2008, title=More efficient algorithms for closest string and substring problems, author=Ma, Bin and Sun, Xiaoming, booktitle=Annual International Conference on Research in Computational Molecular Biology, pages=396–409, year=2008, organization=Springer @inproceedingsgramm2002, title=Closest strings, primer design, and motif search, author=Gramm, Jens and Hüffner, Falk and Niedermeier, Rolf and others, booktitle=Currents in Computational Molecular Biology, poster abstracts of RECOMB, volume=2002, pages=74–75, year=2002, organization=Citeseer elsarticle-num uiasbs]Alireza Abdimycorrespondingauthor [mycorrespondingauthor]Corresponding author alirezaabdi@iasbs.ac.ir ubanjaluka]Marko Djukanovic marko.djukanovic@pmf.unibl.org uiasbs]Hesam Tahmasebi Boldaji hesam.tahmasebi@iasbs.ac.ir uiasbs]Hadis Salehi hadith.salehi@iasbs.ac.ir ubelgrade]Aleksandar 
Kartelj aleksandar.kartelj@matf.bg.ac.rs [uiasbs] Department of Computer Science and Information Technology, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran [ubanjaluka] Mladena Stojanovića 2, 78000, Banja Luka, Republic of Srpska, Bosnia and Herzegovina [ubelgrade] Department for Computer Science, Faculty of Mathematics, University of Belgrade, Belgrade, Serbia § ABSTRACT The Closest String Problem is an NP-hard problem that asks for a string minimizing the maximum distance to all sequences in a given set of strings. Its applications can be found in coding theory, computational biology, and the design of degenerate primers, among others. Efficient exact algorithms exist that reach high-quality solutions for binary sequences. However, there is still room for improvement concerning the quality of solutions over DNA and protein sequences. In this paper, we introduce a three-stage algorithm that comprises the following process: first, we apply a novel alphabet pruning method to reduce the search space and to focus the search on promising regions. Second, a variant of beam search is employed to find a heuristic solution; it utilizes a newly developed guiding function based on an expected distance score of partial solutions. Last, we introduce a local search to improve the quality of the solution obtained from the beam search. Furthermore, due to the lack of real-world benchmarks, two real-world datasets are introduced to verify the robustness of the method. Extensive experimental results show that the proposed method outperforms the previous approaches from the literature. Beam Search, Expected Distance Score, Hamming distance, Local Search, Closest String Problem § INTRODUCTION The Closest String (CS) problem involves finding a string that is closest to a given set of strings; the term “closest” usually refers to the minimum Hamming distance but may also refer to other sequential measures. The CS problem has many applications in coding theory <cit.> and computational biology <cit.>. More specifically, this problem is used in designing drugs, degenerate primers, and diagnostic probes, and in finding gene clusters <cit.>. The CS problem is also known as the Hamming Center problem and comes in various versions of different computational complexities. It is known that the CS problem is NP-hard in general <cit.>, while its decision variant is NP-complete <cit.>. In the decision version of the CS problem, the aim is to answer the binary question “Does the input set of strings have a closest string with a maximum distance of d?”. The general version of the CS problem seeks a string that is as close as possible to all given input strings. This paper tackles the general form of the CS problem with an arbitrary set of strings. Let S = {s_1, s_2, ⋯, s_n } be the set of strings over a finite alphabet Σ = {σ_1, σ_2, ⋯, σ_m}, where n and m refer to the number of strings and the alphabet cardinality, respectively. Without loss of generality, we assume that all strings have the same length (L). The closest string is then a string of length L that minimizes the maximum Hamming distance to the strings in S. The CS problem has attracted many researchers, who have addressed it both exactly and heuristically. Gramm et al. <cit.> proposed an exact fixed-parameter algorithm that runs in linear time assuming the Hamming distance is specified and fixed. Meneses et al. <cit.> developed an integer linear programming (ILP) formulation to solve the CS problem in the general case.
This algorithm has been efficient in finding an exact solution in a short time for small and near-moderate instances. However, a huge amount of time is required to solve larger instances, specifically random strings with n > 30 and L ≥ 800. In 2011, Liu et al. <cit.> introduced an exact algorithm for solving the CS problem in the particular case when n = 3 and |Σ| = 2. Among the exact methods, the ILP model proposed by Meneses et al. <cit.> is the most effective one and is extremely efficient for binary strings. When it comes to DNA and protein sequences, which are much larger and more complicated to solve, exact methods perform poorly. Therefore, approximation algorithms are widely applied. For example, approximation algorithms with approximation guarantees have been proposed <cit.>; despite their speed, they show poor performance in finding near-optimal solutions. (Meta)heuristic methods, in contrast, can find near-optimal solutions in a reasonable time, but they do not guarantee optimality in general. Nevertheless, heuristic methods outperform the previously mentioned methods when applied to large DNA and protein sequences. In 2005, Liu et al. <cit.> suggested using genetic and simulated annealing algorithms to solve the CS problem. Moreover, they implemented parallel versions of the algorithms and showed that the parallel genetic algorithm is superior to the others. Later, Gomes et al. <cit.> proposed a simple heuristic along with its parallel version. In 2008, Liu et al. <cit.> proposed a hybridization of genetic and simulated annealing algorithms and combined their metrics to solve the CS problem; however, their algorithm was only tested on binary strings. Later, in 2010, Faro and Pappalardo <cit.> introduced an algorithm based on the Ant Colony Optimization method which excelled over all former approaches. Mousavi and Esfahani <cit.> suggested using the greedy randomized adaptive search procedure (GRASP), guided by a new heuristic function inspired by probability theory, to solve the general case of the CS problem. In 2011, Mousavi <cit.> utilized beam search as part of a hybrid meta-heuristic, while also modifying the heuristic function from <cit.> to obtain an enhanced one. In another work from 2011, Liu et al. <cit.> built on their heuristic function from <cit.> to design an effective local search. In 2014, Pappalardo et al. <cit.> proposed a hybrid of a greedy algorithm and simulated annealing (GWSA) to solve the CS problem. They first generated an initial solution with a greedy walk algorithm and then fed its output to a simulated annealing algorithm. GWSA excelled over all former approaches but required long running times. Recently, Xu and Perkins <cit.> proposed using Wave Function Collapse (WFC) to solve the CS problem, claiming it outperforms all other approaches; however, they did not compare their method with GWSA. The WFC is a lightweight method with a very low running time. First, they constructed a frequency table based on the repetition of characters in the strings. Second, they identified the string with the maximum Hamming distance. Finally, according to the frequency table, they assigned a character to the solution. The second and third steps are repeated until a solution of complete length is obtained. This method is particularly suitable for solving the CS problem when dealing with protein sequences.
In summary, the WFC is considered the state-of-the-art heuristic method for solving the general form of the closest string problem. The contributions of this paper are as follows. * A Three-Stage Algorithm (TSA) to address the CS problem is designed. In the first stage of the algorithm, a novel pruning method that enhances both the speed and the solution quality of the algorithm is performed. The next stage employs a time-restricted variant of beam search on the restricted search space; a new heuristic function based on an expected distance score is designed to guide the beam search. In the last stage, an efficient local search is executed to refine the solution obtained by the beam search. * We create two real-world datasets and discuss the robustness of the approaches. The first one consists of nucleotide sequences of the well-known TP53 gene, and the second dataset comprises the nucleotides of two segments of a common flu variant across different hosts to check their conservation levels. * Extensive experimental evaluation shows that our proposed algorithm (TSA) outperforms the state-of-the-art approaches over all five benchmark sets. The rest of this paper is structured as follows. In Section <ref>, we provide several definitions and formulations required for the subsequent sections. Following that, we introduce the details of our algorithm in Section <ref>. Section <ref> discusses the experimental results along with statistical reports. Finally, Section <ref> concludes the paper and provides some outlines for future work. § BASIC DEFINITIONS AND PRELIMINARIES This section provides the fundamentals and the definitions essential for understanding the core of the paper. Given is a set S = {s_1, s_2, …, s_n } of input strings over a finite alphabet Σ = {σ_1, σ_2, …, σ_m }, where n and m refer to the number of input strings and the alphabet cardinality, respectively. Assume that all strings in S have the same length L. Let s_i^j denote the j^th character of the i^th string in S, so that s_i^1 denotes the leading character of string s_i. The notion of level l (1 ≤ l ≤ L) refers to the l^th characters across all input strings in S. For two integer values 1 ≤ i ≤ j ≤ |s|, by s[i, j] we denote the contiguous part of string s (a substring) that starts with the character at position i and ends with the character at position j. The frequency of a character σ at level l is denoted by f_l(σ), i.e., the number of occurrences of σ at level l. Let S[l] = { s_i[1, l] | i = 1, …, n} be the set of prefixes of the input strings w.r.t. position l ∈{1, …, L }. In the closest string problem, the Hamming distance (hd) represents the distance between the input strings and the proposed solution. It is computed by counting the differing positions between each string s_i and the solution, with the largest of these distances representing the solution's overall Hamming distance. Symbolically, for any two strings s_1 and s_2 of equal length L, the Hamming distance is defined as in Eq. <ref>: hd(s_1, s_2) = ∑_i = 1^L δ(s_1^i, s_2^i), where δ(ch_1, ch_2) = 1 if ch_1 ≠ ch_2 and δ(ch_1, ch_2) = 0 if ch_1 = ch_2. To facilitate the explanation of our expected distance heuristic in Section <ref>, let hd^c(s_1, s_2) := ∑_i=1^L (1 - δ(s_1^i, s_2^i)) denote the complementary score of the Hamming distance hd(s_1, s_2), which refers to the number of positions at which the two strings carry the same character. Note that hd(s_1, s_2) = L - hd^c(s_1, s_2). In essence, the Hamming distance between a string s and a set of input strings S is calculated by hd(s, S) = max{ hd(s, s_i) | i = 1, …, n }.
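As a small, self-contained illustration of these definitions, the distances hd, hd^c, and hd(s, S) can be computed as in the following Python sketch (the helper names are ours, and 0-based indexing is used for convenience):

```python
def hd(s1, s2):
    """Hamming distance: number of positions at which s1 and s2 differ."""
    assert len(s1) == len(s2)
    return sum(a != b for a, b in zip(s1, s2))

def hd_c(s1, s2):
    """Complementary score: number of positions at which s1 and s2 agree."""
    return len(s1) - hd(s1, s2)

def hd_to_set(s, S):
    """Distance of a candidate string s to a set of input strings S:
    the maximum Hamming distance over all strings in S."""
    return max(hd(s, si) for si in S)

# e.g. hd("ACGT", "AGGT") == 1 and hd_c("ACGT", "AGGT") == 3
```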
Note that, since we solve the CS problem in a constructive manner where solutions are partial, we also explain the Hamming distance calculation for a partial solution s^p of length |s^p| = l_p. That is, the Hamming distance is calculated by applying Eq. (<ref>) as the score hd(s^p, S[l_p]). One can easily see that the corresponding complementary score hd^c(s^p) is equal to l_p - hd(s^p, S[l_p]). Beam Search (BS) is a heuristic tree-search algorithm that works in a limited breadth-first manner. It was proposed by Hayes et al. <cit.> and applied in the context of the speech understanding problem. BS and its variants are widely applied to solve various optimization problems, e.g., scheduling and machine translation <cit.>. In the context of string problems from bioinformatics, it has proven to be one of the most efficient techniques; for example, it has been successfully employed to solve the prominent Longest Common Subsequence problem <cit.>. A specific number (β > 0) of nodes, chosen based on a heuristic evaluation, is selected at each level of the search to be further processed. BS uses the parameter β to provide a trade-off between completeness and greediness. As the value of β increases, the runtime of the algorithm and the chance of finding the optimal answer get higher, and vice versa. For the CS problem, BS starts with an empty solution and constructs solutions by adding letters from Σ to the most promising β nodes, which corresponds to appending these letters to the end of the respective partial solutions, until a complete node is reached. Complete nodes are those whose corresponding partial solutions cannot be further expanded; in the context of the CS problem, complete nodes are solutions of length L. For the sake of clarity, consider a set S = { s_1, s_2 } of two strings over an alphabet Σ of two letters, i.e., n, L, and m are 2, 10, and 2, respectively. For a partial solution x of length l_p = 5, the Hamming distance is calculated based on the first 5 characters of both input strings, which yields hd(x, s_1) = 1 and hd(x, s_2) = 0; also, hd^c(x, s_1) = 4 and hd^c(x, s_2) = 5. In addition, s_2^4 denotes the fourth character of the second string. Furthermore, the frequencies of the two alphabet letters at level 1, f_1(·), are 2 and 0, respectively. § THE PROPOSED METHOD In this section, the Three-Stage Algorithm (TSA) for the CS problem is presented; its pseudocode is given in Algorithm <ref>. Initially, the algorithm pre-processes the set of effective actions at each level of BS by employing the pruning mechanism, which serves to diminish the search space. Subsequently, it employs the expected distance heuristic function to guide the Time-Restricted Beam Search (TRBS) <cit.> towards a solution of reasonable quality. Lastly, the obtained solution is passed to the new local search algorithm to further refine its quality. To summarize, the methodology applies three core stages: (i) the novel pruning method described in Section <ref> to support effective decision-making; (ii) the TRBS algorithm specifically tailored to the CS problem, provided in Section <ref>, whose search is guided by the new expected distance heuristic, supported by the associated tie-breaking strategy (see Section <ref>); (iii) a new local search algorithm detailed in Section <ref>. §.§ A Novel Pruning Method for the Closest String Problem Due to the NP-hardness of the CS problem, the state space grows exponentially with the instance size. The following question arises: should we regard all members of Σ as candidates at level l ∈{1, …, L} for a solution?
To answer this question, we analyzed the exact solutions of 50 instances of length 100, and it turned out that, in most cases, the selected character at each position came from a specific subset of characters. Hence, we decided to design a pruning method that determines those specific subsets of the alphabet and, consequently, improves both the solution quality and the running time of the BS approach. To describe the pruning method, let us first define the notions of `rank 1' (R1) and `rank 2' (R2). Consider the following list of tuples [(A, 10), (B, 12), (C, 10), (D, 7), (E, 1), (F, 6), (G, 9)], where the first coordinate denotes a character and the second coordinate the number of occurrences of that character. R1 is the set of characters with the highest second coordinate, which here would be {B}. R2 contains the characters with the first- and second-highest values; in this example, the first-highest value is attained by B, and the second-highest value (10) is attained by A and C. As the values of both of them are the same, both are assigned to R2. Thus, R2 contains the three characters B, A, and C. To clarify on a concrete instance, consider Table <ref> with S = {s_1, s_2, …, s_8}, where the input strings have length 10. We now determine R1 and R2 for each l ∈{1, …, L }. To do this, we use the frequency of characters at each level l. For example, at level l = 6, the character frequencies are f_6(X) = 2, f_6(K) = 3, f_6(E) = 2, f_6(C) = 1, and the frequencies of the remaining members of Σ are 0. According to the introduced notation, the most frequent character forms R1, which here is {K}. To identify the R2 set, we add the first and second most frequent characters; K is the first, and since the frequencies of X and E match, both of them are considered second most frequent. Hence, the R1 set for l = 6 is {K}, and the R2 set for l = 6 is {K, X, E}. Note that, after obtaining the ranks, these pairs (R1, R2) are stored for each level l in a list called the ranked alphabet (Σ_P), which is subsequently passed to the BS procedure. For example, in Table <ref>, if we use R2, the first character of our solution can only be `M' or `C'. During the preprocessing phase, before obtaining the CS for a set of input strings, it is essential to identify the proper rank to be leveraged. The choice between R1 and R2 should be guided by the inherent characteristics of the input strings in question, as this decision depends on specific conditions. A simple way <cit.> to determine the most appropriate rank for a given set of strings is to initially execute the BS with a small (trial) β_t value using both ranks. Subsequently, the algorithm is run with a larger (actual) β value using the rank that yielded the smaller Hamming distance. When the Hamming distances are equal for both ranks, the option is selected randomly. This pruning method can be used in any method for the CS problem and is not limited to our approach.
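To make the ranking step concrete, the following minimal Python sketch (the function and variable names are ours, not taken from the paper's implementation) computes the (R1, R2) pair for every level directly from the character frequencies:

```python
from collections import Counter

def ranked_alphabet(strings):
    """For every level l return (R1, R2): R1 holds the most frequent
    character(s) at that level, R2 additionally includes all characters
    tied for the second-highest frequency."""
    L = len(strings[0])
    ranks = []
    for l in range(L):
        freq = Counter(s[l] for s in strings)            # f_l(sigma)
        counts = sorted(set(freq.values()), reverse=True)
        r1 = {c for c, f in freq.items() if f == counts[0]}
        r2 = set(r1)
        if len(counts) > 1:                              # add ties for 2nd place
            r2 |= {c for c, f in freq.items() if f == counts[1]}
        ranks.append((r1, r2))
    return ranks

# Example from the text: at a level with frequencies X:2, K:3, E:2, C:1
# this yields R1 = {'K'} and R2 = {'K', 'X', 'E'}.
```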
§.§ Time Restricted Beam Search for the CSP To explore the search space of the CS problem effectively, a time-restricted version of the BS approach is employed. This version, called the Time-Restricted Beam Search (TRBS) <cit.>, has proven to be more robust than the basic BS variant, providing a good trade-off between greediness and completeness <cit.>. Moreover, it addresses the problem of adjusting the value of the β parameter dynamically within the allowed execution time (t_max); note that this is not a characteristic of the basic beam search. At each major iteration of the algorithm, TRBS determines the β value depending on the remaining runtime and the value of β used in the previous iteration. The details of computing β at iteration l are given by Eq. <ref>: β ← ⌈β · 1.1⌉ if t_rem / t̂_rem ≥ 1.1; min(⌈β / 1.1⌉, 150) if t_rem / t̂_rem ≤ 0.9; β otherwise. Here, t_rem denotes the time remaining until t_max is reached, and t̂_rem is the expected remaining time, which for the CS problem is equal to t_iter · (L - l). In other words, t̂_rem estimates the remaining time by multiplying the time of the last iteration (t_iter) by the remaining number of steps needed to reach a leaf node. In the context of the CS problem, since the length of each leaf node (a solution) is equal to L, supposing the BS is at the current level l, the remaining number of BS iterations is (L - l). §.§ An Expected Distance Heuristic for CSP This section is devoted to the construction of the expected distance heuristic (EX) used to evaluate nodes of the TRBS. Additionally, an effective way of breaking ties between nodes with the same expected distance score is proposed. First, we explain the notion of an expected solution to the CS problem. For each level l ∈{1, …, L}, we determine the most frequent letter across the strings from S. Assume that this information is kept in a sequential structure, the expected string, whose l-th entry stores the most frequent letter at level l across all input strings in S. Suppose that s is a partial solution of a fixed length l < L which has to be evaluated. Based on the expected solution string, let us approximate the expected distance from this partial solution to a complete solution. To do so, for each string s_j and the expected string, the number of positional matches between equal-length suffixes of the two strings is calculated. For example, for an expected string and an input string s_j of length four, the scoring vector [0, 1, 1, 2] is obtained by first comparing the characters at position 4 (no match), then at position 3 (a match, yielding a score of 1), and so on. Reversing this vector, the vector score_j = [2, 1, 1, 0] is constructed for each j = 1, …, n. This means that score_j[r] immediately gives the number of positions at which the suffixes of the expected string and of s_j starting at position r match. These structures are preprocessed before entering the main loop of the BS. The score of a solution is based on the expected number of matched letters obtained by taking into account two scores for each input string s_i: (i) the number of positions at which the prefix of s_i of length l and the solution match, and (ii) the estimated number of matching positions between the suffixes of the expected string and of s_i starting at position l+1. In essence, the final score is based on the most probable suffix solution for the remaining parts (suffixes) of the input strings relevant for extending the partial solution. Symbolically, for a solution x with l = |x|, the following vector of dimension n is calculated: fitness_vector(x) = hd^c(x, S[l]) + (score_1[l+1], …, score_n[l+1]), where hd^c(x, S[l]) = ( hd^c(x, s_1[1, l]), …, hd^c(x, s_n[1, l]) ) and the plus operator on vectors is applied coordinate-by-coordinate. An estimated expected distance heuristic score for solution x and the set of input strings S is then given by EX(x) = min_{i = 1, …, n} fitness_vector(x)[i].
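A compact Python sketch of this heuristic is given below. It is our own illustrative code (0-based indexing, our own function names) and reflects our reading of the score_j indexing reconstructed above: score_j is precomputed per input string, and EX is evaluated for a partial solution x.

```python
def suffix_match_scores(expected, strings):
    """scores[j][r] = number of positions at which the suffixes of the
    expected string and of strings[j] starting at index r agree."""
    L = len(expected)
    scores = []
    for s in strings:
        sc = [0] * (L + 1)                      # sc[L] = 0 for the empty suffix
        for r in range(L - 1, -1, -1):
            sc[r] = sc[r + 1] + (1 if expected[r] == s[r] else 0)
        scores.append(sc)
    return scores

def ex_score(x, strings, scores):
    """Expected distance heuristic EX(x) for a partial solution x."""
    l = len(x)
    fitness = []
    for s, sc in zip(strings, scores):
        matched_prefix = sum(1 for i in range(l) if x[i] == s[i])  # hd^c over the prefix
        fitness.append(matched_prefix + sc[l])                     # + matches expected on the suffix
    return min(fitness)        # nodes with larger EX values are preferred
```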
Note that incorporating the minimum in Eq. (<ref>) has been found to be the most suitable choice, as using the maximum or the mean value turned out to be too optimistic an estimator, yielding weaker search guidance. As an important detail, the first term in Eq. (<ref>) is calculated incrementally, updating the score of the parent node of the node associated with the partial solution x; this operation is executed in O(n) time. The second term, using the preprocessed data structures score, also requires O(n) time. Therefore, Eq. (<ref>) can be calculated efficiently, in linear O(n) time. Note that nodes with larger EX values are preferred. In the case of ties among nodes at the same level (i.e., nodes having the same EX(·) score), we break them by utilizing another heuristic rule based on the variance of the Hamming distances between the solution x and the corresponding prefixes of the input strings; a smaller variance is preferred. For a partial solution x, the variance is calculated according to Eq. <ref>: Var(x) = ∑_i = 1^n (hd(x, s_i) - hd̄)^2 / (n - 1), where hd̄ = ∑_i=1^n hd(x, s_i) / n is the mean of the Hamming distances between x and all input strings. §.§ A New Local Search for the CS problem In this section, we present a new local search algorithm that starts from the initial solution obtained by executing the BS algorithm and aims to improve its quality by performing certain character modifications. The main idea of this procedure is to flatten peak strings, i.e., the strings with the largest Hamming distances, in a way that minimizes the impact on the other strings. In more detail, we refer to Algorithm <ref>: the preliminary step involves identifying the critical strings, i.e., the input strings from S at maximal Hamming distance from the incumbent solution best_sol. In the case where multiple critical strings exist, all of them are taken into account. Subsequently, the character that appears most frequently (the character with the highest f_l(·), 1 ≤ l ≤ L) within these critical strings is identified along with its position of appearance, denoted by index. In case of multiple characters with the same frequency, all of them are considered. We store all these pairs of indices and characters in a list All_Pairs_Indices_Chars, which is then shuffled to prevent any bias in the search. We iterate through the list and, for each pair (index, char), place char at position index of a temporary solution temp_sol. If this modification leads to a Hamming distance that is either smaller than or equal to that of the prior best solution, it is declared the new best solution, i.e., best_sol = temp_sol, and the next iteration of the LS is performed. If none of the pairs (index, char) yields a better or equal Hamming distance, the algorithm terminates by returning best_sol, even before the time limit is reached. Otherwise, the above procedure is repeated until a certain time limit is reached. Consider the following example of a major iteration of the local search algorithm. Let S = {s_1, s_2, s_3, s_4}. Given the initial solution best_sol, we compute the Hamming distances as follows: hd(best_sol, s_1) = 3, hd(best_sol, s_2) = 4, hd(best_sol, s_3) = 2, hd(best_sol, s_4) = 2. At first, the critical strings with maximum Hamming distance must be identified, which in this case is s_2 with a distance of 4.
Now, to reduce the Hamming distance while minimally impacting the other strings, we analyze the character frequencies of s_2, yielding: f_1(s_2^1) = 3, f_2(s_2^2) = 1, f_3(s_2^3) = 2, f_4(s_2^4) = 2, f_5(s_2^5) = 2. Clearly, the most frequent character of s_2 is the one at level 1, with a frequency of 3. By placing this character at the first position of temp_sol = best_sol, we derive a new (temporary) solution temp_sol with the updated Hamming distances: hd(temp_sol, s_1) = 2, hd(temp_sol, s_2) = 3, hd(temp_sol, s_3) = 3, hd(temp_sol, s_4) = 1. In this way, the Hamming distance of the modified solution is reduced (from 4) to 3 and, therefore, best_sol is set to temp_sol.
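The following Python sketch illustrates this local search stage. It is our own illustrative code: helper names are ours, an iteration cap replaces the paper's wall-clock limit, and positions where the incumbent already matches a critical string are skipped.

```python
import random
from collections import Counter

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def local_search(best_sol, strings, max_iters=1000, rng=random.Random(0)):
    """Flatten the 'peak' (critical) strings by single-character changes."""
    L = len(best_sol)
    # f_l(c): frequency of character c at level l across all input strings
    level_freq = [Counter(s[l] for s in strings) for l in range(L)]
    for _ in range(max_iters):
        dists = [hamming(best_sol, s) for s in strings]
        worst = max(dists)
        critical = [s for s, d in zip(strings, dists) if d == worst]
        # candidate (index, char) pairs: characters of the critical strings,
        # scored by their frequency at the corresponding level
        cand = {(l, s[l]): level_freq[l][s[l]]
                for s in critical for l in range(L) if s[l] != best_sol[l]}
        if not cand:
            return best_sol
        top = max(cand.values())
        pairs = [p for p, f in cand.items() if f == top]
        rng.shuffle(pairs)
        for l, ch in pairs:
            temp = best_sol[:l] + ch + best_sol[l + 1:]
            if max(hamming(temp, s) for s in strings) <= worst:
                best_sol = temp
                break
        else:
            return best_sol          # no pair improves or matches: stop early
    return best_sol
```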
TP53 plays a crucial role in suppressing tumor cells in living creatures. When the TP53 gene is damaged, it cannot repair itself and, consequently, it cannot prevent cancer. The sequence of the TP53 gene differs among living creatures, but its functionality is the same in all of them. Understanding the conservation levels of these genes can help predict a species' cancer resistance <cit.>. We collected the TP53 genes of 27 animals, including humans, chimpanzees, and elephants, from NCBI and randomly combined them into a dataset. This dataset consists of nine instances with n ∈ {5, 10, 27} and L ∈ {500, 1000, 2000, 3000}. One application of the CS problem consists in designing degenerate primers for detecting viral diseases. Influenza type A, subtype H1N1, is one of the common strains affecting both humans and animals. This type of flu mutates frequently and consists of eight segments. In some segments, the mutation rate is high, while in others it appears to be low. Mainly, two of these segments, 4 (HA) and 6 (NA), are used in designing degenerate primers <cit.>. Thus, we decided to review these two segments to check the conservation level across different hosts. To do this, we collected the nucleotide sequences of the mentioned flu strain from various hosts and years via NCBI and compiled them into a dataset with ten instances (five for HA and five for NA), each containing 20 sequences ranging from 1000 to 1700 nucleotides. According to the distances obtained in Table <ref>, the conservation levels of the two considered segments are close to each other, but segment 4 (HA) is slightly more conserved (by about 2%) than segment 6 (NA). §.§ Implementation Details and Parameter Tuning In this section, we describe how we implemented and set the parameters of the TSA method as well as of our three competitors (ILP, GWSA, and WFC). We asked the authors of the mentioned methods for their original implementations. The authors of the WFC algorithm provided us with their Python code; however, no response was received from the authors of ILP and GWSA. Therefore, we carefully re-implemented these two approaches. Our re-implementations are freely accessible via the previously mentioned GitHub repository. All implementations are coded in Python 3.8. To solve the ILP model, CPLEX 12.8 was executed under default settings. All experiments were executed in single-threaded mode on a machine equipped with an Intel Core i7-4702MQ processor running at 2.2 GHz and using 6 GB of memory. ILP and WFC have no parameters to be tuned. For the GWSA method, we used the parameters as mentioned in <cit.>. For tuning the TSA parameters, we used grid search <cit.>; the parameter β_t, used for assigning a proper rank, is set to 15. We set the starting value of β to 300; this value is adjusted over the iterations according to Eq. <ref>. Also, the maximum time limit for the local search is set to 5 seconds. Since a maximum run time must be given to ILP and our proposed TSA, we set the maximum time limit for both methods according to Eq. <ref>, chosen to be comparable to the average running times reported for the other two heuristic approaches from the literature. Note that all methods except the ILP (i.e., GWSA, WFC, and TSA) are non-deterministic; therefore, to ensure a fair comparison, we execute each stochastic algorithm ten times per instance, report the best, worst, and average distances, and finally compare the methods by the average Hamming distance over all ten runs. t_max (s) = 30 if L < 400; 60 if 400 ≤ L < 1000; 120 if L ≥ 1000.
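For concreteness, the time-limit rule above can be expressed as a small helper function; this is only an illustrative sketch, and the function name is ours.

def t_max(L: int) -> int:
    """Maximum run time in seconds granted to ILP and TSA for an instance
    with string length L, following the rule above."""
    if L < 400:
        return 30
    elif L < 1000:
        return 60
    else:
        return 120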
§.§ Numerical Comparisons In this section, we compare our TSA with ILP <cit.>, GWSA <cit.>, and WFC <cit.> across our randomly generated and real-world datasets. Table <ref> reports overall summary results for each method. The following conclusions are drawn from it. * Among the five datasets, TSA achieved a better average solution length on four of them. For example, the average solution quality of TSA versus the second-best ILP approach on the benchmark set FLU-A-H1N1 is 819.72 vs. 874.6 in favor of our approach. For the benchmark set McClure, the best results are achieved by the ILP, slightly better than those of WFC and TSA. The reason lies in the fact that these instances are small, and an advanced exact approach such as the branch-and-cut incorporated in the CPLEX solver is extremely effective here. * Regarding the number of best solutions achieved, TSA again attained the highest numbers on four datasets. For Alpha-4, Alpha-20, and FLU-A-H1N1, the differences are large in favor of our approach, and on TP53 our approach is slightly better than the others. As expected, ILP is the best method for the benchmark set McClure (best results achieved in 5 out of 5 cases), with TSA second best (best results achieved in 3 out of 5 cases). Overall, our TSA approach achieved the best results in 148 out of 200 cases, while the second-best ILP approach did so in 49 cases, followed by WFC with 43 cases. * In summary, concerning all datasets (comprising 200 instances), GWSA, WFC, ILP, and TSA reached the best solutions in 2% (4), 21.5% (43), 24.5% (49), and 74% (148) of the cases, respectively. The full numerical results are reported in Tables <ref>–<ref> for the benchmark sets TP53, FLU-A-H1N1, and McClure. For the benchmark sets Alpha-4 and Alpha-20, complete numerical results are reported in the Appendix (Tables <ref> and <ref>) due to the sizes of the tables. In the case of the Alpha-4 benchmark set, the following conclusions can be made. * For small and moderate instances (n ≤ 30), TSA achieved 4 out of 44 optimal solutions, while ILP reached 18 out of 44. However, although TSA did not attain optimality for the remaining moderate instances, the obtained solutions were very close to the known optimal solutions found by the ILP (≈ 97% of their quality). * As the instance size increases (n > 30), the ILP approach becomes less effective, and our TSA approach outperforms all other competitor approaches. In the case of the Alpha-20 benchmark set, the following conclusions can be made. * While the ILP model achieved 8 optimal solutions and 11 best solutions, mainly for the smallest instances (n = 10), TSA reached 6 optimal solutions and 64 best-known solutions. * For the instances with n ∈ {20, 25}, our TSA found a best solution in all cases. * For the instances with n ∈ {40, 60}, the situation is less clear: WFC and TSA perform similarly while significantly outperforming the other two. * For the largest instances with n ∈ {80, 100}, TSA and WFC again perform similarly, and much better than the remaining competitors. Regarding the real-world TP53 dataset (Table <ref>), the following conclusions are drawn. * ILP, GWSA, and TSA reach six, three, and five optimal solutions, respectively. ILP is extremely effective for n ≤ 10; however, its performance quickly deteriorates as n increases.
* Overall, the TSA approach outperforms the other approaches in both the average solution quality and the number of best solutions achieved. For the second real-world dataset, FLU-A-H1N1 (Table <ref>), the following conclusions are made. * ILP could not solve any of the instances to optimality. This is mainly because, for larger n (in this case n = 20), this approach is already inefficient. * TSA is the clear winner, outperforming the other methods on all instances. * The conservation percentages of the obtained solutions range from 38% to 40%. For the McClure dataset (Table <ref>), we conclude the following. * The ILP approach solved all five instances optimally. * Concerning average solution quality, WFC took second place and TSA came third. * Nevertheless, TSA achieved more best solutions than WFC. §.§ Time Comparison In this section, we compare the algorithms from a running time perspective. Figure <ref> illustrates the run times of all methods across the five benchmarks. In each plot, the y-axis represents the running time (in seconds), while the x-axis indicates the instance number (a unique instance ID) for which the (average) running time of the respective algorithm is reported. The WFC algorithm exhibits the lowest running time; however, this comes with poor solution quality. Due to the absence of tunable parameters, we consider this method to have reached its limitations. For the Alpha-4, Alpha-20, and FLU-A-H1N1 datasets, the ILP and TSA algorithms demonstrate similar running times. However, for the TP53 and McClure datasets, TSA outperforms ILP on medium to large-sized instances, while ILP is faster on smaller instances, which it solves optimally. Compared to the GWSA method, the TSA generally has a shorter running time, with the notable exception of the McClure dataset. §.§ Statistical Analysis In this section, we compare the methods from a statistical perspective. For this analysis, we utilize the Friedman test <cit.> at a 5% significance level on the results of all four competitors over two groups of instances: the first group includes the randomly generated instances from Alpha-4 and Alpha-20 (176 instances), whereas the second group includes the real-world instances from the remaining three benchmark sets (24 instances in total). The results of the Friedman test, performed for both groups separately, indicate that we reject the null hypothesis (H_0), suggesting that the methods do not perform equally. Subsequently, the pairwise comparison of the methods is illustrated in Figure <ref> using critical difference (CD) plots obtained with the Nemenyi post-hoc test <cit.>. In detail, for each pair of algorithms, a critical difference is calculated at the 5% significance level. Each algorithm is positioned on the x-axis according to its average ranking. If a horizontal bar joins two algorithms, they perform statistically equally. The following conclusions are drawn from the generated CD plots. * Concerning random instances, the best average ranking is obtained by our TSA. The second-best average ranking is achieved by WFC, followed by ILP and GWSA. The TSA approach performs significantly better than the competing WFC, and both of these approaches are statistically significantly better than ILP and GWSA. * Concerning real-world instances, TSA again achieved the best average ranking (1.4), followed by the ILP approach. However, here the performances of the two approaches do not differ significantly from each other.
On the other hand, TSA outperforms GWSA and WFC in a statistically significant way. However, no statistically significant difference is observed between the results of ILP and WFC. § CONCLUSION In this paper, we introduced the Three-Stage Algorithm (TSA) for solving the Closest String problem. Our proposed method comprises three core stages. First, we introduce a novel pruning technique to reduce the search space effectively. In particular, at each level of the search, a predetermined subset of promising members of the alphabet, instead of the whole set, is employed as the set of allowed transitions. This pruning method positively affects the quality of the solutions while reducing the overall running time of the algorithm. Second, a time-restricted beam search is utilized to explore the search space and obtain complete solutions. The search process of the beam search is guided by the newly designed expected distance heuristic function. Third, a new local search algorithm is proposed. It enables a complete solution to be improved by repeatedly substituting critical characters in the incumbent solution. In addition, we created four datasets, two real-world and two generated uniformly at random, to test the robustness of the methods on both real and simulated sequences. Our extensive experimental results show that the proposed method significantly improves over the best solutions from the literature, thus yielding a new state-of-the-art method. In more detail, for both random and real-world instances, our method achieved the best average ranking when compared to the competitor approaches. In the case of random instances, the difference between our approach and the second-best WFC approach is statistically significant in favor of the former. In the case of real-world instances, our approach and the second-best, the ILP approach, perform statistically equally well, and both perform significantly better than the other two approaches. Regarding future work, one could employ Monte Carlo Tree Search in the search process to leverage predictive exploration of the state space of the CS problem and to design possibly more accurate heuristic guidance. Another direction could be designing an approach complementary to the classical optimization approaches. This includes a learning mechanism developed by separately training a machine learning model, such as a neural network, to support better decision-making in the beam search; see, e.g., <cit.>. In order to do so, one has to determine a set of crucial input features extracted from the instances that affect the final score, e.g., the maximum frequency of characters at each level, the standard deviation/average of the maximum frequencies per level, the variance of the Hamming distances between the solution and the input strings, etc. § ACKNOWLEDGEMENTS Marko Djukanović acknowledges the financial support of the Ministry of Science and Technology Development and Higher Education of the Republic of Srpska through the project entitled “Development of artificial intelligence models and algorithms for solving difficult combinatorial optimization problems”, under no. 1259086.
§ NUMERICAL RESULTS FOR BENCHMARK SET ALPHA-4 =2pt llllllllllllllllllll Comparison of all methods over Alpha-4 benchmark 3l 2lILP 4lGWSA 4lWFC 4lTSA 4-5 7-10 12-15 17-20 |Σ| n l solution t(s) best worst average t(s) best worst average t(s) best worst average t(s) Comparison of all methods over Alpha-4 benchmark 3l 2lILP 4lGWSA 4lWFC 4lTSA 4-5 7-10 12-15 17-20 |Σ| n l solution t(s) best worst average t(s) best worst average t(s) best worst average t(s) 4 10 50 31* 0.3 34 35 34.1 1.7 32 32 32 0.005 31 31 31* 11.9 4 10 100 58* 0.5 61 63 62.4 5 59 60 59.7 0.02 59 59 59 13.6 4 10 150 88* 1.6 92 94 93 9.1 89 91 89.8 0.04 89 89 89 16.8 4 10 200 116* 1.8 122 125 123.4 14 117 119 118.1 0.07 118 118 118 17.1 4 10 250 144* 1.7 150 153 151.5 19.8 145 148 146.5 0.1 144 145 144.4 21.7 4 10 300 175* 3.7 186 189 187.5 27.6 178 178 178 0.2 176 176 176 18.5 4 10 350 202* 2.6 211 215 212.7 36 203 205 204.1 0.2 202 202 202* 27.1 4 10 400 231* 10.9 238 240 238.8 44.7 232 233 232.7 0.3 232 233 232.3 63.3 4 10 800 467* 9 484 488 486.4 157.1 469 472 470.6 1.4 469 469 469 45.6 4 10 1000 579* 22.2 588 593 589.9 240.1 580 582 581.1 2.2 579 579 579* 131.8 4 10 1500 874 122 883 885 884.4 534 872 874 872.7 5.1 871 871 871* 93.7 4 15 50 32* 0.7 35 37 35.9 1 33 34 33.4 0.007 33 33 33 14 4 15 100 63* 2.2 69 71 70.5 5.4 65 66 65.2 0.02 64 64 64 21.7 4 15 150 93* 15.2 98 100 99 10.1 95 96 95.7 0.06 94 94 94 13.6 4 15 200 124* 7.5 126 127 126.7 15.3 126 127 126.5 0.1 126 126 126 20.6 4 15 250 154* 28.6 161 163 162.5 21.5 156 158 156.8 0.1 156 156 156 31.5 4 15 300 186* 31.2 194 198 195.9 29.9 187 190 188.6 0.2 188 188 188 20.7 4 15 350 215 31.3 227 230 229.2 38.3 217 221 218.8 0.3 218 218 218 64.6 4 15 400 246 6.8 255 257 256.1 47.6 248 249 248.1 0.4 246 246 246 46.8 4 15 800 493 61.6 505 508 506.1 163.2 495 496 495.6 1.7 494 494 494 53.4 4 15 1000 1001 124 632 637 633.8 248.6 619 621 620.3 2.8 618 618 618 80.7 4 15 1500 1461 123.2 940 945 942.3 546 921 923 921.8 6.7 919 919 919 91.7 4 20 50 34 30.1 39 41 40.4 1.3 35 36 35.5 0.009 34 34 34 33.2 4 20 100 65* 1.1 67 69 68.2 6.3 66 67 66.9 0.03 66 66 66 24.3 4 20 150 96 30.8 100 105 102.8 11.1 98 99 98.8 0.08 98 98 98 35.4 4 20 200 128* 17 135 136 135.5 16.7 130 132 131 0.1 130 130 130 25.1 4 20 250 159 30.6 167 169 168.1 23.3 160 163 161.4 0.2 161 161 161 19.4 4 20 300 214 30.8 199 203 200.7 31.9 195 197 195.6 0.3 193 193 193 30.1 4 20 350 245 30.9 231 234 232.4 40.6 225 227 225.6 0.3 223 224 223.5 26.9 4 20 400 281 62.1 259 261 259.7 50.7 255 257 255.7 0.4 254 254 254 58.4 4 20 800 569 62.1 523 528 525 169 513 515 514.3 2.2 513 515 513.6 75.4 4 20 1000 704 122.7 654 662 656.6 255.3 638 640 639.3 3.5 638 638 638 86.7 4 20 1500 1391 124 973 976 974.9 556.9 955 957 956.2 7.9 953 953 953 100.5 4 25 50 35 30.2 39 41 40.2 1.3 36 36 36 0.009 35 35 35 31.8 4 25 100 68 30.4 71 74 72.1 6.7 69 70 69.5 0.03 69 69 69 14.6 4 25 150 98* 24.6 106 109 106.8 11 101 102 101.9 0.07 100 100 100 24.1 4 25 200 136 30.6 133 135 134.2 18.1 131 134 132.3 0.1 131 131 131 45.9 4 25 250 173 30.8 171 174 172.3 25 166 168 167.2 0.1 166 166 166 29.1 4 25 300 213 31 208 210 209.1 34.1 198 200 198.8 0.3 198 199 198.7 37.7 4 25 350 246 31.1 235 239 237.6 43.1 229 231 230.4 0.4 227 227 227 29.4 4 25 400 288 61.2 266 269 266.9 53.8 262 264 262.6 0.5 261 261 261 69.3 4 25 800 600 62.6 536 538 536.6 176 521 525 522.7 2.5 520 521 520.7 51.4 4 25 1000 678 123.3 672 677 674.1 263.4 649 651 650.2 4.1 650 651 650.3 97 4 25 1500 1501 130 996 1000 997.9 569.2 982 984 982.4 9.7 981 984 983 104.5 4 40 50 38 30.2 38 39 38.6 
3.7 37 38 37.3 0.01 36 36 36 21.4 4 40 100 77 30.5 74 76 75.1 9.1 72 73 72.6 0.05 71 71 71 22.3 4 40 150 115 31.6 109 111 110.3 15.2 105 107 106.2 0.1 104 104 104 22.9 4 40 200 146 31.8 142 144 142.7 22.2 140 141 140.1 0.2 138 138 138 26.1 4 40 250 185 31.3 177 179 178.3 29.9 175 176 175.5 0.3 175 175 175 28.1 4 40 300 222 31.5 212 215 213.3 40.4 207 209 207.9 0.4 204 204 204 31.3 4 40 350 257 31.8 247 251 248.8 50.9 242 244 243 0.6 241 241 241 31.1 4 40 400 282 62.1 278 282 280 62.2 275 276 275.3 0.8 273 273 273 51.6 4 40 800 801 68.4 550 554 551.5 194 545 547 545.9 3.6 540 541 540.1 81.6 4 40 1000 703 130.9 691 694 692.5 286.5 678 681 678.8 5.7 676 677 676.9 109 4 40 1500 1501 141.9 1027 1031 1028.4 608.3 1014 1017 1015.2 13.3 1014 1015 1014.5 129.2 4 60 50 41 30.4 39 42 40.2 4.7 38 39 38.9 0.01 37 37 37 19.4 4 60 100 79 30.7 74 78 76.3 11.8 74 74 74 0.07 72 72 72 32.6 4 60 150 117 31.2 115 119 116.2 19.5 109 110 109.8 0.1 109 109 109 28.7 4 60 200 149 31.5 147 148 147.6 27.8 143 145 143.8 0.3 142 142 142 34.6 4 60 250 251 32.3 184 188 185.8 37 179 180 179.3 0.4 177 177 177 30.9 4 60 300 230 32.5 215 217 216.3 48.4 210 212 211.3 0.6 209 210 209.4 33.1 4 60 350 260 32.8 251 252 251.1 60.7 247 249 247.8 0.9 245 245 245 34 4 60 400 293 63.3 288 292 289.5 73.7 284 288 285.5 1.1 281 281 281 56.7 4 60 800 581 74.4 571 577 573.8 219.3 558 561 559.7 5.2 556 556 556 69.1 4 60 1000 718 128.1 706 708 707.3 319.2 695 697 696.2 8.1 692 693 692.7 125.4 4 60 1500 1501 205.6 1051 1056 1053.1 653.7 1040 1043 1041.6 17.6 1035 1036 1035.9 134.7 4 80 50 41 30.5 40 42 41.1 4.1 39 40 39.7 0.02 37 37 37 20.8 4 80 100 80 31.1 78 80 79.1 14.5 76 77 76.4 0.09 74 74 74 22.4 4 80 150 113 31.5 113 115 113.9 23.7 110 112 111.2 0.2 109 109 109 26.3 4 80 200 152 32.4 151 152 151.6 33.3 147 148 147.5 0.3 144 145 144.6 29.7 4 80 250 192 32.6 185 187 186.5 44.6 183 184 183.1 0.6 180 181 180.9 31.8 4 80 300 224 36.2 219 221 220.2 57.2 217 219 217.7 0.8 215 215 215 33.7 4 80 350 258 38.7 254 256 255.5 70.7 250 252 251.3 1.2 249 249 249 37 4 80 400 299 64.5 290 294 291.4 85.8 287 289 287.9 1.5 284 285 284.6 56.3 4 80 800 594 68.8 580 585 583.2 243.9 570 573 571.4 6.5 566 568 567.4 78.9 4 80 1000 725 139 713 715 714.2 351.4 705 708 706.4 10.2 702 703 702.4 127.5 4 80 1500 1092 137.3 1062 1065 1063.6 706.5 1055 1057 1056 17.7 1056 1057 1056.7 133.2 4 100 50 42 30.7 41 42 41.5 7.8 40 41 40.7 0.02 39 39 39 16.1 4 100 100 82 32.6 82 86 83.4 17.1 77 79 77.7 0.1 75 76 75.3 24.5 4 100 150 119 32 116 119 117.2 27.6 114 116 114.5 0.2 111 112 111.2 28.5 4 100 200 156 33.5 156 159 157.6 38.6 149 150 149.7 0.4 148 148 148 31.4 4 100 250 191 33.5 190 192 191.1 50.8 185 187 186.2 0.6 182 183 182.8 33.3 4 100 300 235 34 224 226 225.4 65.5 220 222 221 1.1 218 218 218 34.7 4 100 350 265 37.2 259 262 260 80.8 256 258 256.6 1.4 252 253 252.2 36.9 4 100 400 304 75.3 297 300 298.2 97.1 292 294 292.7 1.9 289 289 289 66.3 4 100 800 585 71.4 581 583 582.2 268.6 574 576 575.1 7.8 569 572 570.2 78.2 4 100 1000 727 146.5 719 722 720.8 383.3 712 715 713.7 14.4 710 711 710.6 137.1 4 100 1500 1097 141.8 1077 1081 1079.2 827.4 1069 1071 1070 28.1 1063 1066 1064.3 143.2 § NUMERICAL RESULTS FOR BENCHMARK SET ALPHA-20 =2pt llllllllllllllllllll Comparison of all methods over Alpha-20 benchmark 3l 2lILP 4lGWSA 4lWFC 4lTSA 4-5 7-10 12-15 17-20 |Σ| n l solution t(s) best worst average t(s) best worst average t(s) best worst average t(s) Comparison of all methods over Alpha-20 benchmark 3l 2lILP 4lGWSA 4lWFC 4lTSA 4-5 7-10 12-15 17-20 |Σ| n l solution t(s) best 
worst average t(s) best worst average t(s) best worst average t(s) 20 10 50 39* 0.8 45 45 45 0.6 40 40 40 0.02 40 40 40 19 20 10 100 79* 7.3 84 85 84.3 4.9 79 80 79.2 0.09 79 79 79* 12.9 20 10 150 117* 2.8 122 123 122.9 9 118 118 118 0.1 117 117 117* 23.5 20 10 200 156* 16.8 160 162 160.9 14.2 156 157 156.5 0.2 156 156 156* 18.6 20 10 250 196 30.7 206 208 206.9 19.9 197 197 197 0.3 196 196 196 23.6 20 10 300 235* 22.6 243 246 244.2 28 236 237 236.1 0.6 236 236 236 24.8 20 10 350 277 31 284 286 284.8 36 277 278 277.8 0.6 277 277 277 30.4 20 10 400 313* 10.9 317 320 318.5 45.3 313 314 313.4 1.1 313 313 313* 36.8 20 10 800 625* 55.9 637 639 638.1 161.4 626 627 626.3 4.4 625 625 625* 59.7 20 10 1000 781* 96 793 799 796.3 241.4 781 782 781.5 7.5 781 781 781* 87.2 20 10 1500 1185 122 1191 1197 1193.5 526.7 1176 1177 1176.3 18.1 1176 1176 1176 103.4 20 15 50 42 30.3 46 46 46 0.8 42 43 42.6 0.01 42 42 42 20.1 20 15 100 85 30.4 86 89 88.3 5.3 83 83 83 0.06 82 82 82 14.9 20 15 150 125 30.4 129 132 130.4 3.8 123 124 123.6 0.1 123 123 123 22.4 20 15 200 168 30.4 168 170 169 15.5 164 165 164.5 0.3 164 164 164 30.2 20 15 250 214 30.5 212 214 213.2 21.7 206 207 206.6 0.4 206 206 206 25.6 20 15 300 266 30.6 253 257 254.3 29.9 246 247 246.3 0.5 246 246 246 38.6 20 15 350 308 30.7 294 297 296 38.5 289 290 289.2 1.1 288 288 288 34.2 20 15 400 340 60.8 336 337 336.3 48.3 328 328 328 1.2 327 328 327.2 80.8 20 15 800 711 61.7 665 669 667.5 170.3 655 656 655.2 5.7 655 655 655 108.1 20 15 1000 883 122.1 829 833 830.7 254.6 821 821 821 8.6 820 820 820 118.1 20 15 1500 1335 123.9 1246 1251 1247.9 545.5 1228 1229 1228.2 22.3 1229 1229 1229 140.1 20 20 50 45 304 46 46 46 1.1 43 44 43.8 0.01 44 44 44 22.3 20 20 100 90 30.3 88 90 89.1 6.2 86 86 86 0.06 87 87 87 21 20 20 150 134 30.5 132 136 134.2 3.3 127 128 127.1 0.1 127 127 127 17.1 20 20 200 181 30.5 174 177 174.7 16.9 169 170 169.5 0.2 168 168 168 21.2 20 20 250 227 31.4 217 219 217.8 23.5 211 212 211.4 0.6 211 211 211 22.4 20 20 300 271 30.8 257 259 257.8 32.2 253 254 253.1 0.6 253 253 253 27.6 20 20 350 303 31 302 304 302.7 41.2 295 296 295.1 1.1 295 295 295 30.2 20 20 400 392 61.2 344 347 345.4 51.4 337 338 337.2 1.3 337 337 337 54.7 20 20 800 739 62.2 687 689 688.2 183.2 674 675 674.5 6.1 673 673 673 58.8 20 20 1000 942 125.4 850 856 852.1 270.2 839 840 839.2 10.1 839 839 839 105 20 20 1500 1429 124.3 1271 1274 1272.4 569.5 1257 1259 1258 24.6 1258 1259 1258.2 114.2 20 25 50 46 30.3 46 47 46.3 1.3 45 45 45 0.02 45 45 45 31.9 20 25 100 91 30.5 89 91 90.3 7 87 87 87 0.07 86 86 86 15 20 25 150 137 30.5 136 138 137.5 12.4 130 131 130.2 0.1 130 130 130 21.1 20 25 200 180 30.8 176 182 179.9 18.1 172 173 172.4 0.3 172 173 172.2 24.2 20 25 250 234 30.8 221 227 223.2 25.2 215 216 215.1 0.5 215 215 215 25.3 20 25 300 274 31 265 266 265.7 34.2 258 259 258.5 0.7 258 258 258 37.8 20 25 350 335 31.1 309 310 309.4 43.9 300 301 300.6 1.1 300 301 300.2 29.2 20 25 400 373 61.4 350 352 351.1 54.3 343 344 343.7 1.4 343 343 343 59.1 20 25 800 801 65.2 669 703 701.1 191.2 684 685 684.6 6.7 684 685 684.5 66 20 25 1000 1001 126.3 863 864 863.8 284 853 853 853 11.1 853 853 853 112.6 20 25 1500 1425 125.2 1294 1299 1296.9 589.9 1280 1281 1280.3 25.8 1280 1281 1280.3 142.6 20 40 50 48 30.6 48 49 48.7 2.1 46 46 46 0.02 46 46 46 18.3 20 40 100 92 30.5 92 93 92.7 9.1 90 90 90 0.09 90 90 90 24.9 20 40 150 140 30.8 137 140 138.8 6.9 133 134 133.8 0.2 135 135 135 23.5 20 40 200 180 31 181 183 182.4 22.5 178 178 178 0.4 178 178 178 32.9 20 40 250 233 31.3 225 226 225.1 30.6 221 222 221.7 0.5 
222 222 222 25.3 20 40 300 276 31.6 272 273 272.5 40.8 265 267 266.1 1.1 266 267 266.8 37.7 20 40 350 321 31.8 320 327 321.4 51.9 309 310 309.3 1.5 310 310 310 34.8 20 40 400 390 62.2 361 363 361.6 63.6 352 353 352.3 2 353 353 353 56.9 20 40 800 801 64.5 711 714 712 219.7 703 704 703.4 8.2 702 702 702 70.9 20 40 1000 1001 137.4 895 899 896.6 315.5 877 879 878.4 13.4 879 880 879.2 114.7 20 40 1500 1501 128.3 1330 1333 1331.4 625.8 1316 1317 1316.9 30.5 1316 1317 1316.4 129.9 20 60 50 47 30.8 48 48 48 3.4 46 47 46.4 0.03 46 46 46 19.4 20 60 100 94 30.8 97 97 97 6 92 92 92 0.2 92 92 92 18.7 20 60 150 139 31.2 141 142 141.9 10.9 137 138 137.1 0.2 137 137 137 31.2 20 60 200 185 31.7 186 187 186.2 28.2 181 182 181.7 0.4 182 182 182 32.2 20 60 250 239 32.1 230 234 232.3 37.5 226 227 226.2 0.9 227 227 227 33.6 20 60 300 301 32.5 274 277 275.2 49.5 270 270 270 1.2 271 271 271 31.2 20 60 350 321 32.9 319 320 319.9 61.8 314 315 314.4 1.6 315 315 315 37.3 20 60 400 381 63.4 365 368 366.5 75.8 359 359 359 2.3 361 361 361 86.7 20 60 800 801 70.2 724 727 725.3 252.1 715 717 715.5 9.9 716 717 716.1 68.1 20 60 1000 1001 128.4 910 914 911.2 355.9 896 897 896.2 16.4 896 896 896 121.8 20 60 1500 1501 132.9 1356 1359 1357.5 700.1 1340 1341 1340.6 38.2 1340 1341 1340.2 131.6 20 80 50 48 30.6 49 49 49 4 47 48 47.1 0.07 47 47 47 22.5 20 80 100 95 31.2 97 97 97 8 93 93 93 0.1 93 93 93 28.8 20 80 150 140 31.9 141 142 141.9 24.1 138 139 138.1 0.3 138 138 138 29.4 20 80 200 190 32.3 187 188 187.1 34 183 184 183.5 0.6 183 183 183 34.2 20 80 250 234 32.5 233 239 236 20.8 229 230 229.1 0.9 229 229 229 39.1 20 80 300 283 35.5 280 283 281.9 58.2 274 275 274.3 1.5 274 275 274.1 38.3 20 80 350 330 37.5 327 331 328.6 72.4 318 320 319 2.1 319 319 319 39.9 20 80 400 373 68.1 369 373 371.1 87.8 364 364 364 2.8 364 365 364.2 66.9 20 80 800 801 69.4 739 742 740.7 288 724 726 725.2 12.3 726 727 726.4 83.5 20 80 1000 1001 131.4 921 924 922.5 399.9 904 905 904.5 19.6 908 908 908 142.1 20 80 1500 1501 137.3 1367 1370 1368.3 772.1 1354 1355 1354.3 44.3 1354 1355 1354.4 144.7 20 100 50 48 30.5 49 49 49 5.1 48 48 48 0.09 48 48 48 20.6 20 100 100 97 32.7 96 97 96.9 9.9 93 94 93.8 0.1 93 94 93.8 32.1 20 100 150 144 31.8 141 142 141.2 15.7 139 139 139 0.3 139 139 139 27.9 20 100 200 190 32.1 188 191 190.4 39.5 185 185 185 0.8 185 185 185 37.1 20 100 250 236 36.5 234 236 235.1 25.3 230 231 230.5 1.1 230 230 230 37.2 20 100 300 283 35.1 280 282 280.9 67.1 276 276 276 1.6 276 276 276 39.9 20 100 350 333 39.5 328 333 330 82.9 322 322 322 2.4 322 322 322 40.6 20 100 400 375 70.9 372 373 372.2 100 367 367 367 3.2 367 367 367 72.6 20 100 800 801 71.5 743 745 744 321.3 730 731 730.2 14.6 730 731 730.2 76.8 20 100 1000 1001 146.8 923 925 924 442.9 911 912 911.8 23.1 913 913 913 138.3 20 100 1500 1501 143.7 1381 1384 1382.1 826.1 1364 1365 1364.2 52.2 1366 1367 1366.6 160.5
http://arxiv.org/abs/2407.12213v1
20240716233123
Statistical mechanics of passive Brownian particles in a fluctuating harmonic trap
[ "Derek Frydel" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
http://arxiv.org/abs/2407.12488v1
20240717111323
What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice
[ "Corinna Hertweck", "Christoph Heitz", "Michele Loi" ]
cs.CY
[ "cs.CY" ]
What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice
Corinna Hertweck (University of Zurich & Zurich University of Applied Sciences, Zurich, Switzerland; corinna.hertweck@zhaw.ch), Christoph Heitz (Zurich University of Applied Sciences, Zurich, Switzerland; christoph.heitz@zhaw.ch), Michele Loi (AlgorithmWatch, Berlin, Germany; loi@algorithmwatch.org)
§ ABSTRACT In the field of algorithmic fairness, many fairness criteria have been proposed. Oftentimes, their proposal is only accompanied by a loose link to ideas from moral philosophy – which makes it difficult to understand when the proposed criteria should be used to evaluate the fairness of a decision-making system. More recently, researchers have thus retroactively tried to tie existing fairness criteria to philosophical concepts. Group fairness criteria have typically been linked to egalitarianism, a theory of distributive justice. This makes it tempting to believe that fairness criteria mathematically represent ideals of distributive justice and this is indeed how they are typically portrayed. In this paper, we will discuss why the current approach of linking algorithmic fairness and distributive justice is too simplistic and, hence, insufficient. We argue that in the context of imperfect decision-making systems – which is what we deal with in algorithmic fairness – we should not only care about what the ideal distribution of benefits/harms among individuals would look like but also about how deviations from said ideal are distributed. Our claim is that algorithmic fairness is concerned with unfairness in these deviations. This requires us to rethink the way in which we, as algorithmic fairness researchers, view distributive justice and use fairness criteria.
§ INTRODUCTION In the algorithmic fairness literature, there are numerous fairness criteria and metrics that try to operationalize the complex and context-dependent construct that is fairness <cit.>. As these criteria and metrics have often been proposed without a deeper engagement with the philosophical literature, there is a wave of work that has tried to tie the proposed statistical metrics of fairness to the philosophical debate. This is particularly the case for group fairness criteria and egalitarianism (see, e.g., <cit.>). Egalitarianism is a theory of distributive justice and, as such, tells us something about how benefits/harms should be distributed among members of a society.
There is thus a parallel to algorithmic decision-making systems that make decisions about individuals that result in different benefits/harms and could therefore be seen as distributing benefits/harms in a population. The fact that fairness criteria for decision-making systems have been linked to distributive justice thus seems intuitive. However, we believe that this often-made connection is blurred and leaves an important conceptual question unanswered: If theories of distributive justice are defined at the level of individuals, how do they relate to group fairness criteria defined at the level of socio-demographic groups? In this paper, we will rethink the concept of distributive justice and how it relates to algorithmic fairness. Based on this, we present our understanding of what we think is usually meant by "algorithmic fairness". The crux is this: Patterns of distributive justice like egalitarianism, sufficientarianism or maximin cannot be perfectly fulfilled in practice. If a pattern was perfectly fulfilled (e.g., for egalitarianism: all individuals receive the same outcome), then there would be no structural injustice (in how the pattern is fulfilled). If, however, the pattern is not perfectly fulfilled – which is the case with any real-world system as there will always be deviations from the ideal – then individual deviations from the ideal distribution (i.e., when an individual does not receive what they should receive under the ideal distribution[There could be multiple ideal distributions. There are, for example, multiple possible distributions that fulfill egalitarianism. One then has to decide how to measure the deviation from the ideal by deciding on one ideal. For egalitarianism, one option would be to measure the deviations from the distribution where everyone gets the mean of what everyone gets under the current distribution.]) might affect some groups more than others – in a biased way. This is what we will call structural injustices. Generally speaking, structural injustices are, for example, unfair opportunities (which are unjust according to <cit.> fair equality of opportunity principle) or systematic, structural discrimination against a certain category of people, which is also clearly unjust. [Note that <cit.> already points out the relationship between algorithmic fairness and structural injustice. She defines the latter based on the work of Iris Marion Young. However, she finds that "algorithmic fairness does not accommodate structural injustices in its current scope" <cit.> because mathematical fairness criteria do not consider the sociotechnical nature of automated decision-making systems and instead just condense fairness into a single metric that does not consider "power structures and social dynamics" <cit.>. We strongly agree with this view and urge practitioners not to misinterpret the fulfillment of statistical criteria as proof of justice or fairness. However, our paper will interpret the term "structural injustice" only in the context of what we can evaluate given the data of a decision-making system (and we will therefore sidestep questions of, e.g., procedural fairness).] In the terminology of distributions, we measure structural injustices as unfairly distributed individual deviations from an ideal distribution. We argue that this is what algorithmic fairness metrics are supposed to capture even though the algorithmic fairness literature has not made this clear so far. 
Note that this is our interpretation of the fairness concern in the algorithmic fairness literature because there is no agreed-upon definition of what we mean by fairness in algorithmic fairness. Our approach is different from others as algorithmic fairness is typically associated with theories of distributive justice themselves. We, however, argue that algorithmic fairness metrics do not check whether a theory of distributive justice is fulfilled, and instead, it is the deviation from said theory that fairness is concerned with. We thus rethink every theory of distributive justice as follows:[Note that we will assume that they are defined at the individual level, which is the case for the majority of them. An exception to this is Rawls's theory of justice <cit.>] There is a distributive pattern that defines how benefits / harms should be distributed between individuals. If this is perfectly fulfilled, then this is where the evaluation of the fulfillment of the theory of distributive justice ends. However, if there is a system where mistakes will be made, then it is not enough to test for the pattern of justice and we also need to test for structural injustice. We will clarify this conceptual map with a concrete discussion of what this means for distributions that are supposed to follow egalitarianism, sufficientarianism or maximin. In Section <ref>, we will retrace how algorithmic fairness, and more specifically, group fairness, has been connected to theories of distributive justice thus far. Based on the issues we find in this connection, we present our proposal for how to map algorithmic fairness to distributive justice in Section <ref>. This contains an explanation of how we view distributive justice and its relation to structural injustices. Next, we apply this mapping in Section <ref> to demonstrate what this means for algorithmic fairness criteria in the context of egalitarianism, sufficientarianism or maximin. Section <ref> highlights the necessity of our conceptual mapping by comparing our resulting fairness criteria from Section <ref> to more naive fairness criteria that are based on the current status of the debate. Section <ref> discusses how the work of <cit.> fits into our conceptual map. Finally, we discuss the implications of our work and possible future directions of research in Section <ref>. § RETRACING THE DEBATE: DISTRIBUTIVE JUSTICE AND GROUP FAIRNESS CRITERIA The parallels between distributive justice and automated decision-making systems mentioned in Section <ref> have led to the algorithmic fairness literature drawing parallels between distributive justice and algorithmic fairness. This section will retrace this development after introducing the basics of both concepts. §.§ Theories of Distributive Justice Theories of distributive justice propose how benefits and burdens should be distributed in society <cit.>. They can vary in whether they care about the just distribution between individuals or groups <cit.>. However, theories of distributive justice are typically concerned with individuals <cit.>. One notable exception from this is Rawls' second principle, which states that “social and economic inequalities are to be arranged so that they are both (a) reasonably expected to be to everyone’s advantage, and (b) attached to positions and offices open to all” <cit.>. Criterion (a) is known as the difference principle. 
All illustrations of the difference principle are given by considering the average expectations of different “social classes” <cit.> and what Rawls refers to as the “representative man [sic.] who is worst off” <cit.>. This representative person is a clear statistical or sociological generalization. However, the subsequent discussion within analytic philosophy focuses on a more abstract conception of justice disconnected from sociological categories. This is important to note as this makes it difficult to think about structural injustice at the group level for such theories. §.§.§ Egalitarianism and Beyond Egalitarianism is a theory of distributive justice that sees equality per se as morally desirable. According to egalitarianism, people should thus receive the same outcomes, be treated in the same way, be seen in the same way or be equal in some other way. Egalitarianism can thus take different forms depending on what one sees as the metric that ought to be equal.[Note that egalitarians advocate for equality in a qualified form. A common form of egalitarianism is, for example, “luck egalitarianism”: Both <cit.> and <cit.> defend this idea of equality (even at the cost of leveling down) as the best description of what justice is (before compromise with other values). What they mean by equality is “equality in conditions that are not the result of choice”. A plausible alternative is to say that equality is an appealing idea when it is framed in terms of equal desert. This is the “desert egalitarian” form. <cit.> frames “desert” as moral desert. Alternatively, desert can also be a placeholder for any other justification of inequality <cit.>, e.g., when used in generic sentences such as “people with more urgent needs deserve more urgent assistance by the state”.] While egalitarianism is intuitive, it has one remarkably unpalatable implication. Egalitarianism between two people can be achieved not only by increasing the benefit of one person but also by lowering the benefit of the other person (or even by lowering the benefit of both to a lower level). Thus, egalitarianism can be achieved by worsening the results for both people, which is often referred to as the “leveling down” objection. A reasonable agent may reflect that there are contexts in which achieving equality by leveling down would be considered a worsening from the point of view of fairness by most affected people. It would then seem unreasonable to impose fairness that may worsen the prospects of all the groups involved – especially when the affected groups themselves refuse it. Thus, if one cares about the social acceptability of fair prediction-based decisions, one should either abandon egalitarianism or suitably modify it in order to align it with stakeholders' view of fairness – at least to the extent that these appear to be reasonable. The most obvious way to do this is suggested by the history of the debate on the nature of justice in analytic political philosophy. Analytic political philosophers have been debating for decades the nature of fairness. Since the seminal work of John Rawls <cit.> alternatives to simple equality have emerged, such as the difference principle, that requires maximizing the resources of the least advantaged group in absolute terms. Note that the difference principle is the application of maxmin to a particular context and given specific boundary conditions. 
Hence, we refer to maxmin in the context of various operationalizations (where the context differs, sometimes significantly sometimes in subtle ways) from the one of Rawls’s difference principle. Sufficientarianism also does not recognize equality's intrinsic value: according to sufficientarianism, equality simply has no moral value once all people's needs are satisfied, and it only has value to the extent that it is instrumental in meeting people’s needs, or some other threshold of well-being that is equally significant for a human person as having a fundamental need fulfilled (e.g., a "non-basic" need, such as personal realization) <cit.>. This alternative becomes particularly persuasive when the quantity of a limited resource deemed sufficient aligns with the quantity necessary to prevent significant harm or catastrophe. Consider, for example, a scenario where there is a limited supply of a particular resource, such as food or medicine, which is sufficient to sustain some but not all individuals within a population. Let us assume the population consists of ten members, each requiring a minimum of five units of the resource to survive, with a total of forty units available. In such a case, for any members of the population to survive, some must receive a larger share than others. Distributing the resource equally, providing each person with four units, results in the direst outcome: the demise of everyone. <cit.> Yet other contexts might suggest other normative goals, such as prioritarianism, which we will, however, not discuss in this paper. These are situations where the particular context makes equality an unattractive normative goal that should be replaced by another pattern. Contextual pluralism seems to be necessary: According to a contextually pluralist approach, different patterns of justice may be most reasonable depending on features of the context, of the subjects affected, and the agents involved. §.§.§ Criterion-Type and Optimization-Type Theories of Distributive Justice We propose a categorization of theories of distributive justice into criterion-type and optimization-type theories. Criterion-type theories of distributive justice have a well-defined normative goal, for which we can check whether it is fulfilled. These types of theories have counterfactual-invariant goals: We can always say whether either is achieved by observing the actual distribution and it is entirely irrelevant what other distributions could be realized. This is the case for egalitarianism and sufficientarianism. By contrast, maxmin requires us to compare the actual distribution to counterfactual ones (e.g., feasible alternatives) as its goal is the optimization of a value. How well we do can only be understood in comparison to possible alternatives. We will refer to this as optimization-type theories of distributive justice.[Other examples of this are prioritarianism and utilitarianism.] For the operationalization of theories of distributive justice, which we will get to later, it is important to understand these types. §.§ Group Fairness Criteria Standard group fairness criteria such as statistical parity (P(D=1|A=a)=P(D=1|A=b)), equality of opportunity (P(D=1|Y=1, A=a)=P(D=1|Y=1, A=b)) <cit.> or positive predictive value parity (P(Y=1|D=1, A=a)=P(Y=1|D=1, A=b)) demand equality in some metric across socio-demographic groups. What changes between them is what should be equal and for whom this should be equal. 
Statistical parity, for example, demands equal rates of positive decisions across groups while equality of opportunity also demands that but only for the parts of the group with a positive Y label. If we assume a measure M that is measured at the group level, these fairness criteria thus take the form M(A=a)=M(A=b). In this paper, we take a utility-based approach (as previously suggested by <cit.>), meaning that instead of comparing shares of positive decisions, we assume that we want to compare their consequences as measured by the utility U of a decision. We thus assume continuous utility values and compare these continuous values between groups. We take this approach for simplicity's sake to illustrate a more general point than it would have been possible with a binary utility (which is what is often assumed in the algorithmic fairness literature). §.§ Status of Conceptual Connection and Open Questions Early work proposed group fairness criteria based on standard performance metrics such as accuracy and measures of the confusion matrix (see, e.g., <cit.>). Starting in 2016, the infamous COMPAS case <cit.> started a debate about the compatibility of fairness criteria (see <cit.>). With that came the more urgent need for an understanding of fairness criteria from a philosophical and normative perspective. Attempts to connect group fairness criteria to philosophical theories mostly connected them to forms of egalitarianism. <cit.> and <cit.>, for example, connect different group fairness criteria to different forms of egalitarianism while <cit.> gives a philosophical account of discrimination and egalitarianism to show "that ‘fairness’ as used in the fair machine learning community is best understood as a placeholder term for a variety of normative egalitarian considerations" <cit.>. More recently, <cit.> has criticized algorithmic fairness's focus on egalitarianism and discussed other theories of distributive justice that the literature could make use of. Some of these have already been implemented – also as group fairness criteria (see, e.g., <cit.>). However, in a lot of this work, the link between algorithmic fairness and distributive justice remains vague and underspecified. <cit.>, for example, writes that "Group fairness measures are related to egalitarian thinking in distributive justice" <cit.>. It seems intuitive to connect distributive justice to algorithmic fairness as both are about the distribution of benefits/harms, but an important question remains open: Theories of distributive justice typically operate at the individual level, but how do they relate to group fairness criteria that operate at the level of socio-demographic groups? We claim that this unclarity is based on a fundamental misconception about how algorithmic fairness and distributive justice relate. We address this point with our rethinking of distributive justice. § RETHINKING ALGORITHMIC FAIRNESS AS APPROXIMATE JUSTICE In this section, we present how we view distributive justice in relation to algorithmic fairness. We will also introduce the terminology that we will use in this context, such as distributive justice and structural injustice. 
§.§ Differentiating Theories of Distributive Justice, Structural Injustices and Distributive Patterns Broadly speaking, our rethinking of distributive justice in the context of algorithmic decision-making builds on the hypothesis that decision-making systems are never perfect and that an ideal distribution cannot be reached in the real world, so we have to allow for deviations from perfect distributive justice. However, when we allow for deviations from the ideal, there is a chance that these individual deviations are unfairly distributed. It could be that a group is systematically more likely to be a victim of disadvantageous deviations. This is what we would then consider to be a structural injustice and what – so we argue – we want to test for in algorithmic fairness. This can be done with fairness criteria that operate at an individual level (e.g., through individual fairness <cit.> as pointed out by <cit.> or through causal models <cit.>), but structural injustice is difficult to evaluate at the individual level. This is where socio-demographic groups come in as they have the practical advantage that we can more easily look for patterns of structural injustices – this is the basis of group fairness criteria. In theory, there are thus two options when we want to evaluate whether a theory of distributive justice is fulfilled: * Chosen theory of distributive justice is perfectly fulfilled (measured at the level of individuals) → no further tests necessary * Chosen theory of distributive justice is not perfectly fulfilled → (a) check if the theory of distributive justice is approximately fulfilled (measured at the level of individuals) and (b) check for signs of structural injustices (group fairness criteria measure this at the level of socio-demographic groups) Thus, whenever our chosen theory of distributive justice cannot be perfectly fulfilled (which is essentially always the case in the real world),[Note that egalitarianism, sufficientarianism and maximin are what <cit.> calls "ideal theories" that assume ideal conditions, such as that every individual follows the rules. As <cit.> criticize, algorithmic fairness focuses on ideal theories of justice instead of non-ideal ones (which are needed in the real world).] we have to check for structural injustices. One might ask why most theories of distributive justice do not account for this. Presumably, this is because most theories are not concerned with their operationalization. They instead tend to focus on idealized notions of justice – assuming the operationalization to be a technical question, not a normative one. In the context of our paper, we thus differentiate: * Theory of distributive justice: Ideal distribution of benefits/harms among individuals * Structural injustices: Unfair distribution of deviations from the perfect fulfillment of the theory of distributive justice among groups Notice that for both the theory of distributive justice and structural injustices, we have to define how something should be distributed. The difference is in what is distributed: For theories of distributive justice, it is benefits/harms themselves that are distributed by a decision-making system. For structural injustices, it is the deviations from the ideal distribution of benefits/harms. In both cases, we have to define what the ideal distribution should look like – we could, e.g., ask for equality in benefits/harms or deviations. This is what we refer to as a distributive pattern. 
A distributive pattern does not say anything about what should be distributed or among whom it should be distributed – it is, as the name suggests, just a pattern for how something (let us call this X) should be distributed among individuals/groups/etc. (let us call these P). The distributive pattern can, e.g., be egalitarian (i.e., demanding equality of X among P), prioritarian (i.e., demanding the maximization of the sum of X across P), sufficientarian (i.e., demanding that the X of everyone in P reaches a certain threshold t) or maximin (i.e., demanding the maximization of the minimal X among P). Note that these are – confusingly – also the names of theories of distributive justice. The difference is that the respective theories of distributive justice do say something about what P is (typically individuals) and what X is.[Although there are typically many variations of a theory of distributive justice. There are, e.g., different variations of egalitarianism that demand equality in resources, well-being or how we relate to people.] Distributive patterns are thus used by both theories of distributive justice as well as structural injustices. Note that when we derive fairness criteria to test for structural injustices under different theories of distributive justice in Section <ref>, we will focus on the egalitarian pattern to measure structural injustices. However, Section <ref> will discuss a possible alternative using the sufficientarian pattern. §.§ Terminology: Criteria and Metrics Before we dig in deeper, let us highlight the way in which we will use the terms criterion and metric in this paper, which are often used interchangeably in the literature. In our paper, we view criteria as binary conditions that can either be fulfilled or not. If a criterion is not perfectly fulfilled (which is to be expected), it is helpful to be able to quantify the deviation from this perfect state. This is what we will refer to as a metric. For example, if we expect equality in some measure M (i.e., we have an egalitarian criterion ∀ i,j ∈ P: M_i=M_j), a possible metric is the absolute difference between the maximum and minimum values of the measures: max_i ∈ P(M_i) - min_j ∈ P(M_j). Note that we can use a metric to create an approximate criterion by demanding that the criterion is sufficiently well fulfilled as measured by, e.g., the metric falling below a certain threshold t, e.g., max_i ∈ P(M_i) - min_j ∈ P(M_j) < t. An issue is that when we derive a metric for a criterion (or the other way around), we always have multiple options for doing so. These translations emphasize different aspects of what the criterion (or metric) measures, which corresponds to subtle variations in our understanding of the criterion (or metric). Note that we will use the terms criterion and metric both in the context of distributive justice, where we will call them distributive justice criterion / metric, and in the context of structural injustices, where we will call them fairness criterion / metric. For the latter, we note that a more consistent term for our paper would be "structural justice criterion / metric", which we do not use as "fairness criterion / metric" is the established term in the algorithmic fairness literature. 
The following list gives an overview of this terminology: * Distributive justice criterion: Measured at the individual level; tests whether the theory of distributive justice is perfectly fulfilled, which is presumably never the case in the real world * Distributive justice metric: Measured at the individual level; measures deviation from perfect fulfillment of the theory of distributive justice * Fairness criterion: Can be measured at different levels, but group fairness measures this at the level of socio-demographic groups; tests whether there are any signs of structural injustices, which we will probably always see in the real world as perfection is impossible to achieve, so while imperfect fulfillment of distributive justice tells us to look for signs of structural injustices, this would not be the way to do it in practice. * Fairness metric: Group fairness again measures this at the level of socio-demographic groups; measures deviation from perfect fulfillment of the fairness criterion Note that we cannot define a distributive justice criterion for all theories of distributive justice and that we cannot always define just a single unambiguous distributive justice metric. For optimization-type theories (see Section <ref>, we cannot define a distributive justice criterion that does not take other feasible distributions into account – only a distributive justice metric. For criterion-type theories, the distributive justice criterion can be defined, but the metric is not unambiguous, and there are multiple options for how to define it. We will revisit this point when operationalizing maximin in Section <ref>. §.§ Implications for Algorithmic Fairness In current discussions of algorithmic fairness, this differentiation has not been made clear. Instead, the concepts of distributive justice, structural injustices and distributive patterns have been mixed. We argue that algorithmic fairness has so far only looked at structural injustices but has not named this as such and has not made clear how this is related to theories of distributive justice. When we want to test whether there are structural injustices in a given distribution – as we claim we do in algorithmic fairness – we thus have to define the following: (1) What theory of distributive justice the decision-making system is supposed to follow (because what structural injustice means depends on this contextual information) and then (2) how we define structural injustice in this context. Checking for perfect fulfillment of fairness criteria is not a realistic option in practice, so practitioners need both a distributive justice metric and a fairness metric. Existing group fairness criteria apply an egalitarian distributive pattern to test for structural injustices. However, note that we are not assuming that structural injustice is necessarily defined as the the violation of an egalitarian pattern. We will deal with the question of how structural injustice could be measured with other patterns, e.g., in sufficientarian terms, in Section <ref>. Fig. <ref> outlines the mapping of a theory of distributive justice and structural injustices. Note that only a part of the process of evaluating distributive justice and structural injustice is shown there as there is no obvious procedure for this when a distributive pattern is not perfectly fulfilled. 
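To make the distinction between a distributive justice metric and a fairness metric concrete, the following is a minimal sketch of our own construction, not taken from the paper: it assumes an egalitarian distributive pattern, takes the population mean as the ideal utility (one of the options mentioned in an earlier footnote), uses the variance of utilities as the distributive justice metric, and measures structural injustice as the spread of group-wise mean deviations from that ideal. All names and numbers are hypothetical.

import statistics

def distributive_justice_metric(utilities):
    """Individual-level deviation from the egalitarian ideal, here measured as
    the variance of utilities (one of several possible choices)."""
    return statistics.pvariance(utilities)

def fairness_metric(utilities, groups):
    """Group-level indicator of structural injustice under an egalitarian
    pattern: the spread of mean deviations from the ideal across groups.
    As an assumption, the ideal utility is taken to be the population mean."""
    ideal = statistics.mean(utilities)
    per_group = {}
    for u, g in zip(utilities, groups):
        per_group.setdefault(g, []).append(u - ideal)
    mean_devs = {g: statistics.mean(devs) for g, devs in per_group.items()}
    # Ideally, every group deviates from the ideal by the same amount; the
    # metric quantifies how far we are from that.
    return max(mean_devs.values()) - min(mean_devs.values())

# Hypothetical example: utilities of eight individuals in two groups.
u = [1.0, 0.9, 1.1, 1.0, 0.6, 0.7, 0.5, 0.6]
a = ["group 1"] * 4 + ["group 2"] * 4
print(distributive_justice_metric(u))  # overall inequality in utilities
print(fairness_metric(u, a))           # how unevenly the shortfall is shared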
Ideally, both the distributive justice metric and the fairness metric show small enough deviations from the ideal, but these metrics might also be at odds – in which case trade-offs might be needed. Ideally, we could demand for both the chosen distributive justice metric and fairness metric to be optimized. However, they might be at odds and the question of how to balance these two is a moral one. When we then have multiple possible models or distributions to choose from, it could make sense for practitioners to quantify the priority of the distributive justice metric compared to the fairness metric to help them choose a model if the better fulfillment of the theory of distributive justice and reduced structural injustices are at odds. One could, for example, imagine a Pareto front that shows the trade-offs between the two. One option would be to make the fairness criterion a constraint and then select the model that best fulfills the theory of distributive justice one has chosen.[This is what Rawls demands in his second principle <cit.>: As Rawls puts fair equality of opportunity as a condition over the difference principle, his approach would be to filter for all distributions that do not show signs of structural injustices (violations of the fair equality of opportunity principle) and to then select the one that best fulfills the difference principle. He calls this the lexical priority of fair equality of opportunity relative to the difference principle: “The second principle of justice is lexically prior to the principle of efficiency and to that of maximizing the sum of advantages; and fair opportunity is prior to the difference principle” <cit.>. ] Our takeaway from this is that whatever theory of distributive justice we believe our model should follow sets the basis for how we measure structural injustice: These two evaluations are not independent of each other. Rather, the measure of structural injustice depends on the chosen theory of distributive justice. The algorithmic fairness literature thus has to discuss fairness criteria / metrics in the context of their respective theory of distributive justice. § FROM THEORIES OF DISTRIBUTIVE JUSTICE TO FAIRNESS CRITERIA In this section, we will show how the way in which we view distributive justice impacts algorithmic fairness. Specifically, we will go over three theories of distributive justice (egalitarianism, sufficientarianism, and maximin) and discuss how to check the fulfillment of the theory of distributive justice as well as potential structural injustices. §.§ Egalitarianism We start with egalitarianism as this is the theory of distributive justice that group fairness criteria have retroactively been tied to (see Section <ref>). §.§.§ Test for Theory of Distributive Justice Distributive Justice Criterion If egalitarianism is perfectly fulfilled, everyone gets exactly the same. We can, therefore, evaluate egalitarianism as a theory of distributive justice at the individual level with the following justice criterion: The distributive justice criterion for egalitarianism is that all (equally deserving) individuals have the same utility: ∀ i,j ∈ P: u_i = u_j. Distributive Justice Metric If the justice criterion is not perfectly fulfilled, our rethinking of distributive justice has shown that we can check whether the distributive pattern is approximately fulfilled. When we want to check whether our distribution fulfills egalitarianism well enough, we could come up with a metric that measures the deviation from perfect fulfillment. 
Examples of possible distributive justice metrics for egalitarianism are: * The Gini coefficient <cit.> is a popular inequality metric that is usually used to measure wealth or income inequality, e.g., in a country * Variance in utilities across the entire population: Var(U) * Difference between the maximum and minimum utility: max_i ∈ P u_i - min_j ∈ P u_j * Ratio of the maximum and minimum utility: max_i ∈ P u_i / min_j ∈ P u_j We could then set a threshold for the metric to define how far the distribution can deviate from perfection to still be called "approximately just". This gives us an approximate distributive justice criterion. The aim is to say "While not every utility is exactly equal, the utilities are similar enough according to our chosen metric." §.§.§ Test for Structural Injustices As we know, when the theory of distributive justice is not perfectly fulfilled, this gives rise to potential biases. This is what we would then consider to be a sign of structural injustice, which we have to check for separately. Let us first figure out what structural injustices would look like in this context. In the following, example 1 shows what structural injustice could look like for a distribution that approximately fulfills the theory of distributive justice. Example 2 then highlights that a distribution that does not show these signs of structural injustice does not have to be egalitarian though – thereby showing that there is indeed a difference between checking for an approximately just distribution according to egalitarianism and checking for the absence of structural injustice. * Example 1: Assume that individuals are graded on an exam. We pick the students that are equally deserving of a B to check the grading process. With this background information, everyone in our set should get a B. Assume further that there are two groups, group 1 and group 2. The teacher is biased against group 2, which means they give every group 2 student a C but every group 1 student the B they deserve.[Note that it does not actually matter that they get grades close to B. They could all get a D and the test for equality / low variance would still pass since all that matters is that students get (about) the same grade.] Of course, the individual-level check for egalitarianism would fail as not every student gets the same grade. However, if we assume group 2 is small in size, the Cs might just be a few outliers that do not influence the variance much. We might thus say that the distribution is just. However, we cannot tell yet if there are structural injustices. For this, we have to check the grades of group 1 and group 2. We could, for example, compare grade averages to realize that the teacher systematically grades group 2 students lower than group 1 students. We thus need the fairness criterion to check for structural injustice. * Example 2: Assume that instead of distributing bank loans based on an applicant's properties, a bank clerk just tosses a coin. Everyone has a 50-50 chance of getting the loan. While there is certainly something to be said about procedural justice here (a loan grant should probably not be decided based on a coin toss), let us focus on outcome fairness for now. Here, it seems that the resulting loans are not fair on an individual level if we think that getting a loan should at least be somewhat related to whether we can repay the loan or some other individual characteristics. 
If we focus on equally deserving individuals, say those who are able to repay the loan, we see that there is high variance, so our distribution would not be seen as just by the distributive justice metric. However, we also know for sure that whether you get a loan or not is not causally influenced by group membership – there is thus no structural injustice, which is what we try to figure out with our fairness metric. A fairness criterion comparing group averages would also show us that there is no systematic bias – assuming that there are plenty of people asking for loans in both groups, so that the loan approval rates are similar in both groups. These two examples show that testing for the theory of distributive justice and testing for structural injustices are two different things and that both are necessary. Fairness Criterion Fairness criteria are supposed to detect possible structural injustices, which we defined as unfairly distributed deviations from an ideal distribution. We are thus looking for patterns of systematic differences between groups. This shows itself when the expected utilities of both groups are compared. A (notable) difference in these expectation values hints at possible structural injustices. A possible fairness criterion for egalitarianism is that the expected utilities of all groups are equal: ∀ a,b ∈ A: E(U|a)=E(U|b). This is essentially how standard group fairness criteria operate.[Except that they do not take a utility-based approach and instead compare binary decisions or outcomes across groups.] This tells us that standard group fairness criteria seem to assume egalitarianism as the theory of distributive justice and then also use the egalitarian distributive pattern to measure potential structural injustices. Fairness Metric Since this fairness criterion is almost impossible to perfectly fulfill in practice, we also want to measure the deviation from the criterion, which we formalize as metrics. The list of possible metrics is similar to the list of possible distributive justice metrics: the Gini coefficient, variance, a difference or ratio, etc. This similarity occurs because both criteria (the distributive justice criterion and the fairness criterion) demand equality in something (utilities/expected utility) across a set (of individuals/groups). §.§ Sufficientarianism We now apply our approach to sufficientarianism to derive appropriate fairness criteria and metrics from this. This will lead us to an important insight: To check for structural injustices in a sufficientarian distribution, we still use an egalitarian pattern – what changes compared to the fairness criteria for egalitarian distributions is the measure that we demand to be equal. §.§.§ Test for Theory of Distributive Justice Distributive Justice Criterion If sufficientarianism is perfectly fulfilled, everyone's utility is above the predefined minimum threshold: The distributive justice criterion for sufficientarianism is that all (equally deserving) individuals have a utility above the threshold t: ∀ i ∈ P: u_i > t. Distributive Justice Metric An intuitive distributive justice metric is the share of individuals with a utility above the threshold t. To get an approximate distributive justice criterion, this metric can be demanded to be sufficiently high. 
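As with egalitarianism, the sufficientarian criterion and metric are straightforward to write down; the sketch below uses hypothetical utilities, a hypothetical threshold t, and our own helper names.

```python
import numpy as np

def sufficientarian_criterion(u, t):
    """Criterion: every (equally deserving) individual's utility clears the threshold t."""
    return bool(np.all(u > t))

def sufficientarian_metric(u, t):
    """Metric: share of individuals whose utility lies above the threshold t."""
    return float(np.mean(u > t))

u = np.array([0.4, 1.2, 1.5, 0.9, 2.0])     # hypothetical utilities
t = 1.0                                      # hypothetical sufficiency threshold
print(sufficientarian_criterion(u, t))       # False: two individuals fall short
print(sufficientarian_metric(u, t))          # 0.6
# Approximate criterion: demand that the share is sufficiently high, e.g. at least 0.9.
print(sufficientarian_metric(u, t) >= 0.9)   # False
```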
§.§.§ Test for Structural Injustices We again discuss two examples to show that an approximately just distribution does not have to go hand in hand with the absence of structural injustices to help us understand what would constitute structural injustices in the context of sufficientarianism. * Example 1: Picture a world in which 95% of individuals reach the threshold, which is assumed to be sufficient for the distribution to be seen as approximately fulfilling sufficientarianism. However, the 5% not reaching the threshold are all in group 1 while the 95% reaching the threshold are in group 2. There is clearly an issue of structural injustice despite the approximate distributive justice test passing. * Example 2: Imagine that approximately equal shares of group 1 and 2 are above the threshold, so that there is no issue of structural injustice. Now imagine, however, that overall only 1% of the entire population is above said threshold. Sufficientarianism is then not reached even though there is no issue of structural injustice. Fairness Criterion Sufficientarianism assumes that there are utility thresholds that, ideally, every individual should reach. This could, for example, be the case in the distribution of medicine where everyone needs one pill to become healthy again and half pills do not work. Then the threshold is one pill. It does not matter whether an individual receives more than one pill – they will be healthy either way. Likewise, it does not matter whether they receive half a pill or no pill as they will stay sick. We need to keep this in mind when thinking about what structural injustices are as we first need to define what deviations from the ideal distribution are. Under the ideal distribution, everyone reaches the threshold. Deviations are cases where people do not reach the threshold. It does not matter how far they fall below the threshold (a half pill does not cure the sick patient), so for the deviations, we only care whether they are below the threshold; this is thus a binary measurement. Then, the question of structural injustice is: What is an unfair distribution of these deviations? We propose the following fairness criterion: A possible fairness criterion for sufficientarianism is that the share of individuals with utilities above the threshold t_u is equal across all groups: ∀ a,b ∈ A: E(U'|a)=E(U'|b), where U'=I(U>t), so U' models the resulting utility as binary: either one is above the threshold (U'=1) or below (U'=0). Here, we notice that checking for structural injustices under sufficientarianism leads us to an egalitarian distributive pattern: To check if there is a systematic bias against a group, we check if groups are (approximately) treated equally. What changes compared to Section <ref> is what we demand to be equal: It is not E(U) but E(U'). One can then define fairness metrics and approximate fairness criteria in the same manner as we did for the egalitarian theory of distributive justice in Section <ref>. §.§ Maximin Let us now apply the same thinking to maximin. Remember that maximin is an optimization-type theory of distributive justice (see Section <ref>), which has an influence on the operationalization. §.§.§ Test for Theory of Distributive Justice Distributive Justice Metric If we follow maximin, the utility of the worst-off individual min_i ∈ P(u_i) is maximized. However, assuming that decision-making systems are not perfect, we might not want to focus on individuals. 
To avoid looking at the single worst-off individual as there could always be an outlier, we propose measuring the expected utility of the worst-off 5% of individuals. Note that here we only derive a metric and not a criterion due to maximin being a optimization-type theory of distributive justice.[We could turn it into a criterion by asking that among all possible distributions R, we choose the one that maximizes our metric.] §.§.§ Test for Structural Injustices How does structural injustice play out for maximin? Imagine the following two distributions of wages per hour of work where all people whose names start with the letter "A" are in group A and all people whose names start with the letter "B" are in the marginalized group B. Assume that group A and B are roughly equal in their natural talents and motivations relevant to the job. Assume also that the distributions exhibited here are statistically significant, so we assume that the inequality is due to unequal opportunities. * Anna: 10€, Berta: 20€, Anton: 30€, Basti: 40€, Adriana: 50€, Barbara: 60€ * Berta: 10€, Basti: 20€, Barbara: 30€, Anna: 40€, Anton: 50€, Adriana: 60€ Both distributions are equally good according to maximin as the worst-off person always has 10€ (and using the approximate version, the worst-off 5% are also equal in their utility). Given that maximin cares about the worst-off individual, this theory of justice assigns moral value to the one that is worst-off. This gives us reason to believe that someone who subscribes to maximin and cares about structural injustices should find the first distribution preferable to the second one.[Note that this cannot be avoided by demanding a leximin distribution as the two distributions are also equally good according to leximin. Yet, since leximin cares about those who are worst off, someone who cares about structural injustices should again prefer the first distribution.] There are multiple possible explanations as to why they should prefer the first distribution: * The worst-off individuals of the second distribution are all in group B. * The worst-off individuals of group B (Berta, Basti) are worse off than the worst-off individuals of group A (Anna, Anton) in the second distribution. To evaluate structural injustices, we do not just check what group the single worst-off individual belongs to: of course, this individual has to belong to some group, and this does not yet constitute a structural injustice against that group. We argue that it only becomes problematic once we see a pattern at the group level. We, therefore, look at the 5% worst-off individuals (in the population/per group) – similar to the distributive justice metric. Fairness Criterion From this, we could come up with different fairness criteria: * Fairness demands that it is (approximately) equally likely for members of each group to be in the worst-off 5%. * Fairness demands that the expected utility of the worst-off 5% in each group have an (approximately) equal expected utility. Again, these fairness criteria only tell us something about fairness and not about which distribution is more just according to maximin. Imagine, for example, a distribution where half of the worst-off 5% are in group A and the other half in group B – or a distribution where the worst-off 5% of group A have the same expected utility as the worst-off 5% of group B. A second distribution does not fulfill these, but the worst-off individual is (or the worst-off 5% are) better off in distribution 2. 
In that case our fairness criterion would tell us to prefer distribution 1, but maximin would actually require distribution 2. This again shows that the criteria and metrics that we use to evaluate the justness of a distribution are different from the criteria and metrics that we use to evaluate structural injustices. Analogous to fairness criteria for an egalitarian (Section <ref>) and a sufficientarian (Section <ref>) theory of distributive justice, we can turn the fairness criteria for maximin into fairness metrics and approximate fairness criteria. Note that by picking two options to translate maximin to a fairness criterion, we get a multitude of fairness metrics as there are also multiple options for translating each of these fairness criteria into a metric. What does this tell us and how should we choose between them? We argue that they highlight different aspects of what matters about maximin and structural injustices. Depending on one's views on this, one can choose different paths and end up with a different criterion or metric. § COMPARISON TO NAIVE OPERATIONALIZATIONS As previously discussed, examples to illustrate theories of distributive justice are often given at the individual level. However, group fairness criteria are defined at the group level. Our approach shows fairness criteria depend on the theory of distributive justice they were created for. This section will highlight the necessity of our approach by comparing the resulting fairness criteria to a more naive approach that simply takes the theory of justice defined at the individual level and replaces the term "individual" with "group" and "utility" with "expected utility". Such an approach would result in the following fairness criteria: * Egalitarianism: Every individual has the same utility. → Every group has the same expected utility. * Sufficientarianism: Every individual's utility is above the threshold. → Every group's expected utility is above the threshold. * Maximin: Maximize the utility of the worst-off individual. → Maximize the expected utility of the worst-off group. These naive operationalizations of theories of justice as group-level fairness criteria are a tempting way to test for structural injustice – after all, Section <ref> showed that it seems to work for egalitarianism. In this section, we will discuss why the naive operationalization of theories of justice as group-level fairness criteria can turn out to be logically inconsistent with said theory of distributive justice and mismatch our intuitions about said theory. §.§ Sufficientarianism Our approach to operationalizing sufficientarianism results in a different fairness criterion than the naive operationalization.[One might be tempted to think that <cit.> suggests such a naive operationalization of sufficientarianism: instead of demanding that all individuals reach a certain threshold, <cit.>'s fairness criterion demands that all groups reach a certain threshold. However, we argue that <cit.> should actually be understood differently. Section <ref> will therefore discuss a more charitable interpretation of <cit.>'s work as well as how their work fits into our understanding of distributive justice.] As we will show in this section, the naive operationalization is not logically coherent with the idea of sufficientarianism. To see this, imagine two groups. In group 1, 95% of the members are above the threshold. In group 2, only 5% are above the threshold. This is illustrated in Fig. <ref>. 
Now imagine further that the 95% above the threshold in group 1 are just barely above the threshold while the 5% below the threshold are far below the threshold. The resulting average group utility is below the threshold, so the naive operationalization of sufficientarianism would constitute that group 1 does not fulfill the sufficientarian requirement. In group 2, however, the 95% below the threshold are barely below the threshold while the 5% above the threshold are far above the threshold. Here, the resulting average utility is above the threshold, meaning the naive operationalization would see sufficientarianism to be fulfilled – despite 95% of the group not reaching the threshold. Assuming that the threshold represents what is needed to survive, this naive operationalization would assume that group 2 is better off than group 1 even though 95% of individuals in group 2 are starving while it is "only" 5% in group 1 that are starving. This means that this naive operationalization does not tell us anything about structural injustices. Clearly, group 1 is better off according to sufficientarianism and group 2 is the disadvantaged group. Our fairness criterion, which checks if approximately equal shares from all groups are above the threshold, accounts for this. Moreover, this naive operationalization is not even fit to check if sufficientarianism is fulfilled. This is where our distributive justice metric is necessary, which checks if a large enough share of the population reaches the threshold. §.§ Maximin Both <cit.>'s[<cit.> briefly discusses maximin as an alternative to egalitarianism citing the leveling down objection. They suggest that "the max-min distribution deviates from equality only when this makes the worst off group better off", but do not discuss why maximin can or should be analyzed at the group level.] and <cit.>'s[<cit.> integrates maximin into a formalized fairness criterion, which has already been used in other influential papers such as <cit.>. To be precise, they actually use minimax, which is just the counterpart of maximin for situations in which what we compare across groups is something negative or harmful that groups want to minimize. The idea is thus to look at the group with the maximum value and compare this maximum value across possible distributions/models to then choose the minimum. To make it easier to follow <cit.> in the context of our paper where we use maximin, we will adjust the following explanation of <cit.> to make it maximin: <cit.> suggests a multi-objective optimization that finds the distribution where the group with the lowest accuracy is maximized under the condition that this distribution is also Pareto-efficient across all groups' accuracy scores. This means that one would not select a distribution where the accuracy score of any group could be improved without worsening the accuracy score of the worst-off group. Fairness is thus measured as the worst accuracy score across all groups. The paper does not give a moral justification for why they care about accuracy scores.] maximin fairness metric correspond to the naive version of translating maximin to the socio-demographic group setting that we described at the beginning of Section <ref>. The issue with this operationalization is that if the worst-off individuals are in a group with a lot of very well-off individuals, then the actual worst-off individuals are ignored in the distribution process. 
Imagine, for example, that the 5% worst-off individuals are distributed across groups that are, on average, fairly well-off as seen for group 1 in Fig. <ref>. Now imagine that another group (group 2 in Fig. <ref>) has a lower average utility, but the worst-off individuals from that group are doing much better than the worst-off individuals from group 1. The worst-off individuals from group 1 are then ignored as we only maximize the utility of the group that is, on average, worst-off, so group 2. As the previous discussion of how to evaluate structural injustice for maxmin showed, there are other fairness criteria that are – so we argue – more in the spirit of maximin. § IS GROUP FAIRNESS INHERENTLY EGALITARIAN? In Section <ref>, we defined fairness criteria under different theories of distributive justice. This helps us check whether a sufficientarian or maximin decision-making system treats people of a disadvantaged group unfairly. You may have noticed that for sufficientarianism, we demanded equality in the share of groups' members above the threshold t while for maximin, we proposed two fairness criteria that both demand equality in a measure between groups. One might ask if there are alternatives to this egalitarian fairness approach – in particular because this type of egalitarianism still carries the risk of leveling down: When such a fairness criterion is enforced, it could actually happen that the share of members above the threshold is lowered in all groups (for sufficientarianism) or that, e.g., the expected utility of the worst-off individuals is lowered (for maximin). <cit.> have therefore suggested another approach, which does not check for equality but instead demands some measure to reach a certain level. They call this type of fairness constraint "leveling up" and describe it as follows: "[If] we believe that people are being harmed by low selection rates, precision, or recall, instead of enforcing that these properties be equalised across groups, we can instead require that every group has, at least, a minimal selection rate, precision, or recall." <cit.> As mentioned in Section <ref>, this is reminiscent of sufficientarianism. However, <cit.> never portray their proposed fairness criterion as an operationalization of sufficientarianism. Instead, they portray it in a way that is analogous to the way egalitarianism is used in group fairness criteria: Not as a criterion to check if this theory of distributive justice is fulfilled but rather as a fairness criterion to check if there are structural injustices at the group level. In fact, "leveling up" does not represent a single fairness criterion but contains multiple possible fairness criteria: Leveling up does not specify what ought to be above the minimum threshold[They mention multiple options such as the true positive rate, the false positive rate, etc. While their examples focus on comparing binary decisions across groups ("The family of post-processing methods we consider tune a separate offset for each group, which alters the proportion of individuals in each group that receive a positive decision." <cit.>), they do not explicitly discuss their approach as being limited to binary decisions. We, therefore, interpret their proposal as being extendable to compare utility values.] – just how egalitarianism does not specify what ought to be equal (the equality of what debate <cit.>). 
Therefore, leveling up could ask for, e.g., the selection rates to be above a threshold t_leveling-up (∀ a ∈ A: P(D=1|A=a) > t_leveling-up) or for the true positive rate to be above the threshold (∀ a ∈ A: P(D=1|Y=1, A=a) > t_leveling-up) – just like egalitarianism contains multiple fairness criteria depending on what measure M is chosen. A formalization of this making use of the generic measure M and the threshold t_leveling-up is ∀ a ∈ A: M(A=a) > t_leveling-up. We can combine our operationalization of fairness criteria for egalitarianism, sufficientarianism and maximin with <cit.>'s leveling up. For an egalitarian distribution, we can demand that the expected utilities of groups reach a minimum threshold t_leveling-up: ∀ a ∈ A: E(U|A=a) > t_leveling-up. Instead of demanding equal expected utilities, we would demand a minimum level for the expected utility of each group. In the case of sufficientarianism, we can demand the share of individuals with utilities above the threshold t to reach a certain threshold t_leveling-up for all groups: ∀ a ∈ A: E(U>t|A=a) > t_leveling-up instead of demanding the share of individuals above the threshold t to be equal across groups (E(U>t|A=a)=E(U>t|A=b)). This means that while we want equal shares of members from all groups to reach the threshold, we can now forgo equality if it harms a/all group and instead just demand that groups bring at least a certain share t_leveling-up of individuals above the required threshold t. This notion is useful if you believe that it is, e.g., sufficient to get 90% of individuals above the threshold in every group and it does not matter if one group gets 100% of their members above the threshold while another group only gets the required 90% above the threshold. If you believe that this is a sign of structural injustice, however, you should check for equal shares and thus use the egalitarian distributive pattern to test for structural injustices. Similarly, we can adapt our fairness criteria for maximin. For the second fairness criterion (the expected utility of the worst-off 5% in each group should be (approximately) equal across groups), we could, for example, demand that the expected utility of the worst-off 5% in each group reaches a minimum threshold t_leveling-up. There is thus a question of what we mean by structural injustice: Do we speak of structural injustice when the harms of deviating from a distribution that fulfills our chosen theory of distributive justice are unequally distributed? This would apply an egalitarian approach to structural injustice. Do we not actually care about equality in the distribution of deviations and instead care about not having too strong of a harmful deviation for each group? Then, we might want to take a sufficientarian approach to structural injustice (i.e., <cit.>'s "leveling up"). When choosing a group-level fairness criterion or metric, we thus first have to decide on what theory of distributive justice we find most appropriate. Then, we need to decide what we view as structural injustices under the chosen type of distribution.[Rawls's second principle <cit.>, for example, tells us that he views group-level fairness for maximin as egalitarian – represented by what he calls "fair equality of opportunity".] However, one could reasonably disagree on this question as it does not have a straightforward answer and depends on how one views what constitutes structural injustices. § CONCLUSION Group fairness criteria are typically described as "egalitarian". 
A lot of the algorithmic fairness literature, therefore, seems to assume that fairness criteria are the operationalization of theories of distributive justice. However, we argue that this is not true: While theories of distributive justice demand a certain distribution of benefits/harms among individuals, we claim that fairness criteria actually test whether the inevitable deviations from said ideal are unfair to certain groups, i.e., systematically biased against them. This test for a systematic bias in the deviations often uses an egalitarian distributive pattern and is therefore confused with the egalitarian theory of distributive justice. We show how this refined conceptual thinking affects what fairness criteria we use when we have a distribution that is supposed to follow theories of distributive justice other than egalitarianism. Implications In current discussions of algorithmic fairness, the concepts of theories of distributive justice, structural injustices and distributive patterns are often confused. Yet, recognizing these concepts and their relationships is crucial as the measurement of structural injustices depends on the chosen theory of distributive justice. When testing for structural injustices, this means first defining the theory of distributive justice and also testing whether it is fulfilled. When proposing new fairness criteria, this means discussing the theory of distributive justice in the context in which these fairness criteria can be used. Our paper is thus a call for the algorithmic fairness literature to clearly differentiate between theories of distributive justice and structural injustices. Future work should look into the subtle normative differences between the multiple fairness metrics that can be derived from a single fairness criterion and the other way around. This could provide valuable guidance to practitioners. It would also be interesting to apply the demonstrated approach to derive fairness criteria for other theories of distributive justice such as prioritarianism. Thank you to Joachim Baumann for his valuable feedback and input on several versions of this draft. We also want to thank Marcello Di Bello for his thoughtful feedback on this paper. This work was supported by the National Research Programme “Digital Transformation” (NRP 77) of the Swiss National Science Foundation (SNSF), grant number 187473. ACM-Reference-Format
http://arxiv.org/abs/2407.12649v1
20240717152258
Learning Gaussian Operations and the Matchgate Hierarchy
[ "Joshua Cudby", "Sergii Strelchuk" ]
quant-ph
[ "quant-ph" ]
§ ABSTRACT Learning an unknown quantum process is a central task for validation of the functioning of near-term devices. The task is generally hard, requiring exponentially many measurements if no prior assumptions are made on the process. However, an interesting feature of the classically-simulable Clifford group is that unknown Clifford operations may be efficiently determined from a black-box implementation. We extend this result to the important class of fermionic Gaussian operations. These operations have received much attention due to their close links to fermionic linear optics. We then introduce an infinite family of unitary gates, called the Matchgate Hierarchy, with a similar structure to the Clifford Hierarchy. We show that the Clifford Hierarchy is contained within the Matchgate Hierarchy and how operations at any level of the hierarchy can be efficiently learned. Quantum Computing, Matchgates, Tomography, Learning. § INTRODUCTION Quantum process tomography is the task of determining an unknown quantum operator by applying it to certain known states. In general, this task requires a number of measurements which grows exponentially with the number of qubits <cit.>. This complexity can be improved if there is some prior knowledge about the process. For example, restricting to the set of Pauli matrices reduces the query complexity to 1 for any number of qubits, for essentially the same reason as superdense coding <cit.> is possible. A generalisation of learning Paulis was given by Low <cit.>, where they demonstrated how to learn any unknown Clifford Hierarchy operation. The Clifford Hierarchy 𝒞_k was introduced in <cit.> as an infinite family of unitary gates on any number of qubits which can be fault-tolerantly implemented using gate teleportation as a primitive. Denoting the Pauli group by 𝒞_1, the hierarchy is defined recursively for k ≥ 2 by: 𝒞_k ≔{ U : U 𝒞_1 U^†⊆𝒞_k-1} / U(1). The Clifford group is the second level of the hierarchy, 𝒞_2. It is well-known that circuits consisting only of Clifford gates and a single-qubit output measurement are classically simulable <cit.>. The Clifford Hierarchy has received much attention due to its connections to fault-tolerance <cit.>, magic state distillation <cit.> and learning <cit.>. 
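As a concrete illustration of the recursive definition above, the following sketch tests hierarchy membership for single-qubit gates by direct conjugation of the Paulis; the helper names and tolerance are our own, and the brute-force check is only meant to make the definition tangible, not to scale.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def is_pauli_up_to_phase(U, tol=1e-9):
    """True if U equals a Pauli matrix times a global phase (level C_1)."""
    return any(abs(abs(np.trace(P.conj().T @ U)) / 2 - 1) < tol for P in (I2, X, Y, Z))

def in_level(U, k):
    """Recursive membership test for the Clifford Hierarchy on one qubit."""
    if k == 1:
        return is_pauli_up_to_phase(U)
    return all(in_level(U @ P @ U.conj().T, k - 1) for P in (X, Y, Z))

S = np.diag([1, 1j]).astype(complex)                   # phase gate, a Clifford
T = np.diag([1, np.exp(1j * np.pi / 4)]).astype(complex)
print(in_level(S, 2), in_level(T, 2), in_level(T, 3))  # True False True
```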
The Clifford Hierarchy provides an elegant way to operationally describe the power of practically relevant classes of unitary operations: levels in the hierarchy represent a certain degree of hardness when implementing them. Most pertinently to this work, it was shown in <cit.> that any Clifford gate C on n qubits may be learned (up to phase) using 𝒪(n) queries to black boxes for C and C^†. Moreover, they showed that any gate in 𝒞_k may similarly be learned, now requiring 𝒪((2n)^k) queries to the oracles. The learning algorithms are based on the observation that knowledge of U σ_p U^† for a set {σ_p} which generates 𝒞_1 suffices to learn U. Another class of important quantum operations is the set of (fermionic) Gaussian operations, introduced in <cit.> and studied in <cit.> and more recently in <cit.>. The authors of <cit.> described a deep relation between Gaussian operations and the physical theory of free fermions. It is then that authors first established that computational capabilities of unassisted fermionic linear optics can be precisely captured by these operations. Despite the fact that these operations are classically efficiently simulatable, their unique structure provides a complete characterisation of log-space bounded universal unitary quantum computation <cit.>. For a system of n free fermions with creation operators a_1^†, …, a_n^† we define 2n Majorana operators: γ_2l-1 a_l + a_l^†, γ_2l -i(a_l - a_l^†), where l ∈ [n] {1, …, n}. The Majorana operators are Hermitian, square to the identity and anti-commute with each other. For a set of indices S ⊆ [2n], we denote by γ_S the Majorana monomial consisting of the product of the Majorana operators indexed by S in increasing order. When multiplying Majorana monomials together, we will use the notation γ_S γ_S' = (±) γ_S Δ S', where Δ denotes the symmetric difference of two sets and the possible factor of -1 comes from commuting the Majorana operators past each other. We may choose the Jordan-Wigner representation <cit.> of the Majorana operators, giving γ_2l - 1 = (∏_i = 1^l - 1 Z_i) X_l, γ_2l = (∏_i = 1^l - 1 Z_i) Y_l, where Z_i denotes the n-qubit operator which acts as the 1-qubit Pauli Z on the ith qubit and the identity elsewhere, and similarly for X_i and Y_i. Gaussian operations are the set of unitaries M given by exponentials of (fermionic) quadratic Hamiltonians: H = i ∑_μ, ν h_μνγ_μγ_ν, M = e^iH with h = (h_μν) w.l.o.g. a real, anti-symmetric matrix. It can be shown <cit.> that Gaussian operations preserve the linear span of γ_μ under conjugation: M γ_μ M^† = ∑_ν = 1^2n Q_μνγ_ν for a special orthogonal matrix Q = (Q_μν) ∈ SO(2n), given by Q = e^4h. An immediate consequence of (<ref>) is that: M_Q γ_S M_Q^† = ∑_S' = S(Q|_S, S') γ_S', where the sum is over subsets of [2n] of equal size to S, and Q|_S, S' denotes the restriction of Q to rows indexed by S and columns indexed by S'. In particular, the number of Majorana operators is preserved in each term. Since the Majorana monomials form a basis of linear operators on n qubits, M is fully determined by (<ref>) up to overall phase. We therefore label Gaussian operations by their corresponding orthogonal matrix, writing M_Q for the operation defined by (<ref>). Gaussian operations may always be expressed as circuits of certain 2-qubit parity-preserving gates, known as matchgates. The matrices representing these gates have the following form: G(A, B) [ p 0 0 q; 0 w x 0; 0 y z 0; r 0 0 s ], A = [ p q; r s ], B = [ w x; y z ], where (A) = (B). 
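The Jordan-Wigner representation above is easy to verify numerically; the following sketch (our own helper names, a sanity check only) constructs the 2n Majorana operators for a small n and confirms that they are Hermitian and satisfy {γ_μ, γ_ν} = 2δ_μν I.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def majoranas(n):
    """Jordan-Wigner Majorana operators gamma_1, ..., gamma_2n on n qubits."""
    gammas = []
    for l in range(1, n + 1):
        prefix = [Z] * (l - 1)            # the Jordan-Wigner string of Z's
        suffix = [I2] * (n - l)
        gammas.append(reduce(np.kron, prefix + [X] + suffix))  # gamma_{2l-1}
        gammas.append(reduce(np.kron, prefix + [Y] + suffix))  # gamma_{2l}
    return gammas

n = 3
g = majoranas(n)
d = 2 ** n
for mu in range(2 * n):
    assert np.allclose(g[mu], g[mu].conj().T)              # Hermitian
    for nu in range(2 * n):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        expected = 2 * np.eye(d) if mu == nu else np.zeros((d, d))
        assert np.allclose(anti, expected)
print("Anticommutation relations hold for n =", n)
```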
In particular, a Gaussian operation M on n qubits is expressible as a circuit of at most 𝒪(n^3) matchgates G(A, B) <cit.>. The set of Gaussian unitaries on n modes forms a group, usually referred to as the matchgate group and denoted by 𝒢_n. The set of (possibly mixed) Gaussian states are given by the closure of Gibbs states of quadratic Hamiltonians: ρ_h = e^-H/(e^-H). For pure states, this coincides with the set of output states after applying some Gaussian operation to |0⟩. For any density matrix ρ, we define the correlation matrix Γ(ρ) = (Γ(ρ)_jk) by Γ(ρ)_jki/2⟨ [γ_j, γ_k] ⟩_ρ = i/2( [γ_j, γ_k] ρ). Γ is easily seen to be real and anti-symmetric. For a Gaussian state ρ_h, it can also be checked that Γ(ρ_h) = tan(2h). Since tan is injective for anti-symmetric matrices, we see that Gaussian states are uniquely defined by their correlation matrix. In this work, we present an algorithm which (approximately) determines unknown Gaussian operations in polynomial query complexity. We then recursively define an infinite family of gates, mimicking the Clifford Hierarchy for the fermionic setting, which we call the Matchgate Hierarchy. We then generalise our learning algorithm to show that elements from this hierarchy can also be efficiently determined. Since the group of Gaussian operations is continuous, learning an unknown member of the group can only be done up to some finite precision. Further, we are unable to distinguish physically equivalent unitaries which differ only by a global phase. We, therefore, consider the two metrics used in <cit.>, which we refer to as the phase-sensitive and phase-insensitive distances respectively: D^+(U_1, U_2) 1/√(2 d)U_1 - U_2_2 ≡√(1 - (d^-1(U_1^† U_2))), D(U_1, U_2) 1/√(2 d^2)U_1 ⊗ U_1^* - U_2 ⊗ U_2^*_2 ≡√(1 - d^-1(U_1^† U_2)^2), where A_2 = √((A^† A)). Although D is not a true distance, we can check that: D(U_1, U_2) = 0 implies U_1 = e^iϕU_2 for some phase ϕ; D is unitarily invariant; and D satisfies the triangle inequality. Since D is phase-insensitive, it is a useful measure for the quality of our estimates for unknown Gaussian unitaries. § LEARNING GAUSSIAN OPERATIONS Now suppose that M = exp(iH) is an unknown Gaussian operation. If we have access to copies of the Gibbs state ρ_h, it is easy to learn h and therefore M. Since Γ(ρ_h) = tan(2h), we only need to compute estimates for the n(2n-1) independent entries of Γ(ρ_h) to obtain an estimate for h. This process requires 𝒪(n(2n-1)/ϵ^2) copies of the Gibbs state to obtain a precision of ϵ. Instead, suppose we only have access to the gates M and M^† as black boxes. We consider a 3-step learning process. * We perform a direct sampling process of each row of Q in turn, obtaining a guess Q̃, which is identical to Q up to additive error ϵ and entry-wise incorrect signs. * We compute estimates for Γ(ϕ)_jk for certain pure Gaussian states |ϕ⟩ = M X_l|0⟩, refining our guess to Q, which equals Q up to overall signs for each row and additive error. * We perform one final measurement to correct the signs for each row, leading to an additive error estimate of Q. The above protocol leads to the following result: Given oracle access to an unknown operation M ∈𝒢_n and its conjugate M^†, an estimate M' of M satisfying D(M, M') = 𝒪(n^3η) can be determined w.h.p. with query complexity 𝒪̃((n/η)^2), for any η = 𝒪(n^-6). We prove the correctness and complexity of the protocol below. 
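Written out explicitly, the two distances are D^+(U_1, U_2) = √(1 - Re tr(U_1^† U_2)/d) and D(U_1, U_2) = √(1 - |tr(U_1^† U_2)|^2/d^2); the small sketch below (our own helper names) only illustrates that D ignores a global phase while D^+ does not.

```python
import numpy as np

def phase_sensitive_distance(U1, U2):
    """D^+ = sqrt(1 - Re tr(U1^dag U2) / d)."""
    d = U1.shape[0]
    return np.sqrt(max(0.0, 1 - np.real(np.trace(U1.conj().T @ U2)) / d))

def phase_insensitive_distance(U1, U2):
    """D = sqrt(1 - |tr(U1^dag U2)|^2 / d^2); zero iff U1 and U2 agree up to a phase."""
    d = U1.shape[0]
    return np.sqrt(max(0.0, 1 - abs(np.trace(U1.conj().T @ U2)) ** 2 / d ** 2))

U = np.diag([1, 1j]).astype(complex)
V = np.exp(1j * 0.3) * U                  # same gate, different global phase
print(phase_insensitive_distance(U, V))   # ~0.0
print(phase_sensitive_distance(U, V))     # > 0: D^+ sees the phase
```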
§.§ Finding unsigned entries of the orthogonal matrix We first note that the set { (γ_S M_Q ⊗ I) |ψ⟩ : S ⊆ [2n] } forms an orthonormal basis: ⟨ψ| (M_Q^†γ_S'^† ⊗ I)( γ_S M_Q ⊗ I) |ψ⟩ = (M_Q^†γ_S'^†γ_S M_Q) = ∑_P = S' ⊕ S (±) (Q|_S ⊕ S', P) (γ_P) = δ_S, S' since (γ_S) = 0 for any S ≠∅. Consider measuring the state (M_Q γ_μ⊗ I) |ψ⟩ with respect to {( γ_S M_Q ⊗ I) |ψ⟩ : S ⊆ [2n] }. We have: ℙ(S) = ⟨ψ| (M_Q^†γ_S^†⊗ I) (M_Q γ_μ⊗ I) |ψ⟩^2 = 2^-n(M_Q^†γ_S^† M_Q γ_μ)^2 = 2^-n∑_S' = S(Q|_S, S') (γ_S'γ_μ) ^2 = Q_μν^2 S = {ν} 0 otherwise. Suppose we take K such measurements and report estimates for the squared entries Q_μν^2 according to the observed frequencies. By the Hoeffding inequality and a union bound, we can learn each squared entry within additive error η using K = 𝒪(log(n) / η^2 ) measurements w.h.p. By taking the positive square roots, this is equivalent to learning the unsigned entries ± Q_μν up to additive error η / 2. Repeating for each row requires a total of 𝒪(n log(n) / η^2) measurements. §.§ Fixing signs up to row-wise sign errors We now have a matrix consisting of the unsigned entries of Q with additive errors. Let the matrix be Q̃ = Q̃_μν. Let the necessary sign corrections be t_μ s_μν, where t_μ, s_μν∈{± 1 } and we have factored out whole-row sign corrections for each row. Explicitly, Q_μν s_μνQ̃_μν + η_μν Q_μν = t_μQ_μν = t_μ s_μνQ̃_μν + t_μη_μν, where η_μν < η and s_μ,1 = +1. We will estimate certain 2-point correlation functions: Γ^(0)_jk ⟨0|M^† (i γ_j γ_k) M|0⟩, Γ^(l)_jk ⟨0|X_l M^† (i γ_j γ_k) M X_l|0⟩. We may compute an explicit expression for Γ^(0)_jk: Γ^(0)_jk = i ∑_a < b(Q|_{a, b}, {j, k}) ⟨0|γ_a γ_b |0⟩ = i ∑_l = 1^n(Q|_{2l-1, 2l}, {j, k}) ⟨0| i Z_l |0⟩ = - ∑_l = 1^n(Q|_{2l-1, 2l}, {j, k}). Similarly, Γ^(l)_jk = i ∑_l' = 1^n(Q|_{2l'-1, 2l'}, {jk}) ⟨0| i X_l Z_l' X_l |0⟩ =(Q|_{2l-1, 2l}, {jk}) - ∑_l' ≠ l(Q|_{2l'-1, 2l'}, {jk}) We therefore see that: Γ^(l)_jk - Γ^(0)_jk/2 = (Q|_{2l-1, 2l}, {jk}) Setting j = 1 and expanding, we have: C_lkΓ^(l)_1k - Γ^(0)_1k/2 = Q_2l-1, 1Q_2l, k - Q_2l, 1 Q_2l-1, k For clarity, we first consider the error-free case, where η_μν = 0. Then our matrix of unsigned entries of Q allows us to compute 4 guesses for the value of C_lk (recalling that we fixed s_μ, 1 = +1), D_lk^±± t_2l-1t_2l( Q_2l-1, 1 (±Q_2l, k) - Q_2l, 1 (±Q_2l-1, k) ). If none of the 4 D_lk^±± values are zero, then they are all distinct, and one of them exactly equals C_lk. We would therefore fix the signs of Q_2l, k and Q_2l-1, k in terms of t_2l-1t_2l. The set of matrices where any D^±±_lk = 0 has measure 0 and can safely be ignored. In the errorful case, we have instead only approximations of D_lk^±±, given by: D̃_lk^±± t_2l-1t_2l( Q̃_2l-1, 1 (±Q̃_2l, k) - Q̃_2l, 1 (±Q̃_2l-1, k) ). Furthermore, we only have an estimate C̃_lk of C_lk up to additive error ϵ. Obtaining these for each C_lk w.h.p. requires 𝒪(n^2 log(n)/ϵ^2) queries by the Hoeffding inequality and a union bound. The true value D̃_lk^s_2l-1, k s_2l, k satisfies: D̃_lk^s_2l-1, k s_2l, k - C̃_lk≤ 4η + ϵ + 𝒪(η^2), using Q_μν≤ 1. Conversely, incorrect choices of sign are such that: D̃_lk^±± - C̃_lk≥2F - (4η + ϵ), where F ∈{Q_2l, 1Q_2l-1, k, Q_2l-1, 1Q_2l, k, C_lk} for the 3 incorrect cases. If 4η+ϵ < min{Q_2l, 1Q_2l-1, k, Q_2l-1, 1Q_2l, k, C_lk}, then we are guaranteed that the true choice of D̃_lk^±± is the closest to C̃_lk. In Appendix <ref>, we prove that ℙ(F < κ) = 𝒪(κ^1/3). 
Taking a union bound over the 2n^2 such calculations that we perform, we have that κ = 𝒪(n^-6) suffices to ensure that all the quantities are larger than κ with high probability. Therefore, ϵ, η = 𝒪(n^-6) suffices to successfully learn the matrix s w.h.p., fixing the signs up to overall row-wise errors. §.§ Fixing row-wise sign errors Assuming the above protocol succeeds, we have some matrix Q, where Q_μν = t_μ Q_μν. For any set of rows R where t_μ = -1, M_Q is straightforwardly related to M_Q. Let M_R M_Q γ_R. Then M_R acts as: M_Rγ_μ M_R^† = M_Q γ_R γ_μγ_R^† M_Q^† = (-1)^R - 1_μ∈ R M_Q γ_μ M_Q^† = (-1)^R∑_ν (-1)^1_μ∈ R Q_μν. If R is even, then M_Q = M_R. If R is odd, then M_Q = M_R^c, where R^c is the complement of R in [2n]. Let P be R or R^c corresponding to the 2 cases. We can find the set P by a single-shot distinguishing measurement. Note that knowledge of Q is sufficient to write down an exact, poly-sized circuit of 2-qubit Gaussian operators which implements M_Q^† <cit.>. If we prepare (M_Q^†⊗ I)|ψ⟩ and measure w.r.t. the basis {(γ_S M_Q^†⊗ I)|ψ⟩}, we note that: ℙ(S) = ⟨ψ|(M_Q γ_S M_Q^†⊗ I )|ψ⟩^2 = 2^-n ( M_Q γ_S γ_P^† M_Q^†)^2 = 1 S = P, 0 otherwise. So the outcome of this measurement allows us to correct the remaining errors in the sign of each row of Q, leaving us with the true matrix Q up to additive error η. In Appendix <ref>, we prove that, w.h.p., this is sufficient to obtain an estimate M_Q' of M_Q satisfying: D(M_Q, M_Q') = 𝒪(n^3 η). §.§ Complexity The total number of queries is easily seen to be 𝒪(n log(n) /η^2 + n^2log(n) / ϵ^2 + 1). Taking η = ϵ = 𝒪(n^-6), we obtain an estimate of Q up to additive error η using 𝒪((n/η)^2 log(n)) queries. Thus the learning task may be completed in (high-degree) polynomial query complexity. § MATCHGATE HIERARCHY In the spirit of the Clifford Hierarchy, for any number of qubits n, we define an infinite family of gates ℳ_k satisfying ℳ_k ⊆ℳ_k+1 for each k, which we call the Matchgate Hierarchy. We define the set Γ_1 {γ_μ : μ∈ [2n]}. The γ_μ play the role of the Pauli generators {σ_x_i, σ_z_i}. We then recursively define the Matchgate Hierarchy by: ℳ_1 { M ∈ U(2^n) : M = ∑_μ a_μγ_μ ; a_μ∈ℝ} ℳ_k { M ∈ U(2^n) : M Γ_1 M^†⊆ℳ_k-1}. The condition that ℳ_1 gates are unitary immediately gives ∑_μ a_μ^2 = 1. We note that the set ℳ_2 consists of gates M for which, for each μ∈ [2n], M γ_μ M^† = ∑_ν = 1^2n Q_μνγ_ν for some Q ∈Mat(2n, ℝ). We may evaluate the anti-commutator of Mγ_μ M^† and Mγ_ν M^† in 2 ways: { M γ_μ M^†, M γ_ν M^†} = M {γ_μ, γ_ν} M^† = δ_μν M M^† = δ_μν { M γ_μ M^†, M γ_ν M^†} = ∑_σ, τ Q_μσ Q_ντ{γ_σ, γ_τ} = ∑_σ Q_μσ Q^T_σν = (QQ^T)_μν So we see that (QQ^T)_μν = δ_μν, implying that Q ∈ O(2n). Such operations are sometimes referred to as extended Gaussian operations. As in the Clifford case, it is not straightforward to characterise the higher levels of the Hierarchy. We do, however, have a link between the Matchgate Hierarchy and the Clifford Hierarchy: For any integer k ≥ 1, 𝒞_k ⊆ℳ_k+1. We proceed by induction. For the base case, note that 𝒞_1 is the set of Pauli strings, which exactly coincides with the set of Majorana monomials {γ_S : S ⊆ [2n] } up to factors of ±1 or ± i. Since γ_S is easily seen to be a Gaussian operation, we have 𝒞_1 ⊆ℳ_2. For the inductive step, note that γ_μ∈𝒞_1 for any μ∈ [2n]. Therefore, for any C ∈𝒞_k, we have Cγ_μ C^†∈𝒞_k-1 by definition of the Clifford hierarchy. By the inductive hypothesis, 𝒞_k-1⊆ℳ_k. Since this is true for all μ∈ [2n], we have C ∈ℳ_k+1. A noteworthy implication of <ref> is that SWAP ∈ℳ_3. 
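The claim that SWAP sits in ℳ_3 (but not in ℳ_2) can be spot-checked numerically on two qubits; in the sketch below (Jordan-Wigner Majoranas, our own helper names), the ℳ_2 test only verifies that conjugation stays in the linear span of the γ_ν, which suffices for this illustration.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

# Jordan-Wigner Majorana operators on n = 2 qubits.
gammas = [np.kron(X, I2), np.kron(Y, I2), np.kron(Z, X), np.kron(Z, Y)]
d = 4

def stays_in_span(W, tol=1e-9):
    """Check W = sum_nu c_nu gamma_nu by projecting onto the (orthogonal) gamma_nu."""
    coeffs = [np.trace(g.conj().T @ W) / d for g in gammas]
    return np.allclose(W, sum(c * g for c, g in zip(coeffs, gammas)), atol=tol)

def in_M2(V):
    """Extended Gaussian test: V gamma_mu V^dag must stay in span{gamma_nu} for all mu."""
    return all(stays_in_span(V @ g @ V.conj().T) for g in gammas)

def in_M3(V):
    return all(in_M2(V @ g @ V.conj().T) for g in gammas)

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
print(in_M2(SWAP), in_M3(SWAP))  # False True
```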
SWAP is a universality-enabling gate for matchgates and is in some sense the maximally non-matchgate 2-qubit gate. This is of potential interest for fault-tolerant implementation of quantum protocols based on a “Matchgate + SWAP” universal gate set. § LEARNING MATCHGATE HIERARCHY OPERATIONS The learning algorithm for Gaussian operations can be generalised to learning any operation from the ℳ_k hierarchy. We will require two Lemmas, given in <cit.>, which trivially extend to the current setting. For unitaries U_1, U_2, if for all μ∈ [2n] D^+(U_1 γ_μ U_1^†, U_2 γ_μ U_2^†) ≤δ then for any S ⊂ [2n] D^+(U_1 γ_S U_1^†, U_2 γ_S U_2^†) ≤ 2nδ The proof follows by induction on the number of Majorana operators in γ_S and the triangle inequality. For unitaries U_1, U_2, if for all S ⊂ [2n] D^+(U_1 γ_S U_1^†, U_2 γ_S U_2^†) ≤ 2nδ then D(U_1, U_2) ≤ 2nδ The proof follows from the group of Paulis being a 1-design, and the fact that, under the Jordan-Wigner representation, each γ_S is simply a Pauli string up to a factor of i. Given oracle access to an unknown operation M ∈ℳ_k for k ≥ 2 and its conjugate M^†, an estimate M' of M satisfying D(M, M') = 𝒪(n^3η) can be determined w.h.p. with query complexity 𝒪̃(4^3(k-2)· n^2(k-2)·(n/η)^2) for any η = 𝒪(n^-(6-k)), o(1). The proof is by induction, where the base case is given by <ref> for k = 2. For the inductive step, suppose we have a learning algorithm for members of ℳ_k, which outputs estimates with phase-insensitive precision δ. Let M ∈ℳ_k+1, and for any μ∈ [2n], let M γ_μ M^† = α_μ M_μ for some α_μ = ± 1. We may express M_μ in the Hermitian basis of Majorana monomials γ_S i^f(S)γ_S, where the factors of i are chosen to make γ_S^† = γ_S. Let M_μ = ∑_S c_S γ_S. Since M_μ is Hermitian, c_S ∈ℝ for each S, and we choose α_μ so that c_∅∈ℝ_+. The M_μ are elements of ℳ_k, so we may (phase-insensitively) learn them, obtaining estimates M_μ” satisfying D(M_μ, M_μ”) ≤δ. We wish to find a phase-sensitive estimate by exploiting the Hermiticity of M_μ. Let M_μ” = ∑ c_S”γ_S. The vector of coefficients c” may be written c” = λ c + χ c^⊥, for some c^⊥ satisfying ⟨ c, c^⊥⟩ = 0 for the usual complex inner product ⟨·, ·⟩. It is easy to check that D(M_μ, M_μ”) ≤δ leads to ⟨ c, c”⟩^2 ≥ 1 - δ^2, which in turn implies λ^2 ≥ 1- δ^2. We wish to modify the phase of M_μ” in order to obtain a phase-sensitive estimate. The optimal phase to multiply by is -(λ), but this is unknown. We approximate it by θ -(λ c_S + χ c_S^⊥) for any S. Some elementary trigonometry gives: (λ c_S + χ c_S^⊥) - (λ)≤δ/√(1 - δ^2). Then we may compute the relevant real trace for D^+(M_μ, e^-iθM_μ”): ((M_μ^† e^-i θM_μ”) ) = d( λ e^i((λ) - θ)) = dλcos((λ) - θ) ≥ d√(1- δ^2)·(1- δ^2/1 - δ^2) = d1-2δ^2/√(1-δ^2). Substituting into the definition of D^+, we see that D^+(M_μ, e^-iθM_μ”) ≤ 2δ. Now let M” be the operator such that M”γ_μ M”^† = M_μ”. Also, let M' be such that M' γ_μ M' = M_μ, i.e. all the phases α_μ are set to +1. Similarly to the level-2 case, M' = M γ_S for some set S ⊆ [2n]. By <ref>, we see: D(M', e^-iθM”) ≤ 4nδ. We may identify the phases α_μ by preparing (M^† M”⊗ I)|ψ⟩ and measuring w.r.t. the basis { (γ_S'^†⊗ I)|ψ⟩}. Then the probability of obtaining outcome S is: ℙ(S) = ⟨ψ|(γ_S ⊗ I)(γ_S^† M'^† M”)|ψ⟩^2 = d^-1(M'^† M”)^2 ≥ 1 - 16n^2 δ^2, using (<ref>). The outcome of the measurement allows us to correct our estimate to e^-iθM”γ_S, satisfying w.h.p. D(M, e^-iθM”γ_S) ≤ 4nδ. Obtaining the same precision as the k = 2 case thus requires setting η' = η / (4n)^k-2. 
For each additional level in the hierarchy, we must learn 2n operators from the previous level, requiring 4n queries to prepare them. Therefore, the query complexity at level k, 𝒬(k), is given by: 𝒬(k) =𝒪((4n)^k-2·( n/η/ (4n)^k-2)^2 ) = 4^3(k-2)· n^2(k-2)· (n/η)^2 as claimed. § ACKNOWLEDGEMENTS JC thanks Wilfred Salmon for helpful discussions. SS acknowledges support from the Royal Society University Research Fellowship and “Quantum simulation algorithms for quantum chromodynamics” grant (ST/W006251/1) and EPSRC Reliable and Robust Quantum Computing grant (EP/W032635/1). IEEEtran § ERROR BOUNDS FOR FIXING SIGNS We wish to obtain upper bounds on ℙ(F < κ) for F ∈{Q_2l-1, 1Q_2l, k, Q_2l, 1 Q_2l-1, k, Q_2l-1, 1 Q_2l, k - Q_2l, 1 Q_2l-1, k}, for any l ∈ [n], k ∈ [2n] ∖{1}, where the probability is over the Haar distribution on the orthogonal matrices. We will fix l = 1, k = 2 and use a union bound to obtain the full result. Since Q is orthogonal, the first and second rows of Q are random orthogonal unit vectors. We may obtain them by performing Gram-Schmidt orthonormalisation on 2 random points on the unit sphere. Let the two random points be given by the vectors 𝐯_1, 𝐯_2. We choose to parameterise them in polar coordinates by 𝐯_𝐢 = ( sin(θ^i_1), cos(θ^i_1) cos(θ^i_2), …), where θ^i_1 ∼ U[0, 2π) and θ^i_2 ∼ U[0, π), and all angles are distributed independently. We are free to choose the ordering in the above parametrisation, i.e. which coordinate contains the sin(θ^i_1) term. We first consider F_1 Q_11Q_22, with F_2 Q_21Q_12 following similarly. We have ℙ(F_1 < κ ) ≤ℙ(min(cos(θ^1_1), cos(θ^2_1)) < √(κ)) ≤ 2 ℙ(cos(θ^1_1) < √(κ)) = 𝒪(√(κ)), to leading order. We now consider F_3 Q_11 Q_22 - Q_21 Q_12 = (Q|_{1, 2}, {1, 2}). To begin, we consider the probability of this term being small if we did not perform the Gram-Schmidt procedure, denoting the corresponding quantity F̃_̃3̃. Since the determinant of a 2-by-2 matrix is the area of the parallelogram defined by the rows of the matrix, we can express F̃_̃3̃ as F̃_̃3̃ = Π (𝐯_1)Π (𝐯_2)sin(ϕ), where Π is the orthogonal projection onto the first 2 coordinates and ϕ is the angle between the projected vectors in the plane. Noting that Π (𝐯_𝐢) = cos(θ^i_2) and ϕ∼ U[0, 2π), and using the shorthand c(·) = cos(·) and s(·) = sin(·), we may compute ℙ(F̃_̃3̃ < κ) = ℙ( c(θ^1_2)c(θ^i_2)s(ϕ) < κ) =ℙ( *c(θ^1_2)c(θ^i_2)s(ϕ) < κ | *c(θ^1_2)c(θ^i_2) < ζ) ·ℙ(c(θ^1_2)c(θ^i_2) < ζ) + ℙ( *c(θ^1_2)c(θ^i_2)s(ϕ) < κ | *c(θ^1_2)c(θ^i_2)≥ζ) ·ℙ(c(θ^1_2)c(θ^i_2)≥ζ) ≤ℙ(c(θ^1_2)c(θ^i_2) < ζ) + ℙ( *ζ s(ϕ) < κ) = 𝒪(√(ζ)) + 𝒪(κ/ζ) to leading order. Optimising over ζ, we obtain the bound ℙ(F̃_̃3̃ < κ) ≤𝒪(κ^1/3). We now consider the effect of orthonormalisation of 𝐯_2 on the quantity F_3. Let the resulting vector be ṽ_2 = λ𝐯_2 + μ𝐯_1, where λ (1- (𝐯_1·𝐯_2)^2)^-0.5. The corresponding determinant in the plane of the first two coordinates is simply F_3 = λF̃_̃3̃, since the 𝐯_1 part of ṽ_2 does not contribute. Since λ∈ [1, ∞), we have ℙ(F_3 < κ) ≤ℙ(F̃_̃3̃ < κ) = 𝒪(κ^1/3). § ERROR BOUNDS FOR ESTIMATES OF UNKNOWN GAUSSIAN OPERATIONS Given a matrix Q' which approximates Q up to term-wise additive error η, we wish to bound the “distance” between the corresponding unitary operators M_Q and M_Q' as defined by Equations (<ref>) and (<ref>). We first consider the error in the anti-symmetric matrix h which generates Q via Q = exp(4h). Let h' = h + 1/4E, where E is the error term whose norm we will seek to bound. 
We use the series expansion of the matrix logarithm: E = 4(h' - h) = log(Q + η) - log(Q) = ∫_0^∞ dz {I/Q + zIηI/Q + zI} + 𝒪(η_2^2). Therefore, to leading order, E_2 ≤∫_0^∞ dz {I/Q + zI_2 η_2 I/Q + zI_2 }. We recall that for any matrix A, A_2 = √(∑_i σ_i^2(A)) = √(∑_i σ_i^-2(A^-1)), where the σ_i(A) are the singular values of A. Letting the eigenangles of Q be {θ_i}_i=1^2n, it is easy to check that σ_i(Q+zI) = √(1 + 2cos(θ_i)z + z^2). Then Equation (<ref>) becomes E_2 ≤η_2 ∫_0^∞ dz {∑_i 1/1 + 2cos(θ_i)z + z^2} = η_2 ( ∑_i θ_i/sin(θ_i)). Since the eigenangles of a Haar-random orthogonal matrix in 2n dimensions are uniformly distributed (in complex conjugate pairs) on the unit disc <cit.>, we may bound the probability of the term in brackets being overly large: ℙ(∑_i θ_i/sin(θ_i)≥κ) ≤ℙ( ⋃_i θ_i/sin(θ_i)≥κ/2n) ≤ 2n ℙ( ∑_i θ/sin(θ)≥κ/2n) ≤ 4n ℙ( θ∈[π - 2nπ/κ, π)) = 4n^2/κ. Taking κ = Ω(n^2) then implies ∑_i θ_i/sin(θ_i) < κ w.h.p. We may also bound η_2 using the entry-wise expression for the 2-norm: η_2 = √(∑_μνη_μν^2)≤√(4n^2 ·η^2) = 2nη. Therefore, E_2 = 𝒪(n^3 η) w.h.p. We are now prepared to compute the desired distance, D(M_Q, M_Q') = √(1 - d^-1(M_Q^† M_Q')^2). Using (exp(A+B))≤(exp(A)exp(B)) for Hermitian matrices A, B <cit.>, we find: (M_Q^† M_Q') = (exp(-iH)exp(iH')) ≥(exp(-iH + iH')) = (exp(-1/4∑_μνE_μνγ_μγ_ν)) = (I - 1/4∑_μνE_μνγ_μγ_ν) + 1/32∑_μνστ E_μνE_στ + 𝒪(E_2^3)) ) = d(1 - 1/32∑_μνE_μν^2 + 𝒪(E_2^4)), recalling that (γ_S) = 0 for S ≠∅. Finally, to leading order we have: D(M_Q, M_Q') = √(1- 1 - 1/32∑_μνE_μν^2^2) ≤√(1/16∑_μνE_μν^2) ≤1/4√(∑_μνE_μν^2 ) = 1/4E_2 = 𝒪(n^3 η).
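The leading-order bound can be probed numerically. In the SciPy sketch below, the mode number n = 3, the uniform entry-wise noise model, and the restriction to special orthogonal Q (so that the principal matrix logarithm stays real) are our own illustrative choices; the printed bound is the leading-order expression η_2 ∑_i θ_i/sin(θ_i).

```python
import numpy as np
from scipy.linalg import logm
from scipy.stats import special_ortho_group

rng = np.random.default_rng(1)
n = 3                      # number of modes; Q acts on R^{2n}
dim = 2 * n
eta = 1e-4                 # entry-wise additive error on Q

Q = special_ortho_group.rvs(dim, random_state=1)   # stand-in for exp(4h)
noise = rng.uniform(-1.0, 1.0, size=(dim, dim))    # |noise_{mu nu}| <= 1
Qp = Q + eta * noise                               # the estimate Q'

E = logm(Qp) - logm(Q)                             # E = 4(h' - h)
E_norm = np.linalg.norm(E, 'fro')

# Leading-order bound:  ||E||_2 <= ||eta||_2 * sum_i theta_i / sin(theta_i)
thetas = np.abs(np.angle(np.linalg.eigvals(Q)))
thetas = np.clip(thetas, 1e-9, np.pi - 1e-9)       # guard against 0/0 at theta = 0
bound = np.linalg.norm(eta * noise, 'fro') * np.sum(thetas / np.sin(thetas))
print(f"||E||_2 = {E_norm:.3e}   leading-order bound = {bound:.3e}")
```

As expected, the bound is loose but scales linearly with the perturbation strength, and eigenangles close to π inflate it, matching the probabilistic argument above.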
http://arxiv.org/abs/2407.12522v1
20240717130625
Struct-X: Enhancing Large Language Models Reasoning with Structured Data
[ "Xiaoyu Tan", "Haoyu Wang", "Xihe Qiu", "Yuan Cheng", "Yinghui Xu", "Wei Chu", "Yuan Qi" ]
cs.CL
[ "cs.CL", "cs.AI" ]
A Scheduler for Real-Time Service in Wi-Fi 8 Multi-AP Networks with Parameterized Spatial Reuse Kirill Chemrov, Dmitry Bankov, Andrey Lyakhov, Evgeny Khorov Kirill Chemrov, Dmitry Bankov, Andrey Lyakhov, and Evgeny Khorov are with the Institute for Information Transmission Problems of the Russian Academy of Sciences, Moscow 127051, Russia (emails: lastname@wireless.iitp.ru). July 22, 2024 ============================================================================================================================================================================================================================================================================================== § ABSTRACT Structured data, rich in logical and relational information, has the potential to enhance the reasoning abilities of large language models (LLMs). Still, its integration poses a challenge due to the risk of overwhelming LLMs with excessive tokens and irrelevant context information. To address this, we propose , a novel framework that operates through five key phases: “read-model-fill-reflect-reason” efficiently enabling LLMs to utilize structured data. It begins by encoding structured data into a topological space using graph embeddings, followed by filling in missing entity information with knowledge retrieval modules, and filtering out irrelevant tokens via a self-supervised module. The final phase involves constructing a topological network with selected tokens to further reduce the total token length for more effective LLM inference. Additionally, includes an Auxiliary Module trained to generate prompts, aiding LLMs in analyzing structured data. Extensive experiments on benchmarks, including the knowledge graph question-answer task and the long document reading comprehension task, show that notably improves LLM reasoning, demonstrating the effectiveness of structured data augmentation in improving LLM inference with complex input context. The code has been open-sourced and can be found in Appendix <ref>. § INTRODUCTION In recent years, significant advancements have been made in the field of large language models (LLMs), particularly in natural language understanding <cit.>. This progress has been largely driven by extensive pre-training on vast text corpora <cit.>, which has enhanced their generation capabilities. These advancements are often viewed as critical steps towards the development of artificial general intelligence (AGI) <cit.>. During the deployment of LLMs as general-purpose assistants for a variety of real-world applications, it becomes necessary for LLMs to process multimodal inputs. Among these inputs, structured data, like structured knowledge graphs (KGs), is particularly important <cit.>. These graphs, with their rich repository of entity relationships and hierarchical knowledge, have the potential to significantly enhance the reasoning capabilities of LLMs, leading to more precise and reliable inferences. However, in real-world applications, the effective utilization of structured knowledge in LLMs presents a significant challenge <cit.>. A common approach is to flatten the structured information into a lengthy text sequence before inputting it into LLMs <cit.>. However, this method often introduces an excessive amount of task-irrelevant context. Excess information can overwhelm the models, thereby impairing inference efficiency and accuracy <cit.>. 
Additionally, it hinders the ability of LLMs to accurately comprehend and represent the complex knowledge embedded within structured data <cit.>. To address this issue, various approaches have been explored. Some studies have focused on converting knowledge graph triples into textual statements <cit.>, while others have emphasized incorporating knowledge graph embeddings <cit.>. Additionally, efforts are underway to embed knowledge graph entities and relations directly into the encoder layers of LLMs <cit.>. More previous work is summarized in Appendix <ref>. However, these methods primarily concentrate on converting the structural data of knowledge graphs into different formats. They tend to overlook the need to reduce the information density of this structural data, which often includes task-irrelevant information. Moreover, these approaches face challenges in preserving the global topological structure of knowledge graphs, a critical aspect that warrants further attention. In addition to the issues of redundant information and the lack of a global topological structure in knowledge graphs, another significant challenge is the high sparsity of these graphs <cit.>, characterized by missing semantic connections between entities. This sparsity presents a challenge for leveraging structural data in LLMs <cit.>. LLMs tend to prioritize explicit semantic connections presented in the context while overlooking implicit connections, which are crucial for enhancing inference performance. Although current research, such as <cit.> and <cit.>, has been directed towards automatic knowledge completion and data augmentation to boost overall performance, these approaches tend to overlook the aforementioned challenges of redundancy and topological structure representation in utilizing structural data. To overcome the existing bottlenecks discussed above, we introduce , a novel framework designed to utilize Structured data to enhance the interaction and compleX reasoning capabilities of LLMs. This framework is centered around a workflow of “read-model-fill-reflect-reason”. It employs the transformation of structured data into a topological space, achieved through the application of graph embeddings. This is followed by the augmentation of incomplete entity information utilizing knowledge retrieval modules. Subsequently, a self-retrieved generation module called Self-Reg is employed to eliminate irrelevant tokens. The final stage encompasses the development of a topological network incorporating the chosen tokens, which serves to diminish the overall token length, thereby enhancing the efficacy of LLM inference. Furthermore, an Auxiliary Module is also designed in , which adjusts prompts based on the loss, guiding the LLM generation. Extensive evaluation of knowledge graph QA and reading comprehension benchmarks have proven 's superior reasoning abilities. These tests confirm that augmenting LLMs with structured data can significantly improve their inference skills in complex context environments. We refer interested readers to Appendix <ref> for more information about 's interaction examples. The code of has also been open-sourced and can be found in Appendix <ref>. The main contributions of this paper include: * We propose a novelty framework that implements a process of “read-model-fill-reflect-reason” on structured data, enabling LLMs to perform effective complex reasoning over structured data. 
* We design a knowledge learning and filtering process to dynamically fill in structured knowledge gaps, coupled with a self-retrieved generation module called Self-Reg to filter and verify the relevance of retrieved knowledge, retaining valuable token information to alleviate learning burdens on LLMs. * We construct specialized graph network encoders to fully learn the potential features of associated tokens and enable efficient cross-layer message passing in Transformers. We also devise an original Auxiliary Module for generating coherent prompts and improving answer responses. § PRELIMINARIES The task of text generation in LLMs involves creating a sequence of output y = [y_1, ..., y_T], where T represents the total number of tokens <cit.>, based on a given input prompt x. This process is often modeled in an autoregressively manner, which estimates the likelihood of each token, where y_<t represents the tokens that come before the current sequence [y_1, ..., y_t-1] <cit.>. Enhancements to this process can be made by incorporating relevant information from external documents D into the input, thereby refining the model's predictions <cit.>. Moreover, we can develop a novel decoding strategy that produces critique tokens 𝒞 alongside the main text output. These tokens are generated at each step and are designed to enable the LLMs to self-evaluate aspects such as relevance, factuality, and completeness of the generated content in Table <ref>. p(y, 𝒞|x) = ∏_t=1^T p(y_t, 𝒞_t| x, y_<t, 𝒞_<t), where the critique token 𝒞_t depends on all preceding text and critiques. We define four types of critique tokens: !1.6exIFReT - predicts if retrieval is needed, !1.6exIFReL - assesses passage relevance, !1.6exIFSuP - checks output is supported and !1.6exIFUsE - decides whether it is useful. These critique tokens enable better control of the decoding process through re-ranking or constraints <cit.>. For instance, the probability of a desirable !1.6exIFReL token can upweight certain outputs. As an example, the attention distillation critique token is computed between the input x and response y_t. This critiques the attention alignment between x and y_t. By generating such reflective signals, !1.6exIFReT can adapt its decoding strategy over time based on its critiques. The !1.6exIFReT approach allows customization of model behavior through constraints on desired critique tokens. The detailed algorithm implementation process can be found in Appendix <ref>. p(!1.6exIFReT = y | x) = f_ϕ(x), which predicts whether passage retrieval is needed (y) given the input x using a scoring function f_ϕ parameterized by ϕ. The relevance scoring between a passage p and the input is s_rel = g_θ(x, p) ·!1.6exIFReL(x, p), where g_θ produces a relevance score modulated by the !1.6exIFReL gate value. The factual consistency between a response y and passage p is evaluated by s_con = h_ψ(y, p) ⊙σ(!1.6exIFSuP(y, p)), where ⊙ is element-wise production and σ is the sigmoid activation function. The overall utility is decided using u = !1.6exIFUsE(x, y). § METHODS §.§ Topological Knowledge Injection We first implement “read-model-fill” process and we start by processing input KGs using a graph attention encoder (GAE) that consists of L layers <cit.>. The initial node features, denoted as h_v^(0), are set up using information obtained from the KG completion module <cit.>. After processing through L layers, we obtain the final node embeddings, h_v^(L), which effectively represent both the semantic and structural information of the KGs. 
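To fix ideas on how the critique scores defined in the preliminaries combine, a minimal PyTorch sketch is given below. The linear and bilinear heads standing in for f_ϕ, g_θ and h_ψ, the pooled-embedding inputs, and the dimension d = 768 are placeholder assumptions; the text above does not prescribe these implementation details.

```python
import torch
import torch.nn as nn

class CritiqueGates(nn.Module):
    """Toy version of the IFReT / IFReL / IFSuP / IFUsE scoring heads.

    Inputs are assumed to be pooled embeddings of the query x, a retrieved
    passage p, and a candidate response y (all of dimension d).
    """
    def __init__(self, d: int = 768):
        super().__init__()
        self.if_ret = nn.Linear(d, 1)          # f_phi(x): retrieve or not
        self.g_rel = nn.Bilinear(d, d, 1)      # g_theta(x, p): raw relevance
        self.if_rel = nn.Bilinear(d, d, 1)     # IFReL gate
        self.h_con = nn.Bilinear(d, d, 1)      # h_psi(y, p): raw consistency
        self.if_sup = nn.Bilinear(d, d, 1)     # IFSuP gate
        self.if_use = nn.Bilinear(d, d, 1)     # IFUsE utility head

    def forward(self, x, p, y):
        retrieve = torch.sigmoid(self.if_ret(x))                       # p(IFReT | x)
        s_rel = self.g_rel(x, p) * torch.sigmoid(self.if_rel(x, p))    # gated relevance
        s_con = self.h_con(y, p) * torch.sigmoid(self.if_sup(y, p))    # gated consistency
        u = self.if_use(x, y)                                          # overall utility
        return retrieve, s_rel, s_con, u

# Usage with random pooled embeddings (batch of 4).
gates = CritiqueGates(d=768)
x, p, y = (torch.randn(4, 768) for _ in range(3))
retrieve, s_rel, s_con, u = gates(x, p, y)
print(retrieve.shape, s_rel.shape, s_con.shape, u.shape)  # all (4, 1)
```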
These encoded graph embeddings, h_v^(L), are then partially masked at a specific rate, p_mask, to assist in learning about missing knowledge. This masking process can be mathematically represented as h_v = M(h_v^(L)), where M(·) symbolizes the masking operation. The masked nodes, denoted as h_v, are then fed into the knowledge retrieval module, R(h_v), which is explained in the following section. This module plays a crucial role in supplementing the missing information, thereby facilitating the generation of complete graph embeddings h_v <cit.>. To address the gaps in entity information within the structured knowledge graph, we have developed a knowledge learning module, denoted as F. This module is designed to retrieve pertinent facts from the knowledge base to enhance the masked node embeddings, h̃_v <cit.>. More specifically, for each masked node, we calculate a similarity score between its embedding h̃_v and all tail entities t that are part of the set E. This is achieved using the scoring function f_score, which can be represented as: s(v,t) = f_score(h̃_v, t). This process enables us to efficiently fill in the missing information in the knowledge graph. The scoring function evaluates both feature and topological similarities within the graph in Figure <ref>. It selects the top K entities t based on the highest scores and retrieves the related facts (h, r, t) from the knowledge base. To incorporate these facts into the node embeddings, a relation-aware aggregation function f_agg is used. This function accumulates relevant knowledge for each node, using a score threshold τ to filter out irrelevant facts <cit.>. The aggregation function adeptly manages various relations in structured knowledge by considering information from retrieved triples in a relation-aware manner. Additionally, before being input into the Transformer encoder, one linear layer o concatenates and processes embeddings from all GAT layers. h̅_v = f_agg({h̃_v}∪{(h, r, t) | s(v,t) > τ}), e_v = o([h_v^(1), ..., h_v^(L), h_v]) The o merges inputs and reduces dimensionality. This process retains rich multi-scale structural and semantic features at various depths. The output e_v is flattened and prepared into sequences to replace token embeddings for the Transformer encoder input, as suggested by <cit.>. The refined node embeddings h̅_v, enriched with retrieved entity information, supply additional knowledge for reasoning in the downstream LLMs. §.§ Knowledge and Information Retrieval Here we perform the “reflect” process. The module R(h_v) retrieves relevant knowledge absent in masked graph node inputs h_v. Related entities can be dynamically discovered by matching tail entities t to each masked node using a similarity scoring function f_score(h_v, t) considering both feature and topological similarity <cit.>. Related facts (h, r, t) are recalled to fill gaps. After concatenating retrieved knowledge sequences for all nodes ordered by similarity scores, we employ a pruning algorithm leveraging multi-head self-attention distillation and thresholds to filter out lower-weighted tokens. The remaining dense sequence provides supplemental external knowledge to complete masked graph node inputs. To filter and verify the relevance of retrieved knowledge, we design a self-retrieved generation module SelfReg_ψ(k) parameterized by ψ that takes as input the retrieved knowledge sequences k and outputs a filtered subset k̂ containing only the most valuable tokens <cit.>. 
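A compact PyTorch sketch of the masking-and-fill step described above is shown next. The cosine score standing in for f_score, the thresholded mean standing in for the relation-aware f_agg, and all sizes are illustrative placeholders rather than actual settings from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
V, E_kb, d, L = 100, 500, 64, 4      # nodes, KB tail entities, hidden dim, GAE layers
p_mask, tau, topk = 0.3, 0.2, 8      # mask rate, score threshold, retrieved facts per node

# Outputs of the L-layer graph attention encoder (placeholders).
h_layers = [torch.randn(V, d) for _ in range(L)]     # h_v^(1), ..., h_v^(L)
h_last = h_layers[-1]
tails = torch.randn(E_kb, d)                         # embeddings of tail entities t

# 1) Mask a fraction p_mask of the node embeddings:  h_tilde_v = M(h_v^(L))
mask = (torch.rand(V, 1) < p_mask).float()
h_tilde = h_last * (1.0 - mask)

# 2) Score tails against each (masked) node:  s(v, t) = f_score(h_tilde_v, t)
scores = F.normalize(h_tilde, dim=-1) @ F.normalize(tails, dim=-1).T   # cosine stand-in
top_scores, top_idx = scores.topk(topk, dim=-1)

# 3) Aggregation, reduced here to a thresholded mean over retrieved tail embeddings:
#    h_bar_v = f_agg({h_tilde_v} U {(h, r, t) : s(v, t) > tau})
keep = (top_scores > tau).float().unsqueeze(-1)                        # (V, topk, 1)
retrieved = tails[top_idx] * keep                                      # (V, topk, d)
h_bar = h_tilde + retrieved.sum(1) / keep.sum(1).clamp(min=1.0)

# 4) Concatenate all GAE layers with h_bar_v and project:  e_v = o([h^(1), ..., h^(L), h_bar_v])
o = nn.Linear((L + 1) * d, d)
e_v = o(torch.cat(h_layers + [h_bar], dim=-1))
print(e_v.shape)   # (V, d) tokens ready to replace LLM input token embeddings
```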
Specifically, SelfReg_ψ first encodes the knowledge sequence k = (x_1, x_2, …, x_N) using a Transformer encoder to obtain representations h_i = f_enc(x_i). Next, we compute an importance score for each token s_i = σ(f_score(h_i)), where f_score is a scoring network and σ is a sigmoid activation function. To train the scoring network in a self-supervised manner, we create corrupted knowledge sequences k̃ by randomly masking or shuffling some tokens. A contrastive loss is implemented to assign higher scores s_i to tokens from the original k versus corrupted k̃: ℒ_contrast = ∑_imax(0, s_i - s̃_̃ĩ + Δ), where Δ is a margin hyperparameter. This drives the model to identify the most valuable knowledge. Finally, we filter the sequence by discarding tokens scoring below a threshold of τ to retain only the most relevant phrases, significantly reducing the learning burden when provided as supplements to the LLMs. k̂ = {x_i | s_i > τ}. The filtered relevant knowledge k̂ provides targeted assistance to improve reasoning without overwhelming the LLMs with extraneous and irrelevant information. §.§ Graph Topology Encoder Here we perform the “reason” phase. To capture semantic and structural interactions between entities within the KGs, we use a specialized graph encoder in Figure <ref>, denoted as E_θ(G), which is parameterized by θ. This KG is represented as G=(V,E), where V is the set of node entities and E is the set of relation edges <cit.>. For each entity node v_i in V, we first derive its initial feature representation h_v_i^(0), which is a vector in a high-dimensional space. The graph encoder works through a series of L layers, each layer enhancing the node representations through message passing. This process can be described as: h_v_i^(l+1) = f_θ({h_v_j^(l): v_j ∈𝒩(v_i)}), ∀ v_i ∈ V, where 𝒩(v_i) refers to the neighboring nodes of v_i, and f_θ(·) is a function that aggregates information from these neighbors to update the node's embedding. To focus on the most relevant semantic connections, we use a graph self-attention layer within f_θ(·). This layer calculates attention weights as follows a_ij = exp(⟨ q_i, k_j ⟩)/∑_v_k ∈𝒩(v_i)exp(⟨ q_i, k_k ⟩), where q_i and k_j are derived from the embeddings of the nodes. This method allows the model to selectively emphasize the most informative signals from neighboring nodes <cit.>. After processing through L layers, we obtain refined node embeddings z_v_i = h_v_i^(L), which encapsulate both semantic and structural information of the graph. To make these embeddings more manageable for downstream tasks, we compress them through a trainable down-projection layer: e_v_i = 𝐖_d z_v_i, 𝐖_d ∈ℝ^d_h × d_e, d_e < d_z. This step reduces the dimensionality of the embeddings to d_e, which is smaller than the original d_z. The resulting condensed embeddings e_v_i still retain crucial token-level interactions but are more concise, making them better suited for training models for specific tasks. This approach ensures that while the size of the input sequence is significantly reduced, the essential semantic and structural features of the knowledge graph are preserved for subsequent reasoning. §.§ Auxiliary Module To further guide the LLMs in effectively reasoning over the structured input with a knowledge graph, we have developed an Auxiliary Module. This module is designed to create dynamic prompts that enhance the coherence of answers generated by LLMs. It functions by analyzing the LLM's predicted answer, denoted as ŷ, along with the current loss, L. 
Based on these inputs, it generates a refined prompt, p', which is then used for a new round of inference. We use the pre-trained Bert model (i.e., bert-base-NER) <cit.> to construct this Auxiliary Module, symbolized as G and parameterized by θ_g. This generator crafts the prompt text, taking into account the input values p' = G(L, ŷ; θ_g). The generator is trained jointly with the overall system using policy gradient methods to maximize the expected reward R of producing coherent answers: J(θ_g) = 𝔼p' ∼ G[R(p')], ∇θ_g J(θ_g) = 𝔼p' ∼ G[∇θ_glog G(p'|L, ŷ;θ_g)R(p')]. This reward function is designed to encourage the LLMs to generate responses that are not only fluent but also logically consistent, particularly when using the updated prompt. This feature enables the module to dynamically adjust prompts based on the current performance, thereby offering new approaches to improve the quality of answers. Throughout the training process, the module progressively learns to produce more effective prompts, leading to enhanced accuracy and coherence in the LLM's reasoning. For more detailed experimental testing and analysis, please refer to Appendix <ref>. § EXPERIMENT §.§ Datasets and Tasks We assess the performance of our proposed framework on four open-source benchmark datasets designed for knowledge graph reasoning and multi-hop reasoning abilities on graphs. Task1:WebQSP contains 4,737 QA pairs where the questions require logical reasoning over a knowledge graph derived from Wikipedia to infer the correct answer. The knowledge graph consists of 5,719 entities and 2,150 relations. Task2:MetaQA comprises a set of more complex compositional questions constructed from an underlying knowledge graph with a vocabulary of 300 entities and 100 relations. It has a total of 1,200 unique questions that test the multi-hop, logical, and comparative reasoning abilities of models. Task3:Family Tree Age Consider a family tree G = (V, E), where each individual v_i in V is associated with a description d_i specifying their age. The objective of this task is to identify the triplet comprising an individual, one of their grandparents, and a grand-uncle/grand-aunt by marriage that collectively has the highest cumulative age. Task4:Travel Route Optimization Let G = (V, E) be a graph representing connected cities, where each city v_i in V has a description d_i with the travel toll or tax. The LLM must plan the route from a source to a destination city that minimizes the total toll paid. For all datasets, we incorporate the encoded graph representations into Llama2, which has been pre-trained on BookCorpus and English Wikipedia. Appendix <ref> is a case analysis of the experimental results on the datasets. §.§ Implementation Details Baseline * Embedding-based Model: We compare against representative embedding models for knowledge graph reasoning including TransE <cit.>, DistMult <cit.>, EmbedKGQA <cit.>, ComplEx <cit.>, and RotatE <cit.>. * Open-source LLM: We evaluate reasoning capabilities of widely-used pre-trained language models accessible through open APIs, including Llama2 [7B & 13B] <cit.> and Alpaca [7B & 13B] <cit.> which are openly available LLMs up to 13 billion parameters. * LLM-based Fine-tuning: To assess the performance of LM fine-tuning approaches, we include as baselines KG-LlaMA and KG-Alpaca <cit.>, KG-BERT <cit.>, PKGC <cit.> and vanilla IT <cit.> which incorporate techniques to enhance LMs using annotated KG datasets or self-supervision. 
Our implementation is in PyTorch and we run experiments on NVIDIA A100 GPUs. More details of training can be found in Appendix <ref>. §.§ Main Results The results presented in Table <ref> indicate that consistently outperforms existing baseline methods across various datasets. Specifically, in the WebQSP benchmark, achieves an accuracy of 75.13%, which is 2.65% higher than the previously best-performing method, KoPA. Additionally, shows modest improvements in precision and recall, with increases of 9.51% and 10.96%, respectively, compared to Vanilla IT. In the more challenging MetaQA dataset, 's performance is notably better, surpassing the state-of-the-art accuracy scores by 1.84% and achieving a 1.68% higher precision. Furthermore, demonstrates significant advancements in specialized tasks such as Family Tree and Travel Route, where it exceeds the top baseline results by 3.36% and 5.34% in accuracy, respectively. Compared to embedding models such as TransE, DistMult, and EmbedKGQA, also shows promising improvements in reasoning abilities by integrating both semantic and topological structures of knowledge graphs. For instance, against RotatE's accuracy of 74.55% on the WebQSP dataset, achieves higher performance with a 75.13% accuracy, an increase of 0.58%. The difference is slightly more pronounced on the MetaQA dataset, where exceeds RotatE's score of 78.19% by 1.44% in accuracy. In scenarios requiring complex reasoning inferences, demonstrates enhanced capabilities, outperforming peak embedding model accuracy by a notable margin of 16.74% in Task4. The results also show that can enhance the capabilities of the Llama2, which itself achieves a 27.13% accuracy on the WebQSP benchmark. This enhancement is achieved through masking graph embeddings and using topology matching to retrieve relevant facts, thus addressing the gaps in factual knowledge that Llama2 requires. By overcoming these deficiencies in LLMs, significantly improves performance, increasing accuracy by 47.4%. This indicates the effectiveness of structured augmentation, which is not present in the Llama2. Further, filters out less important tokens using the Self-Reg module, ensuring focus on the most relevant information. In comparison to previous methods like KG-BERT fine-tuning, StructX offers essential enhancements, particularly in complex reasoning tasks, as evidenced by increases of up to 10.24% in accuracy and 5.61% in recall. Based on the experimental results, the “reflect” process also plays a crucial role in enhancing reasoning capabilities. This process involves the !1.6exIFReT(x), which selectively gathers evidence as needed, and the !1.6exIFReL(x,p), which filters less relevant passages using relevance scores from Eq.<ref> to improve context for LLMs. Additionally, the !1.6exIFSuP(y,p) and !1.6exIFUsE(x,y) ensure passage-response consistency and assess overall utility, contributing to higher quality results. For further case studies and experiments on this topic, readers are directed to Appendix <ref>. §.§ Ablation Study §.§.§ Q1: Different functional modules Table <ref> shows that each component of plays a crucial role in enhancing various reasoning capabilities. For 1-hop single fact questions, while all versions of are effective, the complete model excels with a 91.3% accuracy due to its ability to perform combinatorial reasoning using multi-head attention. This is key for interpreting semantic connections. 
In 2-hop and 3-hop multi-step reasoning, the absence of knowledge retrieval and injection modules results in a significant performance drop, with decreases of 7.4% and 7.3% respectively. However, the full model, utilizing these modules, reaches 92.7% and 93.8% accuracy by effectively traversing distant nodes. The graph topology encoder also proves vital; its omission leads to a 5.8% decline in location-based reasoning, highlighting its importance in connecting nodes and facilitating spatial/hierarchical reasoning through message passing. Furthermore, the lower accuracy without the Auxiliary Module underlines its utility in guiding coherent inference across multiple steps. §.§.§ Q2:Filtering and reflection mechanism Table <ref> compares reasoning performance with the following variants: StructX_No Filtering: Directly injects all retrieved knowledge without filtering. StructX_Random Filtering: Randomly removes of retrieved tokens. StructX_Reg Filtering: Uses the proposed Self-Reg module to score and filter tokens. Across WebQSP and MetaQA datasets, incorporating filtering mechanisms leads to consistent gains over no filtering baselines. Randomly removing tokens brings minor improvements, showing that some knowledge reduction is beneficial. However, learned filtering with Self-Reg leads to more substantial gains. Comparing different Self-Reg cutting ratios, 40% filtering seems to achieve the optimal trade-off, maximizing accuracy and recall. More aggressive 60% cutting starts to degrade performance likely due to removing pertinent facts. On the other hand, light 20% filtering retains more distracting information. By balancing knowledge breadth and depth, 40% Self-Reg filtering enhances language model inference without overwhelming models. By scoring and removing extraneous tokens based on contextual representations, Self-Reg retains the essence to augment language models without diverting attention. §.§.§ Q3: Learning by Auxiliary Module The results in Table <ref> demonstrate that incorporating the Auxiliary Module leads to significant performance gains over the base model without this component. We observe absolute improvements of 3.9% in accuracy, 2.58% in precision, and 5.72% in recall after implementing the Auxiliary Module. This validates its efficacy in providing adaptive prompts that elicit more accurate and logically coherent reasoning from the LLM when inference is made over structured knowledge graphs. The gains over the previous best model, PKGC are also substantial, at 10.57% higher accuracy. Hence, the auxiliary module proves important for multi-hop reasoning and steering deductions in the right direction over complex topological structures. The consistent benefits confirm that modeling explicit prompt-answering mechanisms customized for structured reasoning tasks is an effective approach. §.§.§ Q4:Knowledge injection variants To validate the contributions of different components of our knowledge injection mechanism, we conduct an ablation study with the following variants: StructX_No Injection: The base LLM (i.e., Llama2) without any graph representation injection. StructX_Embeddings Only: Encoded graph embeddings are directly injected without any masking or knowledge retrieval. StructX_Masking Only: Graph embeddings are masked but missing facts are not filled via retrieval. StructX_Retrieval Only: Masked embeddings are completed with the knowledge retrieval module but without graph encoding. 
We compare reasoning performance on WebQSP and MetaQA benchmarks against these reduced injection variants. The results in Table <ref> demonstrate clear improvements from collectively incorporating all knowledge injection components compared to ablated variants. The full model with topological encoding, masking, and retrieval achieves 1.68% and 1.31% higher accuracy over the best partial variant on WebQSP and MetaQA respectively. This confirms that each mechanism provides unique benefits - topological encoding better retain intricate connections, masking identifies missing facts, and retrieval fills knowledge gaps. The experiment proves that dynamic masking and retrieval to address inherent incompleteness in structured data are most impactful. Variants without these processes show worse performance as they fail to overcome language models' factual deficiencies. § CONCLUSION In this paper, we introduce , a groundbreaking framework designed to enhance LLMs in complex reasoning tasks. applies an efficient “read-model-fill-reflect-reason” methodology to structured data. It is adept at learning graph embeddings that are sensitive to geometric contexts, capturing the content of entities as well as their topological relationships. This enables to effectively infer missing facts about entities by matching similar topological features. Furthermore, it enhances the LLMs by distributing multi-scale features, which bolsters the representation of underlying connections that are not explicitly apparent. excels in tasks such as knowledge graph-based QA tasks and reading comprehension, especially in scenarios that require multi-hop logical reasoning. § LIMITIATION The knowledge graph encoding may not fully capture complex relationships beyond structural topology, the auxiliary module's prompting could be overly biased by the current loss landscape. Exploring more expressive graph representations and smarter prompting strategies could potentially address these limitations. § STRUCTX INTERACTION EXAMPLES [ colback=white, colframe=black, sharp corners, title=Instructions, fonttitle=] Please indicate whether referring to external documents, improves the quality of the generated response. Please respond with either [Yes] or [No] and provide a brief explanation. Instruction: Identify the shortest path between two nodes in this knowledge graph. Need retrieval? !1.5ex[Yes] Explanation: can ingest the graph structure and topology to reason about paths. But retrieving additional facts on edge distances or weights can supplement its understanding for more accurate optimization. Instruction: Determine which family tree node has the oldest relative based on date descriptions. Need retrieval? !1.5ex[No] Explanation: encodes the hierarchical tree relations and date informations directly without needing external evidence. Retrieval may introduce unnecessary details. Instruction: Analyze the impacts of this new tax policy based on economic concepts. Need retrieval? !1.5ex[Yes] Explanation: While has some linguistic capabilities, retrieving domain knowledge on economics and regulations will improve understanding of entities and contextual impacts for better analysis. Instruction: Summarize the key events in this 5-page history passage. Need retrieval? !1.5ex[No] Explanation: is designed to ingest long document passages directly through encoders. No need for external info. Instruction: Compare the costs of different flight options based on stop, mileage and fare data. Need retrieval? 
!1.5ex[No] Explanation: can encode and reason over structured data tables natively. External retrieval of similar data is unneeded. § RELATED WORK Prior efforts have explored various techniques to enhance language models with structured knowledge. Pan et al. <cit.> directly converted knowledge graph triples into textual statements as inputs. Liu et al. <cit.> embedded knowledge graphs into hidden layers of BERT models. Harnoune et al.<cit.> utilized memory modules to provide facts for multi-hop reasoning. Wu et al. <cit.> designed graph recurrent networks to propagate information over knowledge graphs. Roux et al.<cit.> employed reinforcement learning for path-finding in knowledge-augmented environments. Tu et al. <cit.> used graph convolutional networks to distill subgraph knowledge. Liu et al. <cit.> pruned knowledge graph inputs via self-supervised learning objectives. Honovich et al.<cit.> focused on improving factual consistency in conditional text generation. Shah et al. <cit.> iteratively expanded anchor sets for more comprehensive grounding. Xue et al. <cit.> optimized path selection on heterogeneous knowledge sources. While providing valuable techniques, existing methods lack consideration for efficiently filtering and condensing structured knowledge supplements tailored for language model integration. Our work aims to address these limitations through specialized contextual encoding, retrieval, and distillation components in the proposed framework. § CASE STUDY [colback=lightgray!20, boxrule=1pt, sharp corners, fontupper=] Family Relations John is Nikhil's paternal grandfather. Nikhil's aunt Meera is John's daughter. John was born in 1950. Nikhil was born in 2000. Meera was born in 1975. Raj is Nikhil’s maternal grandfather. Raj has a brother named Shekhar. Raj was born in 1948. Shekhar was born in 1946. Question: What is the age difference between the oldest and youngest family members described? Response: Based on the family tree details provided: - The oldest member is Shekhar, Nikhil's grand uncle, born in 1946. - The youngest member is Nikhil, born in 2000. - Their age difference is 1946 - 2000 = 54 years. So the age difference between the oldest and youngest family members is 54 years. Correct Answer: 54 years old. In this family tree example, first encodes the hierarchical relations between members like “John is Nikhil's paternal grandfather" and “Meera is John's daughter" as structured knowledge. The connections allow tracing ancestry and descendent paths. Age details like “John was born in 1950" are also injected. With encoded topology and dates, can traverse the tree to identify the oldest member, Shekhar and the youngest, Nikhil, before subtracting their years to find the 54-year difference. This showcases interpreting familial links and performing numerical reasoning over injected facts. By attending over neighborhood nodes, focuses on the most relevant semantic connections. The topology learning distinguishes close and distant relatives to assess ages. Retrieved date details fill knowledge gaps for calculation. Overall, this case validates 's capabilities in encoding complex structure topology and performing multi-step inference by combining connection reasoning and data-driven deduction. The example proves can encode intricate hierarchical structures and use encoded topology to trace relationships and inject valuable factual knowledge. 
By learning contextual representations and connections in structured data, successfully interprets semantic links between entities and integrates supplementary date details for numerical reasoning over multiple inference steps. This supports complex reasoning across topological dimensions. § EXPERIMENTAL PARAMETER SETTINGS The Variable Description Details in Table <ref> and hyperparameters in Table <ref> provide concrete configuration details for when evaluated on the four benchmark datasets. We can observe some key modeling choices - all models use a 4-layer graph encoder to learn topological representations, apply 30-40% node masking for knowledge gap simulation, and dedicate 256 dimensions to the Auxiliary Module for steering prompt/answer generation. Training hyperparameters are also shown, including batch sizes of 16-32, learning rates around 1e-4, and 10-20 training epochs. The number of tunable parameters indicates comparable model complexity across datasets. § SELF-REG MODULE IFReT Module This module decides if passage retrieval is needed using a scoring function: !1.6exIFReT(x) = f_ϕ(x) Where x is the input text, and f_ϕ outputs a binary decision on whether to activate retrieval given x, parameterized by ϕ. For example, if the input is x: "Tell me more about Van Gogh's paintings", the module may predict !1.6exIFReT(x)=1, indicating that retrieval would be useful to supplement details about Van Gogh's works. IFReL Module This module scores the relevance of a retrieved passage p using: s_rel = g_θ(x,p) ·σ(!1.6exIFReL(x,p)) Where g_θ produces a relevance score between input text x and passage p, modulated by the !1.6exIFReL(x,p) gate value passed through a sigmoid σ. For instance, if a retrieved passage discusses Surrealism instead of Van Gogh, the model can set a lower !1.6exIFReL(x,p) score to downweight it. IFSuP Module This evaluates the factual consistency between response y and passage p: s_con = h_ψ(y,p) ⊙σ(!1.6exIFSuP(y,p)) Where ⊙ is element-wise production, h_ψ calculates consistency between y and p, controlled via !1.6exIFSuP. This helps verify if details in y like dates or places align with the evidence in p. IFUsE Module This directly outputs a usefulness score u between input x and response y: u = !1.6exIFUsE(x,y) For example, u may be lower if y fails to answer the query in x about Van Gogh's paintings. The modules apply self-supervision for relevance, coherence, and consistency. The !1.6exIFReT(x) module for selective passage retrieval plays a key role in improving accuracy by retrieving evidence only when needed, avoiding unnecessary information. For instance, in closed-domain QA, achieves higher recall by learning a tight retrieval threshold via !1.6exIFReT(x), while open-ended generation benefits from more selective retrieval. Furthermore, the !1.6exIFReL(x,p) module filters out lower-quality passages that are less relevant, as quantified by the relevance score s_rel in Eq.<ref> and calibrated by !1.6exIFReL gates. This enhances the contextual signals passed to the language model. The !1.6exIFSuP(y,p) and !1.6exIFUsE(x,y) critiques help further verify passage-response consistency and overall utility, ensuring higher quality outputs. Figure <ref> shows the performance of the model before and after applying different levels of knowledge filtering in question-answering comprehension tasks. 
The filtering ratio varies between 0 and 60%, and the improvement in fit between the response of the large language model and the real-world answer is used as the criterion for determining the effectiveness. Firstly, when there is no filtering (0%), the fitting degree R^2 is around 0.7. This is the original level when injecting all knowledge. Subsequently, we observed that as the filtering ratio increased, the fitting degree R^2 showed a trend of first increasing and then decreasing. When filtering out about 40% of low correlation knowledge, the model accuracy reaches a peak of around 0.82. This indicates that through algorithms such as Self Reg, the model has learned to recognize the most critical knowledge for the current question and answer. Overfiltering knowledge actually makes the model unable to learn comprehensively. However, continuing to increase the filtration ratio to 40-60% will result in a reversal and decline in the fit. The model has lost some useful knowledge, and the contextual information is insufficient for the model to make accurate inferences. Therefore, we validated and demonstrated that appropriate knowledge filtering can improve the effectiveness of question answering, but a balance needs to be found between denoising and preserving information. The Self Reg class module demonstrates a satisfactory fit, suggesting optimal model use at approximately 40% of the filtering points. The Retrieve module decides when passage retrieval is needed. The Relevant module filters out irrelevant passages. The Coherent module verifies whether the generated response is coherent with the input. § PRELIMINARIES We first introduce the primary knowledge of knowledge graphs, text generation in LLMs, and information retrieval. §.§ Knowledge Graphs and Graph Networks A knowledge graph (KG) is defined as 𝒢 = {(h, r, t)}, with “head” h and “tail” t entities from ℰ and relation type r from ℛ. Each triplet represents unique knowledge and such knowledge representation in KG can enhance LLMs reasoning <cit.>. Graph neural networks (GNNs) process graphs 𝒢 = (V, E) with nodes V and edges E, learning node representations by message passing, combining node features and graph topology <cit.>. GNNs encode KGs' topology and structure. Node v's feature vector at layer l is h_v^(l), with neighboring nodes 𝒩(v) and edge feature e_vu, where the function M^(l)(·) aggregates neighboring node information, and σ(·) is an activation function: h_v^(l+1) = σ(∑_u ∈𝒩(v) M^(l)(h_v^(l), h_u^(l), e_vu)), Graph attention networks (GAT), a subclass of graph networks, leverage self-attention mechanisms <cit.>, similar to the Transformer architecture, to enhance node representations <cit.>. Each layer projects node features into queries Q, keys K, and values V, with attention coefficients calculated between connected nodes, following the Transformer model <cit.>. These coefficients are used to aggregate neighboring value vectors, updating the node feature representation h^'_v <cit.>: e_vu=LeakyReLU(a^⊤[𝐖h_v | 𝐖 h_u]), h^'v=σ(∑u∈𝒩(v)𝐕h_u). Through learning to focus on the most relevant semantic connections, our networks can refine node embeddings efficiently. The model constructs contextual node representations in KGs using graph attention layers <cit.>, and subsequently integrates this structural knowledge into language models. This process improves the overall understanding and reasoning capabilities in tasks like semantic analysis and knowledge inference <cit.>. 
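A self-contained, single-head PyTorch sketch of one such graph attention layer is given below; the dense adjacency, the ELU output activation, and the dimension choices are our own simplifications of the formulation above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """One single-head GAT-style layer: e_vu = LeakyReLU(a^T [W h_v || W h_u])."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (V, in_dim), adj: (V, V) binary adjacency with self-loops
        Wh = self.W(h)                                            # (V, out_dim)
        V = Wh.size(0)
        pair = torch.cat([Wh.unsqueeze(1).expand(V, V, -1),
                          Wh.unsqueeze(0).expand(V, V, -1)], dim=-1)
        e = F.leaky_relu(self.a(pair).squeeze(-1), 0.2)           # raw scores e_vu
        e = e.masked_fill(adj == 0, float('-inf'))                # restrict to N(v)
        alpha = torch.softmax(e, dim=-1)                          # attention coefficients
        return F.elu(alpha @ Wh)                                  # updated h'_v

# Toy usage: 6 nodes, random symmetric edges, two stacked layers.
torch.manual_seed(0)
V, d = 6, 16
adj = (torch.rand(V, V) > 0.6).float()
adj = ((adj + adj.T + torch.eye(V)) > 0).float()                  # symmetrize + self-loops
h = torch.randn(V, d)
layer1, layer2 = GraphAttentionLayer(d, 32), GraphAttentionLayer(32, 32)
z = layer2(layer1(h, adj), adj)                                   # node embeddings z_v
print(z.shape)   # (6, 32)
```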
§.§ Implementation Details Training details We optimize model parameters using Adam optimizer with a learning rate of 1e-4, batch size of 32, and train for a maximum of 20 epochs. For testing, model accuracy is evaluated by the exact match of the predicted response with ground truth answers in the datasets. We report average accuracy over 5 runs with different random seeds and report the average value. § PERFORMANCE OF CORE COMPONENTS OF AUXILIARY MODULES The results in Table <ref> clearly demonstrate Bert-base-NER's superiority as the Auxiliary Module, with over 6-7% performance gains in accuracy, precision, and recall compared to alternatives. In contrast, the whole-word masked BERT-large model gives even poorer results than no Auxiliary Module, while the multilingual sentiment BERT model remains insufficient. The likely explanation lies in the Named Entity Recognition pre-training of Bert-base-NER, which equips the model with a finer-grained understanding of named entities and relational reasoning - highly valuable for multi-hop questions over knowledge graphs. By steering prompt/answering iterations towards logically consistent outputs, it provides vital signals previously lacking. Meanwhile, whole-word masking seems to hinder BERT-large from learning compositional word structures crucial for precisely interpreting relations. Although also a BERT model, the sentiment classification tuning causes multilingual BERT to underperform on topological tasks. The significant gaps quantified via controlled ablation experiments validate that selective BERT-tuning surpasses superior architectural variants when specifically matched to complex reasoning tasks involving entities and relations. § EXPERIMENTAL RESULTS ILLUSTRATION Figure <ref> illustrates performance on four reasoning tasks under variants of the framework with different components disabled. For multi-hop QA (Fig <ref>), removing the knowledge retrieval module causes an evident drop of 8.2% in accuracy. This validates its role in traversing distant nodes by filling gaps. Omitting the encoding also degrades topological reasoning, seen in the 4.3% lower location QA (Fig <ref>) accuracy without the graph encoder. For complex inferences (Fig <ref>), the Auxiliary Module contributes to steering coherent deductions, reflected by 6.5% weaker decomposition scores when absent. Finally, inconsistent responses increase by 4.9% (Fig <ref>) after disabling relevance filtering, confirming the module's benefits for factual verification. Across diverse reasoning dimensions, targeted ablation of key components consistently reveals measurable impacts validating their individual contributions. The knowledge retrieval module facilitates connecting facts between entities, the graph encoder grounds topological representations, the Auxiliary Module directs logical inferences, and relevance filtering retains consistency. As a novel framework, combines complementary mechanisms for enhancing language models' structured reasoning capacities. The ablation analyses quantify these modular effects through performance differentials on specialized probing tasks. § CODE The code can be accessed at the following URL:<https://github.com/AnonymousBoy123/anonymous_structx>. 2pt Input Tell me more about the painter Van Gogh Output [!1.8exIFReT =Yes] Reference [35] [!1.8exIFReL = Relevant] Van Gogh was a famous Dutch painter during the post-impressionist period. [!1.8exIFSuP = Partially Supported] He was known for paintings like the Starry Night and Sunflowers. 
[!1.8exIFReT = Continue] However, he struggled with mental illness and remained poor and virtually unknown throughout his life. [ !1.8exIFSeP = Fully Supported] Most of his best works were produced in the last two years before his death. [!1.8exIFUsE =4] Reference [35] Vincent Willem van Gogh was a Dutch post-impressionist painter who is among the most famous and influential figures in the history of Western art. In a decade, he created about 2,100 artworks, including around 860 oil paintings, most of which date from the last two years of his life. They include landscapes, still lifes, portraits and self-portraits, and are characterised by bold colours and dramatic, impulsive and expressive brushwork that contributed to the foundations of modern art. He was little known during his lifetime, and many of his masterpieces were not appreciated until the 20th century. By the late 1920s, he had become one of the most celebrated artists in history. Nonetheless, mental illness plagued him throughout his life, and after he shot himself in the chest with a revolver at age 37, he succumbed to his injuries two days later. Correctness of !1.8exIFReL and !1.8exIFSuP !1.8exIFSuP is incorrect in claiming full support about his lack of fame during life, when the reference clearly states he only gained appreciation after death. So !1.8exIFSuP should be partially supported. !1.8exIFReL is appropriately marked as relevant overall. 2pt
http://arxiv.org/abs/2407.12593v1
20240717141635
EvSign: Sign Language Recognition and Translation with Streaming Events
[ "Pengyu Zhang", "Hao Yin", "Zeren Wang", "Wenyue Chen", "Shengming Li", "Dong Wang", "Huchuan Lu", "and Xu Jia" ]
cs.CV
[ "cs.CV" ]
EvSign: Sign Language Recognition and Translation with Streaming Events [1]Work was done at Dalian University of Technology. [4]Corresponding author: jiayushenyang@gmail.com P. Zhang et al. Dalian University of Technology National University of Singapore EvSign: Sign Language Recognition and Translation with Streaming Events Pengyu Zhang1,2[1] Hao Yin1 Zeren Wang1 Wenyue Chen1 Shengming Li1 Dong Wang1 Huchuan Lu1 Xu Jia1[4] July 22, 2024 ============================================================================================================== § ABSTRACT Sign language is one of the most effective communication tools for people with hearing difficulties. Most existing works focus on improving the performance of sign language tasks on RGB videos, which may suffer from degraded recording conditions, such as fast movement of hands with motion blur and textured signer's appearance. The bio-inspired event camera, which asynchronously captures brightness change with high speed, could naturally perceive dynamic hand movements, providing rich manual clues for sign language tasks. In this work, we aim at exploring the potential of event camera in continuous sign language recognition (CSLR) and sign language translation (SLT). To promote the research, we first collect an event-based benchmark EvSign for those tasks with both gloss and spoken language annotations. EvSign dataset offers a substantial amount of high-quality event streams and an extensive vocabulary of glosses and words, thereby facilitating the development of sign language tasks. In addition, we propose an efficient transformer-based framework for event-based SLR and SLT tasks, which fully leverages the advantages of streaming events. The sparse backbone is employed to extract visual features from sparse events. Then, the temporal coherence is effectively utilized through the proposed local token fusion and gloss-aware temporal aggregation modules. Extensive experimental results are reported on both simulated (PHOENIX14T) and EvSign datasets. Our method performs favorably against existing state-of-the-art approaches with only 0.34% computational cost (0.84G FLOPS per video) and 44.2% network parameters. The project is available at https://zhang-pengyu.github.io/EVSign/https://zhang-pengyu.github.io/EVSign. § INTRODUCTION As a main communication medium employed by the deaf community, sign language conveys multi-cue information by manual features (, hand shape, movement and pose) and non-manuals features, including facial expression, mouth gesture and other body movements <cit.>. According to the outputs, sign language based tasks can be mainly categorized into sign language recognition (SLR) and sign language translation (SLT). In SLR, the minimal lexical component, namely gloss, is predicted. SLR can be further categorized into isolated SLR (ISLR) and continuous SLR (CSLR). The purpose of SLT is to fully translate sign language into spoken language, which is often considered as a sequence-to-sequence learning problem. Recent works are based on videos captured by conventional frame-based sensors, which suffer from challenging scenarios, including severe motion blur and cluttering distractors. As shown in Fig. <ref>(a), the information will be degraded in extreme conditions, such as fast hand and arm movement, thereby leading to limited performances. 
To this end, existing works <cit.> emphasize temporal modeling, which leverages temporal cues in pixel- and frame-wise representations via 3D convolution and Long-Short Term Memory (LSTM) networks. Furthermore, STMC <cit.> and CorrNet <cit.> are designed to construct discriminative spatial representation, focusing on body trajectories. Event camera, a biologically-inspired sensor, detects the variation of intensity along time. Rather than encoding visual appearance with still images, it generates sparse and asynchronous event stream with extremely high temporal resolution <cit.> (1M Hz 120 Hz), high dynamic range and low latency, which is ideally suited for extracting motion cues <cit.>. As shown in Fig. <ref>(b), event camera could benefit sign language tasks from four perspectives. First, event data can capture richer motion information, thereby facilitating limb movements modeling effectively. Second, conventional images may be degraded due to the rapid movements, while event camera can record the sharp boundary for further processing. Third, the event stream contains less redundant information, such as background and textured clothing, which can boost efficiency and avoid distractor interference. Fourth, from a privacy protection perspective, event cameras can avoid collecting static facial information. There have been several attempts <cit.> to leverage event camera for sign language tasks. However, current event-based sign language datasets <cit.> only provide sign videos for ISLR. The limited vocabulary size and frame length cannot meet the requirements of real-world applications. Furthermore, the designed methods <cit.> are based on the networks originally designed for frame sequences, such as AlexNet <cit.> and ResNet <cit.>, which do not fully leverage the advantage of event data. To unveil the power of event-based sign language tasks, we collect an event-based Chinese sign language benchmark, namely EvSign. To the best of our knowledge, it is the first dataset designed for event-based CSLR and SLT tasks. More than 6.7K high-resolution event streams are collected in EvSign, which is of comparable scale to the existing RGB-based SLR datasets. The large corpus with native expressions, precise gloss and spoken language annotations can promote the development of CSLR and SLT tasks. Moreover, we present a transformer-based framework to make full use of the advantage of event data. First, a sparse backbone is employed to efficiently compute visual features on event data to obtain visual features efficiently. Then, temporal information is modeled via the proposed local token fusion and gloss-aware temporal aggregation modules, where the visual tokens are firstly combined to model local motion and reduce the computational cost. Subsequently, we aggregate the temporal information during the whole video into fused tokens hierarchically, which is then used for gloss and word prediction. Our method achieves very competitive performance on both synthetic and real datasets for both tasks with only 0.34% computational cost. To sum up, our contributions can be concluded as three aspects: * We propose the first benchmark for event-based CSLR and SLT tasks, which contains high-quality event streams, comprehensive corpus and precise gloss and spoken language annotation. * We design a transformer-based algorithm for both tasks, which fully leverages the characteristics of event data. 
* Experiments on both synthesized PHOENIX14T and EvSign datasets demonstrate that our method achieves favorable performance with only 0.34% FLOPS and 44.2% parameters against existing algorithms. § RELATED WORK §.§ RGB-based sign language recognition Sign language recognition can be categorized into two main directions: isolated SLR (ISLR)<cit.> and continuous SLR (CSLR) <cit.>. As for ISLR, word-level prediction is performed, while CSLR aims to predict a series of glosses from longer-term videos, which has become the primary focus of research due to its closer alignment with real-world applications. To tackle CSLR, researchers mainly work on extracting discriminative spatial and temporal information. In the early stage, hand-crafted features <cit.> are utilized to extract spatio-temporal representation. HMM-based algorithms <cit.> are designed to predict gloss progressively. Recently, deep-based frameworks have been proposed, focusing on leveraging motion-aware information <cit.>. CTC loss <cit.> is adopted to temporally align the classification results with unsegmented sequences, achieving end-to-end training. In addition, some works explore the use of other modalities to provide a complementary cue to visual signals, including skeleton <cit.> and depth <cit.>,  In this paper, we exploit the effectiveness of a bio-inspired sensor, , event camera, for CSLR task. §.§ RGB-based sign language translation Compared to SLR, Sign Language Translation (SLT) aims to generate spoken language translations from sign language videos in a progressive manner. Camgoz  <cit.> first introduce SLT task and formalize it into Neural Machine Translation in an end-to-end setting, which extracts gloss features through a CNN model and SLT using a sequence-to-sequence model. Subsequent works <cit.> have focused primarily on how to better extract spatial and temporal features. Some studies have attempted gloss-free methods <cit.> to generate sentences without relying on gloss-level features, extensive experiments have shown that directly implementing an end-to-end Sign2Text model yields inferior results compared to using glosses as the intermediate supervision in the Sign2Gloss2Text model. Therefore, the current implementation of SLT task is predominantly based on the Sign2Gloss2Text approach. Recent works mainly focus on vision based SLT, while SLT with other modalities have not been fully exploited. §.§ Event-based sign language recognition Event camera captures the intensity variation of each pixel, recording the trajectory of fast-moving objects at high temporal resolution. Due to its property, it can provide sufficient temporal information, which is suitable for modeling object motion. A few attempts contribute to ISLR <cit.>. Vasudevan  <cit.> propose an event-based Spanish sign language dataset, namely SL-Animals-DVS, consisting of 1,102 event streams regarding animals. Two Spiking Neural Networks (SNN) <cit.> are used for evaluation. Wang  <cit.> consider the event camera as a novel sensor in ISLR and collect an American sign language dataset, which contains 56 words. Shi  <cit.> design an event sampling strategy to select key event segments according to event distribution. The selected events are then fed to a CNN to obtain classification results. They also provide a synthetic dataset N-WLASL, where the event is collected by shooting an LCD monitor to record the videos from WLASL <cit.>. Above all, three main challenges limit the development of SLR. 
First, existing benchmarks are in small vocabulary, which cannot fully exploit the potential of sign language recognition. Second, the event data is collected using the out-of-date sensors with low spatial resolution in those datasets, leading to missing details in hand gesture and subtle movement. Third, all the datasets are designed for ISLR. It can solely be used for specific applications and cannot be generalized in real scenes. Therefore, it is crucial to collect a larger-scale dataset to promote the development of sign language tasks. § EVSIGN BENCHMARK §.§ Benchmark Statistics We use the DVXplorer-S-Duo camera from iniVation, which is a binocular camera capable of simultaneously capturing both event and RGB data. The spatial size of event stream is 640 × 480. We also record the RGB data with the size of 480 × 320 at 25 FPS for visualization and annotation. To fit the practical usage, the corpus is sourced around daily life, such as shopping, education, medical care, travel and social communication, etc. The glosses are sampled from the Chinese national sign language dictionary <cit.> and CSL-Daily <cit.>, and are then reorganized into a spoken sentence. To avoid differences in expression, we further provide glosses and sentences to signers for adjustment to suit the deaf community. We recruit 9 professional volunteers from the deaf community, who are familiar with general sign language for data collection. We employ a two-step manner to avoid data ambiguity. When collecting sign data, the signers first watch a reference video and then start to perform the action. After recording, other three signers vote to determine whether the sign expression is precise and easy to understand. For each sample in the corpus, there are about three signers to perform the action. We separate the sign videos into training, development and test subsets, which contain 5,570, 553 and 650 clips, respectively. As shown in Table <ref>, the proposed dataset significantly surpasses existing datasets in vocabulary size, task scope, and data resolution, which provides a comprehensive corpus to exploit the power of event data in handling sign language tasks. §.§ Annotation In EvSign, both sign gloss and spoken language annotations are provided. First, annotators identify all the glosses according to <cit.> in the RGB videos. We note that several signs may express the same meaning. Thus, the authors further revise the annotation to ensure that each sign language corresponds to a unique gloss annotation. Finally, the spoken language annotations are updated according to the gloss annotation. We employ tokenization method in HanLP [https://github.com/hankcs/HanLP] to separate a sentence into words. As shown in Table <ref>, EvSign provides 1.3K unique signs and 1.8K words, which cover various aspects of our daily life. Furthermore, more than 35K and 53K gloss and text annotations are totally labeled. §.§ Evaluation Metrics We provide two evaluation protocols for both SLR and SLT. As for SLR evaluation, we use Word Error Rate (WER) as the metric, which is widely used in sign language and speech recognition. WER measures the similarity of reference and hypothesis, which is based on the minimum number of operations required to convert the prediction into the reference sentence as: WER = #sub + #ins + #del/#ref where #sub, #ins and #del are the number of basic operations, including substitution, insertion and deletion. #ref represents the number of words in the reference sentence. Lower WER indicates better performance. 
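For readers less familiar with the metric, WER is simply the word-level Levenshtein distance normalized by the reference length. A minimal reference implementation might look as follows (a sketch of the standard dynamic-programming formulation; the names are ours and it is not tied to any released evaluation code):

def word_error_rate(reference, hypothesis):
    # WER = (#sub + #ins + #del) / #ref, computed as a word-level edit distance.
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: minimum edits turning the first j hypothesis words into the first i reference words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + sub,   # substitution (or match)
                           dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1)         # insertion
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one deletion and one substitution against a five-word reference gives WER = 2/5 = 0.4
print(word_error_rate("I want to buy fruit", "I want buy food"))

The same routine applies to gloss sequences in CSLR, where the "words" are the predicted glosses.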
For SLT evaluation, we employ ROUGE <cit.> and BLEU <cit.> as evaluation metrics. Here, BLEU is calculated with n-grams from 1 to 4 and ROUGE-L <cit.> is used as our metric. The higher the ROUGE and BLEU scores, the better the performance. § METHODOLOGY §.§ Overview As shown in Fig. <ref>, the event stream E = { e_i }_i=1^N is firstly split to P segments evenly and converted to a set of event representation 𝐄 = {𝐄_1, 𝐄_2,..., 𝐄_P }∈ℝ^P × B × H × W, , voxel grid <cit.>, where B denotes the bin size in voxel grid. Taken E as input, the proposed network is to jointly predict all the glosses 𝒢 = {g_1, g_2,...,g_Z} and translate them into spoken language 𝒲 = {w_1, w_2,...,w_U} in a sequence-to-sequence manner. Z and U are the number of glosses and words in the sign language video. Our method contains five main parts, including sparse backbone (SConv), Local Token Fusion (LTF), Gloss-Aware Temporal Aggregation (GATA) and two task heads, , recognition head and translation head. First, SConv generates a set of visual tokens. Then, we employ LTF to fuse local motion from adjacent timestamps thereby reducing the number of tokens. Moreover, the temporal information is decoupled into intra-gloss and inter-gloss cues, which are learned hierarchically by GATA module. Specifically, we propose a gloss-aware mask attention to dynamically fuse the comprehensive motion information from visual tokens into the fused tokens. It measures the token's similarity in time and feature spaces, which can be aware of various action lengths. Furthermore, the global coherence among tokens from different glosses is learned via inter-gloss temporal aggregation, thereby obtaining the gloss-aware tokens. Finally, those tokens are sent to recognition and translation heads to predict the probability of target gloss sequence and spoken language. §.§ Overall framework Sparse backbone (SConv). Due to the sparse property of event data, we build a sparse convolutional network <cit.> with the architecture of ResNet18 <cit.>. The backbone can process the event representation to obtain the visual tokens O^v = { o^v_1, o^v_2,..., o^v_P }∈ℝ^P × C. The sparse backbone is able to fully leverage the characteristics of data sparsity, thus significantly reducing the computational load. Compared to regular convolution layers, sparse backbone can also better maintain the sparsity at feature level, leading to a sharper boundary <cit.>. Local Token Fusion (LTF). Local motion integration is crucial for long-term temporal modeling, which can first build an effective representation for continuous actions within a short duration and reduce computational load <cit.>. To construct a powerful features for local motion, we introduce LTF to fuse neighboring visual tokens, which contains two multi-head self-attention within local window (W-MSA) <cit.> and two max pooling (MaxPool) layers. A regular window partitioning scheme is adopted to all the visual tokens, where each window covers I tokens. The self-attention within non-overlapping window is calculated to aggregate local motion information. Then, we introduce a max pooling operation to reduce the number of tokens. We adopt another W-MSA and MaxPool with the same window size and ratio to capture a longer-term movement, thus obtaining the fused tokens O^f∈ℝ^L × C. The LTF can be formulated as, O^v = MaxPool(W-MSA( O^v) + O^v) O^f = MaxPool(W-MSA( O^v) + O^v) where the number of tokens L = P/γ, while γ represents the downsampling ratio. Gloss-Aware Temporal Aggregation (GATA). 
Global temporal modeling is a key step to exploit the correspondence of continuous signs in long-term videos. Existing methods <cit.> learn the global motion cue by 1D temporal convolution and BiLSTM, which ignores the varying durations of different signs. Simply applying temporal modeling among the fixed frames will learn a non-optimal representation and involve redundant information from different glosses. In this work, we decouple the temporal information into intra-gloss and inter-gloss cues and model them hierarchically via the proposed GATA module, which consists of Gloss-Aware Mask Attention (GAMA) and Inter-Gloss Temporal Aggregation (IGTA). As for intra-gloss temporal aggregation, we aim to aggregate the gloss-level information from O^v into O^f. To achieve this, we propose GAMA by introducing a gloss-aware mask M∈ℝ^L × P, GAMA( Q, K, V, M) =softmax( Q K^T/√(d)⊙ M) V where, Q, K, V are the query, key and value defined in cross-attention. d is the dimension of query or key. We claim that the tokens belonging to the same class tend to have highly-relevant representations. The mask M can be considered as an attention weight, which measures the similarity between the fused and visual tokens. Thus, we first calculate the token similarity ρ∈ℝ^L × P, ρ = ψ_f( O^f) ψ_v( O^v)^T where, ψ_f() and ψ_v() are the linear embedding functions for the fused and visual tokens. Also, we add a distance constrain δ∈ℝ^L × P in time space to avoid computing attention between different glosses of the same category, δ_i,j = K(t_i^f, t_j^v; σ) where K() is a Radial Basis Function (RBF) kernel with the parameter σ. Since the precise timestamp for each token is not accessible, we introduce pseudo timestamps t_i^f, t_j^v∈ℝ^1 to represent the relative temporal position for fused and visual tokens, respectively. The pseudo timestamp for j-th visual token is set to t_j^v = j. For the fused tokens, we calculate the pseudo timestamp as the average of the pseudo timestamps of the fused tokens, defined as t^f_i = ∑_x=i×γ^x=(i+1)×γ x. Finally, the mask is defined as M = N(ρ) ⊙ N(δ), where the ⊙ represents the element-wise product. N is a zero-one normalization. We extend the gloss-aware mask attention into multiple heads and all the attention heads share the same mask M. In addition, a Feed Forward Network (FFN) module is utilized following GAMA to enhance the representation, which consists of two linear transformations with ReLU activation. Following SLT <cit.>, we add a positional encoding (PE) process to the inputs. Thus, the intra-gloss temporal aggregation process can be summarized as follows, Q = O^f + P_q;   K = O^v + P_k;   V = O^v Ô^f = O^f + GAMA( Q, K, V) O^f = Ô^f + FFN(Ô^f) where P_q∈ℝ^L × C and P_k∈ℝ^P × C are the temporal positional encodings generated by a predefined sine function. After obtaining intra-gloss tokens O^f, we apply inter-gloss temporal aggregation to model the global motion via a multi-head self-attention (MSA), X= O^f + P_x O = O^f + MSA( X) where the P_x∈ℝ^L × C is also the temporal positional encoding. After applying GATA, we obtain the gloss-aware tokens O∈ℝ^L × C, which learns the motion cues among all glosses comprehensively and is sent to the following task heads for predicting glosses and words. Task heads. Following the existing methods, we employ a classifier F_reg as Recognition Head (RH) to predict the logits G = { g_1, g_2,... g_L}∈ℝ^L× Y, where Y denotes the size of the sign language vocabulary with adding a `blank' class. 
F_reg consists of a fully-connected layer with a softmax activation. As for handling translation task, the translation head (TH) is to sequentially generate logits of spoken language sentences W = {p(w_1), p(w_2),..., p(w_U), p(<eos>)} conditioned by the gloss-aware tokens O, which is an auto-regressive transformer decoder. We adopt the same translation head in  <cit.>. Due to the page limitation, Please refer to <cit.> and the supplementary material for details. § EXPERIMENTS §.§ Datasets and evaluation protocol Dataset. We conduct analysis on synthetic (PHOENIX14T) and real (EvSign) event-based sign language benchmarks. As an extension of PHOENIX14 <cit.>, PHOENIX14T introduces German spoken language annotation and has become the primary benchmark for CSLR and SLT. It consists of 8,247 videos with a vocabulary of 1,066 and 2,887 sign and words, respectively. The number of videos for training, development and test are 7,096, 519 and 642. Based on PHOENIX14T, we build a synthetic dataset for further research using an event simulator (V2E <cit.>), where the video frames are firstly interpolated to 350 frames per second by SuperSloMo <cit.> and used for event generation. Evaluation protocol. We focus on both SLR and SLT tasks as follows: * SLR: It only predicts the sign gloss from sign language videos without introducing any additional information. Specifically, we implement a variant (Ours_S2G[We use subscript to indicate which task is focused. S2G denotes the method is trained and evaluated solely on SLR task while S2GT represents the method is trained with the supervision of both gloss and spoken language and then used for SLT evaluation.]) by removing the translation branch and retrain our method for fair comparison. In this setting, we select four public-available methods including VAC <cit.>, CorrNet <cit.>, TLP <cit.>, SEN <cit.> for comparison. All the methods are trained from scratch according to the original setting. We present WER as a metric for evaluation. * SLT: We follow the Sign2 (Gloss+Text) protocol defined in <cit.>, which is to jointly learn both recognition and translation branches in an end-to-end manner. We compare Ours_S2GT with SLT <cit.>. Additionally, we equip the recent SLR algorithm with our translation head (CorrNet+TH, VAC+TH), which are trained under this SLT comparison. Both ROUGE-L and BLEU-X metrics are employed to quantitatively assess these methods. §.§ Training Details The feature dimension C is 1024. The downsample ratio γ in LTF module is 4. We adopt the widely-used CTC loss <cit.> for SLR supervision. Inspired by VAC <cit.>, we set an additional recognition branch F'_reg on the fused tokens O^f. Two CTC losses (L_inter and L_final) are applied to both intermediate and final outputs against the ground truth. The total loss L_SLR for SLR protocol is, L_SLR = λ_inter L_inter + λ_final L_final Under the SLT protocol, we add a cross-entropy loss L_ce to supervise the output of translation head. Thus the overall loss of our method can be summarized as, L_SLT = L_SLR + λ_ce L_ce The weights for those three losses λ_inter, λ_final and λ_ce are all set to 1. The parameters σ in RBF is set to 16 in our experiment. The bin size of voxel grid B is 5. We adopt Adam <cit.> optimizer with cosine annealing strategy to adjust the learning rate. The initial learning rate, weight decay and batch size are set to 3e^-5, 0.001 and 2, respectively. We train our method for 200 epochs to achieve convergence. 
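To make the combined objective concrete, a minimal PyTorch-style sketch of the losses above is given here. This is our own illustration, assuming the two recognition branches emit per-frame log-probabilities in the (T, N, Y) layout expected by nn.CTCLoss; it is not the authors' released training code.

import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0, zero_infinity=True)     # assumes the 'blank' gloss index is 0
ce = nn.CrossEntropyLoss(ignore_index=-100)       # padded word positions are ignored

def total_loss(inter_logp, final_logp, glosses, in_lens, gloss_lens,
               word_logits=None, words=None,
               lam_inter=1.0, lam_final=1.0, lam_ce=1.0):
    # L_SLR = lam_inter * L_inter + lam_final * L_final (CTC on intermediate and final outputs)
    loss = lam_inter * ctc(inter_logp, glosses, in_lens, gloss_lens) \
         + lam_final * ctc(final_logp, glosses, in_lens, gloss_lens)
    # Under the SLT protocol, add the cross-entropy term on the translation-head logits,
    # here assumed to be shaped (U, N, V) with targets shaped (U, N).
    if word_logits is not None:
        loss = loss + lam_ce * ce(word_logits.flatten(0, 1), words.flatten())
    return loss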
To align the setting with existing methods, all the RGB frames are cropped to 320 × 320 to remove the useless background and then resized to 256 × 256. As for event data, we firstly generate the voxel grid. Then, the voxel grid is cropped to 480 × 480 and resized to 256 × 256. Other competitors are trained using their own settings. Note that we modify the input channel of the first convolutional layer to fit the event input. All the models are trained and tested on a single NVIDIA RTX 3090 GPU with 24G RAM. §.§ Quantitative Results on Sign Language Recognition Table <ref> provides quantitative results on PHOENIX14T and EvSign datasets. Compared to existing methods that utilize RGB data, all the algorithms working with streaming event show lower WER consistently on EvSign dataset with real event, revealing the power of event stream in handling sign language recognition. However, the advantages of event data are not reflected in the results on the simulated dataset. The reason is that the video frames used for event synthesis are of poor quality with severe blur and limited frame rate. Compared to event-based competitors, our method shows superior performance on both synthetic PHOENIX14T and EvSign datasets. Notably, our method has significant advantage in both computational cost and number of parameters, due to the concise architecture and sparse data processing. We also evaluate the computational efficiency using FLOPS and number of parameters (Params) on EvSign dataset[Since the FLOPS and Params fluctuate based on data sparsity and sequence lengths, we calculate their averages across videos in Dev and Test sets of EvSign.]. Compared to the most recent method (CorrNet <cit.>), Ours_S2G achieves 0.79% and 1.26% improvement with respect to WER on development and test sets of EvSign with only 0.34% FLOPS and 44.2% parameters. §.§ Quantitative Results on Sign Language Translation Table <ref> and <ref> show the comparison results for SLT on PHOENIX14T and EvSign datasets. As shown in Table <ref>, our method achieves the best performance except BLEU-4 in the development set. On the other hand, the results on Phoenix14T (Table <ref>) cannot demonstrate the effectiveness of event camera. Compared with SLT, our method achieves 1.06% and 0.89% improvement in terms of ROUGE in development and test sets, respectively. Specifically, our method also exhibits significant advantages in terms of computational and parameter efficiency. Furthermore, methods with event streams are consistently better than those with RGB frames, which demonstrates the potential of event data in SLT task. §.§ Further Analysis Ablation study. As shown in Table <ref>, we conduct ablation analysis on both PHOENIX14T and EvSign. we introduce a modified VAC <cit.> as our baseline (B), with removing the visual alignment loss. Compared to the baseline with RGB input, the event-based baseline achieves better performance on real event, which reveals the potential of event in handling CSLR tasks. All of the proposed modules contribute positively to the recognition performance on both datasets. The sparse backbone (SConv) can significantly drop the computational load and parameters while maintaining recognition accuracy, which can fully leverage the sparsity of event data. The simple yet effective LTF module obtains 0.47% and 0.53% on the test set of PHOENIX14T and EvSign, respectively. 
Compared to the BiLSTM, the designed GATA module can learn the temporal cues more comprehensively, leading to 0.81% and 1.12% WER decrease on the test sets. The final model obtains the best performance with regard to all the metrics. Compared to B(EV), the final model achieves 1.16% and 1.39% improvement on the test sets, which can serve as a strong baseline for further research. Visualization of gloss-aware mask. As shown in Fig. <ref>, we provide a visualization results of SLR task and the gloss-aware mask in GATA module. With the guidance of GATA module, our method learns comprehensive motion cues, thus predicting all the glosses correctly. We visualize the gloss-aware mask between the tokens in corresponding gloss and visual tokens. It demonstrates that the gloss-aware mask can precisely provide a intra-gloss correlation without any supervision, achieving gloss-aware temporal token aggregation. Analysis of token aggregation strategy. We compare several token aggregation strategies and the performance on EvSign dataset is shown in Table <ref>(a). We implement the `w/o aggregation' by removing the selection module, where the visual tokens are directly sent to GATA modules to output the probability of gloss. The inferior performance indicates the necessity of token aggregation module. The useless information may lead to a delete or insert error, which significantly affects the recognition performance. We also compare the simplest selection strategies, which are denoted as MaxPooling and AvgPooling. Those methods can decrease the WER, while selection in a soft manner (AvgPooling) is better than the hard one (MaxPooling). This indicates the local fusion is necessary for learning discriminative tokens for temporal modeling. Both 1D-CNN designed in <cit.> and our method apply local aggregation before selection, leading to favorable performances. Our method achieves 0.22% and 0.2% improvement than 1D-CNN, which shows the effectiveness of LTF module. Analysis of temporal aggregation module. Temporal information module is the key component in sign language tasks. To this end, we compare various methods to demonstrate the capabilities of our method in temporal modeling. As shown in Table <ref>(b), we set our method without intra-gloss temporal aggregation (GAMA) as baseline. The method solely relies on inter-gloss temporal modeling via global self-attention. ρ-only and δ-only GAMA denote that the mask M in Eq. (<ref>) is set to ρ and δ in Eq. (<ref>) and Eq. (<ref>). Compared with the baseline, ρ-only and δ-only GAMA achieve 1.4% and 1.31% performance gain in the test set. Furthermore, we compare the effectiveness of soft and hard mask in GAMA. GAMA-hard means that we binarize the learned M with a threshold 1e^-3. we find that the hard mask will lead to a performance decrease. Soft manners exhibit greater flexibility, resulting in improved aggregation performance. § CONCLUSION In this paper, we unveil the power of events in sign language tasks. We first build a comprehensive dataset for event-based sign language recognition and translation. The dataset contains more than 6.7K sign videos captured by high-quality event camera, which covers most of daily topics. It can greatly promote the development of event-based vision and sign language related tasks. Furthermore, we propose a transformer-based framework for both tasks by fully exploiting the sparsity and high temporal resolution characteristics of events. 
The proposed gloss-aware temporal aggregation module effectively models temporal information in a global-and-local manner, yielding a gloss-aware representation for both SLR and SLT tasks. Our method shows favorable performance on sign language datasets with synthetic and real events while using only 0.34% of the FLOPS and 44.2% of the network parameters of existing methods. Acknowledgment The research was partially supported by the National Natural Science Foundation of China (grants No. 62106036, U23B2010, 62206040, 62293540, 62293542) and the Dalian Science and Technology Innovation Fund (grant No. 2023JJ11CG001).
http://arxiv.org/abs/2407.12998v1
20240717203614
Surgical Robot Transformer (SRT): Imitation Learning for Surgical Tasks
[ "Ji Woong Kim", "Tony Z. Zhao", "Samuel Schmidgall", "Anton Deguet", "Marin Kobilarov", "Chelsea Finn", "Axel Krieger" ]
cs.RO
[ "cs.RO" ]
oldmaketitlemaketitle [ Dominykas Norgilas Received -; accepted - ========================== § ABSTRACT We explore whether surgical manipulation tasks can be learned on the da Vinci robot via imitation learning. However, the da Vinci system presents unique challenges which hinder straight-forward implementation of imitation learning. Notably, its forward kinematics is inconsistent due to imprecise joint measurements, and naively training a policy using such approximate kinematics data often leads to task failure. To overcome this limitation, we introduce a relative action formulation which enables successful policy training and deployment using its approximate kinematics data. A promising outcome of this approach is that the large repository of clinical data, which contains approximate kinematics, may be directly utilized for robot learning without further corrections. We demonstrate our findings through successful execution of three fundamental surgical tasks, including tissue manipulation, needle handling, and knot-tying. § INTRODUCTION Recently, large-scale imitation learning has shown great promise in creating generalist systems for manipulation tasks <cit.>. Prior research in this area has mostly focused on learning day-to-day household activities. However, an under-explored area with high potential is the surgical domain, particularly with the use of Intuitive Surgical's da Vinci robot. These robots are deployed globally and possess immense scaling potential: as of 2021, over 10 million surgeries have been performed using 6,500 da Vinci systems in 67 countries, with 55,000 surgeons trained on the system <cit.>. Often, the video and kinematics data are recorded for post-operative analysis, resulting in a large repository of demonstration data. Utilizing such large scale data holds significant potential for building generalist systems for autonomous surgery  <cit.>. However, robot learning on the da Vinci presents unique challenges. The hardware suffers from inaccurate forward kinematics due to potentiometer-based joint measurements, hysteresis, and overall flexibility and slack in its mechanism <cit.>. These limitations result in the robot's failure to perform simple visual-servoing tasks <cit.>. As we discover in this work, naively training a policy using such approximate kinematics data almost always leads to task failure. For instance, a policy trained to output absolute end-effector poses, which is a common strategy to train robot policies, achieves near-zero success rates across all tasks explored in this work, including tissue lift, needle pickup and handover, and knot-tying (Fig. 1). To achieve robot learning at scale, we must devise a strategy that leverages such approximate kinematics data effectively. Towards this end, we present an approach for robot learning on the da Vinci using its approximate kinematics data. Intuitively, our approach is based on the observation that the relative motion of the robot is much more consistent than its absolute forward kinematics. We thus model policy actions as differential motion and further explore its variants to design the most effective action representation for the da Vinci. We find that training an imitation learning algorithm using such relative formulation shows robustness to various configuration changes to the robot, even those known to significantly disrupt the robot's forward kinematics. 
Specifically, the da Vinci tools can be removed and reinstalled and all the robot joints can be freely moved, including the notoriously inaccurate set-up joints <cit.>, without significantly impacting policy performance. Additionally, we explore the use of wrist cameras in the surgical workflow. While not commonly employed in clinical settings, wrist cameras have demonstrated effectiveness in improving policy performance and facilitating generalization to out-of-distribution scenarios, such as varying workspace heights or unfamiliar visual distractions <cit.>. We thus evaluate their impact on performance and practical potential by designing removable brackets that enable easy sharing across various surgical instruments. Overall, our results indicate that the relative motion on the da Vinci is more consistent than its absolute motion. Following this result, we further observe that a carefully chosen relative action representation can sufficiently train policies that achieve high success rates in surgical manipulation tasks. Additionally, using wrist cameras significantly improves policy performance, especially during phases of the procedure when precise depth estimation is crucial. In robustness tests, our model demonstrates the ability to generalize to novel scenarios, such as in the presence of unseen 3D suture pads and animal tissues, showing promise for future extensions into pre-clinical research. Our main contributions are: (i) a successful demonstration of imitation learning on the da Vinci while using its approximate kinematics data and without requiring further kinematics corrections, while drastically outperforming the baseline approach; (ii) experiments showing that imitation learning can effectively learn complex surgical tasks and generalize to novel scenarios such as in the presence of unseen realistic tissue; (iii) ablative experiments demonstrating the importance of wrist cameras for learning surgical manipulation tasks. § RELATED WORK Manipulation and Imitation Learning Imitation learning enables robots to learn from expert demonstrations <cit.>. Behavioral cloning (BC) is a simple instantiation of imitation learning that directly predicts actions from observations. Early works tackle this problem through the lens of motor primitives <cit.>. With the development of deep learning and generative modeling, different architectures and training objectives have been proposed to model the demonstrations end-to-end. This includes the use of ConvNets or ViT <cit.> for image processing <cit.>, RNN or transformers for fusing history of observations <cit.>, tokenization of the action space <cit.>, generative modeling techniques such as energy-based models <cit.>, diffusion <cit.> and VAEs <cit.>. Prior works also focus on the few-shot aspect of imitation learning,  <cit.>, language conditioning <cit.>, co-training <cit.>, retrieval <cit.>, using play data <cit.>, using human videos <cit.>, and exploiting task-specific structures <cit.>. However, most of these prior works focus on table-top manipulation in home settings. Surgical tasks, on the other hand, pose a unique set of challenges. They require precise manipulation of deformable objects, involve hard perception problems with inconsistent lighting and occlusions, and surgical robots may often have inaccurate proprioception and hysteresis <cit.> that are less pronounced in industrial arms. 
While in principle end-to-end imitation learning could capture these variations implicitly, it is unclear what design choices are important to enable effective learning in this regime. We also note that in the dVRK community, the inaccuracies of the robot have been addressed via hand-eye calibration <cit.>. However, hand-eye calibration is effective if it is performed during both data collection and inference, in which case ground-truth kinematics data would be available at all times. In our scenario, however, we assume that the demonstration dataset has been already been collected without hand-eye calibration e.g., the large-scale clinical data and thus assume that precise ground-truth kinematics is not available during training. While hand-eye calibration can still be performed during inference and may possibly help, it is not a fundamental solution to the problem. Autonomous Surgery Prior works in autonomous surgery primarily focus on designing task-specific policies for specific tasks, such as for suturing <cit.>, endoscope control <cit.>, navigation <cit.>, and tissue manipulation <cit.>. Such developments have led to impressive demonstrations such as automating intestinal anastomosis (suturing of two tubular structures) in-vivo on a pig <cit.>. However, these methods do not typically scale well across various tasks or generalize well to varying environmental conditions. In contrast, end-to-end imitation learning offers a relatively simple solution to these shortcoming by only requiring good robot demonstrations. While prior works have also explored the use of imitation learning for surgical tasks <cit.>, its application to complex manipulation tasks like knot-tying remains unexplored, and practical design choices for implementing on the da Vinci have not been addressed. § TECHNICAL APPROACH Consider the dVRK system, as illustrated in Fig. <ref>, which includes both the robot and a teleoperation console for user interaction. The dVRK features an endoscopic camera maipulator (ECM) and two patient side manipulators (PSM1, PSM2) sharing the same robot base. Each arm is a sequential combination of set-up joints (SUJ) which are passive, followed by active joints which are motorized (Fig. <ref>). The passive joints are notoriously inaccurate due to using only potentiometers for joint measurements. The active joints use both potentiometers and motor encoders, providing improved precision. However, in general, the use of potentiometers throughout all the joints causes the forward kinematics of the arms to be inaccurate, even up to 5cm error <cit.>. Using the dVRK console, the user collects many demonstrations of a task, acquiring a dataset D = {τ_1, ..., τ_N}, where each trajectory τ_i = {(o_1, x_1, a_1), ..., (o_T, x_T, a_T)} is a collection of observation o_t, proprioception x_t, and actions a_t, collected at time step t. Specifically, the observation o_t includes left/right surgical endoscope images and left/right wrist camera images, totaling four images (Fig. <ref>), proprioception is the current pose of the PSMs w.r.t surgical endoscope tip frame denoted as x_t = {g_t^l, g_t^r}, where g = (p, R) ∈ SE(3) and {l, r} denotes the left and right grippers respectively, and actions are the commanded desired waypoints to reach specified via teleoperation control, denoted as a_t = {ĝ_t^l, ĝ_t^r }. Our objective is to learn surgical manipulation tasks via imitation learning. Given the robot’s inaccurate forward kinematics, choosing the appropriate action representation is crucial. 
To illustrate this, we investigate three action representations: camera-centric, tool-centric, and hybrid-relative as shown in Fig. <ref>. The camera-centric approach serves as a baseline, highlighting the limitations of modeling actions as absolute poses of the end-effectors. The tool-centric approach offer an improved formulation by modeling actions as relative motion, leading to higher success rates. The hybrid-relative approach further improves beyond tool-centric approach by modeling translation actions with respect to a fixed reference frame, further improving accuracy in translation movements. These approaches are detailed below: * Camera-centric actions: We model camera-centric actions as absolute poses of the end-effectors w.r.t the endoscope tip frame. The setup is similar to how position-based visual servoing applications (PBVS) are implemented and is a natural choice on the dVRK. Specifically, the objective is to learn a policy π that, given an observation o_t at time t, predicts an action sequence A_t, C = (a_t, ..., a_t+C), where C denotes the action prediction horizon. The policy can thus be defined as π: o_t ↦ A_t, C. This formulation is visually shown in Fig. <ref>. * Tool-centric actions: We model tool-centric actions as relative motion w.r.t the current end-effector frame, which is a moving body frame. Tool-centric actions can be defined as: A^tool_t, C = { (g_t^i)^T ĝ_s^i | s ∈ [t, t+C]; i ∈{l, r}} Intuitively, the desired poses ĝ_s^i are subtracted by the current end-effector poses g_t^i using the SE(3) subtraction rule, for each time up to horizon C and for each corresponding left and right grippers. There are largely two benefits to adopting this action representation. A relative motion formulation is used, which we show later in Section <ref>, is more consistent compared to the absolute forward kinematics of the arms. Also, the subtraction cancels out the endoscope forward kinematics terms and the actions can be expressed in terms of the PSMs forward kinematics only. This effectively reduces the margin for errors since less joints are involved in representing actions. However, one caveat of this approach is that the delta motion is defined w.r.t a moving reference frame. This requires the policy to implicitly localize the current end-effector orientation from image observations, and output translations and rotations along the localized principal axes, which can be a challenging task. The policy objective can be defined as π: o_t ↦ A^tool_t, C. This formulation is visually shown in Fig. <ref>. * Hybrid Relative Actions: Similar to tool-centric actions, hybrid relative actions are modeled as relative motion but w.r.t two different reference frames. Specifically, the delta translations are defined w.r.t endoscope tip frame and delta rotations are defined w.r.t the current end-effector frame. This formulation can be defined as follows: A^hybrid_t, C = {ĝ_s^i ⊖ g_t^i | s ∈ [t, t+C]; i ∈{l, r}} Where the subtraction operation ⊖ defined as: ĝ_s^i ⊖ g_t^i = ( p̂_s^i - p_t^i, (R_t^i)^T R̂_s^i ) Intuitively, the subtraction is performed between the corresponding translation and rotation elements i.e. vector subtraction for positions and SO(3) subtraction for rotations. A key distinction of this approach from the tool-centric approach lies in modeling the delta translation with respect to the fixed frame of the endoscope-tip, rather than the moving frame of the end-effector. 
This approach removes the burden for the policy to localize the end-effector's orientation to generate delta translations along the localized axes, thereby improving the quality of translation motion. The policy can be defined as π: o_t ↦ A^hybrid_t, C. This formulation is visually shown in Fig. <ref>. § IMPLEMENTATION DETAILS To train our policies, we use action chunking with transformers (ACT) <cit.> and diffusion policy <cit.>. The policies were trained using the endoscope and wrist cameras images as input, which are all downsized to image size of 224 × 224 × 3. The original input size of the surgical endoscope images were 1024 × 1280 × 3 and the wrist images were 480 × 640 × 3. Kinematics data is not provided as input as commonly done in other imitation learning approaches because it is generally inconsistent due to the design limitations of the dVRK. The policy outputs include the end-effector (delta) position, (delta) orientation, and jaw angle for both arms. We leave further specific implementation details in Appendix <ref>. § EXPERIMENTS In our experiments, we aim to understand the following key questions: (1) is imitation learning sufficient to learn challenging surgical manipulation tasks? (2) Is the dVRK's relative motion more consistent than its absolute forward kinematics? (3) Are using wrist cameras critical to achieving high success rates? (4) How well does our proposed model generalize to unseen novel scenarios? To answer these questions, we compare the task success rates of the policies trained using various action representations. We also directly compare the consistency of relative versus absolute motion by tracking a reference trajectory using the various action representations and comparing their tracking errors. We also explore the importance of wrist cameras by comparing the policy performance with and without them. Finally, we consider whether the proposed models can generalize to novel unseen scenarios, such as in the presence of animal tissues. These experiments are explored in the context of three tasks: lift tissue, needle pickup and handover, and knot-tying. Experiment Setup During data collection, the robot is set up in a reference configuration as shown in Fig. <ref>. In this configuration, 224 trials were collected for tissue lift, 250 trials for needle pickup and handover, and 500 trials for knot-tying, all collected by a single user across multiple days. During all experiments, a dome simulating the human abdomen (Fig. <ref>) was used to roughly place the arms and the endoscope in an approximately similar location using the same holes. The placement is only approximate because the holes are much larger than the endoscope and tool shaft size, and the tools have to be manually placed into the holes by moving the set-up joints. Evaluating the Consistency of Relative Motion vs. Absolute Forward Kinematics In this section we seek to understand whether relative motion on the dVRK is more consistent than its absolute forward kinematics. To test our hypothesis, we teleoperate a reference trajectory e.g., an infinity sign as shown in Fig. <ref>. This trajectory is then represented in various action representations using the formulas presented in Section <ref>. Then, we place the end-effector in the same initial pose and replay the trajectories using the various action representations under different robot configurations. These different configurations include shifting the robot workspace to the left and to the right side (Fig. <ref>). 
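As a rough illustration of how a recorded waypoint and the current pose, both expressed in the endoscope-tip frame as 4x4 homogeneous transforms, are converted into the two relative representations replayed in this experiment, consider the sketch below (our own code based on the subtraction rules in Section 3; the dVRK interfaces and the authors' implementation may differ):

import numpy as np

def tool_centric_action(g_t, g_hat_s):
    # Relative motion w.r.t. the current (moving) end-effector frame: (g_t)^-1 composed with g_hat_s
    return np.linalg.inv(g_t) @ g_hat_s

def hybrid_relative_action(g_t, g_hat_s):
    # Delta translation w.r.t. the fixed endoscope-tip frame, delta rotation w.r.t. the
    # current end-effector frame: (p_hat_s - p_t, R_t^T R_hat_s)
    dp = g_hat_s[:3, 3] - g_t[:3, 3]
    dR = g_t[:3, :3].T @ g_hat_s[:3, :3]
    return dp, dR

With the trajectory re-expressed in either relative form, it can then be replayed under the shifted workspace configurations described above.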
These workspace shifts cause the robot set-up joints to move, which are the joints prone to cause large joint measurement errors due to using only potentiometers for joint measurements. To compare their tracking errors, the replayed trajectories are annotated at the end-effector (i.e. control point) in image coordinates and plotted, as shown in the bottom row of Fig. <ref>. The plots in Fig. <ref> and the numeric RMSE results in Table <ref> show that in the reference configuration, all action representation precisely reconstruct the reference trajectory, since the set-up joints have not yet moved. However, when the robot configuration is changed by moving the set-up joints and new erroneous joint measurments are obtained, the camera-centric action representation fails to reconstruct the reference trajectory. Also, this error is not consistent for different robot configurations as shown in the trajectory plots in Fig. <ref>. For relative action representations, which include tool-centric and hybrid-relative action formulations, the reference trajectory is repeated more consistently, and their numeric errors do not vary significantly as observed in Table <ref>. In summary, this experiment shows that relative motion on the dVRK is more consistent compared to its absolute forward kinematics in the presence of inconsistent joint measurement errors. Policy Performance Using Various Action Representations We evaluate the policy performance using the various action representations on tissue lift, needle pickup and handover, and knot-tying as shown in Table <ref>. The camera-centric action representation performed poorly across all three tasks. Because the joint measurements of the dVRK are inconsistent, the end-effectors almost always failed to reach the target objects (e.g., tissue corner, needle, and string) and often dangerously collided with the underlying rubber pads. The policy trained using tool-centric action representation showed improved performance across all three tasks. However, during needle pickup + handover when large rotations were involved, the handover phase of the task often failed. In particular, after picking up the needle, the left gripper had to make a ∼90 degree rotation to transfer the needle to the opposing arm (Fig. <ref>). During this phase of the motion, the orientations of the grippers appeared correct, however, the translation motion appeared incorrect and seemed to be the cause of task failure. We conjecture this reason was due to grounding the policy actions to a moving end-effector frame. The policy is required to localize the moving end-effector orientation using image observations and generate delta translations along the localized principal axes of direction, which can be a challenging task. To fix this issue, the hybrid relative motion action was used, which grounds the translation motion to a fixed frame of the endoscope-tip. This formulation improved the translation errors during the aforementioned needle handover phase and ultimately achieved the highest success rates across all three tasks. This best performing action representation was also implemented on diffusion policy, however, the performance was not as high as ACT. Evaluating the Importance of Wrist Camera We also evaluate the importance of adding wrist cameras and their contribution to task success. To demonstrate this, we trained policies without wrist cameras on the needle pickup and handover and knot-tying tasks using the hybrid relative action formulation (Table <ref>). 
Overall, we observed that omitting wrist cameras lead to significant drop in performance. We conjecture that wrist cameras aid in scenarios where precise depth estimation is necessary. For instance, during the needle pickup and handover task, specifically during the latter phase transferring the needle, the wrist views clearly showed whether the needle was being navigated into the opposing grippers from afar. This additional view may have provided better context for task success. However, we observed that this level of information was difficult to discern from the third-person endoscopic view. Evaluating Generalization We also evaluate the ability of our models to generalize to novel scenarios, such as under more clinically relevant background using animal tissues (pork and chicken) and an unseen 3D suture pad. Most of this evaluation remains qualitative and greater details are elaborated in Appendix <ref>. In terms of quantitative results, we evaluate the hybrid-relative action formulation on the needle pick-up and handover task on a pig loin background. We observe that its overall success rate is quite high (Table <ref>), however, the quality of the motion and the accuracy of the needle grasps were much lower compared to those observed in the core experiments. In terms of qualitative results, we observe multiple instances of successful knot-tying achieved on pork tissue, and needle grasps on chicken background and on an unseen 3D pad (Appendix <ref>). § LIMITATIONS AND CONCLUSION In this work, we opt for using off-the-shelf large wrist cameras which are not clinically relevant. However, the cameras may be replaced with much smaller ones (1-2mm diameter) and its mount can be further optimized by integrating quick-release mechanisms for swift transfer between surgical tools. Also, our model is limited as it can only act based on current observations and does not have the ability to modulate different behavior based on human instruction. We hope to address these issues in future work to further advance the autonomy of surgical robots. In summary, we demonstrated an approach for imitation learning on the dVRK using its approximate kinematics data, without providing further post-processing corrections. The key idea of our approach was to rely on the more consistent relative motion of the robot, achieved by modeling policy actions as relative motion such as tool-centric and hybrid-relative actions. As mentioned in the introduction, we believe that our work is a step towards leveraging the large repository of approximate surgical data for robot learning at scale, without providing further kinematics corrections. We believe more research in this direction can further guide the path towards building general-purpose systems towards autonomous surgery. § IMPLEMENTATION DETAILS For ACT, main modifications include changing the input layers to accept four images, which include left/right surgical endoscope views and left/right wrist camera views. The output dimensions are also revised to generate end-effector poses, which amounts to a 10-dim vector for each arm (position [3] + orientation [6] + jaw angle [1] = 10), thus amounting to a 20-dim vector total for both arms. The orientation was modeled using a 6D rotation representation following <cit.>, where the 6 elements corrrespond to the first two columns of the rotation matrix. 
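For concreteness, recovering a valid rotation matrix from such a 6D prediction, via the Gram-Schmidt step described next, can be sketched as follows (a minimal NumPy illustration under our own naming; the authors' implementation may differ):

import numpy as np

def rotation_from_6d(r6):
    # r6: 6-vector holding the two predicted (possibly non-orthonormal) column vectors
    a1, a2 = r6[:3], r6[3:6]
    b1 = a1 / np.linalg.norm(a1)            # normalize the first column
    a2 = a2 - np.dot(b1, a2) * b1           # Gram-Schmidt: remove the component along b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)                   # third column from the cross product
    return np.stack([b1, b2, b3], axis=1)   # columns [b1 | b2 | b3] form a rotation matrix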
Since the network predictions may not yield orthonormal vectors, a Gram-Schmidt process is applied to orthonormalize them, and the cross product of the two resulting vectors generates the remaining third column of the rotation matrix. For diffusion policy, similar modifications are made, such as changing the input and output dimensions of the network appropriately. The specific hyperparameters for training are shown in Table <ref> and <ref>. § GENERALIZATION TO NOVEL SETTINGS
http://arxiv.org/abs/2407.12174v1
20240716205743
Non-thermal Observations of a Flare Loop-top using IRIS Fe XXI: Implications for Turbulence and Electron Acceleration
[ "William Ashfield IV", "Vanessa Polito", "Sijie Yu", "Hannah Collier", "Laura Hayes" ]
astro-ph.SR
[ "astro-ph.SR" ]
July 16th, 2024 ApJ 0000-0002-6368-939X]William Ashfield IV Bay Area Environmental Research Institute, NASA Research Park, Moffett Field, CA 94035-0001, USA Lockheed Martin Solar and Astrophysics Laboratory, Building 203, 3251 Hanover Street, Palo Alto, CA 94304, USA 0000-0002-4980-7126]Vanessa Polito Lockheed Martin Solar and Astrophysics Laboratory, Building 203, 3251 Hanover Street, Palo Alto, CA 94304, USA Oregon State University, Department of Physics, 301 Weniger Hall, Corvallis, 97331, OR, USA 0000-0003-2872-2614]Sijie Yu Center for Solar-Terrestrial Research, New Jersey Institute of Technology, 323 M L King Jr Boulevard, Newark, NJ 07102-1982, USA 0000-0001-5592-8023]Hannah Collier University of Applied Sciences and Arts Northwestern Switzerland, Bahnhofstrasse 6, 5210 Windisch, Switzerland ETH Zürich, Rämistrasse 101, 8092 Zürich Switzerland 0000-0002-6835-2390]Laura A. Hayes European Space Agency, ESTEC, Keplerlaan 1 - 2201 AZ, Noordwijk, The Netherlands § ABSTRACT The excess broadening of high-temperature spectral lines, long observed near the tops of flare arcades, is widely considered to result from magnetohydrodynamic (MHD) turbulence. According to different theories, plasma turbulence is also believed to be a candidate mechanism for particle acceleration during solar flares. However, the degree to which this broadening is connected to the acceleration of non-thermal electrons remains largely unexplored outside of recent work, and many observations have been limited by limited spatial resolution and cadence. Using the Interface Region Imaging Spectrometer (IRIS), we present spatially resolved observations of loop-top broadenings using hot (≈11 MK) Fe xxi 1354.1 Å line emission at ≈9 s cadence during the 2022 March 30 X1.3 flare. We find non-thermal velocities upwards of 65 km s^-1 that decay linearly with time, indicating the presence and subsequent dissipation of plasma turbulence. Moreover, the initial Fe xxi signal was found to be co-spatial and co-temporal with microwave emission measured by the Expanded Owens Valley Solar Array (EOVSA), placing a population of non-thermal electrons in the same region as the loop-top turbulence. Evidence of electron acceleration at this time is further supported by hard X-ray measurements from the Spectrometer/Telescope for Imaging X-rays (STIX) aboard Solar Orbiter. Using the decay of non-thermal broadenings as a proxy for turbulent dissipation, we found the rate of energy dissipation to be consistent with the power of non-thermal electrons deposited into the chromosphere, suggesting a possible connection between turbulence and electron acceleration. § INTRODUCTION According to the standard model of solar flares, magnetic free energy stored in the corona is released through the reconfiguration of highly stressed field lines via magnetic reconnection. This energy is ultimately converted to other forms and transported throughout the solar atmosphere <cit.>, driving the phenomena observed with these events. One such transport mechanism at the heart of the standard model is the acceleration of non-thermal electrons from the reconnection site to the lower atmosphere, which in turn allows for the heating and ablation of chromospheric plasma upwards into newly reconnected flare loops during the evaporation process. 
While indirect evidence for this process is observed as the increases in soft X-ray (SXR) and extrema ultraviolet (EUV) emission of flare loops <cit.>, direct evidence of non-thermal electron deposition manifests in the form of hard X-ray emission (HXR) at the location of flare footpoints in the chromosphere <cit.>. Although there is much evidence for non-thermal electron transport from the corona to the lower atmosphere <cit.>, the mechanism behind particle acceleration is still in question. Popular theories regarding this process are linked to the dynamics of the reconnection process, such as the formation of magnetic islands and plasmoids <cit.>, or termination shocks following the impact of reconnection outflows at the base of the current sheet <cit.>. These outflows, especially in the regions stemming from the flare arcade loop-top (LT) and above, are also likely to generate magnetohydrodynamic (MHD) turbulence given the large Reynolds number of coronal plasma, as has been seen in several numerical studies <cit.>,in addition to observation <cit.>. The cascade of energy to smaller scales in MHD turbulence can also energize electrons on the kinetic scale, providing a stochastic transfer of energy to electrons that would result in their acceleration <cit.>. Recent microwave (MW) spectral imaging data taken with the Expanded Owens Valley Solar Array <cit.> has brought about a paradigm shift in high-energy flare physics, allowing for spatially and temporally resolved measurements of non-thermal electrons and magnetic fields through gyrosynchrotron emission of accelerated electrons in the flaring corona. Although recent work using EOVSA has provided evidence for the acceleration site location near the post-flare LTs <cit.>, additional diagnostics are necessary to constrain the acceleration mechanisms involved. Given the high-temperature nature of flare plasma, spectroscopic observations of ultraviolet (UV) emission lines — sensitive to such high-temperature plasma — have shed much light on the properties of plasma dynamics during flares, such as mass flows and electron densities (see reviews by <cit.> and <cit.>). An observable of UV spectra that continues to remain of particular interest to the flare community is the degree to which measured line profiles broaden beyond their respective thermal widths. Due to the fluctuations in the plasma velocity field, either from large-scale bulk plasma flows or microscopic ion perturbations, the presence of enhanced line broadening is widely believed to be a signature of MHD turbulence <cit.>. While the signatures of turbulence in solar flares remain a topic of active research, investigations have started exploring the evolution of non-thermal broadenings in the context of turbulent dissipation. Starting with <cit.>, measurements in non-thermal broadening were found to decay over time, which was used to infer a rate of turbulent kinetic energy dissipation. This energy was found to be consistent with the power of non-thermal elections as measured by RHESSI, indicating a possible connection between MHD turbulence and the coronal electron acceleration site. However, the study was limited to values of non-thermal line width averaged over the total emission region using the Extreme-ultraviolet Imaging Spectrometer (EIS). 
A follow-up study by <cit.> mapped the spatial distribution and evolution of non-thermal velocities of the same event at a resolution of 4, confirming that turbulence as inferred from such velocities is not necessarily localized, but can vary along the LT and loop-leg regions of a post flare arcade. The most recent work by <cit.> used observations from the Interface Region Imaging Spectrometer <cit.> to study the evolution of non-thermal broadenings in the IRIS 1354.08 Å line and their subsequent decay in the context of MHD turbulence. Using a three-dimensional MHD simulation, they found turbulent bulk plasma flows in the current sheet and flare LT regions were responsible for the non-thermal broadening of the emission line. Despite this progress, the decay of non-thermal broadenings and their subsequent connection to the acceleration site of non-thermal electrons remains largely unexplored. The present investigation undertakes a multi-instrument approach to study the evolution of non-thermal signatures associated with the March 30th, 2022 X1.3 flare. Using observations with high spatial and spectral resolution measured by IRIS, we find instances where the spectral line displays relatively large non-thermal widths at the flare LT, with non-thermal velocities reaching upwards of 65 km s^-1. By tracking the evolution of these broadenings, we find two periods of time where the non-thermal broadenings decrease linearly with time, an evolution characteristic of MHD turbulent decay. Moreover, the initial signal is co-spatial and co-temporal with gyrosynchrotron microwave emission as measured with EOVSA. When combined with HXR measurements taken from the Spectrometer/Telescope for Imaging X-rays (STIX) aboard Solar Orbiter (SolO), we find this activity indicates a population of non-thermal electrons along the loop in conjunction with electron deposition in the chromosphere. As these non-thermal signatures exist in unison, we aim to explore their possible relationships. The paper is organized as follows. An overview of the flare and the observational data used in this study is presented in Section <ref>, followed by an analysis of the line profiles and their implications for the flare LT dynamics presented in Section <ref>. Signatures of electron acceleration using EOVSA and STIX in relation to the initial emission are analyzed in Section <ref>. We discuss the implications of the decay of non-thermal broadenings in terms of kinetic energy dissipation and its connection to the acceleration site in Section <ref>. A brief summary of the results and conclusions is given in Section <ref>. § FLARE OVERVIEW The March 30th 2022 event (SOL2022-03-30T17:23:06) was an X1.3 GOES class eruptive flare in NOAA active region AR 12975, located on the NW quadrant of the solar disk as viewed from Earth. From the GOES X-ray Sensor (XRS) 1-8 Å filter light curve in Figure <ref>, the flare began at ≈17:20 UT, peaked at 17:37:42 UT, and lasted roughly 1.5 hours. Throughout its lifetime, the flare was well observed in EUV by Atmospheric Imaging Assembly aboard the Solar Dynamics Observatory <cit.>, 4-150 keV X-rays by SolO/STIX <cit.>, and 1-18 GHz MW by EOVSA <cit.>. Beginning in the impulsive phase, STIX observed high-energy signatures in three discrete HXR emission pulsations, or phases <cit.>. Figure <ref> shows the HXR emission count rate over the 32-76 keV energy bin at a 0.5 s cadence. Given SolO's near-perihelion location at 0.33 AU, reference times have been adjusted to Earth's reference time at UT. 
Likewise, full-disk integrated MW dynamic spectra taken with EOVSA show three distinct intensity phases. The bottom panel of Figure 1 plots MW flux in three GHz bins covering the effective broadband range of EOVSA, each correlating with the phases and individual bursts seen in the 32-76 keV STIX channels. As gyrosynchrotron radiation from coronal electrons dominates EOVSA MW emission, the degree to which the MW phases correlate with the HXR emission emphasizes its use as a valuable diagnostic for electron acceleration. The following work focuses on times occurring over the second and third energy phases, which coincide with MW emission occurring in proximity to the IRIS SG slit. A more in-depth investigation of HXR and MW pulsations during the first phase can be found in recent work by <cit.>. Observations taken with IRIS <cit.> captured most of the flare's evolution in the UV, with the observing run ending at 17:54:31 UT (see the green curve in Figure <ref>). Running in a sit-and-stare mode (OBSID 3660259102), the spacecraft captured slit-jaw images (SJI) at a cadence of 28.1 seconds in the 1330 Å channel. A flare line list spectral readout including the O i window (1351.3-1356.46 Å) had a variable exposure time ranging from 0.8-8.0 seconds and a cadence of 3.9-14.7 s. The spatial resolution of the SJI and spectrograph (SG) slit was 0.166″ per pixel, with a 0.33″ wide slit. The FUV1 channel (1331.7–1358.4 Å) was binned spectrally by two onboard, resulting in a resolution of 26 mÅ. The IRIS field of view (FOV) was favorably positioned over the post-flare loop arcade during its observing run, as shown in the right panels in Figure <ref>. Here, the arcade is shown in AIA 131 Å images, which illustrate the morphology of hot coronal plasma over time. Because the event was a so-called two-ribbon flare, with the parallel ribbons roughly running in the east-west direction, the location of the slit allowed for complete spectral measurements of the southern ribbon and the westward edge of the northern ribbon. Moreover, the slit position aids in the identification of the LT emission, as spectra emanating between the two ribbons could belong to arcade plasma. The following section explores the behavior of spectra in this region. § LOOP-TOP FE XXI EMISSION The IRIS Fe xxi 1354.1 Å emission line, formed at temperatures of ≈11.5 MK, allows for spectrographic observations of the flaring corona. Like many optically thin spectra, Fe xxi is frequently observed to have a Gaussian line profile, given an isothermal Maxwellian distribution of plasma ions <cit.>. Following an initial verification of the line profile shapes, we perform Gaussian fits to the spectra to infer plasma motion diagnostics stemming from the arcade LT. Prior to fitting, level 2 raster data were processed starting at 17:24:31 UT, preceding the start of the impulsive phase, and lasting until the end of the observation run 30 minutes later. The lower IRIS SG fiducial mark, visible as dark bands on SG detector images, overlapped regions of interest in our observations and was excluded by ignoring pixels 142-144 (314.4″-314.9″) throughout our analysis to ensure no related intensity artifacts. The SG data were further normalized by exposure time to remove the effects of the variable exposures and cleaned via the de-spiking procedure in SolarSoft to remove bad pixels <cit.>. 
Lastly, because the signal was relatively weak throughout the observation, an additional spectral binning of two was applied to the SG data to reduce noise for the fitting process, resulting in a 52 mÅ spectral resolution. Although LT emission is the focus of this work, we aim to fit spectra along the entire SG slit, including emission from the flare ribbons. Previous observations have shown that ribbon Fe xxi emission is blended with other chromospheric and photospheric lines in the O i channel, including the predominant C i 1354.3 Å line <cit.>. We therefore employed the multi-Gaussian routine[Further details can be found at <https://www.pyoung.org/quick_guides/iris_auto_fit.html> ] in SolarSoft to account for the presence of these lines. Prior to the routine, weak signals with wavelength-integrated intensity within ±0.5 Å of the Fe xxi line below a threshold of 50 DN Å s^-1 were excluded from the fit. Likewise, the fit was discarded from the results if the Gaussian amplitude, or peak intensity, was less than twice the measured background intensity at a given time. Figure <ref> shows time-distance stack plots of the resulting plasma diagnostics inferred using Fe xxi spectral line parameters along the IRIS SG slit. The top three panels (a-c) show peak intensity, Doppler velocity, and non-thermal line width, respectively. Doppler velocities were calculated using the differences between Gaussian centroid positions and the rest wavelength, taken here to be 1354.08 Å following several studies of this line <cit.>. Non-thermal widths were calculated using the FWHM of the Gaussian fits, where FWHM = 2√(2ln2) σ for Gaussian line width σ. By decomposing the line width into thermal, non-thermal, and instrumental components, the FWHM can be expressed as FWHM = √( 4ln2(λ_0/c)^2 (v_th^2 + v_nth^2 ) + FWHM_IRIS^2 ), where v_nth is the non-thermal broadening expressed as a velocity. The thermal velocity v_th is given by the Doppler broadening of the line, √(2k_b T_Fe/m_i), for Fe ion mass m_i = 56 m_p and formation temperature T_Fe. Here, we take Log_10 T_Fe[K] = 7.06, the temperature at the contribution function maximum calculated using Chianti v10 <cit.>, which gives a thermal FWHM of 0.439 Å (corresponding to v_th = 58.4 km s^-1). Broadening due to instrumental effects is given by FWHM_IRIS = 26 mÅ. The flight time of IRIS through the South Atlantic Anomaly (SAA) during the observation run, resulting in increased cosmic ray exposure and SG noise, is highlighted in red. Fe xxi emission along the slit can be considered to evolve from two separate regions: the LT and the ribbons. The southern ribbon, starting at y≈295″, is outlined by the strong Doppler velocity blueshifts in Figure <ref>(b). These large upflows, reaching speeds upwards of 130 km s^-1, provide clear evidence of chromospheric evaporation. While the northern ribbon is not as clearly defined here, chromospheric lines in the O i detector images show its location starting at y≈317″ and spreading northward over time (Figure <ref>). A more detailed ribbon analysis of this flare can be found in recent work by <cit.> and <cit.>. We identify the LT emission by the distinct increase in Fe xxi intensity at y≈311″, which brightens almost in tandem with the ribbon emission. As mentioned above, emission in the region between the ribbons in a two-ribbon flare could belong to the post-flare arcade loops. This idea can be more readily seen by the location of the Fe xxi intensity along the slit illustrated in Figure <ref>, where the initial signal of the Fe xxi emission does not emanate from either ribbon. 
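To make the width decomposition above concrete, the following minimal Python sketch (illustrative only, not the analysis code used in this work) inverts the FWHM expression to recover v_nth from a fitted Gaussian width; the constants are the Fe xxi rest wavelength, thermal velocity, and instrumental FWHM quoted above, and the function name is an assumption.

import numpy as np

# Values quoted in the text: Fe xxi rest wavelength, thermal velocity at
# Log10 T = 7.06 (m_i = 56 m_p), and the IRIS instrumental FWHM.
LAMBDA0 = 1354.08          # Angstrom
V_TH = 58.4                # km/s
FWHM_IRIS = 0.026          # Angstrom
C = 2.998e5                # km/s

def nonthermal_velocity(sigma):
    """Convert a fitted Gaussian width sigma (Angstrom) to v_nth (km/s).

    Inverts FWHM^2 = 4 ln2 (lambda0/c)^2 (v_th^2 + v_nth^2) + FWHM_IRIS^2,
    with FWHM = 2 sqrt(2 ln2) sigma.
    """
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
    fwhm_sq = fwhm**2 - FWHM_IRIS**2          # remove instrumental broadening
    v_tot_sq = fwhm_sq / (4.0 * np.log(2.0) * (LAMBDA0 / C)**2)
    v_nth_sq = v_tot_sq - V_TH**2             # remove thermal broadening
    return np.sqrt(v_nth_sq) if v_nth_sq > 0 else 0.0

# Example: sigma of ~0.29 Angstrom corresponds to v_nth of roughly 70 km/s.
print(nonthermal_velocity(0.29))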
Moreover, the LT profiles themselves support a coronal source by lacking the blended photospheric lines that are typical of ribbon emission. Although this notion constrains the emission to the coronal plasma, a more rigorous definition of LT emission is provided by comparing Fe xxi intensity to AIA 131 Å emission. As seen in Figure <ref>, the morphology of AIA 131 Å is characterized by bright, localized LT structures, commonly referred to as EUV knots. These knots have long been observed along the arcades of post-flare loops <cit.>, and while their formation mechanism remains unclear <cit.>, they are widely considered to exist at the apex of such loops. To study the connection between the knots and the Fe xxi emission, slices along AIA images in three wavelengths (94 Å, 131 Å, and 193 Å) were taken over time, with the artificial slit interpolated in space to match the IRIS slit position at AIA observation times. We note that all AIA images used in this work were first prepped using routines found in the aiapy Python package <cit.>, filtered to select only non-saturated data, and deconvolved using the instrumental PSF. The resulting time-distance stack plots are shown in Figure <ref>(d)-(f), with the loop-top knots corresponding to the regions of peak intensity. Notably, the knots in each channel brighten sequentially, hinting at a cooling plasma given the characteristic formation temperatures of the three AIA channels — Log_10 T [K] = 7.3, 7.0, 6.8 for AIA 193 Å, 131 Å, and 94 Å, respectively — during flares <cit.>. Focusing on the 131 Å channel in Figure <ref>(e), we found the evolution to neatly match that of the Fe xxi peak intensity (grey contours). While this result is not surprising given that AIA 131 Å is dominated by Fe xxi 128.75 Å during flares <cit.>, the similarity between the two emission sources along the SG slit shows that the IRIS Fe xxi emission is also capturing the evolution of the AIA LT knot, thereby acting as a spectroscopic proxy by providing plasma diagnostics for the AIA 131 Å channel. Several aspects of the LT dynamics inferred from Fe xxi are worth noting. First, following a period of stationary emission, the signal begins to spread outward along the slit at nearly constant velocities, as illustrated in Figure <ref>(a). This movement implies the top-down motion of hot flare plasma from the LT towards the ribbons, and in particular, the evaporation signatures from the southern ribbon appear to connect with the southward-moving LT front much lower down at y≈302″. Second, the entire LT structure is weakly redshifted. Velocities between 304.1″ and 319.9″ range from 0.1 to 15.5 km s^-1, with an average value of 7.8 km s^-1. Lastly, the bright LT knots are preceded by a period of non-thermal line broadening, as shown in Figure <ref>(c). In particular, there exists an apparent gradient in v_nth over time, beginning as the LT emission starts to spread along the slit. This gradient is more clearly seen in Figure <ref>, where v_nth and peak intensities from each pixel in the region [312.9″-319.9″] are aligned according to their respective activation time (i.e., when the Fe xxi intensity first appears). Averaging over these pixels, we find the gradient is well described by a linear fit, with a decay rate of -0.092±0.006 km s^-2, and has an inverse relationship with the increase in peak intensity observed in the same period. 
This decay in v_nth also complements earlier observations made by <cit.> regarding a two-ribbon flare, where they did not detect any significant line broadening along the loops at the points where the IRIS raster and AIA 131 Å knots overlapped. Comparing the widths in their Figure 12 to the non-thermal velocities in our Figure <ref>(c), we see that the broadening at the peak time and location of the LT knots is comparable. Moreover, they found the non-thermal velocities of the Fe xxi line to decrease over time, falling from 42.8 to 26.3 km s^-1 over the course of 6 minutes by taking the median value of each 8-step raster at a cadence of 75 s. By using a sit-and-stare observation, this work allows for a more comprehensive analysis of this non-thermal decay. Large values of non-thermal velocities are also seen as the LT Fe xxi emission first becomes visible. The initial signal also appears to precede the formation of the EUV knot, which first manifests in the AIA 193 Å channel (Figure <ref>(d)) at ≈17:37:00 UT. As this signal is separate from the knot evolution described above, a closer look is warranted to understand the nature of the LT emission. §.§ Initial Signal Although the LT Fe xxi emission in Figure <ref> starts at around 17:34:30 UT, closer inspection of O i detector images shows the signal appearing at 17:33:25 UT. To address this discrepancy, which arises from the signal threshold selection criteria in the multi-Gaussian fitting routine, we performed a second fitting routine that automatically binned the spectra in space according to the signal-to-noise ratio (SNR) of the Fe xxi line. Two examples of this routine, henceforth the dynamic binning routine, can be found in Figure <ref>. The routine is applied to a region in detector image space corresponding to the LT emission along the slit (306.6″-314.2″) and spanning ±0.75 Å of line center (blue box). Within that region, pixels with signals greater than 0.5σ above the region are flagged to identify the Fe xxi emission (height of the red box) and are subsequently binned to create a single spectrum (black curve) that is then fit with a Gaussian (red dashed curve). The FWHM of the Gaussian fit determines the width of the red box. We note that a single Gaussian was sufficient here, as no chromospheric line blends were detected in the spectra. By automatically binning the spectra in this way, we increase the SNR without including superfluous signals and mitigate the loss of spatial resolution. The dynamic binning routine was applied to the SG data at each time during the initial signal from 17:33:25-17:36:14 UT. Figure <ref> shows the time series of the spectral line fit parameters. The LT Fe xxi emission begins with a relatively high non-thermal velocity of 65.7 km s^-1, which decreases steadily to 17.4 km s^-1 in less than three minutes. Like the line width decay preceding the EUV knots (Figure <ref>), there exists an inverse trend between v_nth and peak intensity. The non-thermal velocities here also decay linearly with time but at a faster decay rate of -0.21±0.02 km s^-2. The times of increased cosmic ray exposure were excluded from the fit to limit the effects of background noise on the downward trend. In comparison to other investigations into the decay of LT non-thermal broadening in flares <cit.>, the time scales on which we observe the decay — either here during the initial signal or that which precedes the brightest Fe xxi emission in Figure <ref> — are much shorter, on the order of minutes compared to several tens of minutes. 
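The dynamic binning routine described above can be summarized by the short sketch below. It is a simplified illustration rather than the actual implementation: the 0.5σ criterion is interpreted here as slit rows whose summed signal exceeds the regional mean by 0.5 standard deviations, and the function names and initial-guess values are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig, bg):
    return amp * np.exp(-0.5 * ((x - cen) / sig)**2) + bg

def dynamic_bin_fit(spectra, wave, nsigma=0.5):
    """Bin slit pixels with significant Fe xxi signal and fit a single Gaussian.

    spectra : (n_slit, n_wave) detector-space window around the line
    wave    : (n_wave,) wavelength grid in Angstrom
    Returns the best-fit (amplitude, centroid, sigma, background).
    """
    # Flag slit rows whose summed signal is nsigma above the regional mean
    row_signal = spectra.sum(axis=1)
    mask = row_signal > row_signal.mean() + nsigma * row_signal.std()
    binned = spectra[mask].sum(axis=0)        # single binned spectrum
    # Initial guesses: peak value, its wavelength, ~0.3 A width, median background
    p0 = [binned.max(), wave[np.argmax(binned)], 0.3, np.median(binned)]
    popt, _ = curve_fit(gauss, wave, binned, p0=p0)
    return popt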
The duration here also results in a rate of decay that is nearly an order of magnitude faster than those measured by <cit.>. Moreover, the maximum non-thermal velocities observed in this work are also lower than those reported in these other works, which reach values of 100 km s^-1 or more. In addition to the Gaussian fits, the dynamic binning routine also provides a measure of the LT Fe xxi emission's size and location during the initial signal. The size — defined by the number of pixels selected, or the height of the red box in Figure <ref> — was found to be roughly constant, with an average of 3.1″. The box's center location also deviated slightly over time, traveling northward from 310.1″ to 311.1″. We note that although our routine bins this region to increase the SNR, the signal of the LT Fe xxi emission in the O i detector images (left panels in Figure <ref>) is defined by a homogeneous structure that is well approximated by the red boxes. The initial LT emission is therefore well-resolved in our observations, as the 3.1″ size of this structure is well over the IRIS SG spatial resolution of 0.166″. Again, this is in contrast to previous investigations into the decay of non-thermal broadenings in flares using Hinode/EIS <cit.>, where the resolution, albeit sufficient to locate different regions of excess line widths along the flare loops, could have been affected by averaging over variations of macroscopic plasma dynamics within these individual regions. The broadening of spectral lines beyond their respective thermal values can have several implications for the plasma dynamics of the emitting region. If we assume the line broadening is due to thermal effects alone, then the large widths in this work would indicate plasma temperatures beyond the expected Log_10 T_Fe[K] = 7.06 thermal formation temperature. For example, the highest value of v_nth = 66 km s^-1, combined in quadrature with the thermal width, is equivalent to a Doppler-broadened line formed at Log_10 T[K] = 7.41. However, this assumption raises the question of whether it is possible for Fe xxi 1354.08 Å to emit at such high temperatures. During flares, rapid heating of tenuous coronal plasma could lead to under-ionized conditions, as the ions in the plasma — especially highly charged Fe — will not have time to adjust to equilibrium <cit.>. Under-ionization shifts the distribution of charge states, or ionization fractions, amongst different species, biasing the inferred temperatures low and potentially resulting in an underestimation of the actual plasma temperature. Recent work by <cit.> used three-dimensional MHD simulations to study the effects of non-equilibrium ionization (NEI) on Fe ions during flares, where they found instances of plasma emission dominated by Fe xxi but with true temperatures of more than Log_10 T[K]=7.25 — a difference of over 40%. While a scenario such as this could help explain some of the large non-thermal broadenings observed in this work, it is unclear whether NEI would be enough to account for such large temperatures. For instance, the 40% temperature discrepancy in <cit.> was taken behind a Petschek-type reconnection shock in the current sheet with ambient plasma densities of 5× 10^8 cm^-3. As our event is viewed on-disk, emission contributions from the current sheet are likely to be washed out by the denser, underlying arcade plasma. Moreover, the decay in excess line broadening, in either the initial signal or during the time preceding the brightest LT Fe xxi emission, suggests that the plasma is quickly cooling. 
This scenario would be more consistent with an over-ionized plasma, in which temperatures inferred from dominant Fe species would be overestimated with respect to their equilibrium plasma temperatures (i.e., lower than 11.5 MK in the case of ). The location of over-ionized plasma in the <cit.> simulations at the base of the current sheet is also more congruent with the LT emission observed in this work. If the LT plasma is assumed to be at the peak formation temperature of , then the excess broadening must instead arise from non-thermal plasma fluctuations, ranging from scales on the order of the emitting ion to large scales associated with bulk mass flows. One popular mechanism proposed for these observations during flares has been the superposition of Doppler-shifted emission from multiple bulk plasma flows. A recent study by <cit.>, however, used a multi-threaded 1D flare loop model to find that such a superposition results in asymmetric line profiles not commonly observed in during flares. The symmetric line profiles observed here at the resolution of individual IRIS pixels (i.e., Figure <ref>) also substantiate this claim. Other mechanisms, such as pressure broadening and optical depth effects, are unlikely to contribute to excess broadening in optically thin UV lines <cit.>. The most likely explanation for the excess broadening, therefore, is the superposition of Doppler-shifted emission from unresolved turbulence — a consensus reached by several studies <cit.>. Moreover, the decay of v_nth is also characteristic of MHD turbulence, where the energy cascade from large to small scales results in the dissipation of the turbulent energy contained in the plasma <cit.>. As we will show in the following sections, it is possible that this energy is being transferred into the acceleration of non-thermal electrons. § ELECTRON ACCELERATION SIGNATURES §.§ Microwave Imaging Spectroscopy Full-disk MW images of the flare were captured by EOVSA in 1-18 GHz. Each image used 451 frequency channels distributed across 50 spectral windows with equally-spaced bandwidths of 325 MHz, and was calibrated using standard procedures as described in <cit.>. Due to a calibration discrepancy in the lowest-frequency spectral windows, images were produced from 45 windows ranging from 3.5-18 GHz. Final images were obtained after subtracting the background between the second and third MW phases at 17:34:50 - 17:35:00 UT. Image reconstruction was accomplished using the CLEAN algorithm with a frequency-dependent circular beam of size 60/ν_GHz. Preliminary analysis of MW images during the period of initial LT emission found instances where the collection of compact, multi-GHz frequency sources overlapped the IRIS SG slit. Three images were selected for further analysis, corresponding to the start of the initial LT emission, the peak time of the second phase, and the first pulse in the third phase, respectively, and are shown in Figure <ref>(a)-(c). The selected times are also indicated in Figure <ref>. We see during the first two images that lower-frequency MW sources (⪅6 GHz) are co-spatial with the IRIS slit. Notably, the edge of the MW source in the lowest-frequency spectral window reaches the region of LT emission as it first begins at time 17:33:22 (blue box in panels (a) and (b)). Over time, the MW sources move southwest into and then completely off the slit during the duration of the initial signal. 
Beyond the spatiotemporal correlation between the MW and Fe xxi emission, we highlight two points regarding the morphology of the MW sources. First, the apparent southwestward movement of the MW sources along the arcade is continuous between images (a) and (b) in Figure <ref>, which belong to the second pulsation phase, while the transition between (b) and (c) is discontinuous, following a period of decreased MW emission before the start of the third phase. This behavior is consistent with the time-distance stack plots in Figure <ref>, and indicates spatial variation between the MW pulsations. Second, there exists a clear spatial organization among the GHz spectral windows. Previous investigations using EOVSA MW images — namely those from the 2017 September 10 X8.2 flare <cit.> — have shown that different GHz spectral windows can correspond to different regions of the flaring arcade, ranging from the lower regions of the flare loop-legs to high in the reconnection current sheet. In our case, the on-disk orientation complicates the interpretation of the MW sources in relation to the flare structure (i.e., loop-legs vs current sheet). Here we see that the higher-frequency emission shows a distinct correlation with the intense emission regions of the underlying northern ribbon, shown by the gray 75% AIA 1600 Å contours. Moving downwards in frequency, the MW sources reach further southward, seemingly into the LT region of the arcade. A natural interpretation of this organization follows from the standard flare model, where MW sources are tracing out the northern half of a post-flare loop, outlined by the higher frequencies emanating near the ribbon and the lower frequencies originating from higher up in the corona, possibly from the LT or cusp regions. To assist with this interpretation, we analyzed spatially resolved MW brightness temperature spectra T_B derived from different locations along the MW sources. Figure <ref>(d)-(f) shows the T_B spectra (closed and open circles) corresponding to the colored boxes in panels (a)-(c), respectively, with box locations chosen to roughly map out our assumed orientation of the MW sources along a post-flare arcade loop. Considering the beam size may be larger than the boxes drawn, the error of the spectra created for each box was estimated using the image RMS across different frequencies coupled with a systematic error — 10% of the maximum brightness temperature of each frequency — to accommodate the larger beam sizes at lower frequencies. Each of the eight spectra sampled is consistent with gyrosynchrotron radiation from a population of non-thermal electrons with a power-law energy distribution <cit.>, exhibiting positive and negative slopes at the low- and high-frequency ends, respectively. A forward fit to the spectra using the homogeneous gyrosynchrotron emission model described in <cit.> and <cit.> was performed to derive the physical parameters of the non-thermal emission. This routine employs the fast gyrosynchrotron codes outlined in <cit.> and depends on free parameters such as magnetic field strength B, non-thermal power-law spectral index δ_MW, thermal n_th and non-thermal electron densities n_nth above E_min=10 keV. The best fits to these spectra are shown by the solid lines in Figure <ref>(d)-(f), where data points having T_B < 1 MK or an SNR less than 1.8 were excluded from the fit (open circles). Trends in the inferred gyrosynchrotron fit parameters are visualized in the bottom panels of Figure <ref>. 
Here, the error bars designate the range of accepted values from the fitting routine, spanning ± one standard deviation of each model parameter. Starting with the magnetic field strengths, we find that B is lower in Box 1 and increases sequentially to Box 3, supporting the interpretation that Box 1 represents the LT region and that Boxes 2 and 3 sequentially decrease in height along the flare loop. Values for the power-law spectral index of the non-thermal electron energy distribution δ_MW are also consistent with those expected for gyrosynchrotron emission <cit.>. The non-thermal electron density n_nth is also highest in Box 1, reaching values upwards of 10^10 cm^-3. Using these values in addition to the thermal electron density from the gyrosynchrotron model, we found the non-thermal-to-total electron population fraction to be notable, with Box 1 ranging from 15-35%. This result not only supports the interpretation of the MW sources as gyrosynchrotron emission from non-thermal electrons, but also suggests that the LT is a site of efficient non-thermal electron acceleration during the flare. §.§ Hard X-ray Imaging Spectroscopy Additional evidence for electron acceleration comes from HXR data collected by the STIX instrument aboard SolO. A Fourier-imaging spectroscopy instrument, STIX measures X-ray emission in the 4-150 keV range, and is capable of imaging the Sun with a spatial resolution of ≈10″. Figure <ref>(a) shows SXR (yellow) and HXR (red) contours at [20, 40, 60]% peak emission integrated for 48 s over the first instance of LT Fe xxi emission from 17:33:14-17:34:06 UT in the Earth's time frame (but from the SolO FOV), with the images reconstructed using the MEM_GE algorithm <cit.>. The X-ray contours are overlaid on the corresponding AIA 1600 Å image reprojected to the SolO observing frame, with the contours shifted manually by (-13″, 45″) to account for inaccuracies in the STIX aspect system, as was performed in <cit.>. We find the HXR emission correlates with the intense UV ribbon emission at the southern ribbon, with the northern ribbon showing no significant HXR emission. However, we note the limited dynamic range of STIX (∼10:1) may be masking any HXR emission in the northern ribbon. The SolO spacecraft location with respect to Earth at the time of the event is shown in Figure <ref>(b) in Heliographic Stonyhurst coordinates. At the time of this observation, SolO was close to its perihelion with a heliocentric distance of 0.33 AU. The correlation between the HXR and UV ribbon emission is consistent with the model of electron energy transport in flares, where non-thermal electrons accelerated in the corona impact the chromosphere, depositing their energy and producing bremsstrahlung radiation in the form of HXRs <cit.>. To study the extent of electron deposition in the chromosphere, we performed an analysis of the HXR spectra over the time surrounding the second high-energy phase, also corresponding to the time of the initial LT Fe xxi signal. HXR spectra were fit in SolarSoft using a combination of thermal and cold thick-target non-thermal functions (f_vth and f_thick2, respectively) at 4 s intervals over the period matching the second high-energy phase from 17:32:43-17:34:47 UT. The resulting non-thermal electron flux, F_e [e^- s^-1], low-energy cutoff, E_c [keV], and electron spectral index, δ, of the single power-law, thick-target component are shown in Figure <ref>. 
Here we see a single, steady evolution of the spectral index from soft to hard and back throughout the second phase, which is indicative of a single energy pulsation, or electron acceleration event (in contrast to the many soft-hard-soft pulsations seen in the first phase <cit.>). Using the best-fit parameters, we can then estimate the power of non-thermal electrons — those with energy above E_c — deposited in the chromosphere, P_nth, by P_nth(E ≥ E_c) = k_E F_e E_c (δ-1)/(δ-2), where k_E is the energy conversion factor from keV to erg <cit.>. However, the low-energy cutoff measured here is poorly constrained, which adds significant uncertainty to the power estimate. To mitigate this error, we choose a fixed reference energy E_r=30 keV above the mean E_c of 26 keV and calculate the non-thermal electron power by integrating from E_r to ∞. To do this, we use the f_thick2_vnorm thick-target function. As a result, Equation <ref> becomes P_nth(E ≥ E_r) = k_E A E_r^2/(δ - 2), where A is the electron flux [e^- s^-1 keV^-1] at the reference energy E_r and the other parameters are the same as before. While this method underestimates P_nth, it provides a more reliable estimate of the power and its time evolution. The resulting power is shown in the bottom panel of Figure <ref>, where the trend in P_nth roughly tracks that of the HXR count rate. The peak of the power also corresponds well with the start time of the initial LT Fe xxi emission (dashed line) — a connection we explore further in the following section. § TURBULENT ENERGY TRANSFER The analysis of non-thermal signatures in this work has revealed excess line broadening in the Fe xxi line, gyrosynchrotron MW emission, and the deposition of high-energy electrons into the chromosphere — all of which have been shown to occur concurrently and in close spatial proximity. Together, these signatures are consistent with the acceleration and transport of non-thermal electrons in the LT plasma via a stochastic acceleration mechanism <cit.>. Here we return to the initial non-thermal broadening decay measured in Section <ref> to examine its significance for turbulent dissipation and kinetic energy transfer in light of the associated electron acceleration signatures. Following the analysis presented in <cit.> and <cit.>, a decrease in excess spectral line broadening can be used to estimate the dissipation of turbulent kinetic energy K in a plasma. Provided a measurement of the non-thermal velocity v_nth, we calculate K by: K = (3/2) m_i v_nth^2 n_p V, where m_i=1.3 m_p is the mean ion mass given coronal abundances <cit.>, n_p is the proton number density, and V is the plasma volume of the emitting region. While v_nth is directly measured, assumptions must be made to estimate n_p and V in order to calculate the dissipation of kinetic energy. Using the size of the low-GHz MW sources as a proxy for the LT plasma volume at the time of the initial signal (Figure <ref>(a)), we estimated the volume of the LT plasma to be V≈ 3 × 10^27 cm^3 assuming a cylindrical source with a radius and length of 6 Mm and 25 Mm, respectively. We note that the apparent size of the MW sources is biased by the beam size used in their reconstruction (Figure <ref>(a)), likely making our rough estimate of the coronal volume an upper limit on the actual MW source size. Nonetheless, the size of the high-frequency MW source from the northern ribbon footpoint agrees well with the AIA 1600 Å emission there (≈5″, or 3 Mm), thereby adding support to our estimation technique. 
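Both energy estimates used here reduce to simple expressions: the non-thermal electron power above the reference energy and the turbulent kinetic energy of the LT plasma. The sketch below evaluates them with rounded values quoted in the text; it is illustrative only, and the example inputs are the estimates discussed here rather than fit outputs.

import numpy as np

KEV2ERG = 1.602e-9            # k_E, conversion from keV to erg
M_P = 1.673e-24               # proton mass, g
M_I = 1.3 * M_P               # mean ion mass for coronal abundances

def p_nonthermal(A, E_r, delta):
    """Non-thermal electron power P_nth(E >= E_r) = k_E A E_r^2 / (delta - 2).

    A : electron flux at E_r [electrons / s / keV], delta : spectral index.
    """
    return KEV2ERG * A * E_r**2 / (delta - 2.0)

def turbulent_kinetic_energy(v_nth, n_p, V):
    """K = (3/2) m_i v_nth^2 n_p V, with v_nth in cm/s, n_p in cm^-3, V in cm^3."""
    return 1.5 * M_I * v_nth**2 * n_p * V

# Example: v_nth = 65.7 km/s, n_p ~ 0.7e11 cm^-3, V ~ 3e27 cm^3 gives
# K of a few times 1e28 erg, consistent with the peak value quoted below.
print(turbulent_kinetic_energy(65.7e5, 0.7e11, 3e27))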
The coronal volume of the SXR source (≈4 × 10^27 cm^3; Figure <ref>(a)) also agrees with our MW source estimate when using the 50% contour level. To calculate the proton number density, we assume n_p = n_e following the approach in <cit.>, and then use the differential emission measure (DEM) diagnostic to estimate the electron number density n_e in the LT plasma. The details of this analysis are described below. §.§ DEM Density Diagnostic To understand the evolution of electron number density over time, DEM maps were first created using AIA images in five passbands (94, 131, 193, 211, and 335 Å) over the period 17:29:00-17:53:48 UT. Due to saturation effects, all 171 Å images were excluded from the analysis. Images used for a single DEM map were selected at 12 s intervals within the period, with each channel image selected having the smallest time difference from a given interval. In addition to the AIA image processing described in Section <ref>, each image was further corrected for degradation and normalized by the exposure time. DEM calculations in this work were computed using the Python implementation[Available online at <https://github.com/ianan/demreg>] of the regularized inversion method first described in <cit.>. AIA temperature responses used in the calculations were constructed using a convolution between the effective AIA area taken from SolarSoft and atomic data from CHIANTI v10 as described in <cit.>. Because we speculate the initial LT plasma is coronal plasma heated to temperatures above 10 MK, as opposed to evaporated chromospheric plasma, temperature responses were calculated assuming a coronal abundance <cit.>. Each DEM map was internally calculated in emission measure (EM) space and constrained using the EM loci curves of each AIA channel — a method found to be the most robust following a series of tests of different DEM calculations. Finally, all DEM maps were background subtracted using a DEM map calculated at the pre-flare time 16:45:00 UT <cit.>. Additional details regarding the DEM method, AIA response functions, and abundance assumptions can be found in Appendix <ref>. Panels (a) and (b) in Figure <ref> show two EM maps calculated from the DEM results at the start time of the initial signal and at the peak time of the Fe xxi emission, respectively, over the temperature range Log_10 T [K] = 6.83-7.19. The dashed lines indicate the IRIS slit position and the black boxes outline the location of the LT Fe xxi emission. At 17:33:24 UT, there is relatively little EM associated with the initial IRIS Fe xxi emission, which changes as the flare progresses and regions of high EM shift westward over the IRIS slit. To calculate the electron number density n_e, we use the following: n_e = √(EM/(0.83 ℓ)), where EM is the total emission measure integrated over all temperature bins in a single DEM map (Log_10 T [K] = 5.7-7.6) and ℓ is the line-of-sight (LOS) column depth of the emitting plasma, assuming a fully ionized plasma with a relative hydrogen to free electron density ratio of 0.83 <cit.>. Here we take ℓ = 12 Mm, following our coronal volume estimate. To study the number density evolution across the IRIS slit, we calculate n_e along an artificial slit over the DEM maps matching the SG slit location over time. The resulting time-distance stack plots of n_e are shown in Figure <ref>(c). In addition to n_e, we also calculate the DEM-weighted mean temperature ⟨ T ⟩ using the following: ⟨ T ⟩ = ∑_i DEM(T_i) T_i Δ T / ∑_i DEM(T_i) Δ T = ∑_i DEM(T_i) T_i Δ T / EM, with the resulting time-distance stack plot shown in Figure <ref>(d). 
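Given a DEM(T) array from the regularized inversion, the density and DEM-weighted temperature defined above follow directly. The sketch below assumes a DEM sampled on a log-temperature grid and uses the 12 Mm column depth adopted in the text; variable names are illustrative.

import numpy as np

def density_and_temperature(dem, logT, ell_cm):
    """Electron density and DEM-weighted mean temperature from a DEM(T) array.

    dem    : DEM values [cm^-5 K^-1] on the log-temperature grid logT
    logT   : log10 of the temperature bin centers [K]
    ell_cm : assumed line-of-sight column depth [cm]
    """
    T = 10.0**logT
    dT = np.gradient(T)                      # approximate bin widths in K
    em = np.sum(dem * dT)                    # total EM [cm^-5]
    n_e = np.sqrt(em / (0.83 * ell_cm))      # n_e = sqrt(EM / (0.83 l))
    T_mean = np.sum(dem * T * dT) / em       # DEM-weighted mean temperature
    return n_e, T_mean

# Example: a total EM of ~2e31 cm^-5 over a 12 Mm column gives
# n_e ~ 1.4e11 cm^-3, comparable to the peak density reported below.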
Notably, the density begins to increase at the point y≈ 311— consistent with the location of the initial LT emission — and continues to increase and spread spatially across the SG over time, reaching a peak value of n_e≈ 1.5 × 10^11 cm^-3 at roughly 17:39:00 UT. The region of peak density also corresponds to the LT location of the bright EUV knots as seen in Figure <ref>(d)-(f). Preceding the increase in n_e is a rapid increase in the LT plasma temperature, with values ⟨ T ⟩ reaching Log_10 [K] ≈7.4 at 17:32:00 UT. The temperature continues to remain elevated above Log_10 [K] ≈7.2 across the LT region, and gradually decreasing over time. As a means to directly compare the values of the number density and temperature at the site of the initial non-thermal broadening, we performed a separate DEM calculation on the region corresponding to the black boxes in Figure <ref>(a) and (b). Defined in a 2x2 area centered on the IRIS SG slit over time and y=311.7, the corresponding AIA pixels were selected to create a 3x3 pixel box, given the 0.6 per pixel resolution of AIA, which was then averaged over. The resulting time series of n_e and ⟨ T ⟩ are shown in Figure <ref>(e). In a consistency check, we find the values of n_e in the range of 0.5-1.5× 10^11 cm^-3 are in agreement with the values of thermal electron population density derived from EOVSA MW spectra (Figure <ref>). Additionally, using the EM_SXR from the f_vth of the thermal STIX source and our coronal volume estimate, we find the number density inferred from the SXRs are in the range n_e = √(EM_SXR/V) = [0.3-0.5]× 10^11 cm^-3 during the period of the initial signal, which also agrees with our DEM estimate. We further note that the calculated densities on the order of 10^10-11 cm^-3 at the time of the initial emission suggest that any effects due to NEI on the line formation would likely be small. §.§ Kinetic Energy Dissipation Following the estimate of the electron number density evolution in the LT plasma above, the dissipation of turbulent kinetic energy in the initial signal is calculated using Equation (<ref>). Using the time series of the non-thermal velocity v_nth inferred from the dynamic binning routine (Figure <ref>) and the associated electron number density n_e from the DEM analysis, the resulting time series of K is shown in Figure <ref> (orange). The peak kinetic energy is found to be 2.8±0.8×10^28 erg, which then decreases to 0.6±0.1×10^28 erg over the next 100 s. As n_e is increasing over this period, from 0.7×10^11 to 0.9×10^11 cm^-3, the decay in K is slower than the decay in v_nth inferred in Section <ref>. Furthermore, as the total kinetic energy calculation relies on an estimate of the plasma volume, values of kinetic energy density (K/V) are also provided in Figure <ref>, read off the right axis, with energy densities roughly on the order of 2-9 erg cm^-3. This reduction allows for a more direct comparison to the values derived in <cit.>, who found kinetic energy densities nearly an order of magnitude lower using Fe xxiv 255 Å but for an off-limb X1.2-class flare. However, this discrepancy is likely due to the lower values of n_e calculated in their work (≈10^9 cm^-3 in regions of high K), as similar magnitudes of v_nth were found in both studies. To put the kinetic energy dissipation in context, we compare it to the power imparted on the chromosphere by non-thermal electron deposition during the second energy phase (bottom panel Figure <ref>). 
Time-integrating P_nth over each 4 s interval yields the energy time series shown in Figure <ref> (purple). The non-thermal energy peaks at 2.4±0.4×10^28 erg, which is consistent with the peak of the turbulent kinetic energy. While the cadence of the IRIS observation is higher than that of the time-integrated HXR fitting parameters, there is also a remarkable agreement between the evolution of the two energy estimates starting at the beginning of the initial signal and lasting until the end of the second phase. Despite this agreement, we note that because the reference energy E_r used to calculate our non-thermal electron power P_nth is larger than the inferred low-energy cutoffs E_c during this period (Figure <ref>), the values of P_nth, and subsequently the integrated non-thermal electron energy, are likely underestimated to a slight degree. Assuming our coronal volume estimate is accurate, the agreement between the two energy curves in Figure <ref> suggests the dissipation of turbulent kinetic energy in the LT plasma is enough to drive the acceleration of non-thermal electrons into the chromosphere. Not only does this connection hint at a causal relationship, but it also suggests that energy transfer from turbulence to electron acceleration is quite efficient, given the virtual lack of any time delay between the driver (turbulence) and the response (non-thermal electron deposition). This efficiency is supported by numerical experiments of stochastic acceleration in turbulent flare plasmas <cit.>, which have demonstrated sub-second timescales for particle acceleration. However, our coronal volume estimate assumes the kinetic energy derived from the non-thermal broadening is contained within a volume representative of the MW and SXR sources. As we showed in Section <ref>, the spatial extent of the initial signal along the IRIS slit is ≈3.1″ — a fraction of the overall extent of the MW emission. While the smaller size of the Fe xxi emission implies that the turbulent kinetic energy may be overestimated, the IRIS slit may only be sampling a small portion of the turbulent LT plasma responsible for the electron acceleration. Nonetheless, when combined with the likely underestimation of P_nth, we cannot rule out the possibility that the turbulent kinetic energy inferred by IRIS is less than the energy of chromospheric electron deposition measured by STIX, suggesting turbulence is not the sole acceleration mechanism at work. The dependency of the turbulent kinetic energy K on the coronal volume V is put into context in Figure <ref>, where the observed peak non-thermal velocity value is used to calculate possible values of K using an estimated range of coronal volumes, as well as electron number densities n_e found from our DEM analysis (Figure <ref>(e)). The volume range is constructed such that the lower bound is a cylindrical volume calculated using a diameter of 3.1″ from the IRIS observation and the upper bound is the volume of the coronal SXR source as measured by STIX. In addition to the non-thermal energy derived from HXR emission, a comparison can be made between the turbulent kinetic energy density and the non-thermal energy density inferred from the EOVSA MW spectra. As detailed in <cit.>, the non-thermal energy density w_nth is calculated using the following: w_nth = k_E n_nth E_min (δ_MW-1)/(δ_MW-2), given the best-fit parameters from the MW spectra shown in Figure <ref> and a minimum energy cutoff of E_min=10 keV used in the GSFIT routine. 
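For reference, the expression above amounts to a one-line calculation; the example n_nth and δ_MW values in the sketch are illustrative placeholders of the right order of magnitude, not the actual GSFIT best-fit parameters.

KEV2ERG = 1.602e-9   # keV to erg

def w_nonthermal(n_nth, E_min_keV, delta_mw):
    """Non-thermal energy density w_nth = k_E n_nth E_min (delta-1)/(delta-2) [erg cm^-3]."""
    return KEV2ERG * n_nth * E_min_keV * (delta_mw - 1.0) / (delta_mw - 2.0)

# Example: n_nth = 1e10 cm^-3, E_min = 10 keV, delta_MW = 3.5 gives ~2.7e2 erg cm^-3,
# the same order of magnitude as the peak value quoted below.
print(w_nonthermal(1e10, 10.0, 3.5))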
The resulting non-thermal energy density was found to be highest in the LT region, increasing from 4.7×10^1 to 2.4×10^2 erg cm^-3 between times 17:33:22-17:33:48 (Figure <ref>). This increase in w_nth occurs concurrently with the decay in kinetic energy, fitting the narrative of non-thermal electron activity driven by the dissipation of turbulence. However, the MW non-thermal energy density attains values nearly two orders of magnitude higher than the kinetic energy densities inferred from the non-thermal broadening. This second order of magnitude discrepancy between the non-thermal and kinetic energy densities was also observed in <cit.>, who attributed the difference to the emission regions from which the MW and EUV — responsible for the two energy estimates — emanate. In their work, which focused on the September 10th, 2017 event, MWs were observed to stem from the cusp region near the LTs and large values of non-thermal broadening from the Fe xxiv 255 Å EIS line (≈100 km s^-1) was primarily confined to the overlying plasma sheet (see also <cit.>), suggesting that the cusp region contained a higher degree of non-thermal electron activity. Although the on-disk nature of our observation hinders a direct comparison between flare geometries, the location of the MW sources in Figure <ref> in combination with the HXR footpoint emission in Figure <ref> creates a coherent picture of a loop-like structure, where the MW sources trace out one half of the loop from the corona (low-GHz) to the northern ribbon footpoint (high-GHz) and the HXR emission highlights the location of the southern ribbon footprint (This picture is also consistent with a magnetic mirroring effect, where accelerated electrons reflect off high-B regions in the northern ribbon and precipitate into the southern ribbon, possibly explaining the opposing asymmetry seen in the MW and HXR sources). If we adopt the reasoning presented in <cit.>, then it follows the non-thermal broadening in the should lie above the MW sources, either higher up in the LT/cusp region or reaching into the current sheet. We note that this location of non-thermal broadening also agrees with simulations in <cit.>, who found the source of large non-thermal broadening in the IRIS line to be the highly turbulent plasma located along the current sheet and LT/Cusp region. The location of the SG slit shown in Figure <ref>, slightly to the right of the MW sources, further supports the notion of vertical separation between the non-thermal signatures measured by IRIS and EOVSA when considering the projection of the arcade on the disk and the LOS of these observations. A schematic of this interpretation is illustrated in Figure <ref>. The configuration of the drawn flare loop orientation is further justified by that produced from a magnetic field extrapolation (left panel) performed using the GX simulator modeling package <cit.>, which employed a nonlinear force-free field reconstruction of the magnetic field from the preflare SDO/HMI magnetogram at 17:23:36 UT. Altogether, these findings indicate the co-spatiotemporal presence of non-thermal signatures in the LT and cusp regions. We interpret the decay in non-thermal velocity as the dissipation of turbulent energy in these regions, which can provide the required energy necessary to accelerate electrons into the lower solar atmosphere. 
Nonetheless, because the SG slit is positioned over the flare arcade at a single location, we can only infer that stochastic acceleration occurs at the time and place in which we observe the initial decay in broadening and the associated MW and HXR signals. In other words, our analysis only pertains to the second high-energy phase (see Figure <ref>), where we see the compact MW source move into the SG slit and not the entire flare. Moreover, the decay in non-thermal velocity suggests that the turbulence in the LT/Cusp region is dissipating and no longer being generated — a notion potentially at odds with investigations that see persistent reconnection outflows generating turbulence for sustained periods <cit.>. While it is likely that reconnection is ongoing following the generation of the broadening, as evidenced by the third high-energy phase, the movement of the MW and HXR sources westward along the flare arcade suggests that the mechanisms responsible for their formation <cit.> are discrete and localized along the current sheet. The generation of turbulence observed in the line under this assumption would not be ongoing at the location of the IRIS slit and would therefore be allowed to decay freely. An analysis of such extended outflows, albeit beyond the scope of the present work, would be a promising focus for future research. § SUMMARY AND CONCLUSION The X1.3 class flare on March 30th, 2022 analyzed in our study exhibited non-thermal signatures across measurements taken by IRIS, EOVSA, and STIX. Our findings highlight significant excess broadening of the 1354.08 Å spectral line emanating from the flare arcade's LT region, reaching values upwards of 65 km s^-1. These non-thermal velocities were also observed to decay over time at two different periods, beginning at the onset of LT emission and before peak line emission. We found the evolution of the latter period to coincide with the appearance of bright EUV knots in AIA 94, 131, and 193 Å channels, offering a new spectroscopic insight into this phenomenon. Our observations of the LT emission present a unique perspective on the dynamics of solar flares, with the high spatial resolution of IRIS and a fast cadence of ≈9 s, allowing for the direct measurement of non-thermal broadening in a well-resolved LT structure. The rapid decay of non-thermal broadening in the LT plasma, on the order of minutes, is significantly quicker than the tens of minutes reported in earlier studies. Moreover, confirming the LT emission from a well-resolved structure challenges earlier works with lower resolution spectrometers and lower cadence where flare emissions might have been interpreted as a superposition of flows from different macroscopic locations along the flare arcade and LOS. This highlights the importance of spatial resolution and cadence in understanding solar flare dynamics. Excess broadening in the line was interpreted as the presence of turbulent plasma in the LT region, with its decay indicative of energy dissipation via a turbulent cascade. Following methodologies from <cit.>, we estimated the LT plasma's peak turbulent kinetic energy to be approximately 2.8±0.8×10^28 erg. This energy correlated well with the time-integrated non-thermal electron power P_nth deposited in the chromosphere, as inferred from STIX HXR measurements, which peaked at 2.4±0.4×10^28 erg. 
As the temporal decay in turbulent kinetic energy also matched that of the integrated electron power, the agreement between the two energies suggests a relationship between the dissipation of turbulent energy and the acceleration of non-thermal electrons, thereby supporting stochastic acceleration theories. We note, however, that this interpretation is contingent on two factors: first, the assumption of a coronal volume estimate, which may overestimate the turbulent kinetic energy as observed by IRIS; and second, the values of P_nth, which are potentially underestimated in our calculations. The connection between the turbulent plasma and non-thermal electron populations was further supported by the simultaneous presence of gyrosynchrotron MW emission from EOVSA, where the MW sources were found to be co-spatial and co-temporal with the initial LT Fe xxi emission. Analysis of MW spectra at different times and locations along the flare arcade indicated a significant non-thermal electron fraction in the LT plasma, with the non-thermal-to-total electron fraction reaching upwards of 35%. Although this high percentage indicates an efficient electron acceleration mechanism at work and also places plasma turbulence in the vicinity of these non-thermal electrons at the flare LT, the non-thermal energy density inferred from the MW sources was found to be substantially higher than the turbulent kinetic energy density inferred from the Fe xxi broadening. This discrepancy, first seen in <cit.>, supports the idea of a vertical variation in the non-thermal energy density across the flare arcade, where the formation region of Fe xxi belongs to a region of lesser non-thermal energy density, possibly higher in the solar corona than the underlying MW emission, as summarized in our cartoon in Figure <ref>. Together, these findings contribute to a deeper understanding of non-thermal processes in the flare LT region, setting a foundation for future investigations and modeling efforts. The forthcoming MUSE mission, with its advanced EUV spectroscopic capabilities and 35 slits <cit.>, promises to enhance our ability to study these processes in greater detail. W.A. is supported by NASA contract NNG09FA40C (IRIS). VP acknowledges support from NASA Heliophysics System Observatory Connect Grant #80NSSC20K1283, NASA ROSES Heliophysics Guest Investigator program Grant #80NSSC20K0716, and from NASA under contract NNG09FA40C (IRIS). S.Y. is supported by NASA grant 80NSSC20K1318 and NSF grant AST-2108853 awarded to NJIT. H.C. is supported by the Swiss National Science Foundation Grant 200021L_189180 for STIX. L.A.H. is supported by an ESA Research Fellowship. IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research Center and major contributions to downlink communications funded by ESA and the Norwegian Space Center. EOVSA operation is supported by NSF grants AST-1910354 and AGS-2130832 to NJIT. Solar Orbiter is a mission of international cooperation between ESA and NASA, operated by ESA. The STIX instrument is an international collaboration between Switzerland, Poland, France, Czech Republic, Germany, Austria, Ireland, and Italy. AIA and HMI data are provided courtesy of NASA/SDO and the AIA and HMI science teams. This work has benefited from the use of NASA’s Astrophysics Data System. CHIANTI is a collaborative project involving researchers at the universities of Cambridge (UK), George Mason, and Michigan (USA). 
This research used version 5.1.1 of the SunPy open-source software package <cit.>. The authors thank the referee for suggestions that improved the manuscript. § DEM METHODS Figure <ref> shows the AIA temperature response functions used for the DEM calculations in Section <ref> (solid). These functions were calculated using the convolution method described in <cit.> with ion abundances calculated using the CHIANTI v10 atomic database and assuming coronal abundances <cit.>. We note that the response function for the 171 Å channel was not calculated, as the channel was excluded from the DEM calculations due to saturation effects. Temperature response functions calculated using the SolarSoft routine are also shown in Figure <ref> (dashed) for comparison. As AIA 171 Å images were not included in our analysis, we are consequently neglecting any emission contributions from the Fe IX 171 Å line (peak Log_10 T [K] =5.9). To ensure the validity of the DEM calculations in the absence of the 171 Å channel, consistency checks were performed by comparing the predicted emission from the DEM results to the input AIA data. An example of this check is shown in Figure <ref> for the pixels averaged within the box in Figure <ref>(a) at time 17:33:24 for two inversion methods: the regularized inversion method described in <cit.> and the same method constrained by the EM loci curves of the five AIA channels, calculated internally in EM space. While both methods were able to reproduce the observed AIA emission well in the absence of the 171 Å channel, the EM loci-constrained method was found to be the most robust with a reduced χ^2 of 0.99. An additional consistency check was done to test our assumption of coronal abundances, given that the ablation of chromospheric material into the corona from evaporation is likely to occur during flares. Performing the same DEM calculations above, but using AIA temperature response functions recalculated with a photospheric abundance <cit.>, we found both methods performed considerably worse than those using a coronal abundance (Figure <ref>). However, while this test agrees with our initial assumption for the use of DEMs in this paper, we acknowledge that a more comprehensive study of the DEM calculations using different abundances is beyond the scope of the current work. aasjournal
http://arxiv.org/abs/2407.12391v1
20240717081147
LLM Inference Serving: Survey of Recent Advances and Opportunities
[ "Baolin Li", "Yankai Jiang", "Vijay Gadepally", "Devesh Tiwari" ]
cs.DC
[ "cs.DC", "cs.AI" ]
IEEEexample:BSTcontrol LLM Inference Serving: Survey of Recent Advances and Opportunities Baolin Li1, Yankai Jiang1, Vijay Gadepally2, Devesh Tiwari1 1 Northeastern University, 2 MIT July 22, 2024 =========================================================================================================================================================== § ABSTRACT This survey offers a comprehensive overview of recent advancements in Large Language Model (LLM) serving systems, focusing on research since the year 2023. We specifically examine system-level enhancements that improve performance and efficiency without altering the core LLM decoding mechanisms. By selecting and reviewing high-quality papers from prestigious ML and system venues, we highlight key innovations and practical considerations for deploying and scaling LLMs in real-world production environments. This survey serves as a valuable resource for LLM practitioners seeking to stay abreast of the latest developments in this rapidly evolving field. § INTRODUCTION Large language models (LLMs) have rapidly gained immense popularity since the release of ChatGPT. However, deploying and scaling these powerful AI models in production environments has presented significant challenges. The substantial computational and memory demands of LLMs often necessitate the use of high-performance GPU servers, yet even these resources can be strained by the sheer size of the models and the lengthy text sequences they process. The growing demand for LLM-powered applications has fueled a surge of research into LLM serving systems. In this paper, we present a comprehensive survey of these systems, focusing on advancements since 2023. While previous LLM system research existed, the landscape has dramatically shifted within the last year. Nearly every major system conference now features dedicated sessions on LLMs, with a particular emphasis on serving systems due to their widespread deployment and the importance of low-latency performance for user experience. The sheer volume of research published in such a short timeframe makes it difficult for LLM practitioners to stay abreast of developments and identify the most promising approaches for real-world deployment. This survey aims to provide a clear overview of the current state of the art, highlighting key areas of innovation and practical considerations for production environments. In this survey, we have meticulously selected all the high-quality research papers focused exclusively on LLM serving systems, published between January 2023 and June 2024. Our selection criteria prioritized publications from prestigious machine learning (ML) and system venues (e.g., ASPLOS, MLSys, OSDI), as well as impactful arXiv submissions from established industry and academic research groups. Notably, we exclude studies that modify LLM decoding algorithms (e.g., multiple decoding head <cit.>, lookahead decoding <cit.>, key token selection <cit.>) and solely focus on system-level enhancements that maintain the integrity of standard LLM decoding processes. While a few prior LLM inference system surveys exist <cit.>, these generally cover a broader scope and do not specifically emphasize system research. Additionally, many of the papers discussed in those surveys involve decoding algorithm modifications that can affect model accuracy. Our survey, in contrast, explicitly focuses on system-level solutions that do not alter the core LLM decoding mechanisms. 
Moreover, our survey encompasses a significant body of research published after the release of these earlier surveys, thus providing a more comprehensive and up-to-date overview of the field. We have organized the recent advances in LLM serving systems into four distinct categories, each with its own set of challenges and opportunities, which we will delve into in the following sections. KV cache and memory management. Efficient memory management is crucial to handle the dynamic growth of KV caches, which store previous key-value pairs to accelerate LLM inference. Recent research explores non-contiguous memory allocation, distributed management, and intelligent caching strategies to optimize memory utilization. Compression techniques are also being investigated to reduce the overall memory footprint, ultimately enhancing LLM performance and scalability by allowing for longer context lengths and lower memory overhead. LLM computation optimization. Efforts to optimize LLM computation focus on request batching to maximize resource utilization. Additionally, disaggregating the inference process into prefill and decode phases enables independent optimization and hardware specialization. Model parallelism, employing various techniques, facilitates efficient execution across multiple GPUs. These strategies collectively enhance LLM execution efficiency and hardware utilization. Cloud LLM deployment. Cloud platforms provide a scalable and cost-effective foundation for LLM inference. However, challenges remain in optimizing costs and resource utilization. Research is addressing this through techniques such as spot instance management, serverless optimizations, intelligent resource allocation, and power management. Additionally, strategies like cloud task co-location and token delivery optimization enhance user experience and overall cloud efficiency. Emerging research fields. Emerging areas in LLM serving include retrieval-augmented generation (RAG) and mixture-of-experts (MoE) inference. RAG faces challenges related to the computational overhead of increased input lengths due to retrieved documents, while MoE inference grapples with efficient communication and load balancing between distributed experts. Other research efforts address ethical concerns in LLM serving, such as fairness and environmental sustainability, for which we provide a comprehensive list of relevant studies. § BACKGROUND §.§ Overview of Transformer-based LLM Architecture Mainstream LLMs are built on multiple transformer blocks <cit.>. Each identical transformer block primarily consists of self-attention-based Multi-head Attention (MHA) operations and Feed-Forward Networks (FFN). Initially, the transformer applies three weight matrices (W^Q, W^K, W^V) to the input X (encoded representation of the input text sequence) to compute queries Q, keys K, and values V. Then, the self-attention is calculated as: Q = XW^Q; K = XW^K; V = XW^V Attention(Q, K, V) = softmax(QK^T/√(d_k))V This is the calculation of one attention head (H_i), and multiple heads are concatenated and linearly projected into the final attention result: H_i = Attention(XW^Q_i, XW^K_i, XW^V_i) Multi-Head Attention = Concat(H_1, H_2, ..., H_h)W^O MHA allows the transformer to attend to different parts of the sequence in different representational spaces. Next, following the MHA block, the normalized output is fed into a position-wise FFN, which consists of two linear transformations with a ReLU activation.
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2 The FFN can be applied separately to each position, further refining the information captured by the MHA block. The output will have the same dimension as the input X. Fig. <ref> provides a visualization of the LLM architecture. §.§ Overview of LLM Inference LLM inference generates output tokens autoregressively <cit.> based on the initial input sequences P, referred to as Prompts. This process is divided into two major phases: the prefill phase and the decoding phase. The prefill phase is essential for setting up the model to generate text efficiently, while the decoding phase handles the generation of subsequent tokens. We visualize this process in Fig. <ref>. The prefill phase starts with a tokenized and encoded representation of the prompt going through layers of the transformers. Note that the generated key-value (KV) pairs of all transformer blocks are cached during the prefill phase, referred to as KV cache <cit.>. It ensures that the model can generate tokens more efficiently without recomputing the KV vectors of all previous tokens. Let the input prompt P = [p_1, p_2, ..., p_n], during the prefill phase, a new token is generated, denoted as P_n+1, and the new K and V are cached as [(k_1,v_1), (k_2,v_2), ..., (k_n,v_n)]. The decoding phase is where the model generates new tokens autoregressively. The LLM predicts the next token, appends the newly generated token p_n+1 to the original prompt P, and updates the KV cache. Note that the KV cache grows linearly with the number of tokens generated. The autoregressive LLM inference process is outlined in Algorithm <ref>. § MEMORY MANAGEMENT AND CACHING In this section, we explore memory management techniques to mitigate memory footprint and access overhead during LLM inference. While model parameters remain constant and intermediate activations are relatively small, the KV cache – used to store attention information – grows substantially with the number of generated tokens. Therefore, recent research has focused on efficient KV cache management to enable larger batch sizes and longer context processing. §.§ Efficient Management of KV Cache PagedAttention <cit.> identifies that the KV cache dynamically grows and shrinks over time as the model generates new tokens, but the request generation lifetime and length are not known a priori. Thus, it proposes to manage the KV cache as non-contiguous memory blocks. Compared to contiguous KV cache, non-contiguous KV cache management significantly reduces the memory waste on pre-allocation and fragmentation. Due to its efficient memory management using pages, PagedAttention has become an industry norm in LLM serving frameworks, supported by TGI <cit.>, vLLM <cit.> and TensorRT-LLM <cit.>. Despite its success, researchers still identify its weakness as PagedAttention requires rewriting attention kernels to accommodate the non-contiguous memory blocks, its memory manager adds software complexity and redundancy, and introduces performance overhead. Recently, vAttention <cit.> was proposed to retain the KV cache in contiguous virtual memory. It leverages pre-existing low-level system calls for demand paging, which is a standard operating system feature to reduce the software complexity. vAttention overlaps memory allocation with computation, pre-allocates memory ahead of time, and defers memory reclamation to hide the latency of memory allocation and improve the overall performance of the system. 
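To ground both the attention equations above and the prefill/decode split that these memory managers serve, the following Python sketch implements single-head scaled dot-product attention together with a toy greedy decoding loop that reuses a growing KV cache; the per-token growth of K and V in the loop is exactly the dynamically sized object that PagedAttention-style allocators manage. It is a minimal illustration under simplifying assumptions (random weights, tiny dimensions, greedy argmax sampling, no multi-head projection, FFN, normalization, or batching) and does not correspond to any specific system discussed here.

import numpy as np

d, vocab = 16, 50                          # hypothetical hidden size and vocabulary size
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
W_out = rng.standard_normal((d, vocab)) * 0.1
embed = rng.standard_normal((vocab, d)) * 0.1

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(q, K, V):
    # Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, for a single head
    scores = q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

def prefill(prompt_ids):
    X = embed[prompt_ids]                  # (n, d) prompt representation
    K, V = X @ Wk, X @ Wv                  # computed once and cached for the whole prompt
    out = attend(X @ Wq, K, V)
    return (K, V), out[-1]                 # KV cache plus last hidden state

def decode(kv_cache, hidden, steps=5):
    K, V = kv_cache
    generated = []
    for _ in range(steps):
        tok = int(np.argmax(hidden @ W_out))   # greedy next-token choice
        generated.append(tok)
        x = embed[tok]
        K = np.vstack([K, x @ Wk])             # KV cache grows by one row
        V = np.vstack([V, x @ Wv])             # per generated token
        hidden = attend(x @ Wq, K, V)          # only the new query is computed
    return generated

kv, h = prefill(np.array([1, 2, 3, 4]))
print(decode(kv, h))

Real engines run this loop for many requests concurrently, which is what makes cache size, growth, and fragmentation a first-order concern.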
Besides system memory management, other efforts have addressed application-specific KV cache efficiency. Prompt Cache <cit.> designs specific prompt schema for users to submit their requests, so that attention states from these pre-defined modules (e.g., system prompt) can be reused across multiple prompts. AttentionStore <cit.> identifies that human interactions with applications such as ChatGPT are mostly multi-turn conversations. However, LLM engines would discard the KV cache when the user session becomes inactive to free up HBM space for other active sessions and re-compute the whole KV cache again when the session becomes active, leading to extra pre-filling costs. AttentionStore utilizes slower-mediums (e.g., CPU memory and disk), overlaps KV cache loading with computation, and designs intelligent pre-fetching and eviction policies. §.§ Support for Long-Context Applications Serving long-context LLM applications is particularly challenging as the size of the KV cache scales with the number of tokens. The limited memory limits LLM's ability to handle long sequences, demanding more memory-efficient solutions. Ring attention <cit.> is a novel distributed approach that leverages blockwise computation of attention and feedforward of long sequences across multiple devices. It efficiently overlaps KV cache communication with computation and extends the context length by the device count times. Infinite-LLM <cit.> is another distributed solution, it breaks down KV cache into smaller manageable units called rBlocks across GPUs/CPUs, and efficiently manages them with dynamic memory sharing and coordination. MemServe <cit.> unifies handling of inter-request and intra-request optimizations for LLM serving by introducing MemPool, a distributed memory pool to manage KV cache across all cluster memory and employs a global scheduler to maximize KV cache reuse. When the context grows larger than the GPU memory limit, most systems offload the KV cache to the CPU. InfiniGen <cit.> is a solution that speculates the important KV cache entries by rehearsing the attention computation of the current layer in the preceding layer and prefetches only the essential entries to the GPU, thereby reducing the data transfer overhead. LoongServe <cit.> introduces a new parallelism paradigm called Elastic Sequence Parallelism (ESP) to dynamically adapt to resource usage variance between requests and phases (pre-filling and decoding) of a request. It reduces KV cache migration overhead and KV cache fragmentation when serving long sequences. §.§ Compression of KV Cache Due to the large memory footprint of LLM serving, some systems have resorted to compressing the KV cache. On top of memory aggregation and communication scheduling, FlexGen <cit.> uses fine-grained groupwise quantization to compress the weights and KV cache to 4 bits. KIVI <cit.> analyzes the element distribution of the LLM KV cache and applies asymmetric quantization of the Key and Value cache. KIVI quantizes the key cache per-channel (grouping elements along the channel dimension) and the value cache per-token to achieve minimum quantization error. Gear <cit.> achieves near-lossless high-ratio KV cache compression by quantizing the majority of entries of similar magnitudes and employs a low-rank matrix to approximate the quantization error. MiniCache <cit.> observes that the KV cache states exhibit high similarity between adjacent layers in the middle-to-deep portion of LLMs. 
Based on this insight, MiniCache leverages this high similarity to merge them into a shared representation to reduce redundancy, while also identifying and retaining distinct states that are crucial for maintaining the model's performance, preventing information loss during compression. § COMPUTATION TASK SCHEDULING Besides memory and KV cache management, the computation of LLM also presents significant system challenges. Due to the sequential dependency between tokens during the autoregressive generation, LLM can only generate one token at a time for each request. Thus, LLM inference workloads are less resource-efficient than training workloads on GPU hardware that is designed for massively parallel execution. Following this incentive, we investigate system solutions that optimize the scheduling of computation tasks during the inference process. §.§ Request Batching When a single request cannot efficiently utilize the GPU, it is intuitive to batch multiple inference requests together to boost the occupancy of GPU cores. However, as responses to different prompts can have significantly variable lengths, when batched together, the shorter responses are forced to wait for the longer ones to complete, resulting in computational waste. Response Length Perception and Sequence Scheduling <cit.> instructs the LLM to predict the response length before starting to generate the actual response, and batches queries with similar predicted response lengths to reduce computational waste. A similar approach, S^3 <cit.>, finetunes a Distillbert model for sequence length prediction. Upon mispredictions, it preempts sequences that exceed their allocated memory and retrain the predictor to learn from its mistakes. Generation length prediction based batching is less practical due to the strong reliance on the predictor. Orca <cit.> proposes continuous batching at the token level rather than the request level. It continuously schedules new requests into the batch as soon as a request in the current batch completes. Continuous batching now has become an industry standard in LLM serving frameworks, incorporated into the software of TGI, vLLM, and TensorRT-LLM. Based on continuous batching, DeepSpeed-FastGen <cit.> proposes a dynamic SplitFuse mechanism that decomposes long prompts into smaller chunks scheduled across multiple iterations and composes short prompts together to maintain the inference running at high throughput region (bounded by GPU compute not memory bandwidth). A similar idea was explored in Sarathi-Serve <cit.>, which splits prefill requests into smaller chunks and schedules them alongside ongoing decode requests without causing stalls (stall-free batching). This allows new requests to join a running batch without pausing ongoing decodes, leading to minimal pipeline bubbles. §.§ Disaggregated Inference LLM inference goes through a prefill stage to process the prompt, populate the KV cache, and start the decoding stage to generate tokens (Sec. <ref>). Existing LLM serving systems colocate the two phases and batch the computation of prefill and decoding across all users and requests. However, these two phases display distinct characteristics and can interfere with each other when requests at the prefill stage are batched with requests at the decoding stage. TetriInfer <cit.> separates prefill and decode instances, allowing each phase to run independently and preventing interference between batch-like prefill jobs and latency-critical decode tasks. 
It employs a two-level scheduling algorithm that incorporates predicted resource usage to avoid scheduling hotspots during the decode phase, ensuring efficient resource allocation and minimizing contention. Splitwise <cit.> extensively characterizes the differences in the execution and utilization patterns of the prefill and decoding stage on different generations of GPUs (heterogeneous hardware). Splitwise proposes to split these two phases into separate machines, allowing for specialized hardware for each phase to achieve better utilization, reduce hardware ownership costs, and save energy. DistServe <cit.> designs a placement algorithm to schedule the prefill and decoding stage computation tasks. In clusters with high-speed cross-node networks, DistServe optimizes parallelism configurations for prefill and decoding instances independently to achieve the best per-GPU goodput; In clusters with limited cross-node bandwidth, it ensures that prefill and decoding instances of the same stage are co-located within a single node and optimizes parallelism configurations within the node. §.§ Model Parallelism LLMs can have hundreds of billions of parameters, requiring model parallel execution on multiple GPUs. Pope et al. <cit.> develop an analytical model for inference efficiency, enabling the selection of optimal multi-dimensional partitioning techniques tailored for TPU v4 slices based on specific application needs. HeteGen <cit.> introduces a framework for heterogeneous parallel computing using CPUs and GPUs. It employs a heterogeneous parallel computing algorithm to distribute computation within its hybrid heterogeneous parallelism framework and enables asynchronous overlap to mitigate I/O bottlenecks between the CPU and GPU. ExeGPT <cit.> can find an optimal schedule control variable of the batch size and tensor parallelism degree that maximizes inference throughput while adhering to a given latency limit. It leverages the distribution of input and output sequence lengths to allocate resources efficiently and determine the best parallelism configuration. Helix <cit.> is designed to partition an LLM across heterogeneous GPUs and different types of network connections. It formulates its model partition scenario as a max-flow problem of a directed, weighted graph whose nodes represent GPU instances and edges capture both GPU and network heterogeneity through their capacities in the max-flow problem. § LLMS IN THE CLOUD LLM deployments are computationally intensive and often require significant infrastructure to run effectively. Cloud platforms offer a scalable and cost-effective solution for deploying LLMs, eliminating the need for expensive hardware investments. The flexibility of cloud deployment allows organizations to easily adjust resources as needed, ensuring optimal performance and minimizing downtime. However, the significant costs associated with cloud computing resources and the challenge of ensuring their efficient utilization can be major obstacles for LLM service providers. §.§ Cloud Deployment Cost Modern clouds offer a variety of spot instances (e.g., AWS EC2 Spot Instance, Azure Spot Virtual Machines, Google Cloud Spot VMs). These instances run on spare capacity and are offered at highly discounted prices, but may be preempted at any time when other instances need the capacity. 
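Whether that discount is worth the interruptions depends on how much serving time each preemption destroys. The short Python sketch below is a hypothetical back-of-the-envelope cost model; the prices, discount, preemption rate, and the simple linear recovery-overhead assumption are all illustrative and are not measurements of any provider or system.

def effective_spot_cost(on_demand_price, spot_discount, preemptions_per_hour, recovery_minutes):
    # Approximate $ per useful serving hour on spot capacity, where each
    # preemption costs `recovery_minutes` of lost time (instance re-acquisition,
    # model reload, re-prefill of in-flight requests).
    spot_price = on_demand_price * (1.0 - spot_discount)
    lost_fraction = min(1.0, preemptions_per_hour * recovery_minutes / 60.0)
    useful_fraction = 1.0 - lost_fraction
    return spot_price / useful_fraction if useful_fraction > 0 else float("inf")

if __name__ == "__main__":
    on_demand = 12.0   # hypothetical $/hour for a multi-GPU server
    spot = effective_spot_cost(on_demand, spot_discount=0.70,
                               preemptions_per_hour=0.5, recovery_minutes=6)
    print(f"on-demand: ${on_demand:.2f}/h, effective spot: ${spot:.2f}/h")

Under these assumed numbers, spot capacity remains cheaper despite the lost work, and the faster a system can recover interrupted requests, the larger the advantage, which is precisely what the systems below target.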
SpotServe <cit.> addresses the challenges of using these instances for LLM serving, such as how to quickly adapt to changes in available instances and how to minimize the cost of migrating instances when interruptions occur. It also introduces a stateful inference recovery mechanism for inference engines to commit their progress at the token level and efficiently resume interrupted requests. Serverless is a recently emerged cloud computing paradigm, where inference service users can submit their model to the cloud and the cloud provider takes care of all infrastructure provision and scaling with varying inference request load, and saves unused hardware costs for customers. A major challenge in serverless is mitigating cold start, where a service instance would be shut down after not being accessed for some time, and once a new request arrives, it would experience a latency spike associated with re-initializing the service instance. ServerlessLLM <cit.> addresses these latency issues by utilizing the underutilized storage and memory resources available on GPU servers. It introduces a new checkpoint format and loading system to speed up LLM model loading, a live migration mechanism to avoid interrupting ongoing inferences, and a locality-aware server allocation strategy to minimize LLM inference cold start latency. Cloud providers often offer a wide range of heterogeneous instance selections labeled at different prices. Mélange <cit.> is a cloud resource allocation framework that considers three key LLM service characteristics: request size, request rate, and service-level objective. It automatically navigates through the GPU option space to determine the most cost-efficient heterogeneous GPU allocation for a given LLM service. With the resources allocated and model hosted on the GPUs, Llumnix <cit.> is a dynamic scheduling system for LLM serving that addresses the challenges of heterogeneous and unpredictable requests by rescheduling them across multiple model instances at runtime – similar to how OS context switches across cores. Llumnix introduces an efficient live migration mechanism for requests and their in-memory states, minimizing downtime during rescheduling, and employs a dynamic scheduling policy that unifies various rescheduling scenarios, such as load balancing, de-fragmentation, prioritization, and auto-scaling. This efficiency has resulted in significant cost savings while achieving similar tail latency. §.§ Cloud Efficiency A key bottleneck resource in cloud datacenters is power, which LLMs are quickly saturating due to their growing computation demand. POLCA <cit.> characterizes the power consumption patterns of LLMs in the cloud and finds that while training LLMs demands a lot of power and can strain the data center's power infrastructure, inference tasks offer more flexibility for power management due to their less predictable power demands. POLCA devises a framework to manage power in LLM inference clusters by dynamically applying techniques such as GPU frequency locking and power capping. PerLLM <cit.> takes the LLM inference to an edge-cloud collaboration scenario, where it leverages the strengths of edge computing (low latency, reduced energy costs) and cloud computing (high processing power) to handle LLM inference tasks efficiently. 
PerLLM employs a Constraint Satisfaction Upper Confidence Bound (CS-UCB) algorithm to optimize service scheduling and resource allocation while adhering to constraints like processing time, bandwidth, and computing power – achieving energy LLM efficiency. Workloads often get co-located in the cloud environment. FlexLLM <cit.> is a system designed to efficiently service LLM inference and parameter-efficient fine-tuning (PEFT) requests in the same iteration. LLM inference, which involves generating text token by token, is primarily limited by memory bandwidth due to the need to access all model parameters for each token generation. In contrast, PEFT, which processes all tokens of a request simultaneously, is mainly constrained by compute resources, such as the tensor cores on GPUs. FlexLLM introduces a token-level fine-tuning mechanism that breaks down the fine-tuning process into smaller, more manageable token-level computations to minimize memory usage and inference latency, making co-serving feasible. As LLM inference follows token-by-token generation, users also read the response word-by-word. Andes <cit.> defines a user experience metric of Quality of Experience (QoE) for text streaming services. It is formulated by comparing the actual token delivery timeline (TDT) of a request with its expected TDT. The expected TDT is determined by the expected time to first token (TTFT) and the expected token delivery speed (TDS), which can vary depending on factors like the user's typical reading speed. The intuition is generating text too fast (than user reading speed) does not yield QoE benefits, wasting cloud resources. Andes addresses this by strategically allocating GPU resources among multiple requests to optimize QoE. It employs a dynamic priority-based preemptive scheduler that operates at the token level, prioritizing urgent requests and preempting those that have been sufficiently served. Andes improves average QoE and can handle higher request rates while maintaining similar token generation throughput. § EMERGING RESEARCH FIELDS §.§ Retrieval Augmented Generation Retrieval-Augmented Generation (RAG) <cit.> is a technique that enhances LLMs by incorporating external information sources. It addresses the limitations of LLMs in retaining factual knowledge and their tendency to generate inaccurate or fabricated information (hallucinations). RAG operates in two stages: retrieval and generation. During retrieval, the system identifies the most relevant contexts from an external knowledge base or corpus based on the given query. Once the relevant contexts are retrieved, they are integrated into the LLM's generation process in different processes including concatenation (where the retrieved contexts are simply appended to the query) and cross-attention (where the LLM attends to the retrieved contexts during generation). Sparse RAG <cit.> observes that RAG can be computationally expensive due to the increased input length from retrieved documents. It first encodes retrieved documents in parallel to eliminate latency caused by long-range attention, then selectively decodes the output by attending only to highly relevant caches chosen via prompting the LLM with special control tokens. RAGCache <cit.> caches intermediate states of external knowledge with a knowledge tree to organize and store intermediate states. The cached knowledge can be shared across multiple queries to reduce the redundant computation. 
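All of these systems optimize the same basic retrieve-then-generate loop. The Python sketch below shows a minimal version of that loop under simplifying assumptions: a toy bag-of-words retriever stands in for a dense vector index, and a placeholder generate() function stands in for the LLM call; neither corresponds to any cited system.

from collections import Counter

# Toy corpus standing in for an external knowledge base.
corpus = {
    "doc1": "paged attention manages the kv cache in fixed size blocks",
    "doc2": "speculative decoding verifies draft tokens in parallel",
    "doc3": "continuous batching schedules requests at the token level",
}

def score(query, doc):
    # Bag-of-words overlap; a real system would use dense embeddings and ANN search.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query, k=2):
    ranked = sorted(corpus.items(), key=lambda item: score(query, item[1]), reverse=True)
    return [text for _, text in ranked[:k]]

def generate(prompt):
    # Placeholder for an LLM call; reports how much prefill work the retrieved context adds.
    return f"[LLM would prefill {len(prompt.split())} tokens here]"

def rag_answer(query):
    docs = retrieve(query)
    prompt = "\n".join(docs) + "\nQuestion: " + query + "\nAnswer:"
    return generate(prompt)

print(rag_answer("how does paged attention manage the kv cache?"))

Because the retrieved documents are prepended to the prompt, prefill cost grows with the number and length of retrieved passages, which is why caching and selectively recomputing their KV states is attractive.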
Another knowledge caching technique is CacheBlend <cit.>, which selectively recomputes a small portion of the KV cache based on the preceding text in the input. §.§ Mixture-of-Experts Inference The mixture of Experts (MoE) is used in LLMs to improve efficiency and performance. It divides the model into specialized sub-networks, called “experts", each focusing on a specific task. A “gating" network then directs input to the most suitable expert. In the inference process of an MoE transformer, the input is first passed through a gating network. This network determines which expert, or a combination of experts, is best suited to process the specific input. MoE's sparsely activated subset of experts avoids the large computational need to process the entire model for every inference. MoE Communication. Lina <cit.> is a system designed to address the all-to-all communication bottleneck in distributed MoE. The all-to-all communication occurs when distributed MoE sends tokens to their selected experts for processing and then sends the results back to the original devices. During inference, Lina dynamically schedules resources based on expert popularity, balancing the transfer size and bandwidth of all-to-all communication across devices. ExFlow <cit.> is an optimization technique to accelerate the inference of distributed MoE. It leverages the inter-layer expert affinity, which is the correlation between expert selection across different MoE layers. By placing experts on corresponding GPUs based on their affinity, ExFlow reduces cross-GPU routing latency and improves inference throughput. Expert offloading. SiDA-MoE <cit.> (Sparsity-inspired Data-Aware) leverages both main memory and GPU memory by exploiting the inherent sparsity of expert activation in MoE models. SiDA-MoE includes two parallel threads: an inference thread and a hash-building thread. The hash-building thread predicts which experts will be activated for each token at each layer, storing these predictions in a hash table. The inference thread then uses this information to dynamically load activated experts onto the GPU and offload inactive experts to main memory, maximizing GPU memory utilization. MoE-Infinity <cit.> takes a different approach toward expert offloading. The system leverages the observation that MoE models exhibit sparse activation and temporal locality during inference, meaning only a few experts are repeatedly activated for processing a specific sequence. MoE-Infinity traces expert activation at the sequence level, enabling it to predict which experts will be needed and prefetch them accordingly. MoE Efficiency. Fiddler <cit.> is a system designed to efficiently run these models on a limited number of GPUs, even when the model's size would typically exceed the GPU's memory capacity. Fiddler strategically distributes the model's components. Non-expert layers, which are used frequently, are kept on the GPU. A subset of expert layers, chosen based on how often they're used, are also placed on the GPU. The rest remain in the CPU's memory. Huang et al. <cit.> introduce three optimization techniques to address the MoE inference inefficiencies. (i) Dynamic gating allows the number of tokens processed by each expert to vary, which avoids the over-provisioning of resources in static gating and reduces computational waste, communication overhead, and memory consumption. (ii) Expert buffering leverages the observation that expert activation is often sparse and exhibits temporal locality. 
By caching frequently used (hot) experts in GPU memory and buffering less active experts in CPU memory, expert buffering reduces the static memory allocation on GPU. (iii) Imbalanced token assignments to experts can lead to bottlenecks and performance degradation. Expert load balancing ensures a more even distribution of workload across devices. §.§ Miscellaneous Fields Ethics and environmental sustainability. Sheng et. al <cit.> ensure fairness in serving LLMs by introducing a Virtual Token Counter (VTC). VTC defines LLM serving fairness based on a cost function that accounts for the number of input and output tokens processed. It achieves fairness by tracking the services received by each client and prioritizing those with the least service, while also considering the varying costs of processing input and output tokens. Sprout <cit.> addresses the environmental sustainability of LLMs and designs a framework to reduce the carbon footprint of LLM inference services. Sprout introduces “generation directives" to guide the autoregressive generation process, balancing the need for sustainability with the demand for high-quality generation. Inference pipeline optimization. FlashDecoding++ <cit.> conducts inference engine performance optimization, addressing several issues in softmax synchronization, GPU kernel, and dataflow. For example, the decoding phase performs linear GEMM operations with flat shapes where the batch size dimension involved in the multiplication is much smaller than the others. FlashDecoding++ accelerates flat GEMM with double buffering that overlaps computation and data transfer and hides the memory latency in loading input matrices. Parrot <cit.> is designed to optimize the performance of LLM-based applications that involve multiple LLM requests with complex workflows. Parrot performs data flow analysis and uncovers correlations across multiple LLM requests, and introduces a series of optimizations to improve performance. FlashAttention-3 <cit.> is a method to speed up attention for large language models and long-context applications. It introduces techniques like warp specialization and asynchronous block-wise operations to optimize GPU utilization. FlashAttention-3 achieves significant speedup on Hopper GPUs compared to its predecessor and reduces numerical errors in FP8 computations. Frugal inference. FrugalGPT <cit.> proposes several solutions to reduce the inference cost, such as prompt caching and LLM cascading which uses a sequence of LLMs, starting with cheaper ones and moving to more expensive ones only if necessary. SpecInfer <cit.> applies speculative decoding using smaller, speculative models to predict the LLM's output, reducing the computational resources. These predictions are organized into a tree structure, and their accuracy is verified in parallel against the LLM. RouteLLM <cit.> dynamically selects between a stronger and a weaker LLM during inference to optimize the balance between cost and response quality. § CONCLUSION This survey has presented a comprehensive overview of recent advancements in LLM serving systems, emphasizing the importance of system-level solutions for enhancing performance and efficiency. We have highlighted key innovations for deploying and scaling LLMs, paving the way for the future development of LLM serving systems. § ACKNOWLEDGMENTS This material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. 
FA8702-15-D-0001, and United States Air Force Research Laboratory Cooperative Agreement Number FA8750-19-2-1000. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Assistant Secretary of Defense for Research and Engineering, or the United States Air Force. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
http://arxiv.org/abs/2407.12896v1
20240717143556
A Survey of Scam Exposure, Victimization, Types, Vectors, and Reporting in 12 Countries
[ "Mo Houtti", "Abhishek Roy", "Venkata Narsi Reddy Gangula", "Ashley Marie Walker" ]
cs.CY
[ "cs.CY", "cs.HC" ]
A Survey of Scam Exposure, Victimization, Types, Vectors, and Reporting in 12 Countries Mo Houtti, Abhishek Roy, Venkata Narsi Reddy Gangula, Ashley Marie Walker July 17, 2024 ============================================================================================================= § ABSTRACT Scams are a widespread issue with severe consequences for both victims and perpetrators, but existing data collection is fragmented, precluding global and comparative local understanding. The present study addresses this gap through a nationally representative survey (n = 8,369) on scam exposure, victimization, types, vectors, and reporting in 12 countries: Belgium, Egypt, France, Hungary, Indonesia, Mexico, Romania, Slovakia, South Africa, South Korea, Sweden, and the United Kingdom. We analyze 6 survey questions to build a detailed quantitative picture of the scams landscape in each country, and compare across countries to identify global patterns. We find, first, that residents of less affluent countries suffer financial loss from scams more often. Second, we find that the internet plays a key role in scams across the globe, and that GNI per-capita is strongly associated with specific scam types and contact vectors. Third, we find widespread under-reporting, with residents of less affluent countries being less likely to know how to report a scam. Our findings contribute valuable insights for researchers, practitioners, and policymakers in the online fraud and scam prevention space. § INTRODUCTION Scams affect many people. A 2017 FTC survey found that almost 16% of US consumers had been victims of fraud in the past year Anderson2019-ln. This problem extends worldwide; a 2023 global survey by the Global Anti-Scam Alliance (GASA) found that 78% of respondents had experienced a scam in the past year Abraham_undated-uy. The material consequences of scams and fraud can be severe, extending far beyond moderate financial losses for consumers. There are many documented cases of individuals losing their life savings to scams, leading to severe material and emotional harm Coakley2024-cm, Khandro2024-uj, Farivar2022-jj, Kilmer2023-zn. In some cases, businesses have been destroyed by losing immense sums of money in a scam Albright2023-wt. But even these facts focus on only one set of victims. The United Nations estimates that hundreds of thousands of human trafficking victims are forced by criminal organizations to carry out scams, often after being lured to a foreign country with false promises of a job opportunity United_Nations_Human_Rights_Office_of_the_High_Commissioner2023-mp. In essence, the prevalence of scams has created lucrative opportunities for organized crime, with severe consequences for both victims and perpetrators. The scope of this issue will likely become more important as internet access and the availability of digital payments continue to increase Fico_undated-ir. The growing prevalence of and widespread access to generative AI tools, such as voice cloning and high-fidelity image manipulation technology, could exacerbate the scam problem even further. For example, in a recent incident, perpetrators used AI-generated voice and images to impersonate a company's CFO, successfully tricking a Hong Kong finance worker into wiring $25.6 million to fraudulent bank accounts Ianzito2024-gv. The proliferation of such tools could make scams considerably more believable and harder for users to discern Bethea2024-dn.
Unless curbed, this could increase the likelihood of victimization, and potentially reduce trust in online interactions and transactions as a whole FCC2024-dj. To help better understand and combat this problem, governments around the world regularly collect data on scams in their local jurisdictions. The Federal Trade Commission (FTC), for example, conducts surveys and publishes detailed reports Anderson2019-ln, noauthor_2023-dp to map the landscape of scams in the United States. The European Commission assumes similar responsibilities in the EU European_Anti-Fraud_Office_undated-qa, and analogous data collection efforts happen in places like the United Kingdom Jones2022-no, Australia abs_personal_2024, and South Korea Jun-hee2023-jo. But despite these efforts, we do not have a good quantitative understanding of scams at a global level. Because data collection typically happens within jurisdictions, methodological differences make it impossible to compare findings across borders. While some global reports do exist, they either compile data from existing sources or use survey methodologies that do not achieve population-representative samples (e.g., Abraham_undated-uy, Lewis2020-kt). Some industry firms (e.g., NortonLifeLock2022-wi) have conducted representative global surveys, but these have tended to be more high-level and are therefore insufficiently granular to let us draw precise and practical conclusions about the global scams landscape. This in turn hinders policymakers, practitioners, and researchers from identifying and targeting scams in contextually appropriate ways. To address this gap, we conducted a nationally representative survey (n = 8,369) in 12 countries—Belgium, Egypt, France, Hungary, Indonesia, Mexico, Romania, Slovakia, South Africa, South Korea, Sweden, and the United Kingdom. The survey asked people about their experiences with scams, including the type of scam they had most recently experienced, the vector through which they experienced the scam, whether they had sent the scammer money, and whether/where they reported the scam. Importantly, we ask these questions through a global and nationally representative survey, making it possible to draw true comparisons and contrasts between countries. This provides a couple key advantages. First, it lets us contextualize local findings. If we find that 25% of South Africans report having lost money from a scam in the past year, is that a lot? According to our survey, the answer in this case is yes—but this argument would be much more difficult to make in the absence of global data. Second, while not a perfect substitute for comprehensive global data, analyzing scam data across a broad spectrum of countries can help resource-constrained governments prioritize their efforts. If consistent scam types emerge across diverse economies, or if similar economies suffer from specific scam types, they can be valuable signals to guide local data collection and enforcement efforts. Concretely, the survey let us answer the following research questions in multiple countries: * RQ1: How common are scam exposure and victimization? * RQ2: How prevalent are specific scam types? * RQ3: To what extent are scams technology-mediated? * RQ4: How comprehensive are reporting-based data sources about scam victimization? We present several useful findings, from which we contribute concrete takeaways for researchers, practitioners, and policymakers in the online safety and fraud prevention spaces. 
First, we find that residents of less affluent countries suffer financial loss from scams more often. Second, we find that the internet plays a key role in scams across all surveyed countries, but that GNI per-capita is strongly correlated with specific scam types and contact methods from scammers. For example, money-making scams are more common in less affluent countries. Third, we find widespread under-reporting of scams. On average, half of those exposed to a scam in a given country do not report it at all, and residents of less affluent countries are more likely to not know how to report a scam. § BACKGROUND §.§ Scam Exposure and Victimization Scams are a growing issue, with losses from reported scams increasing at a rapid rate. Industry surveys and reports compiled by law enforcement are the primary sources of scam prevalence data. The FBI's Internet Crime Report 2023 showed that losses grew at upwards of 20% year over year in the last three years fbi_ic3_2023. The Global Anti-Scam Alliance (GASA)'s State of Scams 2023 report estimated that losses from scams topped USD 1 trillion in 2023 and accounted for about 1% of global GDP Abraham_undated-uy. While estimates for exposure to scams vary widely across surveys and countries, it is generally accepted that exposure to scams is increasing year over year. GASA's 2023 report stated that 78% of participants experienced at least one scam in the past 12 months. The GASA report estimated that developing countries like Kenya, Vietnam, Thailand and Brazil lost more than 3% of their GDP to scams Abraham_undated-uy. A 2020 survey commissioned by the European Commission (EC) found that exposure to fraud is generally higher in “connected countries”, identified as countries with a relatively high proportion of individuals making internet purchases ec_scams_survey. In the context of online scams, increased internet activity (e.g., online purchases) expands the pool of potential targets and the opportunities for motivated offenders kigerl2012routine, felson1980RAT, miro2014routine. Prior research indicates that a variety of psychological (e.g. overconfidence, optimism bias), socio-demographic (e.g. age, education) and situational factors (e.g. emotional distress, financial strain) play a role in susceptibility to scams Whitty2019-hv,Modic2013-wn,Button2014-ip. In this study, we looked at how exposure to and victimization by scams vary across geographies. §.§ Scam Types and Technology’s Role in Scams Scams are heterogeneous and evolve based on a variety of factors such as emerging technologies (e.g., cryptocurrency), changing communication channels (e.g., social media), and global events (e.g., the COVID-19 pandemic, the Russia-Ukraine war) xu2022differentiating. Scammers adapt their methods based on the individual circumstances of their targets, ranging from preying on financial vulnerabilities through lottery scams to exploiting loneliness through romance scams. While attempts have been made by multiple government agencies and researchers, there is still no universally accepted classification of scams. DeLiema et al. classify consumer fraud into four categories: opportunity-based scams, threat-based scams, consumer purchase scams, and phishing scams DeLiema2023-yo. In contrast, FINRA's Financial Fraud Research Center has developed a framework for a taxonomy of fraud modeled after international crime classification systems beals2015framework.
In this study, we used a scam classification based on the role scammers play and the service they purportedly offer to understand the prevalence of such scams. As stated earlier, scammers take advantage of technological advances to effectively reach and scam their targets. Scammers use a variety of communication channels such as phone calls, text messages, emails, social media, and mobile apps to establish contact with their targets. Scammers also leverage advances in payment methods with the emergence of irreversible payment methods such as real-time payments, cryptocurrency, digital wallets, etc. Notably, scams utilizing cryptocurrency have skyrocketed, with the FTC reporting that cryptocurrency scams make up more than 85% of losses due to investment scams noauthor_2023-dp. In this study, we looked at how scammers use technology to reach their targets and receive payments across countries of interest. §.§ Limitations of Reporting-based Data Sources Despite scams leading to devastating financial losses and causing enormous psychological distress to victims munton2023scams, they are highly under-reported to authorities with the FTC estimating that only 10-12% of scams are actually reported to them noauthor_2023-dp.  Deliema2019-bw show that scams are under-acknowledged and under-reported even in survey research, likely due to social desirability bias or refusal to acknowledge victimization. In the limited studies that surveyed known victim pools, only about half of known fraud victims admitted to being defrauded button2009fraud,finra_sr_fraud_risk_survey. This shows that survey-based estimates are better than reporting-based estimates to assess scam prevalence and victimization. While reports like those from the FBI and industry players provide valuable information, they suffer from methodological disadvantages that hinder their representativeness. Reports based on complaint data do not provide a complete picture, as scams are severely under-reported to government agencies. Some surveys by industry players are global but often focus on a narrow aspect of the problem such as real-time payments based scams or text message scams, or have representativeness issues. Through this survey, we looked at scam exposure, victimization and reporting attitudes across regions using a representative sample of participants in order to identify regional disparities and inform targeted prevention and intervention strategies. § METHODS §.§ Survey Overview The questions analyzed in this paper were part of a larger survey covering a variety of digital safety issues. The survey was deployed through Morning Consult (MC)—a leading survey research firm—using a mixed-panels approach, which leverages the strengths of different data collection methods to create a more comprehensive and representative picture of public opinion. MC uses roughly 55 survey panel providers to conduct interviews across numerous countries. This panel network provides access to tens of millions of survey respondents via recruitment from thousands of websites, mobile apps, social networks, email lists, and publishers. They use a wide mix of panel providers with different recruitment methods to diversify their respondent pools and ensure maximal access to different respondent groups. Recruited participants completed the survey online, using mobile or desktop devices. 12 countries were surveyed: Belgium, Egypt, France, Hungary, Indonesia, Mexico, Romania, Slovakia, South Africa, South Korea, Sweden, and the United Kingdom. 
Questions were translated by teams with native speaker(s) in the target languages through a process consisting of translation, editing, proofreading, and quality assurance. Demographic factors used for stratified sampling and weighting varied by country to account for contextual factors such as the availability of reliable census data. Weights and sample targets were based on general population proportions for adults (18 and older) in Belgium, Egypt, France, Hungary, Indonesia, Romania, Slovakia, South Korea, Sweden, and the UK. Mexico and South Africa have internet access penetration below 80%, and folks without access to the internet are more likely to fall into lower education demographics. Therefore, to avoid over-representing adults with lower education, weights and sample targets for those two countries were based on internet population proportions for adults (18 and older). Responses were weighted using standard raking procedures Battaglia2009-ad. Table <ref> reports the demographic factors used to derive sample targets and weights in each country. §.§ Survey Questions We used the FTC's online fraud reporting tool Federal_Trade_Commission_undated-hm as a guide when formulating our survey questions. This let us ensure that the questions captured the kind of information about scams that is useful to government institutions and that the multiple-choice options adequately covered common scam experiences. To account for the likelihood of multiple scam experiences, respondents were instructed to answer questions based on their most recent scam experience (if any) in the past year. We reasoned that respondents might have trouble recalling details of earlier scam experiences, making their reports less reliable. Obtaining a cross-section of most recent scam experiences would also closely approximate the breakdown of scam experiences overall while letting us simplify the survey for respondents. While “scams” and “fraud” technically have different definitions, the two terms are often used interchangeably to refer to attempts to trick someone for monetary gain. Indeed, the aforementioned FTC reporting tool Federal_Trade_Commission_undated-hm does not distinguish between the two. Rather than providing a narrow definition that might unintentionally exclude instances commonly considered to be scams, we decided to mirror the FTC and rely on respondents' colloquial understanding of the term. Recall that we articulated four research questions. We state each research question before its corresponding survey questions. RQ1: How common are scam exposure and victimization? We analyzed responses to Q1 to understand the frequency with which internet users are exposed to and materially harmed by scams: Q1: In the past year, have you been the target of a scam? * Yes, I have been the target of a scam, but I did not lose money. * Yes, I have been the target of a scam and I did lose money. * I have not been targeted by any scams. * I do not know if I've been targeted by a scam. RQ2: How prevalent are specific scam types? We analyzed Q2 to understand the types of scams internet users experience: Q2: What kind of scam were you targeted by? Choose the one that most accurately describes your experience. * An impersonator (e.g., fake government, business, love interest, grandchild) * Job, investment, money-making opportunity, franchise * Phone, internet, TV service * Health (e.g., weight loss, eye care, treatment) * Online shopping (e.g. 
fake stores/pages) * Sweepstakes, prize, lottery * Auto sale, repair * Credit, debt, loan (e.g., debt collection, credit report, student loan debt relief) * Something else RQ3: To what extent are scams technology-mediated? Responses to Q3 and Q4 let us understand to what extent users' scam experiences are technology-mediated. Q4 was only given to respondents who indicated having directly sent the scammer money in a separate question. Q3: How did it start (e.g., how did they first contact you, where did you see an ad)? * Phone call * Social media (e.g., Facebook/Meta, Instagram) * Online ad or Pop-up * Website or App (e.g., Craigslist, OfferUp, Marketplace) * Messaging app (e.g., Google Hangouts, WhatsApp) * Email * Text * Mail * In-person * TV or Radio * Print media (e.g., newspaper, magazine) * Other (please specify) Q4: How did you pay or send the money? * Scammer transferred from my account without my consent * Paid through a payment or banking app on phone * Paid via payment or banking website on computer * Mailed cash, cheque, or money order * Bought gift cards & shared the details with the scammer * Other (please specify) RQ4: How comprehensive are reporting-based data sources about scam victimization? Finally Q5 and Q6 let us contextualize the current research information landscape on scams, which often relies heavily on reporting-based data. Q5: Did you report the scam to anyone? * Yes * No * I wanted to, but didn't know how Those who did report the scam were asked: Q6: Who did you report the scam to? * To a financial authority (e.g., bank) * To the service provider (e.g. Cisco, IBM, McAfee) * To the authorities (e.g., the police) * To my email provider * To the online security provider * To my network provider * To my family (who take action on my behalf) * To my work/place of education §.§ Analysis For each survey question, we report the weighted percentage of respondents who selected each multiple-choice option in a heatmap. Countries are ordered along the x-axis by ascending GNI per-capita in 2021 (the most recent year for which complete data was available). Where the heatmaps suggest possible GNI-based patterns among the most frequently selected or most pertinent options, we compute Spearman's rank-order correlations to verify and quantify the associations. § RESULTS RQ1: How common are scam exposure and victimization? §.§ Users in Less Affluent Countries Are at Greater Risk On average, 15% of a country's internet users lost money from scams in the past year (Figure <ref>). Our results also reveal a troubling pattern: internet users in less affluent countries are at greater risk of falling victim to scams. South Africa, for example, had a GNI per-capita of $12,948 and the highest scam victimization rate at 25%. South Korea, with a GNI per-capita of $44,501, had less than a third of South Africa's victimization rate (7%). We found a strong inverse correlation between GNI per-capita and scam victimization rate (ρ=-0.73, p=0.007). This is especially concerning because lower incomes in countries with lower GNI per-capita make it more difficult for victims to recover from financial loss. §.§ Scam Exposure Does Not Reduce Victimization Rate Some countries with low rates of scam exposure nevertheless have a high rate at which those who do experience scams actually suffer losses (e.g., Egypt, Mexico, and Sweden). Users in these countries may simultaneously be well protected from exposure to scams, but not resilient against scams themselves. 
Indeed, one could reasonably hypothesize that less frequent scam exposure might make users less alert and therefore less resilient overall. However, we found no correlation in either direction between the scam loss rate—the top row in Figure <ref>—and scam exposure rate—the sum of the following two rows (ρ=0.04, p=0.90). This suggests a more complicated relationship between scam exposure and avoidance of financial loss from scams, with additional factors possibly at play. For example, countries where users are frequently exposed to scams may become more resilient in absolute terms, but be targeted using more sophisticated techniques in response. This kind of tailoring would be consistent with the results we cover next, demonstrating some differences in scam types and communication vectors based on countries' affluence. RQ2: How prevalent are specific scam types? §.§ Top Scam Types Are Fairly Consistent Across Countries The two most common scam types were online shopping and money-making scams (Figure <ref>). There were only two countries where neither of these was the most-selected option: Indonesia with sweepstakes/lottery scams (30%), and South Korea with impersonation scams (28%). In both cases, however, money-making and online shopping scams were still in second and third place, respectively. Despite this consistency, we did see differences based on countries' economies. Money-making scams were more common in less affluent countries; GNI per-capita and the prevalence of money-making scams had a very strong inverse correlation (ρ=-0.90, p<0.001). The prevalence of online shopping scams was not significantly correlated with GNI per-capita (ρ=0.44, p=0.15). RQ3: To what extent are scams technology-mediated? §.§ From First Contact to Payment, the Internet Plays a Key Role Consistent with FTC reports in the US noauthor_2023-dp, the most common method of first contact across countries (Figure <ref>) was social media (24%), followed by email (16%) and phone (14%). On average, 68% of those exposed to a scam in a country had first contact with a scammer through an internet-based technology (social media, email, messaging app, website or app, or online ad or pop up). Social media and messaging apps were more common as initial contact methods in less affluent countries, while email-based scams were more prominent in more affluent countries. GNI per-capita had a strong inverse correlation with the prevalence of social media (ρ=-0.71, p=0.009) and messaging apps (ρ=-0.65, p=0.02) as first contact methods, and a strong positive correlation with email (ρ=0.67, p=0.02). Interestingly, South Korea was a major outlier in our data; phone calls and texts were the most common contact methods there, with internet-based methods making up only 30%. Mobile apps were a key vector for payments to scammers. Mobile-based payments were almost twice as common as computer-based ones (Figure <ref>). This pattern held across most countries—with France and Belgium being the only exceptions. This may simply reflect online banking habits across countries, but nonetheless provides good evidence that scam prevention should include a focus on mobile payments, and further substantiates reports calling out the risks of the increasing global adoption of real-time payments (RTP) Fico_undated-ir. The prevalence of mobile-based payments to a scammer had a strong inverse correlation with GNI per-capita (ρ=-0.79, p=0.002), suggesting the adverse effects of RTP adoption may be more prominent in less affluent countries. 
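Each of the country-level associations reported in this section is a Spearman rank-order correlation between GNI per-capita and a weighted response share. The Python snippet below illustrates the computation; the three data points are hypothetical stand-ins (loosely echoing the South Africa and South Korea figures above plus an invented middle country), not the actual survey estimates.

from scipy.stats import spearmanr

# Hypothetical inputs: GNI per-capita and the weighted share of respondents
# reporting a money loss, for three illustrative countries (not survey data).
gni_per_capita = [12948, 30000, 44501]
victimization_rate = [0.25, 0.15, 0.07]

rho, p_value = spearmanr(gni_per_capita, victimization_rate)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# With only three points the p-value is not meaningful; the survey computes
# the same statistic across all 12 surveyed countries.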
RQ4: How comprehensive are reporting-based data sources about scam victimization? §.§ Many Scams Go Unreported Our findings substantiate the widespread understanding that scams are under-reported, and provide quantitative evidence of the extent to which this is true globally. On average, over a third (37%) of a country's scam victims did not report the scam to anyone (Figure <ref>). In Slovakia and South Korea, that number was closer to half (50% and 54%, respectively). Users in less affluent countries more often wanted to report scams but did not know how (ρ=-0.82, p=0.001). This suggests that inferences based on victim reports are likely to suffer more under-reporting bias in resource-constrained countries, where survey data is also less likely to exist. Reporting to government authorities constituted an even smaller fraction of these totals (Figure <ref>). Of those who reported scams, less than a third (27%) reported them to the authorities. This varied widely by country—e.g., 57% of South Koreans who reported scams reported them to authorities versus only 16% of South Africans. GNI per-capita did not correlate significantly with reporting to either authorities (ρ=0.47, p=0.12) or financial institutions (ρ=0.19, p=0.56), though the proportions may reflect the degree to which victims in each country believe a particular institution may be able to take action for them (e.g., retrieve their lost money). § DISCUSSION Scams are a widespread and growing problem affecting people worldwide. The recent cost-of-living crisis Economist_Intelligence_Unit2023-nl has exacerbated the financial impact of scams, leaving victims even more vulnerable. Greater rates of financial loss from scam victimization coupled with more difficulty recovering from financial loss (due to low income) in less affluent economies makes this issue worth even more attention. Consequently, governments, central banks, industry and civil society around the world have responded with various tools in their arsenal, from public awareness campaigns to regulationsStainsby2023-ue. This lack of standardized data hinders our ability to understand and address scam victimization effectively. Without a global survey on scam prevalence and victimization, it is hard to accurately assess and describe the true size of the issue, develop effective solutions or bring appropriate resources to bear. Our work tries to fill in some of the gaps in terms of understanding who is most at risk, and to what types of scams, so regulators and industry can better target their interventions. Furthermore, as our numbers on contact method and payment method show, the scams landscapes in low GNI per-capita countries often look different, so minority-world-focused solutions are likely not going to be as effective in the majority world. This survey aims to fill that gap in literature. In this section, we highlight some of the key drivers of scam victimization across countries as demonstrated by our findings, and define a multi-level structural approach to fight back against this issue. §.§ Drivers of Scam Victimization Across Countries While the specific tactics and methods used by scammers vary across countries, there are several common tailwinds and themes contributing to the widespread vulnerability and victimization to scams. §.§.§ Widespread and fast adoption of digital payment systems globally post-COVID appears to be a key driver in increased scam prevalence. 
As we show in Figure <ref>, digital payment solutions are involved in a large fraction of the scam incidents across all the countries we surveyed. This observation mirrors the growth of the total number of non-cash transactions happening globally, which is estimated to reach 2.3 trillion by 2027, growing at a rate of 15% annually Capgemini2023-rc. Frauds and scams are not, however, unique to digital payment systems. While digital payment systems are fast and convenient, and therefore handle a large transaction volume, they often offer limited recourse for victims compared to payment modes like credit cards. While the adoption of such payment systems has brought massive growth in economic activity, it has also increased the total number of scam activities over time. For example, the UK's Payment Systems Regulator (PSR) found that 0.1% of fast payments in 2021 were fraudulent—well above the global average of 0.03% for card transactions World_Bank2023-yq. While this trend of growth in adoption of digital payments has implications for the total volume of scams occurring globally, it is particularly pertinent in Asia Pacific, where non-cash transactions seem to be growing at an estimated rate of 19.8% annually Capgemini2023-rc. The types of scams that are prevalent vary from country to country, but it appears that focusing research and enforcement efforts on scams facilitated through digital payment services is a productive intervention space, especially for those countries. §.§.§ Scams follow where users go online, and locale-specific factors are highly salient in understanding specific scam victimization. As we show in Figure <ref>, the types of scams that are popular differ by country, but there seems to be a trend of scammers using hooks that are most likely to yield results in their target locales. For example, the high rate of gambling-related scams in Indonesia is correlated with a rapid rise in the illegal gambling industry in the country, with one PPATK (Indonesian Financial Transaction Reports and Analysis Center) spokesperson noting that the turnover in this industry grew from 57 trillion Rp in 2021 to 81 trillion Rp in 2022 Bhwana2023-rp. Similarly, we note that the high rate of job scams in South Africa is correlated with South Africa having one of the highest unemployment rates in the world at 32.1%, signifying that scammers are preying on a large pool of job-seekers Desiree_Manamela2024-nz. §.§.§ Scams go severely under-reported, creating ecosystem-wide blind spots. Since, globally, only about half of scams are reported to authorities, the current national-level estimates of scam victimization and prevalence are most likely highly underestimated (see Figure <ref>). This trend of under-reporting is especially pronounced in countries like Indonesia, South Africa, and Mexico, where more than 1 in 5 users reported being unable to report scams to authorities due to unclear reporting mechanisms. As we discuss further in the next section, generating public awareness of reporting tools and mechanisms available to the general public could be a productive area of intervention. The low effective rates of enforcement against scams and the low likelihood of loss recovery may also play a role in discouraging victims from reporting scams to authorities Dolgin2023-bh. 
For example, the 2023 State of Scams in Asia report by the Global Anti-Scam Alliance, a cross-stakeholder body, found that there was a positive correlation between the recovery of financial losses and people's willingness to report scams to authorities Global-Anti-Scam-Alliance2023-fb. Another recent report by Third Way, a public policy think tank, estimated that in the US the enforcement rate for incidents reported to the Internet Crime Complaint Center (IC3) was 0.3% Eoyang2018-wk. Governments and central banks must allocate more resources to law enforcement agencies, streamline reporting mechanisms, and ensure that people have access to proper redressal mechanisms. §.§ Addressing the Scam Problem: Potential Interventions Scam victimization erodes consumer trust, jeopardizes businesses, and harms the broader economy. A multi-faceted, targeted intervention approach is necessary. As we show in Section 4.3, internet-based communication methods play a critical mediating role in victimization. Prior work in this area has classified scams into the following broad categories (<cit.>): * Opportunity-based scams: typically involve a monetary, social, or emotional reward-based hook. * Threat-based scams: typically involve threats of negative consequences or blackmail. * Consumer purchase scams: typically occur during online shopping or marketplace interactions. * Phishing scams: often involve techniques like impersonation of a person or entity that victims trust, which is then parlayed to elicit either money or sensitive information from the victim. As we show in Section 4.2, the top scam types remain fairly consistent across surveyed countries; the most common scam types were shopping (consumer purchase scams), job/investment (opportunity-based scams), and impersonation-based scams (phishing, romance, or friends/family impersonation scams). This bodes well for an approach where we can target and educate users about the underlying manipulation techniques used in these scams, strengthen the resilience of such channels themselves, and raise overall awareness. For countries with outlier patterns, like Indonesia (high victimization from lottery scams), South Africa (high victimization from job-based scams), and South Korea (impersonation and romance scams), it would make sense to take a more white-glove approach, with public-private sector collaboration called for to target and curb these trends. Interventions to strengthen user resilience Recognizing that users are often the weakest link in the scam ecosystem Baral2019-ab, we must prioritize strategies to enhance their resilience. A multi-faceted approach, akin to a “swarm the problem” strategy, is necessary to develop and assess the effectiveness of various intervention approaches. This area is ripe with opportunities to borrow from intervention toolkits developed and tested against other forms of online harms like misinformation Kozyreva2022-to, for example: * General Awareness Campaigns: Public awareness campaigns help raise awareness of common scam tactics and warning signs, and can theoretically help the broader public to recognize and avoid potential threats Miller1983-md. 
Having some pre-existing familiarity with a scam type has been shown to help individuals better manage risks, but while general awareness campaigns are one of the most prominent intervention strategies deployed by stakeholders, their efficacy in bolstering user resiliency is mixed and unproven Downs2006-wz Jensen2024-qh. There is some research showing the efficacy of such campaigns in raising awareness about reporting channels, and in that capacity they could be a tool of interest for stakeholders to alleviate the problem of under-reporting we noticed across most countries Jeremy-Burke-Francisco-Perez-Arce-Christine-Kieffer-Robert-Mascio-Gary-Mottola-Olivia-Valdes2021-xv. * Behavioral Nudges and Credibility Cues: Embedding lightweight behavioral nudges might be a good low-friction way of guiding users towards safer online practices Pennycook2021-gm. For instance, research has shown that using attention prompts to verify whether someone is receiving or sending money on a fast payment app can significantly reduce the risk of falling victim to scams where scammers rely on the user not paying attention and sending money instead of receiving it Jha2022-dp. Furthermore, since impersonation-based scams are prevalent across all surveyed countries and users rely on visual trust indicators to assess credibility online Jakobsson2007-wx, another intervention worth exploring more broadly here would be providing users with additional credibility cues (e.g., verification check marks) for verified accounts across platforms. As we show in Figure <ref>, since most scams begin on channels like social media, email, phone calls, and text messages, adding positive credibility cues (like verification check marks) or negative credibility cues (like a potential-spam label) could forewarn individuals and help guide them towards safer interaction decisions online. * Inoculation-Styled Interventions: Scammers often rely on a handful of manipulation techniques to trick their victims. Like vaccines, these interventions expose users to weaker forms of these techniques in a controlled environment to help them build immunity or “mental antibodies” to these real-world threats McGuire1961-yv. Prior research has attempted to map out user journeys of victims and understand what manipulation techniques are at play, particularly for romance scams Whitty2013-gb. Building on this, future research could extend such mapping to the three most popular scam journeys across countries: online consumer purchases, opportunity-based scams (job/investment), and impersonation-based scams (phishing, romance, and family/friend impersonation). Users can then be inoculated against these manipulation techniques and tactics. For example, Robb and Wendel (2023) found strong evidence that inoculation techniques increased users' ability to detect SSN scam emails (government impersonation) as compared to general tips about scams Robb2023-gz. * Digital and Financial Literacy-based Interventions: Since scammers prey on a lack of digital and financial literacy among new and habitual internet users, deploying low-cost, comprehensive intervention programs that equip individuals with the knowledge and skills to navigate the digital landscape safely, critically evaluate information, and make informed decisions would go a long way as a preventative measure in this space McGrew2023-uh. Research has shown that educational interventions can reduce susceptibility to financial fraud pitches Burke2022-wv. 
While digital literacy is known to enhance resilience Graham2017-he, the field lacks interventions comparable to those addressing misinformation Kozyreva2022-to. Such tools could be powerful interventions against opportunity-based scams, particularly those that involve investment schemes preying on individuals with low financial literacy. §.§ Scope for Future Research Deepening research to understand scam victimization and barriers to reporting To effectively combat scams, we need a more comprehensive understanding of the phenomenon. While quantitative data, such as that presented in this study, helps to gauge the scale of victimization, we also need equally powerful qualitative research to better understand the issues uncovered in this study, particularly in less affluent countries. In-depth qualitative research, particularly in the following areas, could help tailor effective interventions and policies: * Understanding Lived Experiences of Scam Victims: Scam victimization leaves behind profound and lasting scars. Beyond immediate impacts like financial hardship and emotional repercussions like feelings of shame, anger, and betrayal, it can lead to long-term instability and to victims socially isolating themselves as a result of a loss of trust in those around them Cross2018-ht, Button2014-bj, Whitty2016-gm. But while qualitative research related to the lived experiences of victims in more affluent countries has been insightful, the experiences of victims in less affluent countries largely remain a blind spot. Findings from Section 4.1 in this study underscore an urgent need for such studies, as GNI per-capita and scam victimization rates were strongly negatively correlated. Understanding victims' coping strategies, whether seeking support, changing their behavior, or even withdrawing, is critical for developing effective support from industry and policymakers. * Understanding Vulnerability and Resilience: As we discuss in Section 4.2, while top scam types remain fairly consistent across surveyed countries, scam exposure and victimization rates vary across countries. This suggests that there is a complex interplay of factors contributing to vulnerability and resilience among populations across these countries. Prior research has identified demographic factors (like income, age, and education), psychological traits, and risky online behavior as key contributors to victimization Vitak2018-kr, Modic2018-ss, Whitty2020-gw, Norris2019-fi, Modic2013-wn. We find that while demographic factors like income and internet access are linked to scam victimization, financial losses stemming from victimization are more common in less affluent countries, even when controlling for internet access. This suggests that factors such as cultural attitudes toward risk, financial/digital literacy, and access to information and support could also play a role in resilience, warranting further research. * Identifying Barriers to Scam Reporting in countries with high under-reporting: The under-reporting of scams, particularly in less affluent countries where individuals lack awareness of or access to reporting mechanisms, hinders efforts to understand and address the issue, as highlighted by our results (see Figure <ref>). Prior research has shown that shame, stigma, and fear of retaliation or judgment often prevent victims from seeking help or reporting scams, particularly in romance scams Havers2024-ah. Distrust in authorities and victim-blaming further deter victims from reporting Button2009-wz. 
The complexity of reporting processes can also be a barrier, especially considering victims' post-victimization state of mind Button2013-mz. An increased focus on qualitative research could help uncover the underlying factors leading to high under-reporting in these countries. § CONCLUSION Through a nationally representative, multi-country, scams-focused survey, the present study addresses a substantial gap in our understanding of the global scams landscape. Our findings substantiate recent theories about the role of real-time payments in facilitating the proliferation of scams, and highlight the value of consistent scam reporting across countries to help us understand regional issues in context. By analyzing data from a diverse range of economies and cultures, we contribute important insights for researchers, practitioners, and policymakers in online fraud and scam prevention. § AUTHORS Mo Houtti is a Student Researcher on the Trust & Safety Research Team at Google. Abhishek Roy is a Staff UX Researcher on the Trust & Safety Research Team at Google. Venkata Narsi Reddy Gangula is a Senior UX Researcher on the Trust & Safety Research Team at Google. Ashley Marie Walker is a Senior UX Researcher on the Trust & Safety Research Team at Google. § DISCLOSURES This research was funded in its entirety by Google. Google is a member of the Global Anti-Scam Alliance, a coalition body of over 100 organizations, including governments, law enforcement, consumer protection, financial authorities and services, social media, internet service providers and cybersecurity professionals.
http://arxiv.org/abs/2407.12729v1
20240717164821
FlexFL: Heterogeneous Federated Learning via APoZ-Guided Flexible Pruning in Uncertain Scenarios
[ "Zekai Chen", "Chentao Jia", "Ming Hu", "Xiaofei Xie", "Anran Li", "Mingsong Chen" ]
cs.DC
[ "cs.DC" ]
FlexFL: Heterogeneous Federated Learning via APoZ-Guided Flexible Pruning in Uncertain Scenarios Zekai Chen, Chentao Jia, Ming Hu, Member, IEEE, Xiaofei Xie, Member, IEEE, Anran Li, Member, IEEE, and Mingsong Chen, Senior Member, IEEE The authors Zekai Chen, Chentao Jia, and Mingsong Chen are with the MoE Engineering Research Center of Hardware/Software Co-design Technology and Application at East China Normal University, Shanghai, 200062, China. The authors Ming Hu and Xiaofei Xie are with Singapore Management University, Singapore. The author Anran Li is with Yale University, USA. Ming Hu (hu.ming.work@gmail.com) is the corresponding author. July 22, 2024 ============================================================================================================================================ § ABSTRACT Along with the increasing popularity of Deep Learning (DL) techniques, more and more Artificial Intelligence of Things (AIoT) systems are adopting federated learning (FL) to enable privacy-aware collaborative learning among AIoT devices. However, due to the inherent data and device heterogeneity issues, existing FL-based AIoT systems suffer from the model selection problem. Although various heterogeneous FL methods have been investigated to enable collaborative training among heterogeneous models, there is still a lack of i) wise heterogeneous model generation methods for devices, ii) consideration of uncertain factors, and iii) performance guarantees for large models, thus strongly limiting the overall FL performance. To address the above issues, this paper introduces a novel heterogeneous FL framework named FlexFL. By adopting our Average Percentage of Zeros (APoZ)-guided flexible pruning strategy, FlexFL can effectively derive best-fit models for heterogeneous devices to explore their greatest potential. Meanwhile, our proposed adaptive local pruning strategy allows AIoT devices to prune their received models according to their varying resources within uncertain scenarios. Moreover, based on self-knowledge distillation, FlexFL can enhance the inference performance of large models by learning knowledge from small models. Comprehensive experimental results show that, compared to state-of-the-art heterogeneous FL methods, FlexFL can significantly improve the overall inference accuracy by up to 14.24%. AIoT, APoZ, Heterogeneous Federated Learning, Model Pruning, Uncertain Scenario. § INTRODUCTION Along with the prosperity of Artificial Intelligence (AI) and the Internet of Things (IoT), Federated Learning (FL) <cit.> is becoming a mainstream distributed Deep Learning (DL) paradigm in the design of Artificial Intelligence of Things (AIoT) systems <cit.>, since it enables collaborative learning among devices without compromising their data privacy. So far, FL has been widely investigated in various AIoT applications, such as intelligent transportation <cit.>, edge-based mobile computing <cit.>, real-time control <cit.>, and healthcare systems <cit.>. 
Typically, an FL-based AIoT system is based on a client-server architecture involving a cloud server and numerous AIoT devices. In each FL training round, the cloud server first dispatches the latest global model to multiple selected (activated) devices for local training and then aggregates the trained local models to update the global model. Since the communication between the cloud server and devices is based on model gradients, FL enables knowledge sharing among AIoT devices without privacy leaks. Although existing FL methods are promising for sharing knowledge among devices, they are not well-suited for large-scale AIoT applications involving various heterogeneous devices with different available resources <cit.>. This is because traditional FL methods assume that all the device models are of the same architecture. According to the Cannikin Law, only the models that best fit the weakest devices can be used for FL training. Typically, such models are of small sizes with limited inference capability, thus strongly suppressing the potential FL learning performance. To maximize the knowledge learned on heterogeneous devices, various heterogeneous FL methods have been investigated to use heterogeneous models for local training, which can be mainly classified into two categories, i.e., completely heterogeneous methods and partially heterogeneous methods. Specifically, completely heterogeneous methods <cit.> apply Knowledge Distillation (KD) strategies <cit.> on heterogeneous models with totally different structures for knowledge sharing, while partially heterogeneous methods <cit.> derive heterogeneous models from the same large global model for local training and knowledge aggregation. As an example of partially heterogeneous methods, for a given large global model, HeteroFL <cit.> can generate heterogeneous models to fit devices by pruning the parameters of each model layer. When dealing with real-world AIoT applications, existing heterogeneous FL methods greatly suffer from the following three problems: i) low-performance heterogeneous models derived by unwise model pruning strategies, ii) inefficient or ineffective local training within uncertain scenarios, and iii) low inference performance of large models caused by resource-constrained scenarios. Specifically, existing methods generate heterogeneous models coarsely by pruning the parameters of each model layer with the same ratio or directly removing entire layers. Without considering the different functions of parameters within layers, such unwise pruning strategies severely limit the performance of heterogeneous models. Meanwhile, when encountering various uncertainty factors, such as hardware performance fluctuations caused by process variations <cit.> and dynamic resource utilization (e.g., available memory size), traditional FL methods may fail in local training, since they assume static device resources during FL training. Due to the inaccurate estimation of available device resources, the overall training performance can deteriorate. Furthermore, within a large-scale AIoT application, large models typically cannot be accommodated by most resource-constrained devices. As a result, the limited amount of training data available to them will inevitably degrade the inference capability of large models. Therefore, how to wisely generate high-performance heterogeneous models to fit uncertain scenarios is becoming an urgent issue in heterogeneous FL design. 
Intuitively, to achieve high-performance pruned models, a model pruning method should delete the least significant neurons first. As a promising measure, activation information can be used to evaluate the importance of neurons, where neurons with more activation times have greater importance. In other words, if a model layer consists of more neurons with fewer activation times, it has more parameters to be pruned. According to <cit.>, the Average Percentage of Zeros (APoZ) can be used to measure the percentage of zero neuron activation times under the Rectified Linear Unit (ReLU) mapping. Therefore, the higher the APoZ score of a model layer, the higher the pruning ratio we can apply to the layer. Based on this motivation, this paper proposes a novel heterogeneous FL approach named FlexFL, which utilizes the APoZ scores of model layers to perform finer-grained pruning and generate high-performance heterogeneous models that best fit their target devices for high-quality local training. To accommodate various uncertain scenarios, FlexFL allows devices to adaptively prune their received models according to their available resources. Meanwhile, based on a self-KD-based training strategy, FlexFL enables large models to learn from small models, thus improving their inference performance. Note that in FlexFL, the small models are derived from large models, and the self-KD-based training is only performed by devices. In this way, FlexFL can effectively explore the greatest potential of devices, thus improving the overall FL training performance. This paper makes the following four major contributions: * We propose an APoZ-guided flexible pruning strategy to wisely generate heterogeneous models that best fit devices. * We design an adaptive local pruning strategy to enable devices to further prune their local models to adapt to varying available resources within uncertain scenarios. * We present a self-KD-based local training strategy that utilizes the knowledge of small models to enhance the training of large models. * We perform extensive experiments based on simulation and real test-beds to evaluate the performance of FlexFL. § BACKGROUND AND RELATED WORK §.§ Background §.§.§ Federated Learning Federated Learning (FL) is a distributed machine learning approach that addresses data privacy protection and decentralization issues. A traditional FL framework usually consists of a server and multiple devices. In FL training, the server maintains a global model and dispatches it to the selected devices for local training in each round. Each device then trains locally on its own data and uploads the trained model to the server after training. Finally, the server aggregates all the received models to generate a new global model. Specifically, the optimization objective of FL is based on FedAvg <cit.>, which is defined as follows: min_wF(w)=1/|D|∑_k=1^|D|f_k(w), s.t., f_k(w)=1/|𝒟_k|∑_i=1^|𝒟_k|ℓ(w,⟨ x_i,y_i⟩), where |D| denotes the number of devices, f_k(w) is the loss of the model on device k, |𝒟_k| denotes the dataset size on device k, ℓ denotes the loss function (e.g., cross-entropy loss), w denotes the model parameters (i.e., the optimization variable), and x_i and y_i are the samples and the corresponding labels, respectively. 
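For concreteness, the FedAvg objective above is typically minimized by alternating local SGD on each device's loss f_k with weighted parameter averaging on the server. The following is a minimal PyTorch-style sketch of that loop; it is illustrative only (the helper names are ours), not FlexFL's or FedAvg's reference implementation.

```python
# Minimal FedAvg sketch: local SGD on f_k, then data-size-weighted averaging.
import copy
import torch
import torch.nn.functional as F

def local_train(model, loader, epochs=5, lr=0.01, momentum=0.5):
    """Minimize f_k(w) on one device's data with SGD (cross-entropy loss)."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fedavg_aggregate(local_results):
    """Weighted average of local weights, weights proportional to |D_k|."""
    total = sum(n for _, n in local_results)
    avg = copy.deepcopy(local_results[0][0])
    for key in avg:
        avg[key] = sum(sd[key].float() * (n / total) for sd, n in local_results)
    return avg
```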
§.§.§ Model Pruning In the field of machine learning and deep learning, model pruning is a technique to reduce model complexity and computational resource requirements by removing redundant parameters and connections in neural network models. The main objective of model pruning is to achieve a more compact and efficient model without significantly sacrificing its performance. Initially, the trained model is analyzed to identify parameters or connections that contribute less to the overall model performance. These parameters are considered redundant and can be pruned without affecting the model's performance. Common approaches include magnitude-based pruning <cit.>, which removes parameters with small weights; sensitivity-based pruning <cit.>, which measures the impact of each parameter on the model's output; and structured pruning <cit.>, which removes entire neurons or channels. APoZ (Average Percentage of Zeros) <cit.> is a metric used in model pruning to quantify the sparsity level of neural network activations. It measures the percentage of zero activations in a layer or network after applying a pruning technique. A high APoZ score indicates that a large proportion of activations in the network are zero, i.e., that the network exhibits significant sparsity. In essence, APoZ provides a quantitative measure of sparsity, allowing us to assess the impact of pruning methods on neural network architectures and optimize pruning strategies to achieve the desired trade-off between model size and performance. §.§ Model Heterogeneous Federated Learning Model heterogeneous FL <cit.> has a natural advantage in addressing system heterogeneity. Different from traditional FL, model heterogeneous FL usually maintains several models of different sizes on the cloud server, so as to better deal with the heterogeneous resources of different devices. The current model heterogeneous FL methods can be classified into three categories, i.e., width-wise pruning, depth-wise pruning, and two-dimensional pruning. For width-wise pruning, in <cit.>, Diao et al. proposed HeteroFL, which alleviated the problem of device system heterogeneity by tailoring the model width and conducting parameter averaging over heterogeneous models. Similarly, in <cit.>, Horvath et al. proposed FjORD, which uses the Ordered Dropout mechanism to extract lower-footprint submodels. For depth-wise pruning, DepthFL <cit.> prunes the later parts of deeper networks to reduce the number of network parameters. In InclusiveFL <cit.>, Liu et al. proposed a layer-wise model pruning method with momentum knowledge distillation to better transfer knowledge among submodels. For two-dimensional pruning, in ScaleFL <cit.>, Ilhan et al. introduced a pruning method that scales from both width and depth dimensions, aiming to balance the proportions of model width and depth. Additionally, it incorporates skip connections to facilitate connections between shallower models and the network's classification layers. However, the existing approaches seldom consider the characteristics of each model and the differences in neuronal activation distribution on different datasets, and most of them adopt a fixed pruning method while ignoring the different model architectures. Moreover, existing methods largely lack consideration of device-related resource uncertainties in real-world environments. Most of them are studied under the assumption of fixed device resources, which deviates from the dynamic nature of AIoT scenarios. To the best of our knowledge, FlexFL is the first attempt to utilize a flexible pruning strategy to generate heterogeneous models in FL under resource uncertainty scenarios. 
Using APoZ scores and the number of parameters of each layer, FlexFL can generate higher-performance heterogeneous models for local training. To deal with various uncertain and resource-constrained scenarios, FlexFL integrates an adaptive local pruning mechanism and a self-KD-based local training strategy, which enable devices to adaptively prune their received models according to their available resources and effectively improve the performance of FL training. § OUR FLEXFL APPROACH §.§ Overview of FlexFL Figure <ref> presents the framework and workflow of FlexFL, which consists of two stages, i.e., the pre-processing stage and the FL training stage. FlexFL maintains a large model as the global model and generates multiple heterogeneous models for local training based on the global model. The pre-processing stage aims to calculate the APoZ scores of each layer, which are used for model pruning to generate heterogeneous models. The FL training stage aims to train the multiple heterogeneous models, which are pruned from the global model. As shown in Figure <ref>, in the pre-processing stage, the cloud server maintains a proxy dataset to pre-train the global model and calculates APoZ scores for each layer according to neuron activations. Since APoZ calculation does not require the model to be fully trained, the proxy dataset requires only a small amount of data compared to the training dataset. The cloud server then prunes the global model to generate multiple heterogeneous models based on the calculated APoZ scores. Specifically, the workflow of the pre-processing stage consists of three steps as follows: * Step 1: Pre-training. Since the APoZ scores are calculated based on a trained model, the cloud server uses a proxy dataset to train the global model. Note that the proxy dataset consists of two parts, i.e., a training part and a test part. Here, the cloud server only uses the training part to pre-train the global model. * Step 2: APoZ Score Calculation. The cloud server inputs the test part of the proxy dataset into the pre-trained global model and records the activation of each neuron. Then the cloud server calculates the APoZ scores of each layer based on the activations of its neurons. * Step 3: Heterogeneous Model Generation. The cloud server first specifies multiple levels of heterogeneous models with varying parameter sizes. For example, the cloud server can specify three levels of heterogeneous models with 25% (small), 50% (medium), and 100% (large) of the parameters of the original global model, respectively. For each heterogeneous model, the cloud server calculates the pruning ratio for each layer according to its APoZ score, adjustment weight, and target model pruning ratio. The cloud server then generates multiple heterogeneous models by pruning the global model according to the calculated pruning ratios. Note that our pruning strategy ensures that a small model is a submodel of any model larger than it. The generated heterogeneous models are stored in the model pool. The FL training stage consists of multiple FL training rounds. As shown in Figure <ref>, the cloud server maintains a table to record the device resource configuration, which can be requested directly from each device. In each FL training round, the cloud server selects multiple devices for local training according to their resources. Then, the cloud server dispatches the heterogeneous models in the model pool to the selected AIoT devices. 
Note that each device is dispatched with a model coupled with its APoZ scores to guide local pruning. Here, the APoZ scores form an array whose length equals the number of global model layers, so their communication overhead is negligible. The device adaptively prunes the model according to its currently available resources before conducting local training. To improve the performance of large models, the devices perform a self-KD-based training strategy, which uses the outputs of the small models to guide the training of the large model. Note that since small models are pruned from the large model, devices can directly obtain small models from their dispatched model. After local training, devices upload their trained model to the cloud server. The cloud server aggregates all the local models to update the global model and uses the new global model to update the models in the model pool. Specifically, as shown in Figure <ref>, the workflow of each FL training round consists of seven steps as follows: * Step 1: Model & Device Selection. The cloud server selects devices to participate in local training and selects a model from the model pool for each device according to their resources. * Step 2: Model Dispatching. The cloud server dispatches models from the model pool to their corresponding selected devices for local training. Note that the cloud server also sends the APoZ scores to devices for local pruning. * Step 3: Adaptive Local Pruning. Due to various uncertain factors, the available resources of a device may not be sufficient to enable training of the dispatched model. To facilitate local training, a device prunes its received model when its available resources are insufficient. FlexFL enables a device to prune part of the parameters of its received model. If the device still cannot train the resulting model, the model will be pruned directly to a smaller one. * Step 4: Self-KD-based Local Training. Each device uses its local data to train the pruned model. Since a small model is a subset of any larger model, the pruned model includes all the parameters of the smaller models. Small models can be trained on more devices, which means that they are more adequately trained. To improve the performance of large models, devices calculate the loss for large-model training using the soft labels of small models together with the true labels of data samples. * Step 5: Model Uploading. Each device uploads its trained model to the cloud server for aggregation. * Step 6: Model Aggregation. The cloud server aggregates the corresponding parameters of local models to generate a new global model. * Step 7: Model Pool Updating. The cloud server uses the new global model to update the parameters of each model in the model pool. 
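Putting the seven steps together, one FL training round can be summarized by the following server-side sketch. The helper functions and device methods are hypothetical placeholders for the operations described above, not the authors' actual API.

```python
# Illustrative server-side sketch of one FlexFL training round (Steps 1-7 above).
# pick_model_level, aggregate, update_model_pool and the device methods are
# assumed placeholders standing in for the operations described in the text.
import random

def flexfl_training_round(global_model, model_pool, apoz_scores, devices, frac=0.1):
    k = max(1, int(frac * len(devices)))
    selected = random.sample(devices, k)                     # Step 1: device selection
    uploads = []
    for dev in selected:
        level = pick_model_level(dev, model_pool)            # Step 1: model selection by resources
        dev.receive(model_pool[level], apoz_scores)          # Step 2: dispatch model + APoZ scores
        local_model = dev.adaptive_prune()                   # Step 3: prune to fit current resources
        trained, n_samples = dev.self_kd_train(local_model)  # Step 4: self-KD local training
        uploads.append((trained, n_samples))                 # Step 5: upload trained model
    global_model = aggregate(global_model, uploads)          # Step 6: parameter-wise aggregation
    update_model_pool(model_pool, global_model)              # Step 7: refresh models in the pool
    return global_model
```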
§.§ APoZ-Guided Model Generation FlexFL generates multiple heterogeneous models by pruning the global model. To generate high-performance heterogeneous models for local training, FlexFL aims to assign a higher pruning ratio to layers with more redundant parameters. Based on this motivation, FlexFL adopts the APoZ <cit.> score as a metric to guide the model generation. §.§.§ APoZ Score Calculation (APoZCal(·)) APoZ is a metric that measures the percentage of zero neuron activations after a ReLU layer. For each ReLU layer i, APoZ is defined as: A_i= ∑_j=1^|𝒟^test_p|∑_k=1^N f(h^i_k(𝒮_j)= 0) / (|𝒟^test_p| × N), where f(δ) is a boolean function that returns 1 when the statement δ is true and 0 otherwise, N denotes the dimension of the output feature map after the ReLU layer, |𝒟^test_p| denotes the total size of the test part of the proxy dataset, and h^i_k(𝒮_j) denotes the k^th output feature map of the j^th sample 𝒮_j after the i^th ReLU layer. For models consisting of multiple residual blocks, e.g., ResNet <cit.> and MobileNet <cit.>, we adjust the number of channels between blocks. When a block contains multiple ReLU layers, we average its APoZs as the APoZ score of this block. Specifically, if the i^th block contains K_i ReLU layers, the block APoZ is calculated as: A_i= 1/K_i∑_t=1^K_i∑_j=1^|𝒟^test_p|∑_k=1^N f(h^t_k(𝒮_j)= 0) / (|𝒟^test_p| × N). §.§.§ Adjustment Weight Calculation (AdjWCal(·)) Typically, model layers with more parameters often contain more redundant neurons, diminishing the significance of individual neurons within layers. When pruning a specific number of neurons, prioritizing layers with more parameters is likely to have less impact on overall model performance compared to layers with fewer parameters. Therefore, we calculate the adjustment weight as follows to adjust APoZ scores: AdjWCal(l_i,M) = log size(l_i)/log max(size(l_1),...,size(l_len(M))), where l_i is the i^th layer of M, the function size(l_i) calculates the number of parameters of the i^th layer, and len(M) denotes the number of layers of the model M. §.§.§ Heterogeneous Model Generation (ModelGen(·)) Based on the calculated APoZ scores and adjustment weights, FlexFL can prune the global model to generate heterogeneous models. Note that the heterogeneous model generation is performed on the server side and only once upon initialization. Algorithm <ref> presents the process of heterogeneous model generation. Lines <ref>-<ref> initialize the model pool P and pruning ratios s for each target model. Lines <ref>-<ref> generate the multiple models according to the target model pruning ratios in L_p. Lines <ref>-<ref> initialize the pruning control variable γ, the target model size p_i, and the target model M^', respectively. Line <ref> evaluates the gap between the size of M^' and a target model size p_i. Line <ref> uses Equation <ref> to calculate the adjustment weight AdjW_j for layer l_j. Line <ref> generates pruning ratio s[i][j] based on APoZ score A_j and adjustment weight AdjW_j. In Line <ref>, we set the minimum pruning ratio of each layer to 0.01 to avoid the parameters of a layer being completely pruned. In Line <ref>, according to the pruning ratios s[i][] generated, we prune the global model M to M^'. Specifically, for layer l_j, where y_j and x_j represent the numbers of output and input channels, respectively, we prune it according to the pruning ratios s[i][] into a layer with x_j × s[i][j-1] input channels and y_j × s[i][j] output channels. Note that if W_j ∈ℝ^y_j × x_j is the hidden weight matrix of the global model M in layer l_j, after pruning, W'_j ∈ℝ^(y_j × s[i][j]) × (x_j × s[i][j-1]) is the new hidden weight matrix of the pruned model M^' in layer l_j. In Line <ref>, for M^' with an error less than or equal to ϵ, we add M^' to the model pool P as the pruned model corresponding to the target model pruning ratio L_p[i]. 
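As a concrete illustration of the two quantities above, APoZ scores can be collected with forward hooks on ReLU layers and the adjustment weights computed from per-layer parameter counts. The sketch below follows the equations above but simplifies the block-wise grouping and Algorithm 1's pruning-ratio search; function and variable names are ours, not the paper's.

```python
# Sketch: per-ReLU APoZ via forward hooks, plus log-scaled adjustment weights.
import math
import torch
import torch.nn as nn

@torch.no_grad()
def apoz_scores(model, proxy_test_loader, device="cpu"):
    stats = {}  # layer name -> [num_zero_activations, num_activations]

    def make_hook(name):
        def hook(_module, _inp, out):
            zeros, total = stats.setdefault(name, [0, 0])
            stats[name] = [zeros + (out == 0).sum().item(), total + out.numel()]
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.ReLU)]
    model.eval().to(device)
    for x, _ in proxy_test_loader:          # test part of the proxy dataset
        model(x.to(device))
    for h in handles:
        h.remove()
    return {n: z / t for n, (z, t) in stats.items()}   # A_i = fraction of zeros

def adjustment_weights(layer_param_counts):
    """AdjW_i = log(size(l_i)) / log(max_j size(l_j))."""
    log_max = math.log(max(layer_param_counts.values()))
    return {n: math.log(c) / log_max for n, c in layer_param_counts.items()}
```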
§.§ Adaptive Local Model Pruning (AdaPrune(·)) To address the problem of insufficient available resources within uncertain scenarios, FlexFL enables devices to adaptively prune their received models into adaptive models for local training, focusing on memory resource constraints. Specifically, when a device does not have sufficient memory resources for training its received model, it first prunes Γ× size(M) parameters to generate an adaptive model for local training, where Γ is the adaptive pruning size and size(M) is the number of parameters of the global model M. Note that our approach requires the number of pruned parameters (i.e., Γ× size(M)) to be smaller than the smallest parameter-count difference between any two models. When its available resources are still insufficient to train the pruned model, the device directly prunes it to a smaller model that best fits the device. For example, assume that M_1, M_2, and M_3 denote the small, medium, and large models, respectively. Let M_2^' and M_3^' be the adaptive models of M_2 and M_3, respectively. If a model of type M_3 is dispatched to some device with insufficient resources, the device will adaptively prune the model in the order of M^'_3, M_2, M^'_2, and M_1 until the pruned model can be accommodated by the device. Similar to Algorithm <ref>, the adaptive pruning is still based on our calculated APoZ scores. Since our APoZ scores are calculated before FL training and are not updated within the training process, the cloud server can directly send the APoZ scores to all devices. In addition, since all heterogeneous models in FlexFL are pruned from the same global model without any extra exit layers, and a smaller model is a submodel of a larger model, devices can prune the received model according to the corresponding pruning scheme. §.§ Self-Knowledge Distillation-based Local Training In resource-constrained scenarios, the majority of devices cannot train large models, which results in inadequate training for large models. To improve the performance of large models, FlexFL adopts a Knowledge Distillation <cit.> strategy. Since small models are submodels of large models, devices can utilize adequately trained small models to enhance the training of large models. Specifically, during model training, the device can obtain the outputs (i.e., soft labels) of the large model and small models. Then the device calculates the Cross-Entropy (CE) loss using the output of the large model and true labels and calculates the Kullback-Leibler (KL) loss based on the outputs of the large model and small models. Finally, the device uses both CE and KL losses to update the model. Assume that M_i is a large model, ŷ_i is the output of M_i, and y_i is the ground truth; the CE loss of M_i is defined as: ℒ_CE = - log h(ŷ_i)[y_i], where h(·) is the softmax function. Assume that M_1, M_2, ..., M_i-1 are the smaller models for M_i and ŷ_1, ŷ_2, ..., ŷ_i-1 are the soft labels of these models. The KL loss can be calculated as follows: ℒ_KL = 1/(i-1)∑_j=1^i-1 sum( h(ŷ_j/τ) ·τ^2 ·log( h(ŷ_j/τ) / h(ŷ_i/τ) ) ), where sum(·) sums over the output classes and τ is the temperature to control the distillation process. Note that a higher value of τ leads to smoother probability distributions, making the model focus more on relatively difficult samples, and a lower value of τ makes the probability distribution sharper, making the model more confident and prone to overfitting. According to the CE loss and the KL loss, we can obtain the final loss ℒ of the model M_i as follows: ℒ = ℒ_CE + λℒ_KL, where λ is a hyperparameter that controls the training preference between the two types of losses. Note that a large value of λ makes the model training more influenced by the KL divergence loss. On the contrary, a small value of λ guides the training to focus more on the cross-entropy loss. 
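A minimal PyTorch sketch of the resulting local loss ℒ = ℒ_CE + λℒ_KL is shown below, using the standard temperature-scaled KL formulation. The small models' outputs are treated as fixed soft labels here, and the constants follow the paper's settings (τ=3, λ=10), but this is an illustrative reimplementation rather than the authors' code.

```python
# Sketch of the self-KD local loss: cross-entropy on true labels plus an
# averaged, temperature-scaled KL term against the smaller submodels' outputs.
import torch
import torch.nn.functional as F

def self_kd_loss(large_logits, small_logits_list, targets, tau=3.0, lam=10.0):
    ce = F.cross_entropy(large_logits, targets)            # L_CE with true labels
    kl = torch.zeros((), device=large_logits.device)
    for small_logits in small_logits_list:                 # soft labels from smaller submodels
        kl = kl + F.kl_div(F.log_softmax(large_logits / tau, dim=1),
                           F.softmax(small_logits.detach() / tau, dim=1),
                           reduction="batchmean") * (tau ** 2)
    kl = kl / max(len(small_logits_list), 1)               # average over the i-1 smaller models
    return ce + lam * kl
```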
§.§ Heterogeneous Model Aggregation (Aggr(·)) When all the local models are received and saved in S_upload, the cloud server can perform the aggregation. Since all the heterogeneous models are pruned from the global model, the cloud server generates a new global model by aggregating the corresponding parameters of the models in S_upload, with weights determined by the numbers of their training samples. Assume that p is a parameter in the global model M, and σ(p, S_upload) extracts the models in the set that contain the corresponding parameter of p, together with the numbers of their training samples. Let θ be the parameters of the aggregated global model M, which can be calculated as follows: ∀p∈θ, p = ∑_m∈σ(p, S_upload) p^m× d^m/∑_m∈σ(p, S_upload) d^m, where p^m denotes the corresponding parameter of p in m and d^m is the number of training samples of m. By applying the above equation to aggregate all the parameters of models in S_upload, the cloud server can generate an updated global model. Subsequently, the cloud server updates all the heterogeneous models in the model pool P by assigning the parameter values of the aggregated global model to their corresponding parameters. §.§ Implementation of FlexFL Algorithm <ref> presents the implementation of our FlexFL approach. Lines <ref>-<ref> denote the operations of pre-processing. Line <ref> initializes the resource configuration table, where function ResConfRequest(·) requests all the devices to upload their available resource information. In Line <ref>, the function APoZCalculate(·) pretrains the global model M using the training part of the proxy dataset 𝒟^train_p and calculates the APoZ scores for each layer of M using the test part of the proxy dataset 𝒟^test_p, where S_APoZ is a set of two-tuples ⟨ l_i, A_i ⟩, l_i denotes the i^th layer of M, and A_i denotes the APoZ score of the i^th layer. Line <ref> resets the global model. Line <ref> generates len(L_p) heterogeneous models according to the calculated APoZ scores. Line <ref> stores the generated models in the model pool P. Lines <ref>-<ref> present the process of the FL training stage. Line <ref> calculates the number of devices needed to participate in local training. In Line <ref>, the function DevSel(·) selects K devices and their respective models, where S_d is a set of two-tuples ⟨ d, m⟩, d∈ D is a selected device, and m∈ P is a model that will be dispatched to d. Line <ref> initializes the model set S_upload, a set of two-tuples ⟨ m, num ⟩, where m is a local model and num is the number of data samples used to train m. Lines <ref>-<ref> present the local training process. Line <ref> requests the current resources of d_k and Line <ref> uses our adaptive local pruning strategy to prune the received model m_k according to r_d_k. Line <ref> employs Equation <ref> to calculate the loss L and Line <ref> updates the parameters of m^'_k according to L, where θ^'_m_k denotes the parameters of m^'_k. In Line <ref>, the device uploads its trained model m^'_k together with the number of data samples |𝒟_k| to the model set S_upload. Line <ref> aggregates all the models in S_upload to update the global model M. Line <ref> updates the models in P using the global model M. 
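To illustrate the aggregation rule above, the following sketch averages each global parameter only over the uploaded (possibly pruned) local models that contain its corresponding slice, weighted by local sample counts. It assumes each pruned tensor is the leading slice of its global counterpart, which matches the width-pruning scheme described earlier; helper names are ours.

```python
# Sketch of parameter-wise aggregation over heterogeneous (pruned) local models.
import torch

def aggregate_heterogeneous(global_state, uploads):
    """uploads: list of (state_dict, num_samples) from heterogeneous local models."""
    new_state = {k: v.clone() for k, v in global_state.items()}
    for key, g_param in global_state.items():
        value_sum = torch.zeros_like(g_param, dtype=torch.float)
        weight_sum = torch.zeros_like(g_param, dtype=torch.float)
        for local_state, n in uploads:
            if key not in local_state:
                continue
            p = local_state[key]
            idx = tuple(slice(0, s) for s in p.shape)   # leading slice covered by this model
            value_sum[idx] += p.float() * n
            weight_sum[idx] += n
        covered = weight_sum > 0                        # parameters not covered keep old values
        new_state[key][covered] = (value_sum[covered] / weight_sum[covered]).to(g_param.dtype)
    return new_state
```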
§ PERFORMANCE EVALUATION To evaluate FlexFL performance, we implemented FlexFL using PyTorch. For all investigated FL methods, we adopted the same SGD optimizer with a learning rate of 0.01 and a momentum of 0.5. For local training, we set the batch size to 50 and the local epoch to 5. We assumed that |D|=100 AIoT devices were involved in total, and f=10% of them were selected in each FL training round. All the experiments were conducted on an Ubuntu workstation with one Intel i9 13900k CPU, 64GB memory, and one NVIDIA RTX 4090 GPU. §.§ Experimental Settings §.§.§ Device Heterogeneity Settings To evaluate the performance of FlexFL in uncertain and resource-constrained scenarios, we simulated various devices with different dynamic resources (available memory size). Specifically, we employed a Gaussian distribution to define dynamic device resources as follows: r = r_M - |u|, where r_M is the maximum memory capacity of the device and u ∼𝒩(0,σ^2). In our experiment, we adopted three levels of devices, i.e., weak, medium, and strong. We set the ratio of the number of devices at these three levels to 40%, 30%, and 30%, respectively. The distribution of their available memory size across these devices is uncertain, as shown in Table <ref>. If the device memory is smaller than its received model m, i.e., r ≤size(m)/size(M)× 100, our approach will not train m due to insufficient memory resources. In this case, m will be pruned to be an adaptive model to fit the device. For example, if r is 30, the number of parameters of a pruned model cannot exceed 30% of its original counterpart. §.§.§ Data Settings In our experiments, we utilized three well-known datasets, i.e., CIFAR-10 <cit.>, CIFAR-100 <cit.>, and TinyImagenet <cit.>. To investigate the performance in non-IID scenarios, we adopted the Dirichlet distribution Dir(α) to assign data to the devices involved. By controlling the hyperparameter α of the Dirichlet distribution, we controlled the degree of non-IID bias in the data, where smaller values of α indicate higher data heterogeneity. §.§.§ Model Settings To validate the generality of our method, we conducted experiments with models of different sizes and architectures, i.e., VGG16 <cit.>, ResNet34 <cit.>, and MobileNetV2 <cit.>. We adopted p=3 and uniformly set the list of target model pruning ratios L_p to [25%, 50%, 100%] for all methods, yielding a model pool P ={M_1, M_2, M_3} with three models, where model M_3 is the largest model and also the global model. Figure <ref> presents an example to visualize the pruned models based on different FL methods. In the case of FlexFL, the adaptive models are M^'_2 and M^'_3, which are obtained by pruning Γ× size(M_3) parameters from M_2 and M_3, respectively, where Γ = 10%. For self-KD hyperparameters, we set τ =3 and λ = 10. 
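The uncertain-resource and non-IID settings above can be reproduced with a short script, e.g., sampling r = r_M - |u| per round and partitioning labels with a Dirichlet(α) distribution. The snippet below is an illustrative sketch with placeholder values, not the authors' configuration code.

```python
# Sketch of the experimental setup: half-normal memory fluctuation and
# Dirichlet(alpha) non-IID label partitioning across devices.
import numpy as np

def sample_available_memory(r_max, sigma, rng):
    """Dynamic available memory: r = r_M - |u|, u ~ N(0, sigma^2)."""
    return r_max - abs(rng.normal(0.0, sigma))

def dirichlet_partition(labels, num_devices, alpha, rng):
    """Assign sample indices to devices with per-class Dirichlet proportions."""
    labels = np.asarray(labels)
    device_indices = [[] for _ in range(num_devices)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_devices))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for dev, part in enumerate(np.split(idx, cuts)):
            device_indices[dev].extend(part.tolist())
    return device_indices

rng = np.random.default_rng(0)
print(sample_available_memory(r_max=55, sigma=10, rng=rng))   # placeholder values
```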
§.§ Performance Comparison In our experiment, we compared our method with three baselines: Decoupled<cit.>, HeteroFL<cit.>, and ScaleFL<cit.>. Decoupled follows a strategy similar to FedAvg<cit.>, where large, medium, and small models are trained on devices capable of hosting them without considering model aggregation. HeteroFL generates corresponding models based on width-wise pruning. ScaleFL, on the other hand, prunes models based on both width and depth proportions to create their corresponding models. Figure <ref> illustrates the comparison between the accuracy of our method and the three baselines. The solid line in the middle represents the average accuracy of the 25%, 50%, and 100% models, and the boundaries filled with corresponding colors represent the highest and lowest accuracies among all models. §.§.§ Large Model Performance Analysis In Table <ref>, we use the notation “x/y” to specify the test accuracy, where x denotes the average accuracy of the 25%, 50%, and 100% models and y indicates the accuracy of the 100% model. It is evident that FlexFL consistently achieves an accuracy improvement ranging from 1.75% to 13.13% in terms of large model accuracy, in both IID and non-IID scenarios. This indicates that our method performs better in obtaining high accuracy on larger models. The variation in improvement across architectures can be attributed to the greater flexibility in pruning offered by VGG16 and MobileNetV2. Specifically, VGG16 permits pruning of up to the first 15 layers, whereas MobileNetV2 allows for pruning of up to 9 blocks. In contrast, due to the inability to disrupt the internal structure of residual blocks in ResNet34, we can only prune 5 blocks, resulting in a relatively smaller improvement compared to the baselines. §.§.§ Average Model Performance Analysis Our experiment also evaluated the average model accuracy, as shown in Table <ref>. Based on our observations of the datasets, our approach demonstrates an accuracy improvement ranging from 0.92% to 7.65% compared to ScaleFL. We also observe that in most datasets, the accuracy of the largest model is higher than that of the average model. Conversely, in the ScaleFL, HeteroFL, and Decoupled approaches, the accuracy of the largest model is lower than that of the average model. This indicates that in our approach, large models can effectively leverage their greater number of parameters, while in other methods, large models exhibit a paradoxical scenario where they possess more parameters but lower accuracy compared to smaller or medium-sized models. This phenomenon arises from the fact that small models can be trained on all devices, whereas large models can only be trained on devices with ample resources. Consequently, small models encapsulate a broader spectrum of knowledge. In our approach, by distilling knowledge from small models to larger ones, the larger models can enhance accuracy by assimilating knowledge from other models. §.§ Impacts of Different Configurations §.§.§ Proxy Dataset Size To evaluate the impact of different proxy dataset sizes on the pruning ratios s, we used 100%, 50%, 20%, 10%, 5%, and 1% of the training dataset as proxy datasets, where 80% of the proxy dataset was used as the training set for pre-training. After training for 100 rounds, the remaining portion was used as the test set to calculate APoZ scores. Based on Algorithm <ref>, we compared the similarity of the pruning ratios s_p%[i] obtained for model M_i in the model pool P, where p% represents the proxy dataset size. The similarity is defined as: sim_(p%,M_i) = 1-avg(|s_p%[i]-s_100%[i]|/s_100%[i]). Figure <ref> shows the similarity for each model level M_i. We can observe that even with only 1% of the data, FlexFL achieves pruning ratios similar to those using the full dataset. We conducted heterogeneous FL training using different model pools P generated by different proxy dataset sizes. As shown in Table <ref>, FlexFL achieves inference accuracy similar to that of the full dataset when using only 1% of the data as a proxy dataset. Therefore, FlexFL can achieve good performance using only a very small proxy dataset. 
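The similarity metric above reduces to a one-line computation over the per-layer pruning ratios. The example below uses made-up ratios purely for illustration.

```python
# Sketch of the pruning-ratio similarity metric: sim = 1 - mean(|s_p% - s_100%| / s_100%).
import numpy as np

def pruning_ratio_similarity(s_reduced, s_full):
    """Element-wise relative deviation of per-layer pruning ratios, averaged over layers."""
    s_reduced, s_full = np.asarray(s_reduced), np.asarray(s_full)
    return 1.0 - np.mean(np.abs(s_reduced - s_full) / s_full)

# Illustrative per-layer ratios (not the paper's values):
print(pruning_ratio_similarity([0.24, 0.51, 0.33], [0.25, 0.50, 0.35]))
```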
§.§.§ Numbers of Involved Devices To investigate the scalability of our approach in various heterogeneous FL scenarios, we studied the impact of varying numbers of involved devices on inference accuracy. Specifically, we conducted experiments based on CIFAR10 and VGG16 within IID scenarios involving |D|=50, 100, 200, and 500 devices, respectively. In each training round, 10% of the devices were selected. Figure <ref> shows that our method consistently improves performance across different numbers of devices. Moreover, as |D| increases, the accuracy of all methods decreases. The increase in the number of involved devices leads to less local data on each device, which reduces the generalization of the local model and further reduces the accuracy of the global model. Specifically, our method experiences a 3.72% decrease in average model accuracy when increasing from |D|=50 to |D|=500. In contrast, ScaleFL experiences an 8.86% decrease in average model accuracy as the number of involved devices increases. This indicates that our method is more practical and adaptable to heterogeneous FL scenarios with large numbers of devices. §.§.§ Numbers of Selected Devices Assume that there are a total of |D|=100 devices. Figure <ref> compares the training performance of FL methods considering different ratios f of selected devices based on CIFAR10 and VGG16 within an IID scenario. We can find that our approach achieves the best performance in all four cases. Moreover, as the ratio f increases, the accuracy of our method remains stable, and the differences between the accuracy of the large and small models are the smallest. Conversely, for ScaleFL, as f increases, its performance slightly decreases, and the difference in accuracy between the large and small models gradually increases. §.§.§ Proportions of Different Devices We compared our method with three baselines in terms of accuracy under different proportions of devices, as shown in Figure <ref>. We categorized all devices into three groups, and the uncertainty configurations for each group of devices are shown in Table <ref>. We varied the device proportions to 1:1:8, 1:8:1, and 8:1:1. According to our experimental results, we observed that our method outperformed the baselines in terms of average model accuracy across all device proportions. Furthermore, as the device proportions changed, our method's accuracy remained relatively stable, while other baselines exhibited performance degradation. Moreover, when the number of medium and small models was higher, the difference between the large and small model accuracy in our method was small. In contrast, the difference between the highest and lowest model accuracy in other baselines reached approximately 10%. This indicates that our method can effectively accommodate extreme device resource variations with different proportions. §.§.§ Adaptive Local Model Pruning Size We conducted experiments on CIFAR10 within an IID scenario with VGG16 to evaluate the impact of adaptive pruning sizes Γ. The experimental results are presented in Table <ref>. We can find that the optimal value for the hyperparameter Γ in our experiments is 10%. This is mainly because a low value of Γ leads to lower utilization of adaptive models, meaning more devices degrade their received models directly to smaller models. In contrast, although a high value of Γ can improve the utilization of adaptive models, it results in smaller adaptive models, which lowers the utilization of device resources. 
§.§.§ Self-KD Hyperparameter Settings We investigated the impact of the self-KD hyperparameter λ on our experiments. In our experimental setup, we fixed the temperature parameter τ=3 for self-KD and varied the coefficient of the KL-loss λ during local training as λ=0,5,10,20,50. The experimental results in Figure <ref> show that without self-KD (λ=0), the accuracy of our method decreased by approximately 2-4% compared to when distillation was used. Furthermore, with an increasing distillation coefficient λ, the accuracy showed an initial increase followed by a decreasing trend in both IID and non-IID scenarios. When the distillation coefficient λ is in a reasonable range, i.e., λ∈[10,20], the overall loss can be well balanced between the distillation loss and the cross-entropy loss. When λ is set to a high value, the overall loss is dominated by the distillation loss, which hinders effective learning of knowledge from the local dataset, resulting in an accuracy decrease. §.§.§ Different Settings of Resource Distributions To explore our method's adaptability to different resource allocation scenarios, we constructed three distinct configuration plans, as shown in Table <ref>. Conf1 indicates a constant number of resources for each device, Conf2 indicates slight fluctuations in the resources of devices, and Conf3 indicates significant resource fluctuations across devices. We conducted a study comparing the accuracy differences between our method and three baseline methods, with the results shown in Table <ref>. Our method achieved approximately a 4% performance improvement compared to ScaleFL across all configurations, indicating that our method can maintain high accuracy when dealing with various degrees of resource fluctuation. §.§.§ Real-world Datasets To validate the generalization ability of our approach, we extended our experiments to include real-world datasets, i.e., FEMNIST <cit.> and Widar <cit.>, in addition to image recognition datasets. The FEMNIST dataset comprises 180 devices, with 10% of the devices selected in each training round. The data distribution across devices is naturally non-IID. We assumed that the Widar dataset involves 100 devices following given Dirichlet distributions, and 10 devices are selected for local training in each FL round. We applied the uncertainty settings in Table <ref> to all devices. The results presented in Table <ref> demonstrate the performance of our method on ResNet34 and MobileNetV2, with performance improvements of up to 10.63%. There is minimal difference between the average model accuracy and the accuracy of the best-performing model. On VGG16, although FlexFL exhibits a 2.68% lower average accuracy compared to HeteroFL on the FEMNIST dataset, FlexFL still demonstrates improved accuracy for the largest model. §.§ Ablation Study We conducted a study on the effectiveness of each component within our method to investigate their respective impacts on the accuracy of our approach. We designed four variants of FlexFL: i) “w/o self-KD” indicates the absence of self-distillation during local training; ii) “w/o adaptive model” implies the utilization of only models M_1, M_2, and M_3, with the current model M_i being pruned to the model M_i-1 under resource constraints; iii) “w/o APoZ” involves only using the adjustment weight AdjW for model pruning; and iv) “w/o adjustment” entails utilizing APoZ for model pruning without the adjustment weight AdjW.
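For context on variant (i), which removes the self-KD term, the sketch below shows one standard way a cross-entropy loss can be combined with a temperature-scaled KL distillation term, consistent with the coefficients λ and τ studied above. It is an illustrative PyTorch-style sketch under the assumption that the self-KD loss follows this common formulation (including the usual τ² scaling); the function and tensor names are ours, not FlexFL's.

```python
import torch
import torch.nn.functional as F

def self_kd_loss(student_logits, teacher_logits, labels, lam=10.0, tau=3.0):
    """Cross-entropy plus a temperature-scaled KL term weighted by lam.
    `teacher_logits` would come from another sub-model in the local model pool
    and is detached so that only the student branch is updated by this term."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits.detach() / tau, dim=1),
        reduction="batchmean",
    ) * (tau ** 2)
    return ce + lam * kl

# Toy usage with random logits for a 10-class problem.
logits_small = torch.randn(8, 10)
logits_large = torch.randn(8, 10, requires_grad=True)
y = torch.randint(0, 10, (8,))
loss = self_kd_loss(logits_large, logits_small, y, lam=10.0, tau=3.0)
loss.backward()
```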
Figure <ref> shows the superiority of FlexFL over its four variants, indicating that the absence of any of our proposed components decreases model accuracy, with the lack of APoZ causing the most significant decline. §.§ Evaluation on Real Test-bed To demonstrate the effectiveness of FlexFL in real AIoT scenarios, we conducted experiments on a real test-bed platform, which consists of 17 different AIoT devices and a cloud server. Table <ref> shows the details of the configuration of these AIoT devices. Based on our real test-bed platform, we conducted experiments in a non-IID α=0.1 scenario on the CIFAR10 <cit.> dataset using MobileNetV2 <cit.> models and selected 10 devices to participate in local training in each FL training round. We set a uniform time limit of 70000 seconds for FlexFL, ScaleFL, and HeteroFL. Figure <ref> illustrates our real test-bed devices and the learning curves of the methods. From Figure <ref> (b), we can observe that FlexFL achieves the highest accuracy compared to ScaleFL and HeteroFL. In addition, compared with the two baselines, FlexFL exhibits relatively small accuracy fluctuations. Therefore, compared to ScaleFL and HeteroFL, FlexFL still achieves the best inference accuracy and stability on the real test-bed platform. §.§ Discussion §.§.§ Scalability Analysis With the increasing number of heterogeneous devices integrated into complex IoT systems, scalability becomes crucial for the deployment of FlexFL. We conducted experiments to evaluate the scalability of FlexFL in Sections <ref>, <ref>, and <ref>. We verified the performance of FlexFL using 500 devices for heterogeneous federated learning, selecting 50% of the devices for training in each federated learning round, and considering scenarios with extreme variations in device resource distribution. The experimental results show that our approach can accommodate large FL applications with various network architectures. §.§.§ Computation Overhead To evaluate the computation overhead of the components (i.e., pre-processing, adaptive local pruning, and self-KD local training) introduced by FlexFL, we conducted various experiments on our real test-bed platform to investigate their impacts on the overall training time. Specifically, we evaluated two aspects: i) the time overhead of the components per FL training round; and ii) the training time needed to achieve a specific test accuracy. From Table <ref>, we find that “FlexFL without self-KD” needs 52.05s on average, while FlexFL needs 71.82s on average. The introduction of self-KD thus results in a longer local training time. Note that our method's pre-processing time accounts for approximately 0.3% of the total training time (219s for 1000 rounds of training), and its adaptive pruning time accounts for about 2% of the local time. Overall, the computational overhead of adaptive pruning and pre-training is almost negligible, and the main additional computational overhead of FlexFL comes from self-KD. However, as shown in Table <ref>, FlexFL needs much less training time to achieve a specific test accuracy than HeteroFL and ScaleFL. Specifically, compared with ScaleFL, FlexFL can achieve a training speedup of up to 6.63×. Here, “All Large” represents the results obtained by only dispatching the largest models for FL training under FedAvg, and the notation “N/A” means not available. Moreover, FlexFL can achieve an accuracy of 85%, while all the other methods fail, mainly benefiting from the performance improvement brought about by our proposed self-KD technique.
§.§.§ Communication Overhead From Table <ref>, we can observe that FlexFL achieves the lowest communication overhead under all target accuracy levels and can reduce the communication overhead (Dispatch+Upload) by up to 85%. Since APoZ-guided pruning, adaptive local pruning, and self-distillation do not rely on any extra complex data structures, the additional memory overhead they introduce is negligible. Note that since APoZ-guided pruning is performed on the server side and only once upon initialization, it does not impose any burden on devices. § CONCLUSION Due to the lack of strategies to generate high-performance heterogeneous models, existing heterogeneous FL suffers from low inference performance, especially in uncertain scenarios. To address this problem, this paper presents a novel heterogeneous FL approach named FlexFL, which adopts an APoZ-guided flexible pruning strategy to wisely generate heterogeneous models that fit various heterogeneous AIoT devices. Based on our proposed adaptive local pruning mechanism, FlexFL enables devices to further prune their received models to accommodate various uncertain scenarios. Meanwhile, FlexFL introduces an effective self-knowledge distillation-based local training strategy, which can improve the inference capability of large models by learning from small models, thus boosting the overall FL performance. Comprehensive experimental results obtained from simulation- and real test-bed-based AIoT systems show that our approach can achieve better inference performance compared with state-of-the-art heterogeneous FL methods. § ACKNOWLEDGEMENT This work was supported by the Natural Science Foundation of China (62272170), the “Digital Silk Road” Shanghai International Joint Lab of Trustworthy Intelligent Software (22510750100), the Shanghai Trusted Industry Internet Software Collaborative Innovation Center, the National Research Foundation, Singapore, and the Cyber Security Agency under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore and the Cyber Security Agency of Singapore.
http://arxiv.org/abs/2407.11926v1
20240716171813
Far from Perfect: Quantum Error Correction with (Hyperinvariant) Evenbly Codes
[ "Matthew Steinberg", "Junyu Fan", "Robert J. Harris", "David Elkouss", "Sebastian Feld", "Alexander Jahn" ]
quant-ph
[ "quant-ph", "hep-th" ]
^1QuTech, Delft University of Technology, 2628 CJ Delft, The Netherlands ^2Quantum and Computer Engineering Department, Delft University of Technology, 2628 CD Delft, The Netherlands ^3ARC Centre for Engineered Quantum Systems, School of Mathematics and Physics, The University of Queensland, St Lucia, QLD, 4072, Australia ^4Networked Quantum Devices Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan ^5Department of Physics, Freie Universität Berlin, 14195 Berlin, Germany § ABSTRACT We introduce a new class of qubit codes that we call Evenbly codes, building on a previous proposal of hyperinvariant tensor networks. Its tensor network description consists of local, non-perfect tensors describing CSS codes interspersed with Hadamard gates, placed on a hyperbolic {p,q} geometry with even q≥ 4, yielding an infinitely large class of subsystem codes. We construct an example for a {5,4} manifold and describe strategies of logical gauge fixing that lead to different rates k/n and distances d, which we calculate analytically, finding distances which range from d=2 to d ∼ n^2/3 in the ungauged case. Investigating threshold performance under erasure, depolarizing, and pure Pauli noise channels, we find that the code exhibits a depolarizing noise threshold of about 19.1% in the code-capacity model and 50% for pure Pauli and erasure channels under suitable gauges. We also test a constant-rate version with k/n = 0.125, finding excellent error resilience (about 40%) under the erasure channel. Recovery rates for these and other settings are studied both under an optimal decoder as well as a more efficient but non-optimal greedy decoder. We also consider generalizations beyond the CSS tensor construction, compute error rates and thresholds for other hyperbolic geometries, and discuss the relationship to holographic bulk/boundary dualities. Our work indicates that Evenbly codes may show promise for practical quantum computing applications. Far from Perfect: Quantum Error Correction with (Hyperinvariant) Evenbly Codes M. Steinberg^1,2, J. Fan^1,2, R. J. Harris^3, D. Elkouss^1,4, S. Feld^1,2, A. Jahn^5 Received —; accepted — ======================================================================================== § INTRODUCTION Quantum error correction (QEC) is a necessity for fault-tolerant quantum-information processing at large scale <cit.>. The utility of QEC codes for practical quantum processing depends on a number of properties, such as finite code rates, a threshold against physical errors, low-weight stabilizer checks, and fault-tolerant protocols for universal logical operations <cit.>. The program of finding specific code constructions with such properties has drawn from a number of fields outside of quantum information theory, such as topological order <cit.>, classical coding theory <cit.>, and more recently, holographic dualities such as the Anti-de Sitter / Conformal Field Theory (AdS/CFT) correspondence <cit.>. In the last approach, one constructs so-called holographic codes derived from the bulk-to-boundary encoding maps of model of quantum gravity <cit.>. Though the bulk degrees of freedom in these models – which are associated with logical qubits of a code – live in one higher dimension than the boundary, the hyperbolic geometry of the bulk ensures that the Hilbert space of the physical qubits on the boundary is larger. The peculiar geometric properties of these maps then imply genuine quantum error correction <cit.>. 
Apart from their usefulness in studying bulk / boundary correspondences in theoretical high-energy physics, it has been suggested that holographic codes could have practical application in quantum information science. Indeed, holographic codes based on absolutely maximally-entangled (AME) states <cit.> or planar maximally-entangled (PME) states <cit.> (which are also known as perfect and block-perfect tensors, respectively <cit.>) have been shown to exhibit several desirable properties for practical quantum error correction. Among these properties are tunable, non-zero rates; high central-logical qubit thresholds for several logical-index implantation schemes (which are comparable to those of several topological codes <cit.>); and efficient parallelizable decoders <cit.>. In this work, we explore a recent proposal for hyperinvariant tensor network (HTN) codes <cit.> and show that it encompasses simple qubit-level codes; we dub these new codes Evenbly codes after the author of the original HTN paper <cit.>. An essential property of these tensor network codes is that they do not require perfect tensors, and can thus be built from gates that mediate less-than-maximal entanglement. Our specific construction incorporates seed tensors from q,1,2 _2 CSS codes placed on the vertices of a hyperbolic tiling, along with Hadamard gates on its edges; this can be generalized to any even-q hyperbolic tiling with Schläfli symbols { p,q }, giving rise to a new, infinite family of holographic subsystem codes. We methodically show that the resulting code structure upholds the isometric constraints typified by the HTN ansatz <cit.>, and additionally show how to partition the logical subspace as a subsystem code. The distance and rate of the code is analyzed, and variants with a rate up to R=√(3)/2 and a distance scaling up to d(n) ∼ n^2/3 can be constructed. Moreover, we investigate the erasure threshold of the zero-rate { 5,4} Evenbly code using two types of decoders: a quadratically-scaling greedy reconstruction algorithm and an optimal recovery algorithm based on Gaussian elimination, which exhibits a cubic time complexity <cit.>. We find that the greedy reconstruction algorithm faithfully provides a lower bound for the threshold (approximately 40%), whereas the Gaussian-elimination algorithm obtains a threshold of 49.95 ± 0.01% in the Z gauge. A constant-rate k/n = 0.125 variant of the Evenbly code is tested as well, and it is shown to achieve a threshold of about 44% in the Y gauge for the central logical qubit, and more than 39% in the Z gauge. In both the zero- and constant-rate case, the X gauge corresponds to a product state embedding of the ungauged logical qubits on the boundary, preserving the transversal weight-2 gates of the 4,1,2 _2 seed code. We also study the effects of depolarizing and pure Pauli noise on the zero-rate {5,4} Evenbly code using the integer-optimization decoder from <cit.>, proving the existence of a threshold at 19.1 ± 0.94% for the former, and around 50% for the latter, ostensibly surpassing the well-known zero-rate hashing bound from random coding theory <cit.>. Other constructions are possible, and we provide several examples that utilize seed tensors which are neither CSS nor planar 2-uniform, as well as evaluating the erasure thresholds under greedy decoding for several even-q generalizations of the Evenbly code discussed in the main text. This paper is organized as follows. 
We start with a synopsis on tensor networks and holographic quantum codes (<ref>) and subsequently introduce hyperinvariant tensor networks (HTN) as an ansatz (<ref>). Next, we construct Evenbly codes (<ref>), commencing first with the introduction of 1-uniform (i.e. GHZ) states to the vertex tensors of the original HTN ansatz (<ref>), and following subsequently with the construction of the qubit-level Evenbly code using planar 2-uniform states (<ref>). We further show that our construction admits a subsystem structure, in line with previous holographic quantum codes, and we describe how to partition the logical space and perform gauge fixing for the code (<ref>). Proceeding, we investigate the rate and distance scaling for the code, and as a result construct a finite-rate version of the Evenbly code (<ref>); the threshold properties are investigated for a zero-rate Evenbly code (<ref>) and a constant-rate variant in <ref>; in the process, we explain how the X gauge choice in the Evenbly code allows for weight-2 transversal logical operations to emerge, and we show that very high thresholds emerge in several of the other gauge choices. Finally we report depolarizing and pure Pauli noise threshold results (<ref>), which are found to be competitive with known code-capacity thresholds for topological codes <cit.>. The implications of our findings are discussed and concluding comments are provided in <ref>. In the appendices, we provide additional information on how to build multipartite maximally-entangled (MME) states (<ref>); an example construction of an Evenbly code using an AME graph state (<ref>); Several additional constructions on generalized {p,q} tilings and and an accompanying greedy threshold analysis (<ref>); several seed-tensor constructions that are neither CSS nor planar 2-uniform states, yet still can be used to create new Evenbly codes (<ref>); and lastly, details on the three different decoder strategies employed in this paper (<ref>). § BACKGROUND §.§ Tensor Networks & Holographic Codes The first tensor network to be studied as a model for holographic dualities was the multiscale entanglement-renormalization ansatz (MERA) <cit.>, originally proposed as an ansatz for approximating groundstates of quantum-critical spin chains <cit.>. Though it exhibits some approximate error-correction features <cit.>, its geometry does not match with expectations from AdS/CFT <cit.>. A more natural tensor network realization of holographic bulk-to-boundary codes was introduced with the Harlow-Preskill-Pastawski-Yoshida (HaPPY) codes <cit.>, a discrete realization of quantum error correction in the AdS/CFT correspondence <cit.>. The HaPPY code itself can be defined using the seed tensor of the ansatz, which determines the encoding isometry for each individual logical qubit, all of which are contracted together as a tensor network on a hyperbolic {p,q} tiling. For the pentagon version (q=5) of the HaPPY code, this seed tensor is the encoding isometry for the 5,1,3 perfect stabilizer code <cit.>. This perfect tensor also describes a 6-qubit AME state. The logical qubits of this tensor network code can be recovered using a greedy algorithm <cit.>, so one obtains a concrete dictionary between quantum states in the bulk of the network and boundary states on the uncontracted edges of the tensor network. Other holographic code proposals have also been introduced, mainly in the context of utilizing PME states (such as in <cit.>) in the bulk rather than AME states. 
Holographic codes typically exhibit uberholography, i.e., only operators on a fractal subset of the boundary are needed in order to reconstruct logical operators in the bulk <cit.>. Uberholography is a special case of operator-algebra quantum-error correction <cit.>, in which a subalgebra representing all possible logical operators in a region is defined using non-local physical boundary operators. HaPPY-style holographic codes have been shown to exhibit high thresholds under both erasure and depolarizing noise models <cit.>. In the spirit of investigating holographic dualities, it has been shown that HaPPY-style holographic codes do not reproduce many of holography's known features at finite N (i.e., under gravitational corrections) <cit.>. Such features include non-trivial entanglement spectra of reduced density matrices, state-dependent bulk-reconstruction, and corrections to the entanglement entropy, whose recovery requires tensor network codes with non-perfect tensors <cit.>. The quantum code construction that we propose in this paper, in addition to its novel error correction properties, are thus relevant from the perspective of holography as well; we address these attributes in our companion paper <cit.>. §.§ Hyperinvariant Tensor Networks Hyperinvariant tensor networks (HTN) were proposed for exploring conformal field theories and critical states that are adherent to a description in terms of the AdS/CFT correspondence <cit.>. The hyperbolic tessellation and associated bulk hyperbolic symmetries were taken from previous work on holographic codes <cit.>; however, when combined with the superoperator structure present in MERA <cit.>, the intent was to devise simulations of conformal field theory (CFT) states for which the ansatz upholds a discretized version of conformal symmetry <cit.>. In keeping with the hyperbolic structure characteristic of holographic quantum codes, the HTN ansatz is constructed from a two-dimensional hyperbolic tessellation, as shown in <ref>(a); such hyperbolic tilings are defined by Schläfli symbols {p,q} in the bulk manifold (where p represents the number of sides in a polygon residing in the bulk, and q is the number of edges meeting at each vertex), with the requirement that 1/p + 1/q < 1/2 for hyperbolic tessellations. Here we follow the notation used in <cit.>, which is related to the HaPPY code notation in Ref. <cit.> via a p ↔ q duality transformation exchanging tiles and vertices. An HTN can be arranged into concentric layers of tensors with which one can perform entanglement renormalization <cit.>. Every concentric layer of the HTN's bulk can be described as realizing a step in the scale-invariant, real-space renormalization-group (RG) flow of a typical MERA network. There are different prescriptions on how to delineate these concentric layers; here we consider vertex inflation <cit.>, which matches the definition of RG layers in the original HTN proposal <cit.>. Every vertex and edge in an HTN consists of a set of tensors which we shall call A and B, where A corresponds to rotationally-invariant tensors with q indices, and B represents rank-2 rotationally-invariant tensors on every contracted edge of the tessellation. See <cit.> for detailed information on how to construct an HTN ansatz using a set of isometric tensors U(A,B) and W(A,B). <ref> also shows an example of isometry constraints which are needed in order to realize entanglement renormalization on the {p,q}-manifold at every layer, in the spirit of the MERA. 
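As a quick sanity check of the admissible geometries, the following Python snippet (purely illustrative) enumerates small Schläfli pairs satisfying the hyperbolicity condition 1/p + 1/q < 1/2 stated above, and filters for the even q ≥ 4 used later for Evenbly codes.

```python
# Hyperbolic {p,q} tilings satisfy 1/p + 1/q < 1/2; the {5,4} tiling used for the
# main example is one of them.  Purely an illustrative enumeration.
hyperbolic = [(p, q) for p in range(3, 9) for q in range(3, 9) if 1 / p + 1 / q < 0.5]
even_q = [(p, q) for (p, q) in hyperbolic if q % 2 == 0 and q >= 4]
print(hyperbolic)
print(even_q)   # includes (5, 4)
```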
The tensors A and B themselves admit various decompositions which fulfill the so-called isometry constraints (called multitensor constraints in <cit.>). Furthermore, the original HTN ansätze employ doubly-unitary matrices named Y,Q and R <cit.>. The arbitrariness of A- and B-tensor decompositions was showcased in <cit.>, where several other matrices were utilized in order to uphold the isometry constraints. Such solutions to the HTN isometry constraints come with a set of free parameters, each non-equivalent choice of which leads to a valid HTN solution with different RG flow and correlation decay <cit.>, as expected for a quantum-critical system <cit.>. With a reformulation of solution sets in terms of logical qubits or qudits, one can then construct codes from HTN ansätze <cit.>, with a choice of the logical state determining its free parameters. Before introducing such a code, however, we begin with a simpler qubit HTN solution that already exhibits some central features. § RESULTS §.§ HTN Ansätze from 1-Uniform States We begin by constructing first a simpler case of the original HTN ansatz of <cit.>; namely, we prepare a single boundary state rather than a bulk-to-boundary code, which permits us to analyze the tensor network structure more easily. We then show in <ref> that this simplified case easily generalizes to a new subclass of holographic quantum codes. A hyperinvariant tensor network (HTN) is composed of two tensors A and B which reside on the vertices and edges, respectively, of a {p,q} hyperbolic tessellation (with q-leg vertices in a p-gon tiling). These tensors comply with the following criteria: * A and B fulfill isometry constraints involving tensors on single and neighboring vertices. The constraints are shown explicitly in <ref> for the q>3 case (the special q=3 case is covered in <cit.>). * A is symmetric under cyclic index permutations, i.e., A_k_1,k_2,…,k_q = A_k_q,k_1,…,k_q-1. * B is a symmetric unitary, i.e., B=B^T and B B^† = 𝕀. The first criterion ensures that the tensor network behaves as an isometric map along any radial direction, while the second and third ensure that the tensors preserve all the (triangle group) symmetries of the {p,q} tessellation (hence the moniker hyperinvariant). These three criteria can be fulfilled by choosing B = 𝕀 and A as perfect tensors that describe q-qudit states that are absolutely maximally entangled (AME). This special case of HTNs corresponds to states of HaPPY codes <cit.>, but also produces flat entanglement spectra for subregions. Avoiding this scenario, Evenbly codes can be restricted further by imposing a fourth criterion: 4. A is chosen as a non-perfect tensor, i.e., there exist bipartitions of its q indices between which it does not mediate maximal entanglement. We now introduce a basic example of an HTN that fulfills all four criteria, choosing an A tensor that describes q-partite GHZ state and a B tensor that describes a Hadamard gate H. For simplicity we will restrict ourselves to qubits, though generalizations to qudits are possible <cit.>. Explicitly, the q-partite GHZ state vector is defined as |GHZ⟩ = 1/√(2)[ |0⟩^⊗ q + |1⟩^⊗ q] . Equivalently, we can express this state using the stabilizer formalism, where 𝒢_1 = X^⊗ q , 𝒢_k = I^⊗ k-2 (Z Z) I^⊗ q-k , where k∈{2,3,…,q}. As usual, we end up with (n-k) = q generators for the full stabilizer group. GHZ states are 1-uniform <cit.> and thus A forms an isometry only from each single site to the remaining ones. 
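The claims just made are easy to check numerically for a small instance. The following sketch (NumPy, with q=4 purely for illustration; the helper names are ours) verifies that the listed generators stabilize |GHZ⟩ and that the state is 1-uniform but not 2-uniform, so the corresponding A tensor is not perfect.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
PAULI = {'I': I2, 'X': X, 'Z': Z}

def pauli_string(s):
    return reduce(np.kron, (PAULI[c] for c in s))

q = 4
ghz = np.zeros(2 ** q)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# Generators G_1 = X^{x q} and G_k = Z_{k-1} Z_k for k = 2, ..., q.
gens = ['X' * q] + ['I' * (k - 2) + 'ZZ' + 'I' * (q - k) for k in range(2, q + 1)]
assert all(np.allclose(pauli_string(g) @ ghz, ghz) for g in gens)

def rdm(psi, keep):
    """Reduced density matrix of the qubits in `keep`."""
    rest = [i for i in range(q) if i not in keep]
    m = psi.reshape((2,) * q).transpose(list(keep) + rest).reshape(2 ** len(keep), -1)
    return m @ m.conj().T

# 1-uniform: every single-site reduction is maximally mixed ...
assert all(np.allclose(rdm(ghz, [i]), I2 / 2) for i in range(q))
# ... but not 2-uniform, so the A tensor is non-perfect for q = 4.
print(np.allclose(rdm(ghz, [0, 1]), np.eye(4) / 4))  # False
```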
For q>3, this is a weaker condition than for perfect tensors, which are ⌊q/2⌋-uniform, hence fulfilling the fourth criterion. Moreover, from <ref>, we can infer rotational invariance as well, thereby satisfying the second criterion. The third criterion is upheld through our choice of B = H with H = 1/√(2)[ 1 1; 1 -1 ] . The isometry constraints from the remaining first criterion can be proven by using operator pushing: By applying the stabilizers from <ref>, we can replace a Pauli operator acting on one of the tensor legs by an equivalent operator acting on other legs. Moving Pauli operators past a B tensor involves exchanging X ↔ Z, as X H = H Z and Z H = H X. Showing that each Pauli basis operator X and Z acting on the upper (input) legs of the isometries in <ref>(a) can be pushed to a different operator on the lower (output) legs then implies that W or U form an isometry, respectively. The steps of the proof are shown explicitly in <ref>. In these steps, we find that the choice B=H is essential for recovering the X subalgebra in the two-tensor constraint (shown in <ref>(b)), as it converts the problem of removing a single X operator into that of removing a single Z, which can be done by a local generator. We have therefore found a class of HTNs that provably fulfill the constraints for general {p,q}, subject to the usual hyperbolic requirement p > 2q/(q-2). One can in principle define an HTN for q=3, but in that case our GHZ construction leads to a perfect A tensor, violating the fourth criterion. Though it is possible to construct a non-perfect Evenbly code by modifying the isometry constraints <cit.>, this case will not be of interest to our construction of Evenbly codes: Associating each bulk vertex with a logical qubit will lead to more bulk than boundary qubits in the case of {p,3} tilings, precluding any exact bulk-to-boundary code. §.§ HTN qubit codes Now we generalize the results gained from <ref> and define Evenbly codes; i.e., HTNs with bulk logical degrees of freedom. Evenbly codes were first defined in <cit.> as tensor network codes that associate a logical qubit with each vertex in an HTN, a process of bulk implantation which replaces each q-leg A tensor by a (q+1)-leg tensor A'. In analogy with <ref>, an Evenbly code is then defined as follows: An Evenbly code is composed of two tensors A' and B, on the vertices and edges, respectively, of a hyperbolic tessellation with Schläfli symbol {p,q} and q>3. These tensors must comply with the following criteria: * A' and B fulfill the generalized isometry constraints depicted in <ref>. * A' defines the encoding isometry for a q,1,2 error-detecting code. * A' is symmetric under cyclic permutations of the planar indices, i.e., A'_j,k_1,k_2,…,k_q = A'_j,k_q,k_1,…,k_q-1, where j denotes the logical index. * B is a symmetric unitary matrix, i.e., B=B^T and B B^† = 𝕀. Note that A' merely describes an error-detecting code with code distance d=2; this condition implicitly excludes perfect tensors, which would lead to codes with d>2 for q>3. Generally, finding solutions to the constraints in <ref> is much harder than for those in <ref>, since the isometries W' and U' act on more input sites than W and U. <cit.> introduced only a single ququart example fulfilling these constraints; we will now show that our GHZ HTN construction can be extended to define a class of qubit codes whose A^' tensors define Calderbank-Shor-Steane (CSS) codes.
While we focus on this construction in the remainder of the paper, other qubit-level Evenbly codes can be constructed: In <ref>, we introduce Evenbly codes whose seed tensors are not built from CSS codes, with A^' tensors describing strictly 2-uniform or AME(5,2) states. In contrast, the Evenbly code can be defined for any even q ≥ 4, and its stabilizer generators defining the A' tensors of rank (q+1) are given by 𝒢_1 = X^⊗ q , 𝒢_k = I^⊗ k-2 (ZIZ) I^⊗ q-k-1 , where k ∈{2,3,…,q-1}. This defines a q, 1, 2 code with a logical qubit spanned by the states |0̅⟩ = 1/√(2)[ |0⟩^⊗ q + |1⟩^⊗ q] , |1̅⟩ = 1/√(2)[ ( |0⟩|1⟩)^⊗q/2 + (|1⟩|0⟩)^⊗q/2] . Both codeword states are invariant under cyclic permutation of the physical qubits, and hence the encoding isometry represented by A' is invariant under cyclic permutation of the physical indices. This code is closely related to the GHZ state construction with generators in <ref>, which can be obtained by adding Z̅ (defined below) as an additional generator to <ref>. In fact, |0̅⟩, |1̅⟩, and any superposition of these two state vectors comprise a GHZ state in different bases, with each single site maximally entangled with the rest of the system. The tensor A' is therefore partially 2-uniform when considering the logical and any one physical index, fulfilling the first of the Evenbly code constraints. In fact, it fulfills an even stronger condition: For any planar embedding of the logical index, the tensor describes a q+1-qubit state that is planar 2-uniform, i.e., any two qubits that are neighboring under this embedding are maximally entangled with the remainder <cit.>. One choice for the logical operators is given by X̅ = (IX)^⊗q/2 , Z̅ = I^⊗ (q-2) Z Z . Any cyclic permutation of these operators acts equivalently on the code space. Note that this implies that the logical Z subalgebra can be recovered on any two adjacent physical qubits (leading to a code distance d=2), while the logical X subalgebra requires q/2 non-adjacent sites (for even q). This code defines an A' tensor that forms an Evenbly code if the B tensor is again chosen as the Hadamard gate, as shown in <ref>. As we show graphically in <ref>, this construction allows us to push any operator acting on the logical or physical input legs to the output legs of the local tensor(s), analogously to <ref> but with additional logical legs, hence proving the isometry conditions. Finally, we add that one may use Hadamard gates to exchange the stabilizers and logical operators of this (generalized) code, leading to the form: 𝒢_1 = Z^⊗ q , 𝒢_k = I^⊗ k-2 (XIX) I^⊗ q-k-1, X̅ = I^⊗ (q-2) XX , Z̅ = (IZ)^⊗q/2 , where k ∈{2,3,…,q-1}, as before. For the case where the {p,4} lattice is bipartite (i.e., the vertices can be assigned two alternating colors), such as for p=6, we can therefore absorb all of the B tensors of the HTN by changing every second vertex to represent the alternate code above. §.§ Evenbly Codes as Subsystem Codes Before considering the resilience of Evenbly codes against errors, we first discuss possible gauge restrictions on the bulk logical qubits. The general setup introduced in <cit.> and discussed above assumes a logical space composed of all bulk legs, leading to a setting where both the number of logical and physical qubits increase exponentially with the number L of tensor network layers. 
However, an alternative to this max-rate setting is to only consider the logical qubits on a subset of bulk sites, forming a stabilizer subsystem code <cit.> in which the remaining bulk legs define gauge degrees of freedom. Let us briefly review subsystem codes. In general quantum error correction codes, one typically divides the physical Hilbert space ℋ_p as ℋ_p = ℋ⊗ℋ̅, where ℋ represents the code subspace, and ℋ̅ is the complement. For more general subsystem codes, it is generally taken that ℋ can be decomposed as ℋ = ℋ_L⊗ℋ_G <cit.>, where ℋ_L and ℋ_G represent the logical and gauge subsystems of the code subspace, respectively. In this paradigm, logical information is encoded into ℋ_L, while the subsystem ℋ_G is used to help diagnose and correct errors in ℋ_L. This aim can be accomplished by defining a gauge group 𝒢, of which the total stabilizer group 𝒮 is the center of 𝒢, i.e. {iI,𝒮} = 𝒵(𝒢) := C(𝒢) ∩𝒢. Here iI, 𝒵(·), and C(·) represent the identity operator (up to a phase), the center, and the centralizer, respectively <cit.>. Up to phase factors, therefore, elements from 𝒢 are either stabilizer operators (and act trivially on ℋ_L⊗ℋ_G) or operators which act non-trivially on ℋ_G. Lastly, due to the subsystem structure, two types of logical operators arise. Bare logical operators are those which act non-trivially only on ℋ_L, while dressed logical operators act non-trivially on ℋ_L⊗ℋ_G <cit.>. In addition to the max-rate setting of Evenbly codes, we consider two subsystem code settings: First, a zero-rate case where all logical information is encoded in the central logical qubit and all other logical legs form the gauge subspace ℋ_G (<ref>(a)). Second, a constant-rate case where we only include a fraction of logical qubits on each layer, chosen so that the hyperbolic geodesics crossing over the logical site do not pass over any other ungauged sites. The {5,4} example of this second case is visualized in <ref>(d). We show below that this constant-rate version of the Evenbly code yields a rate of R = k/n = 1/8; for comparison, the trivial gauge subsystem variant (i.e. the maximum number of bulk logical indices are used for storing logical information and gauge group is empty) of the {5,4} Evenbly code exhibits an asymptotic rate of R = 0.866. Such finite encoding rates are a common feature of codes defined on hyperbolic manifolds <cit.>. Both of the code variants' threshold properties will be evaluated in <ref>. In both cases, ℋ_bulk refers to all other logical qubits partitioned into the gauge subsystem. Finally, we discuss how to perform gauge fixing for Evenbly codes. The concept of gauge fixing originates in <cit.>; here, the concept amounts to adding an element g ∈𝒢 into the stabilizer group 𝒮 and in parallel removing the element h ∈𝒢 (which anticommutes with g). For Evenbly codes, we define three ways to gauge-fix ℋ_G: we designate these gauges either the X, Y, and Z gauges. In order to form the gauge-fixed stabilizer group S', we simply add logical operators from ℋ_G in accordance with the appropriate logical eigenstate that we wish. For example, forming the X gauge-fixed stabilizer group can be done by simply defining 𝒮'_X̅ := 𝒮∩𝒢_X̅, where 𝒢_X̅ represents the X operators acting on ℋ_G. Similarly, one can form any logical-eigenstate gauge in our picture simply by adding the appropriate logical operators as 𝒮'_P̅ := 𝒮∩𝒢_P̅, where 𝒫 is an element of the Pauli group. 
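As a concrete illustration of the seed code and of gauge fixing at the level of a single tile, the sketch below builds the stabilizer generators and logical operators of the q,1,2 seed code for even q, checks the expected (anti)commutation relations, and verifies for q=4 that fixing a tile in the X gauge (i.e., placing its logical qubit in the +1 eigenstate of X̄) turns the tile into two EPR pairs between opposite sites, as noted again in the erasure analysis below. The helper names are ours, and the snippet is a sanity check rather than part of the decoders used in this paper.

```python
import numpy as np

def seed_code(q):
    """Stabilizer generators and logical operators of the [[q,1,2]] seed code."""
    assert q % 2 == 0 and q >= 4
    stabs = ['X' * q] + ['I' * (k - 2) + 'ZIZ' + 'I' * (q - k - 1) for k in range(2, q)]
    return stabs, 'IX' * (q // 2), 'I' * (q - 2) + 'ZZ'

def commute(p1, p2):
    """Two Pauli strings commute iff they anticommute on an even number of sites."""
    return sum(a != b and 'I' not in (a, b) for a, b in zip(p1, p2)) % 2 == 0

stabs, lx, lz = seed_code(6)
assert all(commute(a, b) for a in stabs for b in stabs)       # abelian stabilizer group
assert all(commute(s, lx) and commute(s, lz) for s in stabs)  # logicals centralize S
assert not commute(lx, lz)                                    # X-bar and Z-bar anticommute

# Gauge fixing a q = 4 tile in the X gauge: the logical qubit is placed in the
# +1 eigenstate of X-bar, i.e. |+bar> = (|0bar> + |1bar>)/sqrt(2).
def ket(bits):
    v = np.zeros(2 ** len(bits)); v[int(bits, 2)] = 1.0
    return v

zero_L = (ket('0000') + ket('1111')) / np.sqrt(2)
one_L = (ket('0101') + ket('1010')) / np.sqrt(2)
plus_L = (zero_L + one_L) / np.sqrt(2)

# |+bar> is a product of two EPR pairs between opposite sites (1,3) and (2,4).
bell = (ket('00') + ket('11')) / np.sqrt(2)
pairs = np.kron(bell, bell)                                   # ordered as (q1 q3)(q2 q4)
pairs = pairs.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(-1)
print(np.allclose(plus_L, pairs))                             # True
```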
§.§ Rate and Distance Scaling Calculating the asymptotic rate of a {5,4} Evenbly code can be done by following the prescriptions found in <cit.>, and is related to an inflation procedure for hyperbolic tilings known as vertex inflation <cit.>. As in these previous examples, rates can be evaluated by assessing the tensorial content in the bulk and comparing with the total number of boundary sites. In each layer of the tiling, we can distinguish vertex tensors (and their logical qubits) by the number of legs going out towards the next layer (for the last layer, these open legs support the physical qubits). We denote vertices with one to four outgoing legs with α, β, γ, and δ. For vertex inflation, δ only appears as the central (seed) tensor and γ never, following the substitution rules α ↦αβ , β ↦αββαβ , δ ↦αββαββαββαββ . Vertex inflation determines a substitution matrix M; every column of M represents the number of sites of each type that arise in every subsequent layer as a result of applying the inflation rules. Combining this matrix with the vector η=(n_α,n_β,n_δ)^T, which represents the number of sites at the L^th layer, we construct a recursion relation of the form η^(L+1)≡[ n_α^(L+1); n_β^(L+1); n_δ^(L+1) ] = [ 1 2 4; 1 3 8; 0 0 0 ][ n_α^(L); n_β^(L); n_δ^(L) ] = M η^(L) . We can use our initial vector η^(0) = (0 0 1)^T in order to recursively calculate the number of logical sites at each layer via η^(L) = M^L η^(0) , and compute the numbers n and k of physical and logical qubits as n^(L) = n_α^(L) + 2 n_β^(L) + 4 n_δ^(L) , k^(L) = ∑_i=0^L (n_α^(i) + n_β^(i) + n_δ^(i)) . Given our initial vector η^(0) and the substitution matrix M for the {5,4} code, we can explicitly resolve these expressions in terms of the nonzero eigenvalues λ_1 = 2 + √(3) and λ_2 = 2 - √(3) of M, leading to n_α^(L) = (2√(3)-2) λ_1^L - (2√(3)+2) λ_2^L , n_β^(L) = 2 λ_1^L + 2 λ_2^L , n^(L) = (2√(3)+2) λ_1^L - (2√(3)-2)λ_2^L , k^(L) = 1 + 2√(3)(λ_2^(L+1)+λ_2^(-L)+√(3)-3)/(√(3)-1) for L>0. Despite appearances, all of these evaluate as integers. The rate of the code follows as R^(L)≡ k^(L)/n^(L) = (5-5√(3) + 2√(3)( λ_2^(L+1) + λ_2^(-L) ))/(4(λ_1^L - (2-√(3)) λ_2^L )) , which quickly converges to the asymptotic rate R̅≡lim_L →∞k^(L)/n^(L) = √(3)/2≈ 0.866 . This is the rate of the max-rate Evenbly code in which all logical qubits are gauge-free. However, in <ref>, we treat another variant, where a number of logical qubits are gauge-fixed. The remaining logical qubits correspond to every second β-type vertex in the language of inflation steps. Confirming the geometric argument of <ref>(b), we find that this code has the constant rate R^' = (1/2)∑_i=0^L n_β^(i)/( n_α^(L) + 2 n_β^(L) + 4 n_δ^(L)) = 1/4 . However, since every layer is half-filled (<ref>(d)), we must then divide by two, leaving the rate R' = 1/8. We now consider code distances, i.e., the minimum physical weight of any logical operator, distinguishing between bit distances of single-qubit logical operators and word distances of arbitrary multi-qubit logical operators. For the Evenbly code, distances heavily depend on the choice of gauge: In the max-rate case, acting with two physical Pauli operators can create a logical operation arbitrarily deep in the bulk, leading to d_word=2. If the two Pauli operators act on neighboring physical qubits, they will affect only logical qubits adjacent to the boundary (e.g.
Z_j Z_j+1 acting as a Z̅ on a single tile); however, if they act further apart, the result is a string of logical operators along a shortest path throughout the bulk, a feature that is related to the appearance of non-trivial 2-point functions in such codes <cit.>. By contrast, the bit distance of the logical operators X̅_i, Z̅_i on a particular logical qubit on the ith tile generally increases exponentially with its distance to the boundary. For the max-rate {5,4} code, we can compute this bit distance by again utilizing inflation rules, focusing on the logical qubit in the center of the hyperbolic tiling. Applying the operator-pushing rules for the stabilizer generators ( XXXX, IZIZ, ZIZI ), we find the following behavior under vertex inflation: An operator X acting on the boundary of layer L turns into a Z on layer L+1, while a Z turns into three operators Z,X,Z acting on different boundary qubits. It is this second step which results in an exponential growth of Pauli weight with L, and we can compute this growth exactly. Denoting the number of Xs and Zs of a given logical operator on the Lth layer as w_X^(L) and w_Z^(L), we arrive at the rule [ w_X^(L+1); w_Z^(L+1) ] = [ 0 1; 1 2 ][ w_X^(L); w_Z^(L) ] , where the substitution matrix is obtained from the mapping X ↦ Z and Z ↦ ZXZ, as described above. At the central tile, we begin with (w_X^(0),w_Z^(0)) = (2,0) for X̅ and (0,2) for Z̅. From this it follows that the total Pauli weight w^(L) = w_X^(L) + w_Z^(L) on a given layer L is w^(L)(X̅) = (1 + √(2))^L + (1 - √(2))^L , w^(L)(Z̅) = (1 + √(2))^(L+1) + (1 - √(2))^(L+1) . Note that the weight of Z̅ is exactly one inflation step ahead of X̅; hence, the bit distance is given by d_bit = w^(L)(X̅). It is generally desirable to express the scaling of d_bit in terms of the number n of physical qubits, which we computed in <ref>. At large L, the scaling becomes d_bit ∼ (1 + √(2))^L = ( n/(2 + 2√(3)) )^(log_(2+√(3)) (1+√(2))) ≈ 0.32 n^0.67 . The distance for Z̅ scales similarly, with an additional prefactor of 1+√(2). To achieve a more useful code, we would like to increase d_word for a greater resilience against errors while decreasing d_bit so that targeted logical operations are easier to perform. This condition is exactly what gauge-fixing some logical qubits of the Evenbly code achieves: In the most extreme example of the zero-rate code with only a single gauge-free logical qubit in the center of the tiling, d_word = d_bit while being upper-bounded by the max-rate result from <ref>. This upper bound may not be tight, as the availability of additional stabilizer terms on gauge-fixed tiles allows for more compact representations of logical operators on the boundary. Exactly computing these distances is a computationally difficult problem, though we expect that gauge-fixing in a Z or Y gauge will reduce the power coefficient in <ref> somewhat below 0.67. A special case is given by the zero-rate code in the X gauge, where the weight of logical operators does not scale at all with the number of layers, resulting in d_word = d_bit = 2. This result also extends to the constant-rate code of <ref>(c) and its sparsifications, e.g. the rate R^' = 1/8 code in the X gauge. As a result, it can be easily shown using operator-pushing rules that the transversal logical operations of the seed tensor (i.e. X̅, Z̅, CX) are of constant weight-2 in this gauge.
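The recursions above can be checked with a few lines of linear algebra. The following sketch (NumPy, with illustrative variable names) reproduces the rate converging to √3/2 and the ∼ n^0.67 growth of the bit distance for the max-rate {5,4} code; it is a numerical sanity check of the formulas, not part of the decoders.

```python
import numpy as np

# Vertex-inflation substitution matrix for the {5,4} tiling and the seed vector.
M = np.array([[1, 2, 4], [1, 3, 8], [0, 0, 0]])
eta = np.array([0, 0, 1])                # (n_alpha, n_beta, n_delta) of the central layer
k_total = eta.sum()
for L in range(1, 9):
    eta = M @ eta
    n = eta[0] + 2 * eta[1] + 4 * eta[2]     # physical qubits on layer L
    k_total += eta.sum()                      # logical qubits up to layer L
    print(L, n, k_total, round(k_total / n, 4))
print("asymptotic rate sqrt(3)/2 =", np.sqrt(3) / 2)

# Pauli-weight growth of the central logical operators under X -> Z, Z -> ZXZ.
W = np.array([[0, 1], [1, 2]])
w = np.array([2, 0])                     # (w_X, w_Z) of X-bar on the central tile
weights = []
for L in range(1, 9):
    w = W @ w
    weights.append(w.sum())              # equals (1+sqrt(2))^L + (1-sqrt(2))^L
print(weights)
print("distance exponent log(1+sqrt(2))/log(2+sqrt(3)) =",
      np.log(1 + np.sqrt(2)) / np.log(2 + np.sqrt(3)))   # ~0.67
```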
§.§ Erasure Threshold for a Zero-Rate Code We analyze the threshold performance of the {5,4} Evenbly code using two different decoding strategies for erasure errors: A greedy reconstruction method generalizing the approach of <cit.>, in which the bulk logical qubits are iteratively reconstructed from the boundary and whose threshold we denote by p^greedy_erasure, and a Gaussian elimination algorithm adapted from <cit.> with threshold p^gaussian_erasure. The implementation and simulation details for both algorithms can be found in <ref>. To summarize, greedy reconstruction relies on simple geometric steps but is less effective than the Gaussian elimination approach, which considers all possible physical representations of logical operators. Simulation results for recovery rates with both decoding algorithms under an erasure channel are shown in <ref>, considering only the recovery of the central logical qubit. In subfigures (a)-(d), the threshold results p_erasure are shown for each choice of gauge fixing discussed in <ref>. No gauge fixing (subfigure (a)) corresponds to the max-rate case, whereas (b)-(d) consider different gauges of the zero-rate code. In agreement with the discussion of a ququart Evenbly code in <cit.>, the max-rate case is highly susceptible to erasure errors and does not display a threshold. This property is a feature of the holographic construction, in which non-vanishing two-point functions for all boundary states are desirable. In the zero-rate case for logical X gauge (subfigure (b)), recovery probabilities for the full recovery algorithms are independent of the number L of tensor network layers. Under this gauge, A^' corresponds to a state of two EPR pairs, each between the two opposite ends of the four sites of the tensor. The initial 4,1,2 code is thus directly mapped onto four boundary qubits (up to local Hadamard gates), with the remaining boundary qubits forming sets of disconnected EPR pairs. Even though the code properties do not change, we find a weaker greedy reconstruction at higher L, whose local reconstruction steps becomes less effective as the relevant physical qubits move further apart on the boundary. The logical Y and Z gauges (subfigures (c)-(d)) are different: Here we find a threshold close to the maximal value of 50% for full reconstruction, with the simpler greedy reconstruction only achieving around 24% and 40%, respectively. This relative underperformance of the greedy algorithm is due to its reconstruction of the bulk algebra on edges tile-by-tile, which may be prevented by erasure errors even though the logical operators of the central qubit are still recoverable. For this reason, the greedy recovery probability always lower-bounds full recovery, as already seen in other work <cit.>. Conversely, the Gaussian elimination algorithm poses an upper bound to the recovery probability, as it has been shown to be optimal for erasure noise <cit.>. §.§ Erasure Threshold for a Constant-Rate Code As shown in <ref>(b), one may also gauge-fix some of the logical qubits while keeping the rate of the code finite, leading to the constant-rate setting described in <ref>. Keeping the two graph geodesics across any logical qubit gauge-fixed on all other bulk sites, we potentially increase the protection of each logical qubit from erasures closest to it on the boundary. 
In the X gauge, this behavior directly generalizes the zero-rate case: Again, the gauged sites turn into EPR pairs that directly map a 4,1,2 code onto four unique physical qubits, but with k=n/4 logical qubits instead of one. This is visualized in <ref>(c). The case of either the Y or Z gauge again leads to a non-trivial L dependence and an asymptotic threshold, as in the zero-rate case; the recovery curves are shown in Fig. <ref>. They can be boosted by adding completely gauge-fixed layers of tensors to the constant-rate code, pushing them close to 50% while decreasing the rate accordingly, as we report in <ref>. Note that for such codes, a threshold for the central logical qubit implies a threshold for all the others, as the threshold in a holographic code depends on how much each tensor network layer suppresses errors <cit.>. Curves for p_erasure thus become sharper as one considers logical qubits deeper in the bulk. As shown in <cit.>, in holographic codes the recovery of non-central logical qubits can be parallelized as long as the physical error rate stays below the threshold of the central logical qubit. §.§ Threshold for Depolarizing and Pure Pauli Noise Finally, we investigate the threshold performance of the zero-rate {5,4} Evenbly code under depolarizing noise. The depolarizing noise model can be written succinctly as ℰ(ρ) = (1-p)ρ + (p/3) ∑_i 𝒫_i ρ 𝒫_i, in which, in contrast to erasure noise, the locations of the errors are not assumed to be known <cit.>. Here, we utilize the integer-optimization decoder from <cit.>, which is known to be optimal (although its runtime scales exponentially). More details on the simulation can be found in <ref>. In <ref>, (a) displays the recovery threshold p_depo obtained as a result of varying the physical noise parameter. As is shown, a threshold appears in the region of approximately p_depo ≈ 19.1 ± 0.94%; this threshold value is very competitive with known code-capacity thresholds for the surface code, the color code, and the holographic 6,1,3 code <cit.>. Subfigures (b-d) display the results obtained for pure Pauli noise using the integer-optimization decoder. Preliminary estimates indicate pure Pauli noise thresholds of approximately 50%, in line with the results obtained for the erasure channel. However, as we were limited in the computational power and time available, we report these findings here as preliminary, with three crossing points. If confirmed, this would indicate that the Evenbly code is capable of reaching and exceeding known achievable bounds for random zero-rate quantum codes (known as the zero-rate hashing bound <cit.>). § DISCUSSION §.§ qCFTs & Holography The construction presented in this work is one among many possible realizations of HTN codes; indeed, it was shown in <cit.> that many possible tensor decompositions for the original HTN ansatz exist, even those which are non-unitary in terms of individual tensor properties. However, constructing a quantum error-correcting code with non-unitary tensors may prove to be very challenging, and even more so when trying to determine the relevant error correction properties of the resulting code. Although we have realized an exact holographic code which exhibits genuine polynomially-decaying boundary-to-boundary correlation functions, we have yet to investigate whether CFT data relevant for a known minimal-model CFT <cit.>, or for a quasiperiodically disordered qCFT <cit.>, can appear as a result of the gauge-fixing techniques described.
We leave this issue for future work. Evenbly codes at their core circumvent a no-go theorem mentioned in <cit.> on the existence of exact holographic codes with polynomially-decaying correlation functions. Recent work suggests that a bulk-to-boundary code replicating full quantum gravity should be expressed in terms of an approximate error-correcting code <cit.>, and that stabilizer codes in particular only exhibit trivial area operators <cit.>, meaning that quantum superpositions of bulk geometries on subregions are restricted to classical values for the area of the subregion boundaries (Ryu-Takayanagi surfaces). As discussed in <cit.>, this still allows for restricted quantum gravity effects where the bulk subregion accessible from a boundary subregion can fluctuate between configurations with equal area. An interesting implication of our work is that gauge-fixing, which in the holographic picture corresponds to a choice of a given local (quantum) geometry, implies different reconstruction properties and thresholds. For example, the X gauge in the {5,4} Evenbly code leads to A tensors describing 4-qubit PME states composed of EPR pairs, resembling the “fixed-area states” described by HaPPY codes <cit.>. Different gauge choices can thus be seen as a modification of bulk entanglement to suppress or enhance certain types of operators under a bulk RG flow. It is therefore possible that understanding the suitability of holographic codes under different types of errors will also lead to new insights into how quantum gravity emerges in such tensor network models. §.§ Error Correction The results conveyed demonstrate that Evenbly codes achieve similar logical recovery rates when compared to both topological and quantum low-density parity-check (qLDPC) codes <cit.> in the code-capacity regime, as well as other holographic codes <cit.>. However, in contrast to the surface or color codes <cit.>, the distance of all bulk logical qubits scale as ∼ n^2/3, rather than ∼√(n); what is more, the rate of the code is tunable, a property generally ascribed to holographic codes <cit.>. It would be interesting to examine phenomenological noise models (which incorporate measurement errors); we leave this idea for future work. It is known that the seed tensors of our construction, k-uniform states, exhibit generally less entanglement than their AME and PME counterparts used in other constructions of holographic codes <cit.>. As the task of generating an AME state is itself considered a benchmark for quantum-processor performance <cit.>, k-uniform states are more suitable for experimental implementation since they can be prepared using local interactions only, as recent experiments have shown <cit.>. This observation, together with recent advances in non-Euclidean lattice architectures <cit.> as well as in the preparation of holographic logical code states <cit.> underscores the potential for experimental feasibility, particularly in architectures with flexible qubit connectivity, such as trapped-ion, neutral-atom, or photonic devices <cit.>. Additionally, we reported that the recovery rate for the central logical qubit in the R = 1/8 Evenbly code is 44.02 ± 2.89% in the Y gauge. At first glance, this result may violate the quantum channel-capacity theorem <cit.>, which sets a strict upper bound for erasure channel capacity at 43.75% in our case. 
However, the channel capacity obtained is well within the error bounds in our simulation, and it is known that as long as physical error rates are below the threshold of the central logical qubit, all other bulk logical qubits can be decoded in parallel in holographic codes <cit.>. In this way, the central logical qubit's recovery rate dictates the practical upper bound for all other bulk logical qubits. Our work therefore suggests that the word recovery rate for a constant-rate Evenbly code is only slightly lower than that of the central logical qubit, and may achieve channel capacity for finite-rate quantum erasure channels. Further work will be required in order to determine conclusively whether or not this is indeed the case. In addition to the code construction presented in the main text, we have also explored the threshold properties of other examples on {p,q} hyperbolic tilings using the greedy recovery algorithm. These results are shown in <ref>, and depict very interesting (if not similar) threshold data to the example fashioned in the main text. As stated in <ref>, several other non-CSS seed-tensor constructions using AME and 2-uniform quantum states can be utilized to build an Evenbly code. These alternative Evenbly codes may be difficult to decode using the standard greedy decoder, owing to the lack of rotational invariance of the seed tensors. However, the Gaussian elimination or integer-optimization algorithms employed in this work could be employed; one may also consider modifying existing tensor-network decoders <cit.> to accommodate Hadamard gates on the edges of the Evenbly code, as well as memory usage due to the exponentially-scaling number of sites observed in vertex inflation <cit.>. As a final comment in this section, no prior work to our knowledge has yet considered fault-tolerance protocols <cit.> for a holographic code. Ref. <cit.> discovered the existence of a threshold in continuum AdS/CFT models, arising as a confinement/deconfinement phase transition. As the majority of holographic codes are stabilizer codes, one can minimally expect a generic holographic code to benefit from the implementation of a fault-tolerance protocol such as the flag-style protocols of <cit.>. However, more elegant solutions may be possible; although not exhibiting bounded-weight check operators like in quantum low-density parity-check (qLDPC) codes <cit.>, one may expect holographic codes to be amenable to similar fault-tolerance techniques, as in recent work demonstrating promising space and time overhead savings in the context of generalized concatenated codes <cit.>. As providing for fault-tolerant operation of the code is a vital step towards the experimental realm, this topic is the subject of active research. §.§ Logical Gate Sets A no-go theorem regarding locality-preserving gates outside of the Clifford group for holographic codes was recently presented in <cit.>. Here, holographic codes based on AME and PME states (i.e. approximately upholding the principle of subsystem complementarity) were shown to not admit locality-preserving logical operations outside of the Clifford group. These results were extended to holographic models for which complementary recovery is not exactly fulfilled on any entanglement wedge. Naively, one would ordinarily rule out non-Clifford locality-preserving logical operations for the Evenbly code as well. 
However, we suspect that the set of locality-preserving logical gates for Evenbly codes may be gauge-dependent, and significantly larger than simply the set known for the q,1,2_2 seed tensor, since the unique, mutable isometry constraints of our construction were not taken into account in <cit.>. This fact is significant since we have shown that new non-local bulk reconstruction rules emerge and substantially alter the threshold properties of the Evenbly code, in accordance with the appropriate gauge selection. It is known that gauge fixing can be used in certain topological code constructions in order to obtain universal fault-tolerant logical gate sets <cit.>, and such a picture may arise in our codes as well. Also, it was shown in <cit.> that holographic codes with transversal logical gates outside of the Clifford group can easily be constructed, taking the form of a holographic quantum Reed-Muller code. In any case, a rigorous examination of the fault-tolerant logical gate set for Evenbly codes will be the subject of future work. Interestingly, it was also mentioned in <cit.> that the quantum Reed-Muller code's seed tensor exhibits 2-uniformity, equivalent to seed tensors that we present for alternative Evenbly codes in <ref>. Constructing an Evenbly code from quantum Reed-Muller seed tensors and investigating the resulting error correction properties could also prove fruitful. Lastly, one can consider heterogeneous holographic constructions with the Evenbly code, in which layers or seed tensors per layer are alternated in some pattern; we report on this in upcoming work <cit.> and will address the subject of constructing holographic codes with large, potentially universal logical gate sets more thoroughly. In this article, we have generalized the HTN ansatz to qubit-level codes, which we dub Evenbly codes. We have shown that this new, infinitely large code class exhibits unique and rich error correction properties that we have analyzed and reported on. Some of the highlights of this code include high erasure, depolarizing, and pure Pauli noise thresholds; low-weight transversal logical operators in the X gauge; excellent distance scaling for bulk logical qubits; and constant-rate variants which are easy to tune and contrive. Lastly, our construction utilizes seed tensors which have already been prepared in the laboratory, implying that this class of holographic code may be amenable for experimental realization. § SOFTWARE AVAILABILITY A portion of this work was completed using an upcoming open-source python package <cit.> for the simulation of holographic quantum codes and partially uses functionality from <cit.>. All of the thresholds determined using the Gaussian elimination and integer-optimization decoders were obtained by utilizing the DelftBlue supercomputer <cit.>. For this study, the Gurobi optimization package <cit.> was also used for calculating the depolarizing noise thresholds. § ACKNOWLEDGEMENTS We thank Aritra Sarkar, Charles Cao, and Jens Eisert for useful discussions. MS and SF thank the Intel corporation for financial support. AJ received funding from the Einstein Research Unit “Perspectives of a quantum digital transformation”. RH was supported by the Australian Research Council Centre of Excellence for Engineered Quantum Systems (Grant No. CE 170100009). § AUTHOR CONTRIBUTIONS MS and AJ developed the theoretical constructions and formalism for Evenbly codes. AJ developed and implemented the greedy reconstruction algorithm. 
JF and MS constructed the Gaussian elimination algorithm. JF constructed and implemented the distance-scaling approximations, as well as the integer-optimization decoder; this work was conducted during the master thesis of JF, and was supervised by MS and SF. RH provided guidance on the construction of both the Gaussian elimination and the integer-optimization decoders. MS and AJ wrote the manuscript. DE, SF, and AJ supervised and coordinated the project, as well as providing insight and guidance during the writing process. § MULTIPARTITE MAXIMALLY-ENTANGLED STATES & QUANTUM CODES MME states are highly-entangled many-body states in which certain reductions of the state are maximally mixed <cit.>; such states have found numerous applications in the construction of quantum error correction codes and quantum-communications protocols, among others <cit.>. In the case of systems consisting of two qubits, the Bell state |Ψ^+_2⟩ = 1/√(2)(|00⟩+|11⟩) is well-known as the simplest quantum state which is maximally entangled, since the state obtained after performing a partial trace on either of the two subsystems is maximally mixed: tr_A[ |Ψ^+_2⟩⟨Ψ^+_2|] = 𝕀_2/2. This state is locally equivalent to other states of the form (U_A⊗ U_B)|Ψ_2^+⟩, where U_A,U_B∈ U(2). In a similar way, one can define a generalized Bell state for any bipartite quantum system with n levels: |Ψ^+_n⟩ = 1/√(n)∑_i=1^n|i⟩_A⊗|i⟩_B. Here, one can easily check that the resulting RDMs for subsystem A or B are indeed maximally mixed, as the entanglement entropy 𝒮 of such states is 𝒮(|Ψ^+_n⟩) = log n. As one increases the number of constituent subsystems to three or more, characterizing quantum entanglement becomes more complicated, with several measures currently in use <cit.>. In this manuscript, we consider maximal entanglement to be defined through the possible bipartitions of a many-body quantum state and the corresponding entanglement entropy of the reduced states. In what follows, we give several examples of maximally-entangled states using this definition, which will be useful for the present work. In line with the notion of bipartite maximally-mixed reductions, k-uniform states are defined as pure states of n distinguishable qudits in which all k-qudit reductions of the entire system are maximally mixed <cit.>, with k ≤⌊n/2⌋. It is well-known that k-uniform states themselves are dual descriptions of stabilizer codes <cit.>, and that several inequivalent methods for constructing k-uniform states exist <cit.>. More specifically, a k-uniform state |ψ_k-uniform⟩ in a Hilbert space ℋ(n,χ) = ℂ_χ^⊗ n (where χ represents the local Hilbert-space dimension of all n qudits), denoted by k-uniform_(n,χ), exists when the following condition holds: tr_𝒜̅[ |ψ_k-uniform⟩⟨ψ_k-uniform|] ∝𝕀, where 𝒜̅ represents the set complementary to 𝒜, for all 𝒜⊂{1, ⋯ , n}, |𝒜| ≤ k. The Schmidt decomposition shows that values of k are generally bounded as k ≤⌊n/2⌋, with the condition k = ⌊n/2⌋ known as the absolutely maximally-entangled (AME) state condition <cit.> (or equivalently perfect tensors <cit.>), and the case of 1-uniform states is commonly associated with the famous Greenberger–Horne–Zeilinger (GHZ) state <cit.>. 
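As a quick numerical sanity check of the statements above, the short NumPy sketch below builds |Ψ^+_n⟩, traces out one subsystem, and confirms that the entanglement entropy equals log n (the helper functions are ours and serve purely as an illustration).

```python
import numpy as np

def bell_state(n):
    """|Psi_n^+> = (1/sqrt(n)) sum_i |i>_A |i>_B for two n-level subsystems."""
    psi = np.zeros(n * n)
    for i in range(n):
        psi[i * n + i] = 1.0
    return psi / np.sqrt(n)

def entropy_of_reduction(psi, n):
    """von Neumann entropy (in nats) of the reduced state on subsystem A."""
    m = psi.reshape(n, n)                  # psi = sum_ij m[i, j] |i>_A |j>_B
    rho_a = m @ m.conj().T                 # rho_A = tr_B |psi><psi|
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum())

for n in (2, 3, 4):
    psi = bell_state(n)
    # the reduced state is I/n, so the entropy matches log(n)
    print(n, round(entropy_of_reduction(psi, n), 6), round(np.log(n), 6))
```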
Another notion of maximal entanglement was introduced in <cit.>, in which k = ⌊n/2⌋ reductions are necessary, but only for ⌊n/2⌋ adjacent parties; these quantum states are known as planar maximally-entangled (PME) states (or equivalently the block-perfect tensors of <cit.>). Yet another type of maximal entanglement has recently been shown to exist and is known as the so-called planar k-uniform states <cit.>. Although several methods currently exist for constructing k-uniform states, we shall focus on the techniques utilized in <cit.> for the sake of presenting a coherent formalism. In particular, we employ the construction of k-uniform states from <cit.>, using particular sets of n-qubit Pauli operators, as a restriction of the stabilizer formalism that is already well-known <cit.>, together with the fact that any state can be expanded in tensor products of Pauli operators <cit.>. Recalling the form of n-qubit Pauli operators, we use the convention σ_X⊗σ_Y⋯σ_Z⊗σ_I≡ XY ⋯ ZI, where σ_j, ∀ j ∈{I,X,Y,Z} are the qubit-level Pauli operators. In accordance with the stabilizer formalism, we can consider a set of these n-qubit Pauli operators 𝒢 = {𝒢_1, ⋯, 𝒢_m}, such that these m operators have the following three properties:
∘ Commutation: every pair of elements 𝒢_i, 𝒢_j must commute, i.e. [𝒢_i,𝒢_j] = 0, ∀ i,j ∈{1,...,m};
∘ Independence: 𝒢^j_1_1⋯𝒢^j_m_m ∝ 𝕀 iff j_1 = j_2 = ⋯ = j_m = 0;
∘ k-uniformity: the product 𝒢^j_1_1⋯𝒢^j_m_m must result in an n-qubit Pauli operator containing no more than (n-k-1) Pauli I operators.
By summing up all possible products of generators, one can write the density matrix of a k-uniform state as: ρ = 1/2^n∑^1_j_1, ⋯, j_m = 0 𝒢_1^j_1⋯𝒢_m^j_m. This construction follows from the fact that any arbitrary quantum state can be written down using Pauli matrices and a correlation tensor, as utilized in quantum state tomography <cit.>. It is additionally well-known that pure k-uniform states themselves form a set of quantum codes, formed as superpositions of classical maximum distance-separable (MDS) codes <cit.>. A particular class of quantum codes known as stabilizer quantum codes can be modified via a technique known as shortening, and one can show the existence of a pure quantum stabilizer code whose spanning vectors are (k-1)-uniform states. Such codes can be defined for any local-dimension Hilbert space, and correspond to qudit-based stabilizer codes. An n-qubit quantum stabilizer code which encodes k logical qubits is represented with the notation n,k,d_χ (where χ represents the local dimension for each of the n particles in the n-partite quantum system), and the distance d represents the minimum Hamming-weight difference between two codewords of the quantum code. The rate of such a code is R = k/n. The stabilizer 𝒮 of a code is an Abelian subgroup of the n-fold Pauli group ∏^n that does not contain the operator -I^⊗ n; the simultaneous +1 eigenspace of its operators forms what is known as the codespace. A stabilizer code exhibits a minimal representation of its stabilizer 𝒮 in terms of (n-k) independent generators: {𝒢_1, ⋯ ,𝒢_(n-k) | ∀ i ∈{1, ⋯ ,(n-k)}, 𝒢_i∈𝒮}. The generators of the stabilizer function in the same way as the parity-check matrix for classical linear codes <cit.>. Moreover, stabilizer generators commute with each other, and errors are generally identified and diagnosed by noting which generators anticommute with the errors that occur <cit.>. 
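To make the density-matrix construction above concrete, the following NumPy sketch assembles ρ from a set of commuting Pauli generators and checks the k-uniformity condition by inspecting all k-qubit reductions; we use the 5-qubit cycle graph state (an AME(5,2) state that reappears in the next appendix section) as the test case. The helper functions are ours and are meant only as an illustration, not as part of the software used for the simulations in this work.

```python
import itertools
import numpy as np

PAULI = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
         "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.diag([1, -1])}

def pauli_op(s):
    """Kronecker product of single-qubit Paulis, e.g. 'ZXZII'."""
    op = np.array([[1.0 + 0j]])
    for c in s:
        op = np.kron(op, PAULI[c])
    return op

def state_from_generators(gens):
    """rho = 2^{-n} * sum over all products G_1^{j_1} ... G_m^{j_m}."""
    n = len(gens[0])
    rho = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for js in itertools.product((0, 1), repeat=len(gens)):
        term = np.eye(2 ** n, dtype=complex)
        for j, g in zip(js, gens):
            if j:
                term = term @ pauli_op(g)
        rho += term
    return rho / 2 ** n

def reduced(rho, keep, n):
    """Partial trace of an n-qubit state, keeping the qubits listed in `keep`."""
    keep = sorted(keep)
    rest = [q for q in range(n) if q not in keep]
    r = rho.reshape([2] * (2 * n))
    r = r.transpose(keep + rest + [n + q for q in keep] + [n + q for q in rest])
    k, t = len(keep), len(rest)
    return np.trace(r.reshape(2 ** k, 2 ** t, 2 ** k, 2 ** t), axis1=1, axis2=3)

# Generators of the 5-qubit cycle graph state: ZXZII and its cyclic shifts.
gens = ["ZXZII", "IZXZI", "IIZXZ", "ZIIZX", "XZIIZ"]
rho = state_from_generators(gens)

# 2-uniformity: every 2-qubit reduction must be maximally mixed (= I/4).
for pair in itertools.combinations(range(5), 2):
    assert np.allclose(reduced(rho, list(pair), 5), np.eye(4) / 4)
print("all 2-qubit reductions of the cycle graph state are maximally mixed")
```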
Quantum error correction codes are capable of correcting a number of errors, determined by their distance d, and represented by the following equation t = ⌊d-1/2⌋, where t is the Hamming weight of the error. § AN EVENBLY CODE FROM A GRAPH STATE As a first attempt to devise a specific example of an Evenbly code, we start by utilizing the HTN defined on the {5,4} tiling from <cit.>; we do not consider the {7,3} (i.e. q > 3 as a basic requirement) simply because the conditions for bulk reconstruction prohibit recovery of bulk logical qubits <cit.>. Rather than considering the ququart construction introduced in previous work <cit.>, it is in fact possible to construct qubit-level Evenbly codes. We utilize a vertex tensor A' corresponding to a 2-uniform state, which is given by any state in the ground space of the 5,1,3 perfect code <cit.>. For convenience, we choose the physical state encoding the |+̅⟩=|0̅⟩ + |1̅⟩/√(2) logical state, which happens to coincide with the (AME) 5-qubit cycle graph state <cit.>. This logical state is the unique ground state of the stabilizer Hamiltonian H_ 5,0,3 = -( ZXZII. + IZXZI + IIZXZ + . ZIIZX + XZIIZ ) , using the stabilizer generator S_1= ZXZII and its cyclic product permutations. Through code shortening <cit.>, we convert this 5,0,3 state into a 4,1,2 code (see <ref>B)-C) with stabilizer Hamiltonian H_ 4,1,2 = -( ZXZI + IZXZ + XZZX ) . The logical operators of this code can be represented as X̅=ZIIZ and Z̅= IIZX. Unlike the original HTN construction <cit.> and the ququart Evenbly code <cit.>, this code is not symmetric under cyclic permutation of physical indices. Therefore, a suitable B tensor to satisfy the isometry constraints has to be found for each of the 16 possible rotations (6 up to reflections) of two neighboring A' tensors with a B in between; consequently, the constraints allow for bulk reconstruction from any direction. For any pair of neighboring A' tensors, this implies that the B tensor connecting them has to fulfill two isometry constraints, shown in <ref>A. We find that for the A' tensor following from <ref>, three different B tensors are necessary to fulfill the constraints for various rotations, visualized in <ref>D. These three tensors are the Hadamard gate H=X+Z/√(2) as well as Y+Z/√(2) and X+Y/√(2), all of which are unitary. Given these solutions, one can therefore build an Evenbly code by placing A' tensors with arbitrary orientation on a hyperbolic lattice and then choosing the associated B tensor on each edge. § GENERALIZATIONS OF EVENBLY CODES FOR OTHER HYPERBOLIC TILINGS The {5,4} Evenbly code discussed in the main text can be generalized into a family of infinitely many variants on {p,q} tilings for even p≥ 4 and p>2q/q-2, the latter condition ensuring hyperbolicity. Using the greedy reconstruction algorithm, we compute recovery curves in <ref> for some of these generalizations under erasure noise, in several different gauge choices; note that results for the ungauged max-rate code as well as the X gauge for q=4 are omitted here for brevity. In (a) and (b), we showcase the p = 5 and p = 6 result for q = 4. For these plots, we observe that as we increase p, the greedy threshold increases commensurately in the Z gauge; however, as the same procedure is undertaken in the Y gauge, no improvement is seen. Similarly, (c) and (d) convey the results obtained for q =6 while increasing p from 5 to 6. Again, in the Y gauge, we note that almost no change in the threshold p_erasure occurs. 
Note that greedy reconstruction for q ≥ 6 is the same in X and Y gauge, as the corresponding greedy steps (Fig. <ref>) are identical. The X gauge thus behaves very differently than in the q=4 case, as its eigenstates are no longer composed of EPR pairs. Equivalently, in the operator picture applying X̅ = XIXIXI for q=6 increases the operator weight when pushing X operators to the boundary, while X̅ = XIXI for q=4 does not. Finally, the Z gauge of both codes echo the same tendencies found in the q = 4 codes: namely, as we increment from p = 4 to p = 5, we see that the threshold increases for the resulting code. As growing p reduces the rate of the code, it is indeed expected that the central logical-qubit thresholds, as shown, would also improve. § ALTERNATIVE EVENBLY CODE CONSTRUCTIONS Here, we present several constructions of alternative HTN codes, using non-CSS constructions of vertex tensors A', as well as the typical B = H edge tensor condition. In addition to the two constructions provided in <ref>, one may also formulate such alternative HTN quantum codes using the complex amplitudes from full 2-uniform states as the A' tensors. Previous work has shown that it is possible to generate new stabilizer quantum error correction codes from old ones by utilizing a procedure known as code shortening <cit.>. Here, we show that by employing (q+1)-partite 2-uniform states, one can perform logical implantation in the bulk using such 2-uniform quantum states. The implanted 2-uniform state can be then “shortened" from a q+1,0,k+1 code to a q,1,k+1 stabilizer code. We provide an example originating from a {5,5} tiling below. Consider an HTN ansatz constructed on a {5,5} tiling; one may replace the 5-partite 1-uniform GHZ states defined at every vertex with a 6-partite 2-uniform state of the following form: |Ψ_ 6,0,3_2⟩ = |111111⟩ + |101010⟩ + |001100⟩ + |011001⟩ + |110000⟩ + |100101⟩ + |000011⟩ + |010110⟩, with stabilizer generators of the form 𝒢_1 = XXYYZZ, 𝒢_2 = XXZZYY, 𝒢_3 = XZZXXZ, 𝒢_4 = XYYXIZ, 𝒢_5 = YXIZXY, 𝒢_6 = YYYYII, which form a 6,0,3 quantum error correction code. Upon shortening, the code is converted into a 5,1,2, with new logical operators defined as Z̅_ 6,0,3 = XXYYZZ = 𝒢_1, X̅_ 6,0,3 = XXZZYY = 𝒢_2. The new stabilizer generators {𝒢̅_l}, where l = (n-k)-2 = 4, are defined as 𝒢̅_1 = 𝒢_3∘Z̅_ 6,0,3 = IYXZYI, 𝒢̅_2 = 𝒢_4∘Z̅_ 6,0,3 = IZIZZI, 𝒢̅_3 = 𝒢_5∘X̅_ 6,0,3 = ZIZIZI, 𝒢̅_4 = 𝒢_6 = YYYYII. Our new set of generators is now formed simply by removing the last (identity) Pauli from the full set described above, and are given by the stabilizer 𝒮_ 5,1,2 = {IYXZY,IZIZZ,ZIZIZ,YYYYI}; consequently, the new logical operators are X̅_ 5,1,2 = XXZZY, and Z̅_ 5,1,2 = XXYYZ. As in <ref>, one may apply Hadamard gates to each of the boundary qubits for the stabilizers and logical operators, effectively flipping X ↦ Z, and Z ↦ X, as expected. We also checked numerically that the isometry constraints of <ref> are upheld, with the only difference being that rotational invariance is not upheld for this 2-uniform state. There are other constructions of alternative Evenbly codes that exist; we present a few of them below. As an example, one can define an Evenbly code for the {5,7} hyperbolic tiling, as we can define a vertex tensor A' |Ψ_ 7,0,3_2⟩ = 1/√(8)[ |1111111⟩ + |0101010⟩ + |1001100⟩ + |0011001⟩ + |1110000⟩ + |0100101⟩ + |1000011⟩ + |0010110⟩] . This state is 2-uniform, and was taken from <cit.>. 
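The bookkeeping behind the shortening step above amounts to multiplying Pauli strings and checking (anti)commutation. The short, self-contained sketch below (our own helper code, shown for illustration only) reproduces the shortened generators 𝒢̅_1 and 𝒢̅_2 quoted above and verifies the stabilizer/logical-operator algebra of the resulting 5,1,2 code.

```python
def mul_pauli(a, b):
    """Single-qubit Pauli product, returned as (phase, pauli)."""
    if a == "I":
        return 1, b
    if b == "I":
        return 1, a
    if a == b:
        return 1, "I"
    c = ({"X", "Y", "Z"} - {a, b}).pop()
    # cyclic products (XY, YZ, ZX) pick up +i, anticyclic ones -i
    phase = 1j if "XYZ"[("XYZ".index(a) + 1) % 3] == b else -1j
    return phase, c

def mul_string(p, q):
    """Product of two n-qubit Pauli strings, with the overall phase."""
    phase, out = 1, []
    for a, b in zip(p, q):
        ph, c = mul_pauli(a, b)
        phase *= ph
        out.append(c)
    return phase, "".join(out)

def anticommute(p, q):
    """Two Pauli strings anticommute iff they clash on an odd number of sites."""
    clashes = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return clashes % 2 == 1

# Shortened generators of the [[6,0,3]] seed state, as quoted above:
Z_bar_603 = "XXYYZZ"
print(mul_string("XZZXXZ", Z_bar_603))   # G_3 o Zbar -> IYXZYI
print(mul_string("XYYXIZ", Z_bar_603))   # G_4 o Zbar -> IZIZZI

# Algebra of the resulting [[5,1,2]] code:
stabs = ["IYXZY", "IZIZZ", "ZIZIZ", "YYYYI"]
X_bar, Z_bar = "XXZZY", "XXYYZ"
assert not any(anticommute(s, t) for s in stabs for t in stabs)
assert not any(anticommute(L, s) for L in (X_bar, Z_bar) for s in stabs)
assert anticommute(X_bar, Z_bar)
print("[[5,1,2]] stabilizers commute; logical X/Z anticommute as required")
```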
Here, as in all examples of this section, we utilize stabilizer shortening in order to convert these MME states into codes. In addition to the solutions presented thus far, we have also found other 2-uniform solutions which generate alternative Evenbly codes for hyperbolic {p,q} tilings with q=8,9,10, respectively: |Ψ_ 8,0,3_2⟩ = 1/√(12)( |00000000⟩ + |00011101⟩ + |10001110⟩ + |01000111⟩ + |10100011⟩ + |11010001⟩ + |01101000⟩ + |10110100⟩ + |11011010⟩ + |11101101⟩ + |01110110⟩ + |00111011⟩) , |Ψ_ 9,0,3_2⟩ = 1/√(12)( |000000000⟩ + |100011101⟩ + |010001110⟩ + |101000111⟩ + |110100011⟩ + |011010001⟩ + |101101000⟩ + |110110100⟩ + |111011010⟩ + |011101101⟩ + |001110110⟩ + |000111011⟩) , |Ψ_ 10,0,3_2⟩ = 1/√(12)( |0000000000⟩ + |0100011101⟩ + |1010001110⟩ + |1101000111⟩ + |0110100011⟩ + |1011010001⟩ + |1101101000⟩ + |1110110100⟩ + |0111011010⟩ + |0011101101⟩ + |0001110110⟩ + |1000111011⟩) . Similarly to the constructions of <cit.>, one may also be able to utilize higher local-dimension k-uniform quantum states in order to design qudit-level Evenbly codes. This direction is the subject of ongoing research. As stated in the main text, decoding these resulting alternative Evenbly codes is highly non-trivial using the greedy recovery algorithm, although in principle it may still be possible. As such, one may utilize the Gaussian elimination algorithm as detailed in <ref>. § DECODER DETAILS We utilize in this paper two different decoding schemes against erasure noise. Typically, one describes erasure noise as a specialized form of depolarizing noise in which exact knowledge about the locations of errors is given. Such a channel ℰ(ρ) acting on density matrix ρ takes on the following form <cit.>: ℰ(ρ) = (1-p)ρ + p/3 ( ∑_i𝒫_iρ𝒫_i), where, as we mentioned before, it is assumed that the precise location of errors is known and can be given to a decoder. §.§ Greedy Reconstruction Algorithm A non-optimal but efficient decoder against erasure errors on holographic codes is given by a greedy algorithm, first proposed in <cit.> for HaPPY codes. In holographic codes, the logical qubits are arranged on a hyperbolic lattice, resulting in a hierarchy with respect to bulk depth, i.e., lattice distance from the physical qubits on the boundary. After some number of known (heralded) erasures of the physical qubits has occured, one starts to reconstruct the logical qubits closest to the boundary, applying the decoding map corresponding to each individual tensor (e.g., of the 5,1,3 code for the perfect-tensor pentagon code). Once these are (partially) reconstructed, one has access to the “intermediate” physical qubits corresponding to contracted legs one layer deeper into the bulk, repeating the local reconstruction procedure there then repeating the procedure. When no more logical qubits can be reconstructed, the greedy algorithm terminates and one has effectively found a decoder for logical qubits in a greedy wedge of the bulk on the non-erased physical qubits. A greedy decoder can be straightforwardly constructed for any tensor network stabilizer code by translating possible operator pushing steps, which follow from the application of the stabilizers and gauge-fixed logical operators, into greedy steps. For any Evenbly code, for example, the operator pushing steps of <ref> (which are equivalent to the generalized isometry conditions of <ref>) imply a greedy recovery for single and bulk-neighboring logical qubits that is shown in <ref>(a)-(b). 
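For reference, the overall control flow of such a greedy decoder can be sketched in a few lines of Python. The data structures and the `local_step` rule below are placeholders of our own choosing (in practice the rule encodes the code- and gauge-specific greedy steps shown in the referenced figures); the sketch is illustrative rather than the implementation used for the results in this work.

```python
def greedy_decode(bulk_sites, known_boundary, local_step):
    """Greedy erasure-decoder skeleton for a holographic code.

    bulk_sites     : ids of the bulk (logical) tensors.
    known_boundary : set of non-erased physical legs.
    local_step(site, accessible) : returns the set of newly accessible
        contracted legs if `site` can be reconstructed from `accessible`,
        and None otherwise (this is where the operator-pushing rules enter).
    """
    wedge = set()                        # bulk sites reconstructed so far
    accessible = set(known_boundary)     # legs that can currently be read out
    made_progress = True
    while made_progress:                 # at most k rounds for k bulk qubits
        made_progress = False
        for site in bulk_sites:          # at most k checks per round
            if site in wedge:
                continue
            new_legs = local_step(site, accessible)
            if new_legs is not None:
                wedge.add(site)
                accessible |= new_legs   # step one layer deeper into the bulk
                made_progress = True
    return wedge                         # the greedy wedge; O(k^2) overall
```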
Note that this generalizes the original greedy algorithm of <cit.>, in which each greedy step only recovers one logical qubit at a time. Beyond these general recovery step that any Evenbly code fulfills, the GHZ-based code discussed in the main text allows for an additional greedy step shown in <ref>(c), whereby access to physical sites on a tensor can assist in the recovery of the logical qubit on its neighbor. Yet, greedy recovery for a max-rate Evenbly code is generally much weaker than for a HaPPY code, owing to the effect that low-weight errors can have deep in the bulk <cit.>, precluding the HaPPY codes' feature of “uberholography” <cit.>. A generic example is given in <ref>(d), where the erasure of only a few boundary sites prevents greedy reconstruction of most of the bulk. Fortunately, the greedy algorithm is significantly strengthened in the gauge-fixed regime, where the availability of additional stabilizer terms on each tensor allows for a wider range of greedy reconstruction steps. In the GHZ-based Evenbly code, some greedy steps following gauge-fixing in the X, Y, or Z gauge are visualized in <ref>(a)-(d). Crucially, some of these steps extend the greedy wedge from non-adjacent sites, allowing disjoint wedges to merge in the bulk and preventing small boundary erasures from affecting recovery deep in the bulk. Again we give a generic example of such a reconstruction in <ref>(e). The greedy algorithm is not an optimal erasure decoder, as it only allows for reconstruction of a logical operator if an entire bulk wedge containing the relevant logical sites can be reconstructed as well. However, it is very fast: Given k logical qubits, each greedy step involves checking every bulk site that is not yet in the greedy wedge (that is, at most k). As the algorithm terminates if a greedy step does not add at least one more logical qubit to the wedge, the number of steps is likewise bounded by k, resulting in a complexity of O(k^2). §.§ Gaussian Elimination Algorithm The Gaussian elimination algorithm, as introduced in <cit.>, functions as follows. Firstly, the stabilizers 𝒮_R and logical operators ℒ_R are used as input into the algorithm, along with the number of physical qubits n. Next, the stabilizers and logical operators are converted into binary symplectic vectors <cit.>; each vector is arranged as an ((n-k) × n) array in which the last two rows represent the binary symplectic support of the two logical operators (in the case of the zero-rate Evenbly code), and we name these BSV in <ref>. For every Monte Carlo trial, an error is initialized and takes the form of an error of weight w = [0, n]. As an example, a w = 1 error on the L = 0 HTN seed code could be randomly initialized in BSV form as 𝐞 = [1000|1000] . After initializing a randomized error of weight w, we perform stabilizer and logical operator filtering, equivalent to the treatment in <cit.>. After obtaining the filtered-support array BSV' from BSV_filter, we perform Gaussian elimination modulo 2; the matrix row-reduction algorithm that we utilize exhibits runtime complexity as 𝒪((n-k)^2n), although faster methods that guarantee optimality could be leveraged <cit.>. After obtaining the binary reduced-row echelon form (RREF) BSV'_RREF, we iterate through the rows of BSV'_RREF and check the last two columns. If either of the entries are of value 1, then we check the other values in the other (n-2) columns. If any value is equal to 1, then the algorithm succeeds, and we return True after incrementing the counter num_true. 
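For concreteness, a minimal version of the two core ingredients just described, mapping Pauli strings to binary symplectic vectors and row-reducing the resulting binary matrix modulo 2, might look as follows. The (x|z) ordering and the helper names are our own choices for illustration; the decoder used in this work additionally performs the stabilizer and logical-operator filtering described above.

```python
import numpy as np

def to_bsv(pauli):
    """Pauli string -> binary symplectic vector, in an (x-part | z-part) layout."""
    x = [1 if c in "XY" else 0 for c in pauli]
    z = [1 if c in "ZY" else 0 for c in pauli]
    return np.array(x + z, dtype=np.uint8)

def rref_mod2(mat):
    """Reduced row echelon form of a binary matrix over GF(2)."""
    m = mat.copy() % 2
    rows, cols = m.shape
    pivot = 0
    for col in range(cols):
        if pivot >= rows:
            break
        hits = np.nonzero(m[pivot:, col])[0]
        if hits.size == 0:
            continue                      # no pivot in this column
        swap = pivot + hits[0]
        m[[pivot, swap]] = m[[swap, pivot]]
        for r in range(rows):             # clear the column everywhere else
            if r != pivot and m[r, col]:
                m[r] ^= m[pivot]
        pivot += 1
    return m

# e.g. the stabilizer generators ZXZI, IZXZ, XZZX of the 4,1,2 seed code defined earlier
bsv = np.array([to_bsv(s) for s in ("ZXZI", "IZXZ", "XZZX")])
print(rref_mod2(bsv))
```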
The algorithm terminates when all trials have completed, and the number of True values is counted against the number of trials (which we name 𝒫_rec = num_true/mc_trials). This mathematical procedure largely follows <cit.>. Finally, in order to obtain estimates for the error recovery probability, we use the binomial formula P_recovery(p,n) = ∑_w=0^n \binom{n}{w} p^w (1-p)^(n-w) 𝒫_rec(w). In the simulations executed for this work, we vary the number of Monte Carlo simulations as a function of the number of layers, in order to efficiently delegate computational resources. The number of Monte Carlo trials per layer and variant tested are shown in <ref>. The first column, labeled "HTN", corresponds to the zero-rate and constant-rate Evenbly codes tested in <ref>, while the others refer to the other variants of the constant-rate HTN in which extra layers of Z or Y gauge are added. §.§ Integer-Optimization Decoding Algorithm Our approach to the integer-optimization decoder largely follows the treatment of <cit.>, and proceeds as follows. After initializing all of the stabilizers in BSV form (as discussed in <ref>) and stacking them into an ((n-k) × n) matrix 𝒮, we calculate the error syndromes s = 𝒮·ϵ, where ϵ represents a Pauli error initialized with a certain weight w. However, since the matrix 𝒮 is not a square matrix, it does not, strictly speaking, have a modulo-2 inverse. As such, we must seek the pseudoinverse 𝒮_pseudo^-1 for the error ϵ such that 𝒮·𝒮_pseudo^-1 = 𝕀. This matrix 𝒮_pseudo^-1 exhibits the property that every column anticommutes with the corresponding stabilizer from 𝒮. More information on this procedure can be found in <cit.>. Finally, after obtaining the best guess for the error based on the pseudoinverse, we recognize that the suggested error is not necessarily of the lowest possible weight; therefore, a second portion of the algorithm involves searching for a minimal-weight logically-equivalent error, solving the combinatorial problem ϵ' = ϵ + ∑^(n-k)_ℓλ_ℓ s_ℓ + ∑^k_mμ_m L_m, where λ_ℓ, μ_m∈ℤ_2. For this, we leveraged the integer-optimization functionality of the Gurobi optimization package <cit.>. Judging the correctness of the combinatorial optimization equation above proceeds similarly to the Gaussian elimination algorithm described in <ref>; if the algorithm succeeds in finding a minimal-weight vector ϵ', then recovery is deemed successful. After a number of Monte Carlo trials, the same binomial formula from <ref> is used to estimate the recovery threshold p_rec.
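A compact version of this binomial estimate is shown below; the per-weight recovery rates in the example are invented purely for illustration, and in practice they come from the Monte Carlo trials described above.

```python
from math import comb

def recovery_probability(p, n, p_rec_by_weight):
    """P_recovery(p, n) = sum_w C(n, w) p^w (1-p)^(n-w) * P_rec(w)."""
    return sum(comb(n, w) * p ** w * (1 - p) ** (n - w) * p_rec_by_weight[w]
               for w in range(n + 1))

# toy example: 4 physical qubits with made-up per-weight recovery rates
toy_rates = {0: 1.0, 1: 1.0, 2: 0.5, 3: 0.0, 4: 0.0}
for p in (0.1, 0.25, 0.5):
    print(p, round(recovery_probability(p, 4, toy_rates), 4))
```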
http://arxiv.org/abs/2407.13492v1
20240718132053
Enhancing Biomedical Knowledge Discovery for Diseases: An End-To-End Open-Source Framework
[ "Christos Theodoropoulos", "Andrei Catalin Coman", "James Henderson", "Marie-Francine Moens" ]
cs.CL
[ "cs.CL", "cs.AI" ]
[ [ Received: date / Accepted: date =================================== § ABSTRACT The ever-growing volume of biomedical publications creates a critical need for efficient knowledge discovery. In this context, we introduce an open-source end-to-end framework designed to construct knowledge around specific diseases directly from raw text. To facilitate research in disease-related knowledge discovery, we create two annotated datasets focused on Rett syndrome and Alzheimer's disease, enabling the identification of semantic relations between biomedical entities. Extensive benchmarking explores various ways to represent relations and entity representations, offering insights into optimal modeling strategies for semantic relation detection and highlighting language models' competence in knowledge discovery. We also conduct probing experiments using different layer representations and attention scores to explore transformers' ability to capture semantic relations.[Data and code: publicly available upon acceptance.] § INTRODUCTION Knowledge discovery <cit.> is a pivotal research domain due to the surge in publications, which makes keeping up with new findings challenging, necessitating automated knowledge extraction and processing. Of particular concern is the biomedical literature, where updates occur with ever-accelerating frequency (Fig. 1). Despite advances in healthcare, many diseases, such as Alzheimer's disease (AD) <cit.> and multiple sclerosis <cit.>, lack effective cures. Additionally, over 1,200 rare disorders have limited or no cures according to the National Organization for Rare Disorders.[https://rarediseases.org/rare-diseases/] Discovering new scientific insights from research papers can expedite disease understanding and accelerate cure development. This paper presents an end-to-end framework for detecting medical entities in unstructured text and annotating semantic relations, enabling automated knowledge discovery for diseases. We employ a multi-stage methodology for data acquisition, annotation, and model evaluation. The process starts with gathering relevant PubMed abstracts from PubMed to form the corpus. Entities are identified and extracted, followed by the co-occurrence graph generation that models the intra-sentence co-occurrence of the entities across the corpus. Leveraging the processed text and co-occurrence graph, an algorithm samples sentences to create gold-standard datasets. Medical experts label the semantic relations between entities within these sentences via an annotation portal. The framework's versatility allows application across various diseases and enables expansion to encompass knowledge about symptoms, genes, and more. This study focuses on two diseases of particular research interest: Rett syndrome (RS) <cit.> and AD. These diseases are selected due to their significant impact and the absence of a cure, highlighting the urgency for advancements in understanding and treatment. We introduce two curated datasets tailored for detecting semantic relations between entities in biomedical text related to RS and AD. The datasets are used for benchmarking, testing techniques for representing relations and entities and assessing language models' capabilities in knowledge discovery. This work probes the layer outputs of transformer models <cit.> and their attention patterns to reveal their ability to implicitly capture semantic relations in biomedical text. RS <cit.> poses challenges due to its sporadic nature and rare expression across diverse racial groups. 
The disorder's elusive nature undermines its comprehension and stresses the pressing need for a cure. Rare diseases collectively affect a substantial portion of the population, with over 30 million affected people in Europe alone <cit.>. AD is characterized by its prevalence among older populations, with millions of patients worldwide as it is the most common type of dementia (60-70% cases) <cit.>. With life expectancy on the rise, the projected increase in Alzheimer's cases accentuates the urgency of finding a cure. In summary, the key paper's contributions are: * Development of an open-source end-to-end framework to build disease knowledge directly from raw text. * Two annotated datasets for RS and AD provide gold labels for semantic relations, aiding disease knowledge discovery research.[The description of distantly supervised datasets for weakly supervised scenarios is included in Appendix F.] * Benchmarking on the datasets examines methods for relation and entity representation, offering insights into optimal approaches for semantic relation detection and emphasizing language models' knowledge discovery capabilities. * Probing experiments with different layer representations and attention scores assess transformers' inherent ability to capture semantic relations. § DATA PIPELINE We focus on developing a robust data pipeline (Fig. 2) to annotate sentences with entities associated with the Unified Medical Language System (UMLS) <cit.>. The first step involves the retrieval of the textual abstracts, followed by the mention extraction that includes entity detection and linking to UMLS. We construct a co-occurrence graph to highlight interconnections between entities in the text. The processed text and co-occurrence graph are then used to develop two curated datasets with precise entity annotations and semantic relations between detected entity pairs.[Additional information regarding the data pipeline is incorporated in Appendix A.] Abstract retrieval. We retrieve PubMed[ https://pubmed.ncbi.nlm.nih.gov/] articles ids based on a query (e.g., Rett syndrome) and extract their open-access abstracts. To accomplish this, we leverage the official Entrez Programming Utilities <cit.> and the Biopython API <cit.> (BSD 3-Clause License), ensuring access to the vast repository of biomedical literature. After obtaining the PubMed IDs (PMIDs), we retrieve the abstracts from the specified articles and tokenize the text into sentences using NLTK <cit.> (Apache License 2.0). Mention extraction. MetaMapLite <cit.> (open-source BSD License) is provided by the National Library of Medicine (NLM) for extracting biomedical entities and mapping them to Concept Unique Identifiers (CUIs) within UMLS. The tool is updated every two years to incorporate the latest medical terminology and to ensure its accuracy in extraction and mapping. MetaMapLite simultaneously extracts mentions and links them to UMLS in one step, efficiently associating mentions with their corresponding CUIs. We detect a diverse range of entities, spanning 82 unique semantic types and covering a broad spectrum of biomedical concepts, including diseases, biologically active substances, anatomical structures, genes, and more. Detailed entity detection often leads to overlapping or successive entities in the text. To address this, our pipeline incorporates a merging strategy that consolidates overlapping or subsequent entities into cohesive units. 
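A minimal sketch of the retrieval step, using the Biopython Entrez interface and NLTK mentioned above, is given below; the query string, batch size, and contact e-mail are placeholders, and the iterative handling of the 10,000-ID search limit (see Appendix A) is omitted for brevity.

```python
from Bio import Entrez          # Biopython wrapper around the Entrez E-utilities
import nltk

Entrez.email = "your.name@example.org"   # NCBI asks for a contact address

# 1) search PubMed for the disease query and collect the matching PMIDs
handle = Entrez.esearch(db="pubmed", term="Rett syndrome", retmax=10000)
pmids = Entrez.read(handle)["IdList"]
handle.close()

# 2) fetch the abstracts of a small batch of PMIDs as plain text
handle = Entrez.efetch(db="pubmed", id=",".join(pmids[:20]),
                       rettype="abstract", retmode="text")
abstracts = handle.read()
handle.close()

# 3) split the text into sentences for downstream mention extraction
nltk.download("punkt", quiet=True)
sentences = nltk.sent_tokenize(abstracts)
print(len(pmids), "PMIDs;", len(sentences), "sentences in the first 20 abstracts")
```

We now return to the merging strategy introduced above.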
For example, in the sentence: "To test norepinephrine augmentation as a potential disease-modifying therapy, we performed a biomarker-driven phase II trial of atomoxetine, a clinically-approved norepinephrine transporter inhibitor, in subjects with mild cognitive impairment due to AD.", the subsequent relevant mentions norepinephrine transporter and inhibitor are merged to one entity. Co-occurrence graph generation. We model the intra-sentence co-occurrence between the entities. Each node in the graph corresponds to a unique CUI and contains metadata including the semantic type and the list of sentence IDs where the corresponding entity is detected. An edge between two nodes signifies that the corresponding entities co-occur within the same sentence. The edge weight represents the number of times two entities co-occur in a sentence throughout the text corpus. §.§ Dataset Creation Leveraging the extracted co-occurrence graph, we define two distinct probability distributions to select sentences for manual annotation. The first distribution 𝒫 focuses on common pairs of co-occurred entities, with higher frequency in the co-occurrence graph resulting in a higher likelihood of sampling. The second distribution ℐ𝒫 prioritizes novel/rare pairs of co-occurred entities, selecting sentences where the entities have a lower frequency in the co-occurrence graph. We sample 50% of sentences using 𝒫 and 50% using ℐ𝒫 to ensure a balance of common and potentially novel pairs of co-occurring entities in the datasets.[Sentence sampling algorithm details in Appendix A.] Then, we develop an annotation portal using the streamlit[ https://streamlit.io/] library, providing a user-friendly interface for annotators. Annotators are presented with a sentence containing two highlighted entities and are prompted to categorize the semantic relation between them. Options include positive (direct semantic connection), negative (negative semantic connection where negative words like "no" and "absence" are present), complex (semantic connection with complex reasoning), and no relation. The annotation portal offers additional functions such as sentence removal (for non-informative sentences), entity removal (for incorrect entity types or spans), and context addition (for providing additional text to aid in relation type determination). We enlist the expertise of three medical experts to ensure the accuracy and reliability of the annotation process. The result of the expert annotation yields two curated datasets. The Relation Detection dataset for Rett Syndrome (ReDReS) contains 601 sentences with 5,259 instances and 1,148 unique CUIs (Tab. 1). The inter-annotator agreement is measured using the Fleiss kappa score <cit.>, resulting in 0.6143 in the multi-class setup (4 classes) and indicating substantial agreement among annotators <cit.>. In the binary setup (relation or no relation), the Fleiss kappa score is 0.7139. The Relation Detection dataset for Alzheimer's Disease (ReDAD) comprises 641 sentences with 8,565 instances and 1,480 unique CUIs (Tab. 1). The Fleiss kappa score is 0.6403 in the multi-class setup and 0.7064 in the binary setup, showing substantial consensus among annotators. The final labels are determined through majority voting, leveraging the labels provided by each expert. While the label distribution across classes is relatively balanced, the negative class is under-represented with 97 and 125 instances in ReDReS and ReDAD respectively (Tab. 1). 
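For concreteness, the sketch below shows one way to realize the co-occurrence graph and the two sampling distributions 𝒫 and ℐ𝒫 described above, using NetworkX; the function names and the sampling-with-replacement shortcut are ours, so this should be read as an illustration of the idea rather than the exact implementation.

```python
import random
import networkx as nx

def build_cooccurrence_graph(sentences):
    """`sentences` is a list of (sentence_id, set_of_cuis) pairs; edge weights
    count how often two CUIs co-occur within a sentence across the corpus."""
    g = nx.Graph()
    for sid, cuis in sentences:
        cuis = sorted(cuis)
        for i, a in enumerate(cuis):
            g.add_node(a)
            g.nodes[a].setdefault("sentences", []).append(sid)
            for b in cuis[i + 1:]:
                w = g.get_edge_data(a, b, {"weight": 0})["weight"]
                g.add_edge(a, b, weight=w + 1)
    return g

def sample_sentences(sentences, graph, n, rng=random.Random(0)):
    """Draw n/2 sentence ids from P (frequent pairs) and n/2 from IP (rare pairs)."""
    ids, freq = [], []
    for sid, cuis in sentences:
        cuis = sorted(cuis)
        pairs = [(a, b) for i, a in enumerate(cuis) for b in cuis[i + 1:]]
        total = sum(graph[a][b]["weight"] for a, b in pairs if graph.has_edge(a, b))
        if total > 0:
            ids.append(sid)
            freq.append(total)
    inv = [1.0 / f for f in freq]
    common = rng.choices(ids, weights=freq, k=n // 2)   # distribution P
    rare = rng.choices(ids, weights=inv, k=n // 2)      # distribution IP
    return common + rare                                # deduplication omitted
```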
Each dataset is randomly split into train, development, and test sets. § MODELS In this section, we introduce two main models, the Language-Model Embedding Learning (LaMEL) model and the Language-Model Relation Detection (LaMReD) model (Fig. 3), to benchmark datasets and establish robust baselines. Task formulation. Given a sentence containing two identified entities e1 and e2, we predict the semantic relation sem_r between them. In the multi-class setup, the labels are: positive, negative, complex, and no relation. In the binary setup, the goal is to determine if any relation exists. Special tokens [ent] and [/ent] mark the start and end of each entity within the sentence, ensuring consistent identification and processing of entity boundaries. §.§ LaMEL model LaMEL learns an embedding space optimized for relation detection (Fig. 3). As the backbone language model (LM), we opt for PubMedBERT <cit.> (MIT License), available in both uncased base and uncased large versions.[HuggingFace's Transformers library <cit.>] PubMedBERT is pretrained on the PubMed corpus, making it well-suited for our task as the curated datasets consist of sentences of abstracts from PubMed papers. Leveraging PubMedBERT ensures that the model can capture the language patterns prevalent in biomedical text. Following the LM encoding, we construct the representation of each entity by extracting its contextualized embedding E_i corresponding to each entity e_i from the encoded sequence. Subsequently, the entity representations are projected to the embedding space using a linear layer without changing the embedding dimension. The final prediction is based on cosine similarity between the two projected entity representations. If the cosine similarity exceeds a predefined threshold, the model predicts that there is a semantic relation between the two entities. We experiment with diverse strategies for learning entity representations (Fig. 3), aiming to optimize the effectiveness of the embedding space for the relation detection task. The explored types of entity representation E are: * A, B, C - Special Tokens: E_A = t_[ent], E_B = t_[/ent], E_C = t_[ent] ; t_[/ent], * D - Entity Pool: E_D = [t_E], * E - Entity & Middle Pool: E_E = [t_E] * [t_Inter], * F, G, H - Special Tokens & Middle Pool: E_F = t_[ent] * [t_Inter], E_G = t_[/ent] * [t_Inter], E_H = t_[ent] * t_[/ent] * [t_Inter], where {E_A, E_B, E_D, E_E, E_F, E_G, E_H}∈ℝ^d and E_C∈ℝ^2d, d is the embedding size of PubMedBERT base (768) and PubMedBERT large (1024), ; defines the concatenation, * holds for the element-wise multiplication, t_[ent], t_[/ent] are the embeddings of the start and end special tokens of the entities, [t_E] and [t_Inter] are the averaged pooled representation of the entities and the intermediate tokens between the entities respectively. §.§ LaMReD model LaMReD provides two variations that differ in information synthesis (Fig. 3), aiming to explore the potential effect of different aggregations <cit.>. LaMReDA utilizes element-wise addition to aggregate the entities' representations, while LaMReDM employs element-wise multiplication. The input text is encoded using PubMedBERT (base or large). Following LM encoding, we construct the relation representation by sampling and aggregating tokens from the input sequence. This step enables the model to capture essential features and contextual information relevant to semantic relation classification. 
To mitigate the risk of overfitting and enhance model generalization, we incorporate a dropout layer <cit.> with a probability of 0.3. The linear classification layer takes the aggregated representation and outputs the predicted label. Following the paradigm proposed by <cit.> and <cit.>, we experiment with various approaches for learning relation representations tailored to the relation detection task to empirically ascertain the effectiveness of each strategy (Fig. 3). The explored types of relation representation R are the following: * A, B, C - Special Tokens: R_A = f(l(t_[ent]_1), l(t_[ent]_2)), R_B = f(l(t_[/ent]_1), l(t_[/ent]_2)), R_C = f(l(t_[ent]_1), l(t_[/ent]_1), l(t_[ent]_2), l(t_[/ent]_2)), * D - Entity Pool: R_D = f(l([t_E1]), l([t_E2])), * E - Middle Pool: R_E = l([t_Inter]), * F - [CLS] token & Entity Pool: R_F = f(l(t_[CLS]), l([t_E1]), l([t_E2])), * G, H, I - [CLS] token & Special Tokens: R_G = f(l(t_[CLS]), l(t_[ent]_1), l(t_[ent]_2)), R_H = f(l(t_[CLS]), l(t_[/ent]_1), l(t_[/ent]_2)), R_I = f(l(t_[CLS]), l(t_[ent]_1), l(t_[/ent]_1), l(t_[ent]_2), l(t_[/ent]_2)), * J - [CLS] token & Middle Pool: R_J = f(l(t_[CLS]), l([t_Inter])), * K, L, M - Special tokens & Middle Pool: R_K = f(l(t_[ent]_1), l([t_Inter]), l(t_[ent]_2)), R_L = f(l(t_[/ent]_1), l(t_Inter]), l(t_[/ent]_2)), R_M = f(l(t_[ent]_1), l(t_[/ent]_1), l([t_Inter]), l(t_[ent]_2), l(t_[/ent]_2)), * N - Entity & Middle Pool: R_N = f(l([t_E1]), l([t_Inter]), l([t_E2])), * O, P - Context Vector & Entity Pool: R_O = l(cv), R_P = f(l([t_E1]), l([t_E2]), l(cv)), where {R_A, R_B, R_C, R_D, R_E, R_F, R_G, R_H, R_I, R_J, R_K, R_L, R_M, R_N, R_O, R_P}∈ℝ^d, d is the embedding size of PubMedBERT base (768) and PubMedBERT large (1024), f() is the aggregation function, element-wise addition for LaMReDA and element-wise multiplication for LaMReDM, l() is a linear projection layer with dimension equal to the embedding size, t_[ent]_1, t_[/ent]_1, t_[ent]_2, and t_[/ent]_2 are the embeddings of the start and end special tokens of the first and second entity and t_[CLS] is the representation of the special token [CLS]. We define the averaged pooled representation of the entities and the intermediate tokens between the entities as [t_E1], [t_E2], and [t_Inter] correspondingly. In equations 23 and 24, we utilize the localized context vector cv[Additional information is provided in Appendix D.] that utilizes the attention heads to locate relevant context for the entity pair and was introduced in ATLOP <cit.>, a state-of-the-art model in document-level relation extraction. §.§ Experimental setup The models are trained for 50 epochs and the best checkpoints are retained based on the performance on the development set, measured using the F1-score. We utilize the Adam <cit.> optimizer with a learning rate of 10-5. The batch size is set to 16. We conduct experiments in two distinct setups. In the multi-class setup, we evaluate performance using micro and macro F1-score, considering four relation types: positive, negative, complex, and no relation. In the binary setup, the objective is the prediction of the presence of relation. LaMEL is specifically designed for the binary setup. We utilize the official splits of ReDReS and ReDAD (Tab. 1) and repeat the experiments 10 times with different seeds. To ensure robustness of results, we also employ a 5-fold cross-validation approach. 
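As an illustration of how such a head can be wired up, the PyTorch sketch below implements a LaMReDA-style classifier using representation R_A (Eq. 9) with element-wise addition (swapping the addition for multiplication gives the LaMReDM variant). The checkpoint name, the handling of the [ent]/[/ent] special tokens, and the batching of entity positions are our own simplifications and are not taken from the released code.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class RelationHead(nn.Module):
    """LaMReDA-style relation classifier built on a PubMedBERT encoder."""

    def __init__(self, encoder_name, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        d = self.encoder.config.hidden_size          # 768 (base) or 1024 (large)
        self.proj = nn.Linear(d, d)                  # the projection layer l()
        self.dropout = nn.Dropout(0.3)
        self.classifier = nn.Linear(d, num_labels)

    def forward(self, input_ids, attention_mask, ent_positions):
        # ent_positions: (batch, 2) indices of the two [ent] start tokens
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        idx = ent_positions.unsqueeze(-1).expand(-1, -1, h.size(-1))
        ents = h.gather(1, idx)                      # (batch, 2, hidden)
        r = self.proj(ents[:, 0]) + self.proj(ents[:, 1])   # R_A, f = addition
        return self.classifier(self.dropout(r))      # logits for cross-entropy
```

Note that the [ent] and [/ent] markers have to be added to the tokenizer vocabulary, and the encoder's embedding matrix resized, before training.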
To explore the cross-disease capabilities of our approach, we train the models using one dataset (e.g., ReDReS) and evaluate on the other (e.g., ReDAD), and vice versa. We utilize the relation representation R_A (Eq. 9) for LaMReDA and LaMReDM and the entity representation E_A (Eq. 1) for LaMEL. These experiments are repeated 10 times with different seeds, and 15% of the training data is excluded to define the development set[Hardware: single NVIDIA RTX 3090 GPU 24GB.]. The cross-entropy loss function is used to train LaMReDA and LaMReDM. For LaMEL, the following cosine embedding loss function is used: l(x_1, x_2, y) = 1 - cos(x_1, x_2) if y = 1, and l(x_1, x_2, y) = max(0, cos(x_1, x_2) - m) if y = -1, where x_1 and x_2 are the projected representations of the two entities, y is the ground-truth label (1 if the entities are correlated, -1 if they are not), cos() is the cosine similarity in the embedding space, and m is the margin parameter that is set to 0. In the inference step, the threshold to predict the presence of a relation based on the cosine similarity of the two entity representations is set to 0.5. § RESULTS Tables 2 and 3 report the F1-scores of the LaMEL, LaMReDA, and LaMReDM models on the ReDReS and ReDAD datasets. Each cell (except for cross-disease experiments) displays two values: the average F1-score from 10 runs on the original test set (Tab. 1) and the average F1-score from a 5-fold cross-validation. The models perform well across all relation (A-P) and entity (A-H) representations, showing their ability to learn meaningful representations for the semantic relation task regardless of initial token selection. However, we observe patterns regarding the relation representations. In the binary setup, relation representation R_G (Eq. 15) yields strong results for both datasets, suggesting that including the [CLS] token representation might be beneficial. In the multi-class setup, relation representations R_L (Eq. 20), R_J (Eq. 18), and R_O (Eq. 23) are effective for both datasets, indicating that the surrounding context is crucial for the more complex task, as R_L and R_J include the averaged pooled representation of intermediate tokens between entities, and R_O leverages the context vector <cit.>. The intra-model comparison reveals that over-parameterization tends to be useful. Using PubMedBERT large generally results in better performance than the base alternative. The PubMedBERT base shows superior performance mainly in experiments using the original splits of ReDReS (Tab. 1). LaMEL is highly competitive with LaMReDA and LaMReDM, indicating that learning entity embedding spaces optimized for relation detection is promising. LaMEL achieves the highest performance in the 5-fold setup of ReDAD and the original setup of ReDReS, with F1-scores of 91.03% and 91.25% respectively. The inter-model comparison across the same relation representations indicates that the aggregation function does not significantly impact relation detection. Neither LaMReDA (element-wise addition) nor LaMReDM (element-wise multiplication) shows a clear advantage over the other. This suggests that the transformer layers of PubMedBERT and the projection layer l() preceding the aggregation are effectively trained in both models to encode the essential information for relation detection, regardless of the aggregation function used. The cross-disease experiments underscore the robustness of the models in both binary and multi-class setups. 
This robustness supports transfer learning <cit.> in semantic relation detection, extending to other diseases, highlighting the potential for broader applications and research endeavors in knowledge discovery. Human Performance. To assess and compare to human performance, two additional experts identify the relation type in a random sample of 300 instances from the test set of each dataset (Tab. 1). The evaluation ground truth is based on the original test set labels. In the binary setup, the average F1-score ranges from 92.14 for ReDReS to 91.87 for ReDAD. The LaMReDA, LaMReDM, and LMEL models achieve performance comparable to human experts, indicating a high ability to detect semantic relations. Multi-class macro F1-scores range from 85.23 (micro: 85.45) to 85.76 (micro: 85.87) for ReDReS and ReDAD, respectively. Compared to human experts, all models show a performance gap, highlighting that identifying more complex aspects of semantic relation is a challenging task. Baseline performance - lower bound. We randomly assign labels based on the training data's class distribution (Tab 1). In the binary setup, the baseline achieves F1-scores of 54% (ReDReS) and 53.16% (ReDAD). For the multi-class setup, the macro F1-scores range from 32.05% to 32.43%, stressing the task's difficulty, particularly for distinguishing various semantic relations (multi-class)[More information is available in Appendix E.]. § PROBING This study probes PubMedBERT's ability to capture semantic relations between entities. We explore different transformer layer representations and attention scores per layer and attention head. Averaged pooled entity representations are extracted from each layer, followed by training a linear classification layer. We test relation representations R_D, R_O, and R_P (Eq. 12, 23, 24) of LaMReDA and LaMReDM to assess the impact of the context vector. Out-of-the-box representations are evaluated without the projection linear layer l(). We also extract average attention scores of tokens for each entity towards the other across each layer and head, concatenating these into a feature vector for training a linear classification layer. Following <cit.>, we also train the classification layer using average attention scores between the two entities across all layers. Figure 4 shows the results of probing experiments in the binary setup using ReDReS and PubMedBERT base.[Additional probing experiments in Appendix G.] The 10th and 11th layers provide the most informative representations for relation types (R_D, R_O, and R_P). The R_D representation, using element-wise multiplication, outperforms other representations in intra-layer comparisons, suggesting its effectiveness without end-to-end training. However, as highlighted in section 4, inter-model comparisons indicate that the transformer layers and projection layer l() capture crucial information for relation detection, regardless of the aggregation function. Using context vectors with R_O and R_P generally offers no advantage, though R_O from the 10th and 11th layers performs well, indicating possibly meaningful localized context. Attention scores between entities in the 12th layer yield the best performance, surpassing the baseline of using scores from all layers, indicating strong attention between the entities in the last layer. Figure 4 reveals that the 6th and 9th attention heads are most informative for relation detection. § RELATED WORK Information Extraction Datasets. 
Several biomedical datasets aim to enhance Information Extraction (IE) system development <cit.>, typically focusing on one or a few entity types and their interactions. AIMed <cit.>, BioInfer <cit.>, and BioCreative II PPI IPS <cit.> formulate protein-protein interactions. The chemical-protein and chemical-disease interactions are modeled by DrugProt <cit.> and BC5CDR <cit.>, respectively. ADE <cit.>, DDI13 <cit.>, and n2c2 2018 ADE <cit.> include drug-ADE (adverse drug effect) and drug-drug interactions. EMU <cit.>, GAD <cit.> and RENET2 <cit.> contain relations between genes and diseases. N-ary <cit.> incorporates drug-gene mutation interactions. The task of event extraction is illustrated by GE09 <cit.>, GE11 <cit.>, and CG <cit.>. DDAE <cit.> includes disease-disease associations. BioRED <cit.> focuses on document-level relations for various entities. Unlike these datasets, ReDReS and ReDAD focus on RS and AD, include entities of up to 82 different semantic types, and model the semantic relation between them. Knowledge Discovery. <cit.> present PREDICT, a method for ranking potential drug-disease associations to predict drug indications. <cit.> release AlzKB, a heterogeneous graph knowledge base for AD, constructed using external data sources and describing various medical entities (e.g., chemicals, genes). Other graph-based efforts model knowledge around AD for tasks such as drug repurposing <cit.>, gene identification <cit.>, or as general knowledge repositories <cit.>. Another paradigm for knowledge discovery is the open information extraction (OIE) setup <cit.>, which faces challenges such as data consistency, performance evaluation, and semantic drift <cit.>. Research efforts <cit.> aim to address these issues and extract knowledge with little or no supervision. Advances in literature-based discovery <cit.> try to identify novel medical entity relations using graph-based <cit.>, machine learning <cit.>, and co-occurrence methods <cit.>. <cit.> stress the potential of large language models (LLMs) to summarize, simplify, and synthesize medical evidence <cit.>, suggesting that LLMs may have encoded biomedical knowledge <cit.>. To exploit this potential, we explore constructing LM representations for knowledge discovery. To the best of our knowledge, no systematic approach assembles knowledge about RS. Unlike previous work, we introduce a, in principle, disease-agnostic framework, to acquire knowledge about RS and AD starting from raw text. § CONCLUSION This work presents an open-source framework for disease knowledge discovery from raw text. We contribute two new annotated datasets for RS (ReDReS) and AD (ReDAD), facilitating further research. Extensive evaluation explores various methods for representing relations and entities, yielding insights into optimal modeling approaches for semantic relation detection, and emphasizing language models' potential in knowledge discovery. § LIMITATIONS One limitation of the paper is that the data pipeline relies on an external mention extractor/linker. However, this aspect introduces flexibility, allowing researchers and practitioners to integrate custom models suited to their specific applications. The creation of gold-standard datasets requires the manual work of medical experts. This process is time-consuming and resource-intensive, potentially limiting the scalability of the approach. 
Nevertheless, the experiments demonstrate that the supervised models of the study achieve strong performance in semantic relation detection without needing a large training set. Additionally, the cross-disease experiments highlight the robustness of the models in both binary and multi-class setups. This finding enables transfer learning scenarios in semantic relation detection, which can be applied to other diseases or medical aspects, indicating a potential for broader applications and research opportunities. § ETHICS STATEMENT All recruited medical experts provided informed consent before participating in the annotation process. The compensation provided to the annotators was adequate and considered their demographic, particularly their country of residence. § ACKNOWLEDGEMENTS This work is supported by the Research Foundation – Flanders (FWO) and Swiss National Science Foundation (SNSF). Christos Theodoropoulos and Marie-Francine Moens are affiliated with Leuven.AI - KU Leuven institute for AI, B-3000, Leuven, Belgium. acl_natbib § DATA PIPELINE: ADDITIONAL INFORMATION To facilitate effective abstract retrieval, we implement an iterative approach to circumvent the API's limitation of retrieving only 10,000 article IDs per query. This iterative process enables us to access a comprehensive set of PubMed IDs (PMIDs) related to the query. The detailed list of the 82 semantic types of the MetaMapLite-based pipeline is presented in Table 4. In addition to the MetaMapLite-based pipeline, we propose a second pipeline that is based on ScispaCy (Apache License 2.0) (Fig. 5). The difference lies in the selection of entity extractors and linkers that map the extracted entities to knowledge schemes. Unlike MetaMapLite, which adopts an integrated approach where mention extraction and linking are performed simultaneously in a single step and focuses on UMLS mapping, allowing for more precise and targeted extraction of entities, ScispaCy serves a broader range of Natural Language Processing (NLP) tasks. After the retrieval of the abstracts that is described in subsection 2.1, the following steps are executed: Knowledge schema and linker generation, Mention extraction, Entity linking. and Sampling of linked identifiers. Knowledge schema and linker generation. ScispaCy <cit.> harnesses an older version of UMLS (2020AA). This version serves as the foundation upon which ScispaCy trains and constructs its linkers that operate on a char-3grams string overlap-based search mechanism, facilitating efficient and accurate entity recognition and linking processes. Following the paradigm of ScispaCy, we provide scripts for generating updated linkers tailored to a range of knowledge schemes. These include UMLS <cit.>, Gene Ontology (GO) <cit.>, National Center for Biotechnology Information (NCBI) taxonomy <cit.>, RxNorm <cit.>, SNOMED Clinical Terms (SNOMEDCTUS) <cit.>, Human Phenotype Ontology (HPO) <cit.>, Medical Subject Headings (MeSH) <cit.> DrugBank <cit.> and Gold Standard Drug Database (GS)[https://www.nlm.nih.gov/research/umls/ sourcereleasedocs/current/GS/index.html]. Of particular note is the inclusion of UMLS, a unified system encompassing various knowledge bases, vocabularies, taxonomies, and ontologies pertinent to the biomedical domain. Any supported linker maps the concepts to UMLS CUIs enhancing the standardization of medical terminology. 
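To give a flavour of how the ScispaCy-based pipeline is driven (the mention-extraction models and the prioritized CUI sampling are described below), a minimal linking example could look as follows; the chosen model and example sentence are ours, and the full pipeline combines four NER models rather than the single one shown here.

```python
import spacy
from scispacy.linking import EntityLinker  # registers the "scispacy_linker" pipe

nlp = spacy.load("en_core_sci_sm")         # any scispaCy model can be used here
nlp.add_pipe("scispacy_linker",
             config={"resolve_abbreviations": True, "linker_name": "umls"})
linker = nlp.get_pipe("scispacy_linker")

doc = nlp("MECP2 mutations cause most cases of Rett syndrome.")
for ent in doc.ents:
    for cui, score in ent._.kb_ents[:1]:   # top-ranked UMLS candidate per mention
        print(ent.text, cui, round(score, 3),
              linker.kb.cui_to_entity[cui].canonical_name)
```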
Notably, the flexibility of ScispaCy's implementation allows for seamless expansion to incorporate additional knowledge bases, thereby enhancing its versatility and applicability across diverse research needs. Mention extraction. ScispaCy boasts four distinct entity extractors, each trained on different corpora, collectively encompassing a range of entity types. These extractors include named entity recognition (NER) models trained on the CRAFT corpus (with 6 entity types) <cit.>, JNLPBA corpus (with 5 entity types) <cit.>, BC5CDR corpus (with 2 entity types) <cit.>, and BIONLP13CG corpus (with 16 entity types) <cit.>. To maximize the range of the entity extraction, we leverage these diverse extractors in tandem, allowing us to capture mentions of 18 unique entity types (gene or protein, cell, chemical, organism, disease, organ, DNA, RNA, tissue, cancer, cellular component, anatomical system, multi-tissue structure, organism subdivision, developing anatomical structure, pathological formation, organism substance, and immaterial anatomical entity). Entity linking. This process enhances the semantic understanding of the extracted entities, facilitates standardization, which is a key issue in the biomedical field <cit.>, and promotes interoperability with external resources by associating the entities with specific concepts in supported knowledge schemes. Each entity is subjected to a linking process where we attempt to map it to concepts within supported knowledge bases or vocabularies. If a match is found, the entity is assigned a unique identifier, referred to as a CUI, corresponding to the specific concept in the knowledge schema. As entities may be linked to multiple knowledge sources, we merge the extracted CUIs obtained from the different linkers. This consolidation process ensures that each entity is associated with a comprehensive set of identifiers, encompassing diverse perspectives and representations across various knowledge schemes. Sampling of linked identifiers. We address the scenario where multiple CUIs can be extracted for each entity due to the utilization of multiple linkers. We propose a prioritized sampling strategy (Fig. 6) to manage this situation and select the most relevant CUIs effectively. This strategy is designed to sample CUIs based on the predicted type of the entity (e.g., disease, gene, or chemical/drug) by prioritizing mapped CUIs from specific knowledge schemes focused on the entity being processed. For example, if an entity is predicted to be a chemical/drug, the sampling strategy first checks if any linked CUIs exist in RxNorm linker, a specific knowledge schema tailored for chemicals. If linked CUIs are found, they are sampled for inclusion in the final set of linked concepts associated with the entity, otherwise, the search is continued in a prioritized way (Fig. 6). We stress that the sampling strategy can be easily modified by the user based on the requirements of the research or the application. The co-occurrence graph generation step is described in subsection 2.1. Sentence Sampling Algorithm. Given a set of sentences with defined CUIs sent_c and the co-occurrence frequency graph co_g, sample n number of sentences (Alg. 1). Initialize a dictionary f_d and for each sentence save the extracted CUIs pairs c_p (extract_conc(sent)), the frequencies f_p of each pair extracted from the co_g (extract_freq(c_p, co_g)) and the summation of the frequencies t_f. 
Retrieve the sentence ids, summed frequencies, and the inverted summed frequencies from the dictionary and append them in ids, f_l, and inv_f_l lists respectively. Calculate the total sums of the frequencies t_f_sum, inv_t_f_sum and then utilize them to define the probability distributions 𝒫 and ℐ𝒫. Sample 50% of the sentences from 𝒫 (sample(𝒫, n/2)) and 50% from ℐ𝒫 (sample(ℐ𝒫, n/2)) to ensure a balance of common and potentially novel pairs of co-occurred entities in the dataset. 0.805 § ANNOTATION PORTAL Figure 7 presents the annotation portal with an example from the ReDReS dataset. The annotator's task is to identify the semantic relation between the two highlighted entities, classifying it as either a Positive Relation, Negative Relation, Complex Relation, or No Relation. If the sentence is considered uninformative or if there are errors in entity detection, type, or span, the annotator can remove the sentence or the entities. Furthermore, the annotator is encouraged to provide feedback, including any additional text that can clarify or elaborate on the relationship between the entities. By providing this supplementary information, annotators can contribute to a richer and more nuanced understanding of the relations within the data. § DATASET INSTANCES In this section, we present some instances of different relation types in the datasets. In each example, we highlight the two detected entities. Positive Relation: * Amyloid fibrils are found in many fatal neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, type II diabetes, and prion disease. * AChE has become an important drug target because partial inhibition of AChE results in modest increase in ACh levels that can have therapeutic benefits, thus AChE inhibitors have proved useful in the symptomatic treatment of Alzheimer's disease. Complex Relation: * When the brain's antioxidant defenses are overwhelmed by IR, it produces an abundance of reactive oxygen species (ROS) that can lead to oxidative stress, mitochondrial dysfunction, loss of synaptic plasticity, altered neuronal structure and microvascular impairment that have been identified as early signs of neurodegeneration in Alzheimer's disease, Parkinson's, amyotrophic lateral sclerosis, vascular dementia and other diseases that progressively damage the brain and central nervous system. * Autophagy inhibitor 3-methyladenine (3-MA) attenuated the neuroprotective effect of CA, suggesting that autophagy was involved in the neuroprotection of CA. Negative Relation: * It was not observed in synaptopodin-deficient mice, which lack spine apparatus organelles. * Furthermore, the use of some kinds of antihypertensive medication has been suggested to reduce the incidence of dementia including Alzheimer's disease. No Relation: * Peripheral immune cells can cross the intact BBB, CNS neurons and glia actively regulate macrophage and lymphocyte responses, and microglia are immunocompetent but differ from other macrophage/dendritic cells in their ability to direct neuroprotective lymphocyte responses. * These techniques have thus provided morphological and functional brain alterations mapping of Alzheimer's disease: on one hand grey matter atrophy first concerns the medial temporal lobe before extending to the temporal neocortex and then other neocortical areas; on the other hand, metabolic alterations are first located within the posterior cingulate cortex and then reach the temporo-parietal area as well as the prefrontal cortex, especially in its medial part. 
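To complement the textual description of the sentence sampling procedure (Alg. 1) given earlier in this appendix, the following is a hedged sketch of the same logic. The helper functions (extract_conc, extract_freq), the structure of the co-occurrence graph, and the handling of overlaps between the two draws are assumptions that mirror the description rather than the authors' actual code.

```python
# Hedged sketch of the sentence sampling procedure (Alg. 1). Helper names and
# data structures are assumptions that follow the textual description.
import numpy as np

def sample_sentences(sent_c, co_g, n, extract_conc, extract_freq, seed=0):
    """Sample n sentence ids: half favoring frequent CUI pairs, half rare ones."""
    rng = np.random.default_rng(seed)
    ids, t_f = [], []
    for sent_id, sent in sent_c.items():
        c_p = extract_conc(sent)          # CUI pairs detected in the sentence
        f_p = extract_freq(c_p, co_g)     # co-occurrence frequency of each pair
        ids.append(sent_id)
        t_f.append(sum(f_p))              # t_f: summed frequency for this sentence
    t_f = np.asarray(t_f, dtype=float)
    inv_t_f = 1.0 / np.maximum(t_f, 1.0)  # inverted frequencies (zero guard is an assumption)
    P = t_f / t_f.sum()                   # favors common, well-attested pairs
    IP = inv_t_f / inv_t_f.sum()          # favors rare, potentially novel pairs
    half = n // 2
    i1 = rng.choice(len(ids), size=half, replace=False, p=P)
    i2 = rng.choice(len(ids), size=n - half, replace=False, p=IP)
    # How overlaps between the two draws are resolved is not specified in the
    # description; here the two samples are simply concatenated.
    return [ids[i] for i in i1] + [ids[i] for i in i2]
```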
§ LOCALIZED CONTEXT VECTOR The localized context vector is computed as follows: * Extract the attention scores of the two entities in the last encoding layer of the language model. * Calculate the Hadamard product of the attention vectors. * Calculate the average of the Hadamard product over the attention heads. * Normalize to extract the distribution over the sequence. * Extract the localized context vector by multiplying the token representations of the last encoding layer with the distribution vector. § BASELINE PERFORMANCE: LOWER BOUND To establish a baseline performance (lower bound) for comparison, contrasting with the human evaluation that serves as an upper bound, we randomly assign class labels to each instance of the test set based on the prior class distribution in the training set (Tab. 1). This simulates a classifier with no ability to learn relations between entities. We repeat this experiment 1 million times for robustness and report the average F1-score. In the binary setup, the baseline achieves average F1-scores of 54% (ReDReS) and 53.16% (ReDAD). For the multi-class setup, the average macro F1-scores range from 32.05% (micro: 32.21%) to 32.43% (micro: 32.33%) for ReDReS and ReDAD, respectively. Despite the simplicity of the baseline, the low performance highlights the challenge of the task, especially in the multi-class scenario where the model needs to distinguish between nuanced semantic relations. § DISTANTLY SUPERVISED DATASETS ReDReS and ReDAD include gold annotations for a small fraction of the extracted sentences. The pre-processed text consists of 28,622 and 1,301,429 additional sentences related to RS and AD respectively, without annotations about the semantic relation between the detected entities. Observing that the supervised models achieve performance levels comparable to human experts, we leverage the best-performing models to generate silver labels for the unannotated instances. For the binary setup, we employ: * LaMReDA (PubMedBERT large) with the relation representation R_A (Eq. 9) for the RS corpus. * LaMReDA (PubMedBERT large) with the relation representation R_J (Eq. 18) for the AD corpus. For the multi-class setup, we use: * LaMReDA (PubMedBERT large) with the relation representation R_F (Eq. 14) for the RS corpus. * LaMReDA (PubMedBERT large) with the relation representation R_F (Eq. 14) for the AD corpus. We stress that the selection of the models relies on the performance in the 5-fold cross-validation setup to avoid choosing based on the model performance on the original test set. Each model is trained 10 times with different seeds using the original splits (Tab. 1) of ReDReS and ReDAD correspondingly. The best model weights are saved based on the performance on the development set. Every trained model provides the predictions for the unannotated instances and the final silver labels are extracted through majority voting. The Distantly Supervised Relation Detection dataset for Rett Syndrome (DiSReDReS) contains 304,008 instances with 8,611 unique CUIs and 80 semantic types (Tab. 5). The Distantly Supervised Relation Detection dataset for Alzheimer's Disease (DiSReDAD) comprises 13,608,175 instances with 53,750 unique CUIs and 82 semantic types (Tab. 5). As noisy labeling is inevitable in distantly supervised data and imposes challenges for knowledge extraction scenarios, the two extensive datasets can promote weakly supervised learning. Weakly Supervised Setup. The task formulation remains the same as described in section 3 of the paper. 
The train sets of ReDReS and ReDAD are replaced by DiSReDReS and DiSReDAD, respectively. The development and test sets remain the same (Tab. 1). To provide a benchmark, we train the LaMReDA (PubMedBERT base) with the relation representation R_A (Eq. 9) for 10 epochs utilizing the ADAM optimizer with learning rate 10-5. The batch is set to 32. The experiments are repeated 10 times with different seeds and the best scores are retained based on the performance on the development set. In the supervised setup (Tab. 1), LaMReDA (PubMedBERT base) with the R_A representation achieves 90.72% and 88.31% F1-score in the binary setup on ReDReS and ReDAD, respectively (Tab. 3). Multi-class macro F1-scores range from 74.52% (micro: 74.49%) to 77.34% (micro: 77.64%) for ReDReS and ReDAD, accordingly (Tab. 3). Table 5 presents the benchmark performance in the binary and multi-class setup for both datasets. Notably, the performance is improved in the weakly supervised setup, indicating the robustness of LaMReDA, when trained with noisy data, and highlighting the quality of the silver labels of DiSReDReS and DiSReDAD. § PROBING: ADDITIONAL EXPERIMENTS We use the same experimental setup as described in subsection 3.3 and the experiments are conducted in the 5-fold cross-validation setting. To provide an inclusive probing analysis on ReDReS, we incorporate additional probing results in this section. Figures 8 and 9 present the experiments in the multi-class setup using PubMedBERT base. Additionally, aiming to explore the probing capabilities of PubMedBERT large, we include the results of further experiments in Figures 10, 11, and 12. These experiments investigate the model's performance in detecting semantic relations, comparing the representations and attention mechanisms at different layers and heads to understand how well the larger LM can discern complex relationships in the biomedical text. § DETAILED ANNOTATION GUIDELINES Task. Determine if there is any semantic relation between the two colored entities in the sentence. General Instructions: * Use Sentence Information Only: Base your annotation solely on the information provided within the sentence. Do not use external knowledge or prior information. * Entity Check: Examine the entities and their types. If an entity is incorrect, if the entity span is inaccurate (includes irrelevant words), or if the entity type is incorrect (e.g., "Rett syndrome" categorized as part of the human body), click the "Remove First Entity" or "Remove Second Entity" button, corresponding to the error. * Removing a Sentence: If a sentence lacks informative content, you have the option to remove it. Use this option if you are confident the sentence is uninformative. Relation Categories: * No Relation: Use this label if there's no semantic relation between the entities in the sentence. * Positive Relation: The two entities are directly, semantically connected. * Negative Relation: The two entities are negatively correlated. This is a rare case, and negative words or phrases (e.g., "no," "absence") often indicate this. * Complex Relation: Entities are related but not straightforwardly positive or negative. Complex reasoning might be needed to determine the semantic relation. Annotation Process: * When presented with a pair, choose the relevant relation category label. * If you change your choice, you can adjust it by clicking a new button corresponding to the revised label. * Important: Once you press "Done", the instance can't be retrieved, so ensure your decision is accurate. 
* Provide Relation Context: First, you need to finalize your choice for the relation labeling and then provide (if any) the related piece of text. If classifying a pair as related, specify the word or phrase in the sentence that influenced your decision. Use the text box provided and preferably copy-paste to avoid spelling errors. Press "Enter" after inputting the text to store it.
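As a closing illustration for the appendix, the localized context vector described above can be sketched as follows. The sketch assumes a Hugging Face transformers encoder (a PubMedBERT-style checkpoint is used as a stand-in) and hand-picked token positions for the two entities; the exact indexing of the attention scores is an interpretation of the five listed steps, not the authors' exact implementation.

```python
# Hedged sketch of the localized context vector: last-layer attention scores of
# the two entity tokens are combined with a Hadamard product, averaged over
# heads, normalized into a distribution over the sequence, and used to weight
# the last-layer token representations. Model name and indices are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)

text = "MECP2 mutations are the main cause of Rett syndrome."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

hidden = out.last_hidden_state[0]       # (seq_len, hidden)
attn = out.attentions[-1][0]            # last layer: (heads, seq_len, seq_len)

e1, e2 = 1, 9                           # token positions of the two entities (assumed)
prod = attn[:, e1, :] * attn[:, e2, :]  # step 2: Hadamard product, per head
avg = prod.mean(dim=0)                  # step 3: average over attention heads
dist = avg / avg.sum()                  # step 4: normalize into a distribution
localized_context = dist @ hidden       # step 5: weighted sum of token vectors
print(localized_context.shape)          # (hidden,)
```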
http://arxiv.org/abs/2407.12297v1
20240717033728
WebAssembly and Security: a review
[ "Gaetano Perrone", "Simon Pietro Romano" ]
cs.CR
[ "cs.CR" ]
Gaetano Perrone and Simon Pietro Romano, University of Naples Federico II, Department of Electrical Engineering and Information Technology, Via Claudio 21, 80125 Naples, Italy § ABSTRACT WebAssembly is revolutionizing the approach to developing modern applications. Although this technology was born to create portable and performant modules in web browsers, currently, its capabilities are extensively exploited in multiple and heterogeneous use-case scenarios. With the extensive effort of the community, new toolkits make the use of this technology more suitable for real-world applications. In this context, it is crucial to study the relationship between the WebAssembly ecosystem and software security. Indeed, WebAssembly can be a medium for improving the security of a system, but it can also be exploited to evade detection systems or to perform crypto-mining activities. In addition, programs developed in low-level languages such as C can be compiled into WebAssembly binaries, and it is interesting to evaluate the security impact of executing programs vulnerable to memory attacks in the WebAssembly sandboxed environment. Also, WebAssembly has been designed to provide a secure and isolated environment, but such capabilities should be assessed in order to analyze their weaknesses and propose new mechanisms for addressing them. Although some research works have provided surveys of the most relevant solutions aimed at discovering WebAssembly vulnerabilities or detecting attacks, at the time of writing, there is no comprehensive review of security-related literature in the WebAssembly ecosystem. We aim to fill this gap by proposing a comprehensive review of research works dealing with security in WebAssembly. We analyze  papers by identifying seven different security categories. We hope that our work will provide insights into the complex landscape of WebAssembly and guide researchers, developers, and security professionals towards novel avenues in the realm of the WebAssembly ecosystem. Keywords: Wasm Security, WebAssembly Security, Security Review, WebAssembly Review, Cloud Computing Security. § INTRODUCTION WebAssembly is a formal specification for portable machine code, born to allow the realization of portable code developed in any language and executed by modern browsers. This technology was developed to provide a client-side solution for browsers, but several cloud providers are nowadays offering WebAssembly runtime environments for the server side. This reveals the key innovations that WebAssembly promotes in terms of performance, isolation, distribution, and modularity. With the widespread diffusion of WebAssembly, it becomes crucial to research the relationship between this technology and security. Security research can involve different topics, such as empirical studies for evaluating the presence of security flaws in WebAssembly design and systems, the realization of solutions aimed at enhancing WebAssembly security, the implementation of novel approaches for discovering vulnerabilities or detecting attacks in WebAssembly, or the proposal of WebAssembly solutions for security-related use cases. Although several studies have been developed in the above-mentioned research fields, to the best of our knowledge, there is no work that provides an exhaustive survey of the existing literature.
This work aims to contribute to this direction by providing a comprehensive review of the most relevant works that have investigated the relationship between security and the WebAssembly ecosystem. We analyze  works, classifying   into seven security categories and describing the remaining  additional works that, while not fitting into a specific security category, provide fundamental knowledge for expanding the literature on WebAssembly and security. We hence provide a security overview of the WebAssembly world, which will hopefully represent a starting point for researchers looking to investigate future works, as well as for security practitioners aiming to integrate this appealing technology into their systems. The remainder of this article is structured in six sections. Section <ref> conveys to the readers the preliminary concepts needed to follow the subsequent sections. Section <ref> illustrates the review process we followed for retrieving works relevant to our review. Section <ref> shows existing studies that reviewed WebAssembly research works, with particular emphasis on those that have concentrated their efforts on security, by illustrating the differences with our findings. Section <ref> describes the reviewed works, while Section <ref> provides a discussion on these by summarizing relevant information and research gaps that could be addressed in future investigations. Section <ref> concludes our research path by proposing insights for researchers who want to explore challenging avenues in the WebAssembly security field. § PRELIMINARY CONCEPTS In this section, we first describe the WebAssembly ecosystem by introducing the reader to the basics. Then, we introduce the concepts that are useful to understand the relevant works illustrated in Section <ref>, namely, smart contracts and security techniques for addressing low-level vulnerabilities. §.§ WebAssembly Introduction In 2015, Brendan Eich announced the creation of WebAssembly, an innovative approach to writing assembly programs for the web <cit.> aimed to replace <cit.>. The main reasons behind the realization of an alternative approach to  were to simplify the compiler options and to increase performance by natively decoding code much faster than JavaScript <cit.>. In 2017, the WebAssembly Community Group was created with the mission of providing collaboration on defining a pre-standardization of WebAssembly, and in 2019, WebAssembly became a World Wide Web Consortium (W3C) recommendation <cit.>. Design goals are focused on: * Performance: code execution should be as close as possible to native code performance by taking advantage of modern underlying hardware features; * Security: the code should be validated and executed in memory-safe, isolated, and sandboxed environments; * Portability: the code should be independent of language, hardware, and platform. After the foundation of WebAssembly, several companies and nonprofit organizations such as the Bytecode Alliance <cit.> were interested in supporting the project. WebAssembly was designed to build C/C++ projects, but today, there is a strong community effort to increase the number of high-level languages supported. At the time of writing, it is possible to compile Python, Ruby, PHP, C, and Rust programs and generate WebAssembly code that can be executed by a WebAssembly runtime.
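To ground this, the sketch below shows how a module produced by one of these toolchains can be loaded and executed outside the browser. It uses the wasmtime Python bindings purely as an illustrative choice of runtime; the file name module.wasm and the exported function add are hypothetical placeholders for whatever a C or Rust toolchain actually emitted.

```python
# Hedged sketch: executing a pre-compiled WebAssembly module from a host
# program through the `wasmtime` Python bindings (pip install wasmtime).
# "module.wasm" and the export name "add" are hypothetical placeholders.
from wasmtime import Engine, Store, Module, Instance

engine = Engine()
store = Store(engine)

with open("module.wasm", "rb") as f:
    module = Module(engine, f.read())      # validate and compile the binary

instance = Instance(store, module, [])     # no imports in this minimal case
add = instance.exports(store)["add"]       # look up an exported function
print(add(store, 2, 3))                    # invoke it through the runtime
```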
As regards the ecosystem, some companies are following the approach of the Docker community, which developed DockerHub, an open search engine and repository for finding ready-to-use Docker images <cit.>. In particular, the Wasmer team provides a complete WebAssembly framework, constituted by a WebAssembly runtime for running WebAssembly binaries <cit.>, a Software Development Kit (SDK) to embed WebAssembly programs in any programming language <cit.>, a serverless platform for executing programs developed through the SDK <cit.>, and a publicly available and searchable repository where it is possible to find any WebAssembly binary developed by the community <cit.>. §.§.§ WebAssembly components The WebAssembly ecosystem consists of several components that enable the execution of programs (at runtime or compiled) written in multiple languages. WebAssembly used to be mainly focused on executing C programs, but currently, it supports the entire set of most popular programming languages. The developers write a program in their own preferred language. Then, the source code can follow two processing steps: (i) it can be compiled by a WebAssembly compiler and executed by a runtime engine, or (ii) it can be interpreted, i.e., a WebAssembly interpreter at runtime processes the lines of code and executes them. The execution is orchestrated by a WebAssembly runtime engine, that abstracts the execution from the underlying system (i.e., the browser, the Operating System, etc.) and finally executes either the WebAssembly binary file (in case of compilation) or the current line of code (in case of interpretation) on the underlying system architecture. Figure <ref> depicts the primary WebAssembly components. The development of WebAssembly was motivated by the need to provide a low-level assembly-like language for compiling arbitrary source code and running it in the browser. As WebAssembly shows a high level of portability, it has been extended to include other platforms. The WebAssembly System Interface (WASI) <cit.> comprises a collection of standardized API specifications aimed at providing secure interfaces for programs, thereby extending the capabilities of WebAssembly. The API facilitates interaction with underlying functionality such as I/O, clocks, filesystem, HTTP driver, and more. WebAssembly programs are organized into modules, i.e., units of deployment, loading, and compilation, that have definitions for types, functions, tables, memories, and globals. Each module declares imports and exports, and upon instantiation, a start function is automatically invoked. §.§.§ WebAssembly memory layout and binary format WebAssembly follows the principle of simplicity by defining a simple linear memory structure, i.e., a mutable array of raw bytes. Each memory has a metadata region that keeps information about the memory layout, size, and properties of the memory in the WebAssembly module. Each program loads and stores values from/to a linear memory at any byte address. The computational model of WebAssembly is based on a stack machine. The program's instructions are executed in order and manipulate values by popping arguments and pushing results into the stack. Each instruction is encoded as opcodes of one byte (multiple bytes for control instructions), followed by immediate arguments if present. There are several instruction types, namely, control, reference, parametric, variable, table, memory, numeric, vector, and expression. 
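The stack-machine execution model and the linear memory described above can be illustrated with a tiny module written directly in the text format. In the hedged sketch below, the module is embedded in a Python string and instantiated with the wasmtime bindings; any compliant runtime would behave the same way, and the export names are arbitrary.

```python
# Hedged sketch: a small module in the WebAssembly text format that exercises
# the stack machine (operands pushed, then consumed by i32.add) and the linear
# memory (i32.store / i32.load on a single 64 KiB page). Running it through
# the `wasmtime` Python bindings is an illustrative choice of runtime.
from wasmtime import Engine, Store, Module, Instance

WAT = """
(module
  (memory (export "mem") 1)               ;; one 64 KiB page of linear memory
  (func (export "poke") (param i32 i32)   ;; mem[addr] = value
    local.get 0
    local.get 1
    i32.store)
  (func (export "peek") (param i32) (result i32)
    local.get 0
    i32.load)
  (func (export "add") (param i32 i32) (result i32)
    local.get 0   ;; push the first operand
    local.get 1   ;; push the second operand
    i32.add)      ;; pop both, push their sum
)
"""

engine = Engine()
store = Store(engine)
instance = Instance(store, Module(engine, WAT), [])
exports = instance.exports(store)

exports["poke"](store, 0, 42)              # write 42 at byte offset 0
print(exports["peek"](store, 0))           # -> 42
print(exports["add"](store, 2, 3))         # -> 5
```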
WebAssembly has two concrete representations, which map to a common structure: * text format (.wat extension): designed to let developers understand the structure of a WebAssembly module. It can be used for writing code and compiling it in binary format; * binary format (.wasm extension): a compact binary instruction format for a stack-based virtual machine. It is the compilation target of higher-level languages. §.§.§ WebAssembly use-cases WebAssembly is designed to ensure high flexibility and portability. As an example, a program written in C can be compiled and executed in any architecture, in the browser, on x86 desktop machines, and in arm-embedded microcontrollers. Typical use cases are focused on browser-based applications such as games, peer-to-peer applications, or image recognition, but there are also other application scenarios <cit.>. Hilbig et al. (2021) <cit.> sample 100 random WebAssembly binaries used on the web and underline that WebAssembly is used for gaming, text, and media processing, visualization/animation, as well as demonstrations of programming languages. A relevant report is provided by the Cloud Native Computing Foundation, that every year summarizes the state of WebAssembly in the world <cit.>. According to their study: * The principal use-cases of WebAssembly are web applications (58%), data visualization (35%), Internet of things, games, and backend services; * Developers are interested in performance benefits, exploring new use cases and technology, and sharing code between projects. They confirm performance benefits when WebAssembly is adopted; * WebAssembly is used both for realizing new applications and migrating existing applications; * The source code languages used for compiling WebAssembly binaries are primarily JavaScript, C#, C++, and Python. In our work, we highlight the most pertinent cybersecurity-focused use cases documented in the literature (see Section <ref>). §.§ Low-level vulnerabilities in software WebAssembly is an assembly language typically used to compile basic code developed in low-level languages such as C. Therefore, it is crucial to introduce foundational concepts surrounding low-level vulnerabilities that could impact a program. In order to understand this kind of vulnerability, it is essential to offer an overview of program execution in machines. Low-level languages such as C and C++ can lead to various vulnerabilities stemming from unsafe memory management practices: * Stack-based overflow: this vulnerability occurs when an attacker can overwrite a memory buffer located in the stack, including function return addresses, leading to arbitrary code execution; * Heap-based overflow: this vulnerability is similar to stack overflow, but it occurs in the heap memory section; * Integer overflow: this vulnerability occurs when the result of an arithmetic operation exceeds the memory size allocated for the variable holding it, potentially leading to memory corruption or unintended code execution. These vulnerabilities often occur when unsafe functions are input-controllable, allowing a malicious user to send input that overwrites memory buffers. §.§.§ Security approaches to discover software vulnerabilities Software vulnerability discovery techniques can be classified into two broad categories: * Static analysis: involves source code analysis without executing the program. 
It is useful for detecting vulnerabilities caused by improper input validation or type mismatches, but it cannot discover runtime errors or vulnerabilities caused by complex interactions with the code; * Dynamic analysis: these approaches try to find vulnerabilities in programs at runtime by injecting malicious inputs (fuzzing) or by analyzing the code during execution (tracing or symbolic execution). These approaches can be further classified into several techniques, as illustrated in Table <ref>.
Static techniques:
* Control flow graph analysis <cit.>: Analyze the graphical representation of all the paths that could be traversed during the execution of a program to discover security issues.
* Context-sensitive data flow analysis <cit.>: Define a set of "contexts", i.e., scenarios in which a function can be called within the program, and analyze the data flow when the context changes.
* Code property graph <cit.>: Realize a graph-based representation of the analyzed program that can be searched for obtaining relevant information and potentially discovering vulnerabilities.
* State machine representation <cit.>: A theoretical model based on program states is used to represent the behavior of the program, potentially revealing security flaws.
Dynamic techniques:
* Black-box fuzzing <cit.>: Send random inputs to trigger crashes and detect vulnerabilities, with no knowledge about the application's internal structure.
* White-box fuzzing <cit.>: Perform fuzzing with full knowledge of the internal structure of the application under test.
* Grey-box fuzzing <cit.>: Combine black-box and white-box approaches to leverage both of them.
* Symbolic execution <cit.>: Analyze the program to determine, through symbolic variables, the inputs that trigger the execution of a specific part of a program.
* Concolic execution <cit.>: Like symbolic execution, but using concrete values, thereby increasing the path exploration space.
* Tracing and control-data flow <cit.>: Monitor the program's control flow and data flow and keep track of triggered vulnerabilities during the execution.
* Bytecode instrumentation and run-time validation <cit.>: Modify bytecode at runtime in order to insert additional checks that can be validated to identify security flaws.
Table: Static and dynamic techniques to discover vulnerabilities.
§.§ Smart contracts This section provides context information useful for following the considerations presented in Sections <ref> and <ref>. A common use case for WebAssembly is the implementation of smart contracts. This stems from the distributed nature of smart contracts, which fits well with the portability and versatility of WebAssembly. Smart contracts are digital contracts stored on a blockchain, which are executed under a specific set of satisfied conditions. They are inherently related to blockchains, i.e., distributed, shared, and immutable ledgers <cit.>. One of the most widely deployed blockchain platforms is Ethereum <cit.>, which extends the capabilities of the blockchain technology beyond simple financial transactions. It allows developers to create smart contracts as complex applications with self-executing code. These smart contracts get automatically executed if and when specific conditions are met, hence providing transparency, immutability, and trust in transactions. Ethereum also introduced its native cryptocurrency, known as Ether (ETH), which is used to compensate network participants for performing computations and/or validating transactions.
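As a small, hedged illustration of the objects involved (blocks, transactions, receipts), the sketch below queries an Ethereum node with the web3.py library. The RPC endpoint is a placeholder assumption, and the snippet is only meant to concretize the concepts defined next; it is not part of the reviewed works.

```python
# Hedged sketch: inspecting basic Ethereum objects (blocks, transactions,
# receipts) with `web3.py`. The RPC endpoint URL is a placeholder assumption;
# any Ethereum node or provider endpoint would do.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-ethereum-rpc.invalid"))  # placeholder

block = w3.eth.get_block("latest")          # a block: header fields + tx hashes
print(block["number"], len(block["transactions"]))

if block["transactions"]:
    tx_hash = block["transactions"][0]
    tx = w3.eth.get_transaction(tx_hash)                 # a signed transaction
    receipt = w3.eth.get_transaction_receipt(tx_hash)    # its result: status, gas used
    print(tx["from"], tx["to"], receipt["gasUsed"], receipt["status"])
```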
The Ethereum documentation <cit.> provides several relevant concepts for understanding Blockchain vulnerabilities: * Ethereum Account: a digital identity on the Ethereum blockchain, allowing users to send and receive Ether, as well as interact with smart contracts; * Address: a unique identifier used for receiving tokens, used to identify the Ethereum Account; * Block: the data structure where the transactions or digital actions are stored; * Blockchain: a database of transactions, duplicated or shared on all computers in the network, ensuring that data cannot be altered retroactively <cit.>; * Smart contract: a program that automatically executes agreements on a blockchain, like a self-enforcing digital contract <cit.>; * Transaction: data committed to the Ethereum Blockchain signed by an originating account, targeting a specific address; * Contract account: an account containing code that executes whenever it receives a transaction from another account; * Ethereum Virtual Machine (EVM): a stack-based virtual machine that executes bytecode; * Externally Owned Account (EOA): the most common type of Ethereum account, controlled by a person through private keys; * Internal transaction: a transaction sent from a contract account to another contract account or an EOA; * Message: an internal transaction that is never serialized and exists only in the Ethereum execution environment (i.e., a message is like a transaction, except it is produced by a contract and not an external actor); * Message call: the act of passing a message from one account to another; * Delegatecall: a message call that uses data of the calling contract; * Receipt: data returned by an Ethereum client to represent the result of a particular transaction, including a hash of the transaction, its block number, the amount of gas used, and, in case of deployment of a smart contract, the address of the contract. Earlier blockchain platforms suffered from performance issues. For this reason, researchers have proposed different consensus algorithms, such as Proof-of-Stake (PoS) and Delegated Proof-of-Stake (DPoS). One of the most relevant DPoS platforms is EOSIO <cit.>. The EOSIO smart contract model is composed of several concepts: * action, which represents a single operation for communicating between a smart contract and an account; * transaction, which is composed of several actions; * apply function, which dispatches action handlers for validating smart contracts. The function has a `code' parameter that is used to identify the account that authorized the contract; * eosio.token contract, a standard contract that can be used to transfer tokens with the "transfer" functionality; * dApp, a decentralized application that runs on a decentralized network. §.§.§ Vulnerabilities in smart contracts Smart contracts are affected by vulnerabilities that are strongly related to the smart contract domain and to the adopted blockchain platform. He et al. (2022) <cit.> provide a comprehensive survey of attacks and vulnerabilities in EOSIO systems. Table <ref> describes the most relevant vulnerabilities affecting smart contracts.
* Fake EOS Transfer: An improper apply function implementation does not verify the code parameter properly, hence allowing the execution of code logic under fake EOS tokens.
* Fake EOS Receipt: If the notification of a smart contract allows unsecured relays, an attacker may invoke a transfer in , notify an accomplice who relays the notification to the victim so that it passes the EOS checks, thereby tricking the victim into providing services to the attacker, such as transferring tokens.
* Fake EOS Notice: A variant of Fake EOS Transfer that exploits other parameters even if the system provides a valid implementation.
* Insecure Message Call: A message call where the authorization is not properly checked.
* Greedy Smart Contract: A greedy smart contract can receive ethers, but it contains no functions to send ethers out.
* Dangerous Delegatecall: The attacker manipulates the delegatecall so as to change the function called by the victim's contract.
* Block Information Dependency: The improper utilization of a block timestamp or block number for determining a critical operation. As these variables can be manipulated by miners, they cannot be considered reliable sources.
* Mishandled Exceptions: Improper exception handling can cause inconsistencies during a transaction.
* Reentrancy Vulnerability: Invoke in a reentrant manner a function that was not designed to be reentrant (a reentrant function is a function designed to be called recursively, or by two or more processes).
* Rollback: The attacker exploits an unsafe rollback operation on a blockchain system by conditionally reverting smart contracts. Commonly used in gambling dApps.
* Missing Permission Check: Permissions of sensitive operations are not properly validated.
* Pseudo Random Number Generators: Sensitive operations that trust an insecure implementation of a pseudo-random number generator can cause security issues.
* Suicidal: An unprotected interface allows the destruction of a victim contract.
* Call injection: An attacker is able to call a sensitive function.
Table: Most relevant vulnerabilities in smart contracts.
Section <ref> shows the techniques that have been devised to detect these kinds of attacks. §.§ Trusted Execution Environments and security enclaves As will be shown in Section <ref>, several WebAssembly use cases and enhancements entail an integration with processor security features. A trusted execution environment (TEE) is a hardened memory section of the processor aimed at protecting the confidentiality and integrity of both code and data <cit.>. One of the most utilized technologies that implement TEEs is Intel Software Guard Extensions (SGX) <cit.>. SGX defines the "security enclave" concept, i.e., a secure area where sensitive data and code can be executed. The approach offers scalability by allowing either an entire application or just a single function to be executed in the enclave. Developers can use an API to build secure applications based on Intel SGX. Intel also provides a Software Development Kit (SDK) to handle security enclaves in their code. § LITERATURE REVIEW PROCESS This section illustrates the process used to review the literature on security for WebAssembly. The review has followed a structured process inspired by the methodologies outlined in two relevant works: Webster and Watson (2022) <cit.> and Kitchenham and Brereton (2013) <cit.>.
The process proceeded through the following steps: * Automatic search: We performed an automatic search with specific keywords; * Cleaning: we removed duplicates obtained from the automatic search, and excluded out-of-scope works that did not satisfy the inclusion and exclusion criteria; * Backward and forward references review: for each included study, we analyzed both the citations and references to identify other relevant works; * Works classification: we performed a first examination to identify the primary contributions of each study and divided them into categories, as described in Table <ref>; * Works review: we reviewed the retrieved works in detail and developed the concepts presented in Sections <ref> and <ref>. Table <ref> reports the overall number of publications that we reviewed at each step of the aforementioned process. §.§ Automatic search We search for four relevant keyword pairs that relate security to WebAssembly: (i) “webassembly security”; (ii) “webassembly vulnerability”; (iii) “wasm security”; (iv) “wasm vulnerability”. We exclude other keywords, since a quick analysis shows that they basically provide almost equal results to the utilized keyword combinations. For instance, “cybersecurity” would provide the same results as “security”. The same holds true for the use of plural forms such as “vulnerabilities” instead of “vulnerability”. We use two relevant platforms containing peer-reviewed articles, i.e., IEEE Xplore <cit.> and Scopus <cit.>. After obtaining the works, we merge them and remove duplicates. Then, we apply the inclusion and exclusion criteria described further below, to select just in-scope works. Finally, we perform the backward and forward reference investigation to find other relevant works. §.§ Works' inclusion and exclusion criteria We only include works specifically focused on the security domain. However, we include into a secondary category named “others” those works that we consider essential sources for investigating new security research topics in WebAssembly, even if they are not specifically focused on security. We primarily consider peer-reviewed articles, but during the backward and forward reference review, we also include relevant literature from pre-print services <cit.>, as well as relevant technical papers from highly recognized international cyber-security conferences <cit.>. We exclude works not written in English and mention in Section <ref> works that explore security topics but do not provide satisfactory analyses, such as preliminary studies and poster papers. §.§ Works' classification After the document retrieval phase, we first analyze each work's objective to categorize and determine its primary contribution. We delineate several categories and organize items based on their contribution type. Then, we go deeper into each paper to scrutinize its technical details or gain a better understanding of any proposed case studies by exploring the features of other works. Finally, we synthesize our analysis, as detailed in Section <ref>. For the specific category of malware detection, we exclude studies that discover vulnerabilities without involving WebAssembly features, e.g., crypto mining detection through machine-learning models on network traffic <cit.>, or dynamic analysis focused on binaries not specifically generated by WebAssembly code <cit.>. In order to classify the reviewed works, we identify six main categories: * Security Analysis: papers that analyze the security of the WebAssembly ecosystem. 
These works are useful in providing a preliminary context of security flaws that affect WebAssembly programs; * Empirical studies: papers that investigate security aspects of WebAssembly by analyzing real-world scenarios. These works apport relevant empirical contributions; * Attack Scenarios: these papers illustrate approaches that leverage WebAssembly capabilities for conducting complex attacks; * Vulnerability Discovery: the contribution of these papers is to provide approaches to discover vulnerabilities in applications based on WebAssembly; * Security Enhancements: in these papers, authors aim to extend the WebAssembly components in order to reduce security flaws that are examined in security analysis works; * Use cases in security: in these papers, authors leverage WebAssembly features to increase the security aspects of a proposed solution, system, and/or framework; * Others: some relevant papers are not specifically focused on cybersecurity but can be considered as preliminary studies representing “cornerstones” for conducting novel security-related research activities. § RELATED WORKS Several works offer pertinent reviews on WebAssembly that do not explicitly center on security. WebAssembly runtimes are analyzed by Wang and Wenwen (2022) <cit.>. In their study, the authors review five predominant standalone WebAssembly runtimes and construct a benchmark suite for comparative purposes. They assert that performance is influenced by various factors and outline criteria for selecting the best runtime based on specific use cases. Additionally, they offer insights into enhancing runtime performance by including crucial information into compiled WebAssembly binaries. In the realm of cloud-edge computing <cit.>, Kakati et al. (2023) <cit.> provide a comprehensive review of the adoption of WebAssembly. The Internet of Things (IoT) is another relevant context where WebAssembly is widely adopted <cit.>. Ray and Partha Pratim (2023) <cit.> provide a comprehensive survey about the adoption of WebAssembly in IoT systems by illustrating the potential of WebAssembly characteristics, providing a valuable resource for developers through a comparison of WebAssembly tools and toolchains, and illustrating future insights for continuing this research field. §.§ Security studies Section <ref> delves into related works offering security analysis for WebAssembly compilers, runtimes, and the impacts of porting programs written in unsafe memory languages <cit.>. Notably, Kim et al. (2022) <cit.> and Harnes and Morrison (2024) <cit.> provide a comprehensive survey on techniques and methods for WebAssembly binary security. They also propose future research directions to address open problems identified in their studies, such as protection from malicious WebAssembly binaries and hardening the inherent security of WebAssembly. Tables <ref>, <ref>, and <ref> present a comparative analysis of reviews. Our study extends beyond the scope of previous works by encompassing various types of literature, including those focused on enhancing the security of WebAssembly components, utilizing WebAssembly for security purposes in various use cases, conducting empirical studies, and performing security analyses. Furthermore, we cover five additional categories, culminating in the analysis of  works. It is worth noting that we classified certain publications (<cit.>) under the “other” category instead of grouping them with related studies. 
Our decision was based on our analysis, which found that these publications lacked sufficient information to be classified as vulnerability discovery efforts. To the best of our knowledge, this is the first work providing a comprehensive review of security research in the WebAssembly domain. § WEBASSEMBLY AND SECURITY: A REVIEW This section presents the works analyzed in this study. As previously outlined, Table <ref> summarizes the proposed categories detailed in Section <ref>, and displays the number of analyzed studies for each category. §.§ Security Analysis Although WebAssembly has been designed with security in mind, relevant works underline that it can suffer from security issues. One significant study conducted by Lehmann et al. (2020) <cit.> highlights this concern. In their research, the authors demonstrate that certain classic vulnerabilities, which are mitigated by compiler system enhancements in native binaries, remain exploitable in WebAssembly. They conduct an extensive security analysis of WebAssembly's linear memory system, present examples of vulnerable applications, outline attack primitives, demonstrate end-to-end exploits for identified vulnerabilities, and propose possible mitigation strategies to harden WebAssembly binaries. In particular, the authors analyze the impact of classic attack primitives and categorize them into three main types: write operations (including stack-based buffer overflow <cit.>, stack overflows, and heap metadata corruptions), data overwrites (such as overwriting stack and heap variables, as well as “constant” data), and triggers for unexpected behavior (such as redirecting indirect calls, injecting code into the host environment, and overwriting application-specific data). They illustrate how these attack primitives can be combined to execute end-to-end attacks such as cross-site scripting, remote code execution, and arbitrary file writing. To mitigate the analyzed security flaws, the authors propose three possible approaches: extending the WebAssembly language, enhancing compilers and toolchains, and developing robust and safe libraries for use during development. Stievenart et al. (2021) <cit.> explore the security vulnerabilities stemming from the absence of protections in WebAssembly compilers. In their study, the authors compile 4.469 C programs containing known buffer overflow vulnerabilities (both stack-based and heap-based) to x86 code and WebAssembly. They reveal that 1.088 programs exhibit a different behavior when compiled to WebAssembly. In particular, WebAssembly is found to be more vulnerable to stack-smashing attacks due to the absence of security measures like stack canaries. As illustrated before, memory management in low-level programs is the primary cause of security flaws. Therefore, the most affected programming language is C <cit.>, as it provides the lowest abstraction level with respect to the underlying machine so that it can fully interact with the hardware. Stiévenart et al. (2022) <cit.> investigate the security implications of cross-compiling C programs to WebAssembly. In their study, the authors compile and execute 17.802 programs with common vulnerabilities for both 64-bit x86 and WebAssembly. They find that 4.911 binaries yield different results due to three main reasons: the absence of security measures in WebAssembly, different semantics of execution environments, and variations in used standard libraries. 
Critical differences affecting security include the implementation of and functions, and the lack of both stack-smashing and memory protection mechanisms. To address these issues, the authors suggest extending WebAssembly to incorporate memory semantics in C and C++ code. They also emphasize the importance of static analysis in identifying security flaws and highlight related works aiming to enhance the WebAssembly runtime and mitigate the security impacts <cit.>. One of the noteworthy resources for developers is the technical study conducted by Brian McFadden et al. (2018) <cit.>. In this work, the authors delve into various facets of WebAssembly security. In particular, they: * analyze Emscripten's implementation of compiler and linker-level exploit mitigation; * introduce new attack vectors and methods of exploitation in WebAssembly, demonstrating that buffer overflow or indirect function calls can lead to cross-site scripting or remote code execution attacks; * provide examples of memory corruption exploits in the WebAssembly environment; * outline best practices and security considerations for developers aiming to integrate WebAssembly into their products. §.§ Empirical Studies These works present empirical case studies to analyze a specific phenomenon in the WebAssembly ecosystem. Early studies analyze the diffusion of WebAssembly for executing cryptojacking attacks in the wild. Rüth et al. (2018) <cit.> introduce a new fingerprinting method aimed at identifying miners, which revealed a factor of 5,7 more miners compared to publicly available block lists. They also classify mining websites among 138 million domains sourced from the largest top-level domains and the Alexa top 1 million websites <cit.>. The authors demonstrate that the prevalence of browser mining is low, at less than 0,08% and that 75% of the mining sites utilize Coinhive, a web-based mining provider. Musch et al. (2019) <cit.> analyze websites from Alexa top one million ranking <cit.> in detail and find that 1 out of 600 websites uses WebAssembly. However, they reveal that 50% of all sites using WebAssembly employ it for mining and obfuscation purposes. Hilbig et al. (2021) <cit.> also analyze the usage of WebAssembly worldwide, presenting different results. They collect 8.461 WebAssembly binaries from a wide range of sources and conclude that: * 64,2% of the binaries are compiled from memory-unsafe languages, such as C and C++; * less than 1% of all binaries are related to crypto mining, with a greater diffusion of game-based, text processing, visualization, and media applications; * 29% of all binaries are minified, suggesting a need for approaches to decompile and reverse engineer WebAssembly. Romano et al. (2021) <cit.> conduct two empirical studies to perform a qualitative analysis of 146 bugs related to WebAssembly in Emscripten and a quantitative analysis of 1.054 bugs in three open-source compilers, namely, AssemblyScripts, Emscripten, and Rustc/Wasm-Bindgen. They draw the conclusion that sensitive information can be disclosed in Trusted Execution Environments (TEE). Puddu et al. (2022) <cit.> analyze the impacts of executing confidential code on top of a WebAssembly runtime within a TEE and show that using an intermediate representation such as WebAssembly can lead to code leakage. §.§ Use cases in cybersecurity WebAssembly is extensively used in heterogeneous domains where security is a crucial non-functional requirement. 
The following subsections summarize the most interesting contributions we found in the literature, by underlying the main research paths they have tried to follow. §.§.§ Browser extensions WebAssembly was designed to provide an assembly language for the web, especially for modern browsers. Therefore, there is extensive literature about realizing browser extensions with this technology. As WebAssembly is a portable language, all the works presented in this section could potentially be ported toward browser environments. However, the works that we classify as “Browser extensions” have specifically been realized to extend browser capabilities and increase portability, isolation, and efficiency. Baumgärtner et al. (2019) <cit.> present a Disruption-tolerant Networking (DTN) system for browsers that leverages WebAssembly and the Bundle Protocol 7 draft implementation <cit.>. Narayan et al. (2020) <cit.> define a new framework (RLBox) that implements fine-grained isolation for sandboxing third-party libraries. The framework has been integrated into production Firefox to sandbox the libGraphite font shaping library. Chen et al. (2023) <cit.> propose a novel approach for digital rights management of streaming. The work allows users to easily fetch videos with a plug-and-play approach and enhances decryption efficiency by minimizing interactions with the server. WebAssembly in the browser is employed by Jang et al. (2022) <cit.> to realize a distributed public collaborative fuzzing system for the security of applications emulated inside the browser engine. §.§.§ Cryptography, Trusted Execution Environments and Privacy Various works aim to safeguard data integrity and confidentiality through cryptographic techniques, Trusted Execution Environments, and privacy-preserving methods. These approaches are tailored to specific domains and can be implemented in hardware, embedded systems, or browsers. Sun et al. (2020) <cit.> leverage WebAssembly and the Web Cryptography API to devise a client-side encrypted storage system capable of storing and sharing data across heterogeneous platforms. Attrapadung et al. (2018) <cit.> use WebAssembly to implement an efficient two-level homomorphic public-key encryption within prime-order bilinear groups. Seo et al. (2023) <cit.> introduce a portable and efficient WebAssembly implementation of the Crystals-Kyber post-quantum Key Encapsulation Mechanism <cit.>. Zhao et al. (2023) <cit.> define a novel approach for constructing reusable, rapid, and secure enclaves. They introduce three techniques, namely, enclave snapshot and rewinding, nested attestation, and multi-layer intra-enclave compartmentalization, for enabling rapid enclave reset and robust security. WebAssembly is used to realize publish and subscribe environments for cloud-edge computing. Both Almstedt et al. (2023) <cit.> and Ménétrey et al. (2024) <cit.> design a novel publish and subscribe middleware running in Intel SGX for trustworthy and distributed communications. Qiang et al. (2018) <cit.> utilize the sandbox properties of WebAssembly combined with Intel SGX enclaves to develop a two-way sandbox. This approach offers dual protection: safeguarding sensitive data from potential threats posed by the cloud provider and shielding the host runtime from potential attacks initiated by malicious users. Other works show that WebAssembly can be used to implement a performant, memory-safe, and portable client-side-hashing library (Riera et al. 
(2023) <cit.>), or even a fully-fledged client-side storage platform (Sun et al. (2022) <cit.>). §.§.§ Identity and Access Management Another integration of Intel SGX and WebAssembly is provided by Goltzsche et al. (2019) <cit.>. In their work, the authors present AccTEE, a framework for realizing fine-grained permissions for resource accounting by preserving the confidentiality and integrity of code and data. Also, Zhang et al. (2021) <cit.> propose EL PASSO, a portable and “zero-configuration” module to allow the setup and configuration of privacy-preserving single-sign-on operations. §.§.§ Cloud, cloud-edge computing and IoT Cloud and cloud-edge computing are common use cases of WebAssembly adoption <cit.>. Edgedancer <cit.> is a platform for supporting portable, provider-independent, and secure migration of edge services. Ménétrey et al. (2022) <cit.> envisage that WebAssembly with TEEs is a promising combination to realize cloud-edge continuum <cit.>. Sun et al. (2022) <cit.> primarily focus on the browser domain as a platform aimed at realizing a secure cloud storage environment. This subject is also addressed by Song et al. (2023) <cit.>, who provide a cross-platform secure storage architecture through the realization of a new primitive called Controllable Outsourced Attribute-Based Proxy Re-Encryption (COAB-PRE) alongside the cross-platform capabilities of WebAssembly. Although WebAssembly provides several security benefits, performance overheads should be investigated when it is adopted in contexts with constrained resources, such as IoT and edge devices. Wen et al. (2020) <cit.> address this problem by defining a novel operating system that integrates Rust and WebAssembly to make IoT applications more efficient and secure. A similar approach is also proposed by Li and Sato (2023) <cit.>, who implement a kernel focused on isolation, security, and customizability through the combination of WebAssembly and Rust. §.§.§ Face recognition systems Preliminary work has been conducted on using WebAssembly to realize face recognition systems. Indeed, WebAssembly ensures a client-side solution that preserves image privacy and could potentially reduce the costs of adopting external services. Pillay et al. (2019) <cit.> provide a real-time browser-based application that is able to attain a facial recognition rate of 91.67%, while Martín Manso et al. (2021) <cit.> compare JavaScript (CPU), WebGL and WebAssembly for face detection, and conclude that WebAssembly strikes a good balance in terms of detection accuracy and time overheads. §.§ Attack scenarios Another research field deals with innovative approaches for conducting attacks against WebAssembly programs. Most articles fall under two primary categories: * Side-channel attacks: these works demonstrate how it is possible to conduct side-channel attacks to disclose sensitive keys or realize covert channels <cit.>; * Evasion techniques for malware detection: these works illustrate in which way it is possible to bypass malware detection systems through WebAssembly <cit.>. Oz et al. (2023) <cit.> provide a divergent work by showing how it is possible to leverage WebAssembly to implement ransomware attacks <cit.>. Table <ref> summarizes the above information. §.§.§ Side-channel attacks Genkin et al. (2018) <cit.> utilize WebAssembly to induce CPU cache contention and extract sensitive data from other programs' memory accesses. They demonstrate that this technique can extract sensitive keys from famous cryptographic libraries. 
An innovative cache-based attack is also illustrated in the work of Katzman et al. (2023) <cit.>, where the authors conclude that transient execution can increase the effectiveness of cache attacks. Easdon et al. (2022) <cit.> leverage WebAssembly features to demonstrate a novel technique for prototyping microarchitectural attacks. They achieve this by implementing two open-source frameworks, libtea and SCFirefox, and developing proofs of concept for executing Foreshadow <cit.> and Load Value Injection (LVI) <cit.> attacks. Rokicki et al. (2022) <cit.> introduce the first port contention side-channel attack conducted entirely within the browser environment. Their study demonstrates that certain WebAssembly instructions can induce port contention attacks, enabling the extraction of sensitive data from concurrent processes executing on the same CPU core.
§.§.§ Malware detection evasion Other works propose innovative techniques to evade malware detection in WebAssembly binaries. Cabrera-Arteaga et al. (2023) <cit.> analyze the potential of WebAssembly binary diversification to circumvent detection by antivirus systems. On the other hand, Loose et al. (2023) <cit.> introduce a novel approach to embed adversarial payloads into the instruction stream of binaries, thereby bypassing malware detection systems. Additionally, Romano et al. (2022) <cit.> demonstrate the feasibility of obfuscating JavaScript to evade malware detection systems by converting specific JavaScript instructions into WebAssembly. Cao et al. (2023) <cit.> develop a framework for static binary rewriting, which can be used for various purposes such as obfuscation. Although the primary focus is not solely on evading malware detection systems, the framework can be adapted for that purpose, as well as for patching vulnerabilities or binary instrumentation.
§.§ Attack detection solutions These works focus on detecting attacks in WebAssembly programs, primarily targeting cryptojacking attacks. As shown in Table <ref>, current research is essentially focused on detecting crypto-mining activities facilitated by WebAssembly technology.
* Konoth et al. (2018) <cit.>: runtime detection based on three approaches discovered through static analysis of malicious WebAssembly binaries. Detects: cryptojacking.
* Wang et al. (2018) <cit.>: runtime monitoring of execution through in-line scripts. Detects: cryptojacking.
* Rodriguez and Possega (2018) <cit.>: analyze WebAssembly and JavaScript APIs and train a Support Vector Machine (SVM); show the feasibility of deploying the solution as a browser extension. Detects: cryptojacking.
* Kharraz et al. (2019) <cit.>: Support Vector Machine classifier. Detects: cryptojacking.
* Bian et al. (2020) <cit.>: runtime detection. Detects: cryptojacking.
* Kelton et al. (2020) <cit.>: runtime detection through a Support Vector Machine (SVM) classifier. Detects: cryptojacking.
* Mazaheri et al. (2020) <cit.>: static analysis of web-page features that indicate the presence of time-based side-channel attacks. Detects: side-channel attacks.
* Romano et al. (2020) <cit.>: static analysis, reconstructing binaries in "standard" WebAssembly format and analyzing the generated intra-procedural and inter-procedural Control-Flow Graphs (CFGs). Detects: cryptojacking.
* Yu et al. (2020) <cit.>: static analysis with machine learning combined with runtime detection; provides both detection and blocking capabilities. Detects: cryptojacking.
* Naseem et al. (2021) <cit.>: runtime detection leveraging a Convolutional Neural Network (CNN) model trained with a comprehensive dataset of current malicious and benign WebAssembly binaries. Detects: cryptojacking.
* Tommasi et al. (2022) <cit.>: runtime detection through a browser extension based on a Support Vector Machine (SVM) classifier. Detects: cryptojacking.
* Breitfelder et al. (2023) <cit.>: static WebAssembly analysis able to capture call-, control-, and data-flow graphs for further analysis; attack detection is one of the discussed use cases. Detects: cryptojacking.
Table: Works about detecting attacks against WebAssembly applications.
§.§ Security enhancements Several studies have extended WebAssembly components or integrated new systems to enhance their security level. Some authors have extended the semantics of the WebAssembly language to include protection against various threats. For instance, Watt et al. (2019) <cit.> propose CT-Wasm, a type-driven secure language for writing cryptographic libraries to protect against time-based side-channel attacks. Similarly, Michael et al. (2023) <cit.> focus on protecting WebAssembly program memory by introducing MSWasm, an extension of the WebAssembly semantics that addresses memory-unsafety vulnerabilities. They introduce a new memory region called segment memory and new instructions for interacting with it. Additionally, Geller et al. (2024) <cit.> extend the WebAssembly semantics to define indexed types <cit.> to ensure safety properties over program values. Another language extension aimed at ensuring memory safety is proposed by Zhang et al. (2023) <cit.>, who allow the definition of canaries in the program. Several works modify the WebAssembly compiler to formally verify security properties <cit.> or reduce the attack surface in embedded systems <cit.>. Others focus on enhancing the WebAssembly runtime environment to create a trusted execution environment that increases isolation and prevents memory attacks during execution. In order to protect the execution of WebAssembly programs, several authors have designed additional mechanisms. For example, Sun et al. (2019) <cit.> propose SELWasm, a code protection technique that implements a self-checking mechanism to prevent code execution on unauthorized websites, along with an "Encryption & Decryption" approach for ensuring code confidentiality. Similarly, Brandl et al. (2023) <cit.> present a novel interpreter for WebAssembly capable of performing complex analyses, such as the collection of inter-procedural control- and data-flow information. This study can be considered a foundation for future static security analysis approaches. Additionally, Pop et al. (2022) <cit.> propose an interpreter that extends WASMI <cit.> by adding pausing, serialization, deserialization, and resumption primitives for implementing a portable enclave service. Other works explore novel approaches to obfuscate WebAssembly binaries, aiming to increase protection against reverse engineering or to highlight the adversarial capabilities of WebAssembly in evading malware detection systems. A further line of work introduces verification of programs after compilation. Watt (2018) <cit.> mechanizes and verifies the WebAssembly specification using the theorem prover Isabelle <cit.>, providing two demonstrations that implement a verified type-checker and a verified interpreter for WebAssembly. This work is extended by Watt et al. (2023) <cit.>, who define a WebAssembly interpreter written in Isabelle/HOL <cit.> to validate the Wasmtime interpreter <cit.>. Johnson et al.
(2021) <cit.> introduce VeriWasm, a static offline verifier for native x86-64 binaries compiled from WebAssembly. The verifier checks the boundaries of memory accesses, control flows, and stack usage, ensuring that Software Fault Isolation (SFI) <cit.> is satisfied in the binaries. Another area of research focuses on improving malware detection in WebAssembly binaries. Xia et al. (2024) <cit.> highlight the issue of emerging JavaScript-WebAssembly multilingual malware (JWMM), which reduces the effectiveness of antivirus solutions. The authors introduce a novel approach to semantically reconstruct WebAssembly binaries, generating a high-level structure aimed at enhancing antivirus detection capabilities. Watt et al. (2021) <cit.> extend the WebAssembly semantics and provide two mechanizations of the specification: WasmCert-Isabelle and WasmCert-Coq.
§.§.§ Works that enhance WebAssembly indirectly Some works do not extend WebAssembly components directly but address security issues by enhancing other systems. Schrammel et al. (2020) <cit.> improve WebAssembly security indirectly by enhancing the JavaScript (JS) V8 engine. The authors underline that the JS engine compiles some JavaScript parts in WebAssembly and identify several potential attacks against the compiled code. They propose a JS engine extension that provides isolation mechanisms using modern CPUs to enforce domain-based access controls on memory pages. Vassena et al. (2021) <cit.> introduce Blade, an approach to automatically eliminate speculative leaks from cryptographic code by extending "Cranelift" <cit.>, a WebAssembly to x86 compiler. Song et al. (2023) <cit.> propose an interesting method for blocking heap memory corruption attacks by shadowing metadata from linear memory in trusted JavaScript memory. They provide a JavaScript API that WebAssembly programs can call to shadow and validate the metadata of the used linear memory.
§.§ Vulnerability Discovery Several works aim to discover vulnerabilities in WebAssembly binaries. Tables <ref> and <ref> categorize these efforts into two groups: (i) works that leverage static security analysis and (ii) works proposing dynamic analysis. Each approach is implemented with various techniques, allowing the discovery of different types of vulnerabilities. Many studies focus on identifying issues in smart contracts. This clearly indicates a significant adoption of WebAssembly in the blockchain domain.
* Quan et al. (2019) <cit.>: control-flow graph analysis. Discovered vulnerabilities: fake EOS transfer, fake EOS notice.
* Yang et al. (2020) <cit.>: symbolic semantic graph. Discovered vulnerabilities: integer overflow, pseudo-random number generator, insecure message call.
* Li et al. (2022) <cit.>: context-sensitive data flow analysis. Discovered vulnerabilities: fake EOS transfer, forged transfer notification, and block information dependency.
* Brito et al. (2022) <cit.>: code property graph. Discovered vulnerabilities: buffer overflow, use after free, double free, dangerous function usage, and format string vulnerabilities.
* Tu et al. (2023) <cit.>: state machine representation. Discovered vulnerabilities: rollback, missing permission check, integer overflow.
Table: Vulnerability discovery works that propose a static analysis approach.
* Huang et al. (2021) <cit.>: black-box fuzzer. Detected vulnerabilities: block information dependency, forged transfer notification, fake EOS transfer.
* Jiang et al. (2021) <cit.>: symbolic execution. Detected vulnerabilities: greedy, dangerous DelegateCall, block information dependency, reentrancy, mishandled exception.
* He et al. (2021) <cit.>: symbolic execution. Detected vulnerabilities: fake EOS, fake receipt, rollback, missing permission check.
* Lehmann et al. (2021) <cit.>: fuzzing. Detected vulnerabilities: crashes in binary programs, mainly related to buffer overflow conditions.
* Marques et al. (2022) <cit.>: concolic execution. Detected vulnerabilities: not specified.
* Yang et al. (2022) <cit.>: tracing and control-data flow. Detected vulnerabilities: access control vulnerabilities.
* Khan et al. (2023) <cit.>: fuzzing. Detected vulnerabilities: data races, null pointer dereferences, out-of-bound accesses, division-by-zero errors in SGX enclave applications.
* Jin et al. (2023) <cit.>: fuzzing. Detected vulnerabilities: overflow, underflow, arbitrary value transfer (e.g., reentrancy), suicidal, call injection.
* Haßler et al. (2022) <cit.>: coverage-guided fuzzing. Detected vulnerabilities: crashes in binary programs.
* Chen et al. (2022) <cit.>: concolic execution. Detected vulnerabilities: fake EOS transfer, fake notification, missing authorization verification, blockinfo dependency, and rollback vulnerabilities.
* Zhou et al. (2023) <cit.>: bytecode instrumentation, run-time validation, and grey-box fuzzing. Detected vulnerabilities: integer and stack overflow.
Table: Vulnerability discovery works that propose a dynamic analysis approach.
A comparative analysis of the mentioned works allows us to observe that there is no standardized approach to comparing the effectiveness of the proposed methods. While some works focus on identifying low-level vulnerabilities such as integer and stack-based overflows, others target vulnerabilities specific to the smart contract domain. Some authors use real-world applications for their evaluations, while others employ well-known yet heterogeneous datasets. Evaluating real-world applications effectively demonstrates the capability of security scanners to discover new vulnerabilities. However, this approach does not yield performance metrics such as false positives or negatives, as the exact number of vulnerabilities in the applications under test is unknown. Consequently, each author defines different performance metrics, making it challenging to compare the proposed methods. This variability in evaluation metrics is a common issue in vulnerability discovery approaches <cit.>, and we encourage future research to address this open problem.
§.§ Other works Many works focus on aspects of WebAssembly beyond security but are still relevant as they can provide valuable insights for future efforts aimed at enhancing the security of the WebAssembly domain.
§.§.§ Preliminary works Some preliminary studies offer promising results that may serve as a foundation for innovative security improvements. Abbadini et al. (2023) <cit.> design an approach that combines eBPF features with WebAssembly to enhance host isolation security in existing runtimes. Narayan et al. (2021) <cit.> provide a tutorial on leveraging WebAssembly's sandboxing feature to run unsafe C code securely. Stievenart and De Roover (2021) <cit.> introduce "Wassail", one of the first static analysis tools for WebAssembly. Their technology enables several static analyses, such as call graph construction, control-flow graph construction, and dataflow analysis. Cabrera et al. (2021) <cit.> present the first version of a code diversification framework for WebAssembly, which is further extended in a more recent study <cit.>. Bhansali et al. (2022) <cit.> provide a preliminary overview of applying obfuscation techniques to WebAssembly binaries, demonstrating that specific obfuscation approaches can bypass the MINOS cryptojacking detector (Naseem et al. (2021) <cit.>). Zheng et al.
(2023) <cit.> present a software prototype for dynamic program analysis capable of detecting memory bugs and integer overflows in WebAssembly code, serving as an essential reference for investigating vulnerability discovery in this domain. §.§.§ Works about vulnerability discovery approaches Many works investigate novel approaches to inspect WebAssembly code and binary structure, laying the groundwork for future vulnerability discovery research. William Fu et al. (2018) <cit.> and Szanto et al. (2018) <cit.> provide comprehensive overviews of taint tracking <cit.> in WebAssembly. Both studies demonstrate promising results in tracking data flows within WebAssembly binaries but do not provide detailed security evaluations. Therefore, further research is needed to assess their effectiveness in real-world scenarios. Stievenart et al. (2022) <cit.> and Stiévenart et al. (2023) <cit.> introduce methods to minimize WebAssembly binaries through slicing. These approaches are useful for optimizing dynamic and dependency analysis of WebAssembly binaries. Cabrera-Arteaga et al. (2024) <cit.> demonstrate how to automatically transform a WebAssembly binary into variants without altering its original functionality. This approach is effective for fuzzing WebAssembly compilers. Other works can contribute to the realization of static analysis approaches. Notable examples are Wasm Logic, a program logic for first-order WebAssembly proposed by Watt et al. (2019) <cit.>, and “Eunomia” (He et al. (2023) <cit.>), a symbolic execution engine for WebAssembly. Stievenart et al. (2020) <cit.> present a novel procedure based on static analysis of WebAssembly programs based on compositional information flow. This method enables an efficient and precise summarization of function behaviors, facilitating the detection of information flow violations and potential security breaches. Lehmann et al. (2019) <cit.> contribute to the dynamic analysis of WebAssembly with Wasabi, a framework designed to dynamically analyze WebAssembly programs to collect runtime information, enabling the detection of dynamic behaviors such as memory accesses and control flow. An interesting WebAssembly use case is proposed by Kotenko et al. (2023) <cit.>, who devise practical guidelines for implementing secure applications in the power energy sector. Even if WebAssembly is merely mentioned, this work can pave the ground for future research. Namjoshi et al. (2021) <cit.> define an interesting compiler extension that checks for program correctness against an independent proof validator. Although the proposed contribution is related to revealing bugs that occurred in the optimization process of compiled programs, it could be easily broadened in order to also include security verifications. To identify bugs in WebAssembly runtimes, Zhou et al. (2023) <cit.> propose WADIFF, a differential testing framework that could be specialized to find security bugs. Debugging code remains an effective approach to manually discover vulnerabilities <cit.>. Lauwaerts et al. (2022) <cit.> extend the WARDuino WebAssembly microcontroller virtual machine <cit.> to enable debugging features. Future works may extend this by proposing security methodologies to find vulnerabilities in embedded systems designed with WebAssembly through debugging approaches. §.§.§ Comparison with other works Another category of works compares WebAssembly security with other technologies. Dejaeghere et al. 
(2023) <cit.> compare the security features of WebAssembly with those provided by eBPF, a Linux subsystem that allows the safe execution of untrusted user-defined extensions inside the kernel <cit.>. They demonstrate that different threat models can be defined for these two technologies and emphasize that WebAssembly's design focuses more on security than performance. While WebAssembly permits the execution of unsafe code, unlike eBPF, it reduces security risks and exploitation opportunities by limiting the number of implemented helper system libraries. In their study, Pham et al. (2023) <cit.> assess several non-functional requirements, including performance, maintainability, and security. They demonstrate that WebAssembly is a viable alternative to Docker containers in IoT applications. However, the security evaluation is not exhaustive, suggesting the need for further studies to analyze the security disparities between these technologies. Bastys et al. (2022) <cit.> present a significant contribution with their SecWasm framework, designed for general-purpose information-flow control in WebAssembly. SecWasm can be instrumented to perform several relevant security controls against WebAssembly programs. While the paper provides a comprehensive description of the approach, it lacks an empirical evaluation. Future research could explore and analyze this promising approach further. In the realm of kernel isolation, Peng et al. (2023) <cit.> introduce a novel kernel context isolation system that offers enhanced isolation functionality and features an efficient "implicit" context switch to minimize performance overhead. Although the paper compares the system to WebAssembly to demonstrate isolation benefits, further evaluation could enhance understanding.
§ DISCUSSION In this section, we discuss the results of our survey and summarize the characteristics of the analyzed works. Our research reviewed  works. Of these,  (85.85%) have been classified into seven categories. The remaining  (14.15%) works do not make significant contributions to the security domain in WebAssembly, yet they represent relevant references for novel pathways in related research topics. Figure <ref> shows the percentage of works in each category. As can be observed, most contributions (28.9%) aim to harden WebAssembly security. Another large set of works (21.6%) leverages WebAssembly's capabilities for implementing security-based use cases. The other relevant areas are vulnerability discovery (16.5%) and attack detection (12.4%).
§.§ WebAssembly security Studies that explore the security of WebAssembly through analyses, empirical studies, or by showing attack scenarios led to the following primary conclusions: * Although WebAssembly was designed to provide better isolation and security, the lack of security protection mechanisms in its semantics and compilers has the opposite effect, increasing the risks of memory-based attacks by porting unsafe programs developed with low-level languages such as C into WebAssembly; * This problem is amplified by the presence of a large number of WebAssembly binaries derived from memory-unsafe languages in real-world applications; * WebAssembly should provide isolation and protect sensitive data from disclosure, but several studies confirm that it is possible to extract sensitive data through specific techniques, such as side-channel attacks based on cache analysis or port contention; * A significant number of WebAssembly binaries in the wild are obfuscated, underscoring the need to expand research into reverse engineering and WebAssembly decompilation; * The obfuscation applied to WebAssembly has led to a performance reduction in malware detection systems, highlighting the importance of exploring the research field of WebAssembly binary reverse engineering; * The relevance of crypto-mining activities based on WebAssembly in the world is a contested matter: while some studies underline that it is the primary usage of such technology, others state that less than 1% of the analyzed binaries are related to crypto-mining activities. §.§ Detect attacks and discover vulnerabilities in WebAssembly Detection efforts primarily focus on cryptojacking attacks, while vulnerability discovery approaches mainly address vulnerabilities in smart contracts. Consequently, there is significant interest in analyzing WebAssembly applications in smart contract environments. As stated in Section <ref>, there are relevant use-cases in web applications, data visualization, and gaming. Future works should analyze the security impact of WebAssembly in these other domains and propose approaches for addressing security problems. Another significant issue is the heterogeneity of used benchmarks and performance metrics. Some authors merely analyze their accuracy, and no common dataset is used for trials and experiments. The Alexa top one million website benchmark <cit.> was extensively employed as “ground truth” for representing real-world WebAssembly cases. Unfortunately, such a source was retired in 2022 and is unusable for further studies. We encourage researchers to expand their scope by including additional sources to create a more reliable ground truth benchmark. §.§ The interest of WebAssembly for cybersecurity use-cases Figure <ref> shows the number of works published for each security-related domain and the reasons behind the WebAssembly adoption. As observed, WebAssembly is utilized across various application types, with particular relevance in cloud and cryptography domains. WebAssembly is chosen for several reasons, notably performance, isolation, and portability. Some studies also highlight its adoption to ensure memory safety. However, as previously stated, several studies confirm security gaps in using WebAssembly for both isolation and memory safety, underscoring the necessity for further investigation into the security of these applications. 
§.§ Enhance WebAssembly security Figure <ref> reports a heatmap illustrating the number of works extending specific components of WebAssembly to enhance their security properties. What can be inferred from the heat map is that most of the works focus on memory operations and involve modifications to the compiler and runtime. As regards testability, it is analyzed through formal verification methods. Indeed, many improvements could be made through the interpreter since it has less impact compared to changes in the runtime and compiler, thus providing a potential direction for future works. Additionally, confidentiality and code signing are other elements that need to be further explored. §.§ Results availability For each security category, we analyze whether the authors of relevant works have made available the source code used for the experiments they carried out. The results of this analysis are condensed in Table <ref>. A detailed list of the publicly available repositories we were able to check is instead reported in Table <ref> in the Appendix. As it is possible to observe, a significant number of works have publicly released their code. This open-source release of data and code will enable researchers to expand the literature on WebAssembly more rapidly and develop novel approaches in the field. We hope that the collection provided in this work will further boost research advancements. §.§ Research gaps in WebAssembly and security Rethinking the aforementioned considerations, we can summarize the following research gaps: * WebAssembly has been thoroughly analyzed in the real world, especially through the Alexa <cit.> benchmark, which was retired in 2022. Future studies should investigate additional sources to confirm the empirical results of published works; * Empirical studies do not address the adoption of WebAssembly on the dark web. Given that this technology is used for malicious activities such as evading malware detection and crypto-mining, exploring the relationship between WebAssembly and the dark web could be a compelling research direction. * Crypto-mining and smart contracts are highly relevant in WebAssembly research, with different approaches proposed for detecting attacks and discovering vulnerabilities in these fields. However, empirical studies indicate that WebAssembly is also applied to other types of applications. More studies should investigate this relevance and potentially promote research in unexplored fields where WebAssembly is prominently employed. * WebAssembly is adopted for various security use cases for reasons such as performance, portability, and isolation. However, other interesting applications could be examined. One such application is related to cyber ranges, i.e., realistic offensive and defensive scenarios where people in the security field can be trained <cit.>. * Almost all security analysis works assess the security of C programs compiled for WebAssembly. However, WebAssembly developers write programs in other languages, such as JavaScript and Python. Additionally, the Rust language is now a memory-safe alternative to C. We suggest expanding security research to explore these languages. § CONCLUSIONS In this work, we provide a comprehensive overview of the current state of security research in WebAssembly, highlighting key studies that address various security aspects of this emerging technology. 
We conducted a thorough review of the most recent literature, analyzing a total of  studies, categorizing them into eight distinct categories, and critically evaluating the results. Our analysis reveals significant research gaps in the WebAssembly security field, which we believe future studies should aim to address. Specifically, we identify the lack of exploration of cyber-ranges as a promising yet underutilized security use case in WebAssembly, which has the potential to revolutionize the way we approach security testing and training. To contribute to this area, we plan to investigate the benefits of using WebAssembly in cyber-ranges by comparing our previous research <cit.> with a cyber-range solution developed entirely using this technology. We also highlight the potential for WebAssembly to enable more efficient and effective security testing and discuss the importance of considering the security implications of WebAssembly's unique features, such as its sandboxed execution environment and memory safety guarantees. We hope that this work will be useful to researchers seeking to make a meaningful impact in the challenging field of enhancing the security features of WebAssembly and provide valuable insights for developers and practitioners looking to leverage WebAssembly for their security applications.
§ APPENDIX: PUBLICLY AVAILABLE SOURCE CODE REPOSITORIES
Security analysis:
* Lehmann et al. (2020) <cit.>: <https://github.com/sola-st/wasm-binary-security>
* Stiévenart et al. (2022) <cit.>: <https://figshare.com/articles/dataset/SAC_2022_Dataset/17297477>
Empirical studies:
* Romano et al. (2021) <cit.>: <https://github.com/sola-st/WasmBench>
* Hilbig et al. (2021) <cit.>: <https://wasm-compiler-bugs.github.io/>
* Konoth et al. (2018) <cit.>: <https://github.com/vusec/minesweeper>
* Tsoupidi et al. (2021) <cit.>: <https://github.com/romits800/Vivienne>
Use case in cybersecurity:
* Narayan et al. (2020) <cit.>: <https://github.com/PLSysSec/rlbox-usenix2020-aec>
* Izquierdo Riera et al. (2023) <cit.>: <https://github.com/clipaha/clipaha>
* Goltzsche et al. (2019) <cit.>: <https://github.com/ibr-ds/AccTEE>
* Ménétrey et al. (2024) <cit.>: <https://github.com/JamesMenetrey/unine-opodis2023>
* Attrapadung et al. (2018) <cit.>: <https://github.com/herumi/mcl>
* Baumgärtner et al. (2019) <cit.>: <https://github.com/stg-tud/bp7eval>
Attack scenarios:
* Cabrera-Arteaga et al. (2023) <cit.>: <https://github.com/ASSERT-KTH/wasm_evasion>
* Easdon et al. (2022) <cit.>: <https://github.com/libtea/frameworks>
* Oz et al. (2023) <cit.>: <https://github.com/cslfiu/RoB_Ransomware_over_Modern_Web_Browsers>
Attacks' detection:
* Kharraz et al. (2019) <cit.>: <https://github.com/teamnsrg/outguard>
* Bian et al. (2020) <cit.>: <https://github.com/cuhk-seclab/MineThrottle>
* Romano et al. (2020) <cit.>: <https://miner-ray.github.io>
Security enhancements:
* Peach et al. (2020) <cit.>: <https://github.com/gwsystems/awsm/>
* Zhang et al. (2021) <cit.>: <https://github.com/Zhiyi-Zhang/PS-Signature-and-EL-PASSO>
* Vassena et al. (2021) <cit.>: <https://github.com/PLSysSec/blade>
* Narayan et al. (2021a) <cit.>: <https://github.com/PLSysSec/swivel>
* Lei et al. (2023) <cit.>: <https://github.com/PKU-ASAL/PKUWA>
* Watt et al. (2023) <cit.>: <https://github.com/WasmCert/WasmCert-Isabelle>
* Menetrey et al. (2021) <cit.>: <https://github.com/jamesmenetrey/unine-twine>
* Geller et al. (2024) <cit.>: <https://dl.acm.org/do/10.1145/3580426/full/>
* Johnson et al. (2023) <cit.>: <https://github.com/PLSysSec/wave>
* Narayan et al. (2023) <cit.>: <https://github.com/PLSysSec/hfi-root>
* Watt et al. (2019) <cit.>: <https://github.com/PLSysSec/ct-wasm>
* Romano et al. (2022) <cit.>: <https://github.com/js2wasm-obfuscator/translator>
* Jay Bosamiya et al. (2022) <cit.>: <https://github.com/secure-foundations/provably-safe-sandboxing-wasm-usenix22>
* Michael et al. (2023) <cit.>: <https://github.com/PLSysSec/ms-wasm>
* Watt et al. (2021) <cit.>: <https://github.com/WasmCert>
* Menetrey et al. (2022) <cit.>: <https://github.com/JamesMenetrey/unine-watz>
* Schrammel et al. (2020) <cit.>: <https://github.com/IAIK/Donky>
Vulnerability discovery:
* Khan et al. (2023) <cit.>: <https://github.com/purseclab/FuzzSGX>
* Li et al. (2022) <cit.>: <https://github.com/lwy0518/datasets_results> (the work only provides the dataset and results, but no source code)
* Jiang et al. (2021) <cit.>: <https://github.com/gongbell/WANA>
* Brito et al. (2022) <cit.>: <https://github.com/wasmati/wasmati>
* Lehmann et al. (2021) <cit.>: <https://github.com/fuzzm/fuzzm-project>
* Haßler and Maier (2022) <cit.>: <https://github.com/fgsect/WAFL>
* Quan et al. (2019) <cit.>: <https://github.com/EVulHunter/EVulHunter>
* Chen et al. (2022) <cit.>: <https://github.com/wasai-project/wasai>
* Zhou and Chen (2023) <cit.>: <https://github.com/HuskiesUESTC/AntFuzzer-WASMOD>
Other works:
* William Fu et al. (2018) <cit.>: <https://github.com/wfus/WebAssembly-Taint>
* Namjoshi et al. (2021) <cit.>: <https://github.com/nokia/web-assembly-self-certifying-compilation-framework>
* Narayan et al. (2021) <cit.>: <https://rlbox.dev/>
* Katherine Hough and Jonathan Bell (2021) <cit.>: <https://doi.org/10.6084/m9.figshare.16611424.v1>
* Stievenart et al. (2021,2022) <cit.>: <https://github.com/acieroid/wassail>
* Stiévenart et al. (2023) <cit.>: <https://doi.org/10.5281/zenodo.8157269>
* Cabrera Arteaga et al. (2021) <cit.>: <https://github.com/ASSERT-KTH/slumps/tree/master/crow>
* He et al. (2023) <cit.>: <https://github.com/PKU-ASAL/Eunomia-ISSTA23>
* Stievenart et al. (2020) <cit.>: <https://github.com/acieroid/wassail/releases/tag/scam2020>
* Lehmann et al. (2019) <cit.>: <http://wasabi.software-lab.org/>
* Bastys et al. (2022) <cit.>: <https://github.com/womeier/secwasm>
* Zhou et al. (2023) <cit.>: <https://github.com/erxiaozhou/WaDiff>
* Cao et al. (2023) <cit.>: <https://github.com/security-pride/BREWasm>
* Dejaeghere et al. (2023) <cit.>: <https://gist.github.com/Rekindle2023/ca6a072205698925fa80f928eebe172e>
* Peng et al. (2023) <cit.>: <https://github.com/rssys/uswitch>
* Cabrera-Arteaga et al. (2024) <cit.>: <https://github.com/bytecodealliance/wasm-tools/tree/main/crates/wasm-mutate>
Table: Publicly available source code of the analyzed works. All the provided links were valid in May 2024. Other works are not included in the analysis.
http://arxiv.org/abs/2407.12456v1
20240717100545
Seasonal Changes in the Atmosphere of HD 80606b Observed with JWST's NIRSpec/G395H
[ "James T. Sikora", "Jason F. Rowe", "Jared Splinter", "Saugata Barat", "Lisa Dang", "Nicolas B. Cowan", "Thomas Barclay", "Knicole D. Colón", "Jean-Michel Désert", "Stephen R. Kane", "Joe Llama", "Hinna Shivkumar", "Keivan G. Stassun", "Elisa V. Quintana" ]
astro-ph.EP
[ "astro-ph.EP" ]
Corresponding author: James T. Sikora (james.t.sikora@gmail.com), Lowell Observatory, 1400 W Mars Hill Road, Flagstaff, AZ 86001, USA.
Author affiliations: Lowell Observatory, Flagstaff, AZ, USA; Anton Pannekoek Institute for Astronomy, University of Amsterdam, The Netherlands; Department of Physics & Astronomy, Bishop's University, Sherbrooke, QC, Canada; Department of Earth and Planetary Sciences, McGill University, Montréal, QC, Canada; Institut Trottier de recherche sur les exoplanètes, Université de Montréal, QC, Canada; Department of Physics, McGill University, Montréal, QC, Canada; NASA Goddard Space Flight Center, Greenbelt, MD, USA; Department of Earth and Planetary Sciences, University of California, Riverside, CA, USA; Department of Physics and Astronomy, Vanderbilt University, Nashville, TN, USA.
§ ABSTRACT High-eccentricity gas giant planets serve as unique laboratories for studying the thermal and chemical properties of H/He-dominated atmospheres. One of the most extreme cases is HD 80606b, a hot Jupiter orbiting a Sun-like star with an eccentricity of 0.93, which experiences an increase in incident flux of nearly three orders of magnitude as the star-planet separation decreases from 0.88 au at apoastron to 0.03 au at periastron. We observed the planet's periastron passage using JWST's NIRSpec/G395H instrument (2.8-5.2 μm) during a 21 hr window centered on the eclipse. We find that, as the planet passes through periastron, its emission spectrum transitions from a featureless blackbody to one in which CO and CH_4 absorption features are visible. We obtain significant detections of CH_4 during post-periapse phases at 3.7-4.8σ depending on the phase. Following periapse, CO and H_2O are also detected at 3.4σ and 3.1σ, respectively. Furthermore, we rule out the presence of a strong temperature inversion near the IR photosphere (predicted by GCMs to form temporarily during periapse passage) based on the lack of obvious emission features throughout the observing window. Our study demonstrates the feasibility of studying hot Jupiter atmospheres using partial phase curves obtained with NIRSpec/G395H.
§ INTRODUCTION HD 80606b is a Jupiter-sized planet <cit.> orbiting a bright Sun-like star <cit.>.
It was initially discovered using radial velocity (RV) measurements obtained by <cit.>, who reported a 111.8 d orbit with an eccentricity of e=0.927, the highest eccentricity of all previously detected exoplanets. Spitzer IR photometry at 8.0 μm <cit.> was first used to detect the planet's eclipse occurring ≈3 hrs prior to periapse. Subsequent photometric measurements revealed the planet's transit, occurring ≈5.7 d after the eclipse <cit.>. In-transit RV measurements have been used to detect the planet's Rossiter-McLaughlin effect, which suggests that its orbit may be misaligned with respect to the stellar rotation axis <cit.>. Most recently, two of HD 80606b's transits were detected using TESS photometry <cit.>, which, along with new RV measurements, provide the most precise constraints on the planet's orbit to date <cit.>. As a result of its high eccentricity, the stellar radiation that HD 80606b receives from its host star varies by a factor of ≈860 as it orbits from apoapse (0.88 au) to periapse (0.03 au), corresponding to an increase in equilibrium temperature from ≈300 to 1500 K (calculated assuming full heat redistribution and zero albedo) (see Fig. <ref>). This rapid change in incident flux, coupled with the planet's favorable orbital configuration and bright host star, makes HD 80606b one of the best-known laboratories for detecting and characterizing dynamical effects predicted to occur within the atmospheres of highly eccentric planets <cit.>. Since HD 80606b was discovered, various models have been generated to study how its atmosphere behaves. Due to its relatively long orbital period, the vertical and horizontal temperature contrasts throughout the planet's atmosphere are expected to be low during most of the orbit <cit.>. During the periapse passage, however, the dayside is rapidly heated, inducing strong acoustic waves <cit.> and accelerating winds <cit.>. Depending on the radiative and advective timescales, the rapid heating may imprint a hotspot that periodically appears and disappears as the non-tidally locked planet rotates, causing a "ringing" to be observed in the thermal phase curve <cit.>. GCMs further predict that the periastron passage forms a transient dayside inversion characterized by a temperature rise of ∼400 K with decreasing pressure <cit.>. As the upper atmosphere gets heated while the planet orbits closer to the star, changes in the abundances and distributions of gases and aerosols are also expected to occur. Assuming chemical equilibrium, methane (CH_4), which is predicted to dominate the atmosphere when the planet is less irradiated and cooler, may be converted into carbon monoxide (CO) near periastron <cit.>. This process is complex and depends on the chemical timescale, on the extent, strength, and timescale of vertical mixing, and on the abundances of other species involved in the CO-CH_4 conversion. In addition, photochemical reactions <cit.> may drive the atmospheric properties away from chemical equilibrium. By coupling a 3D GCM with a 1D photochemical model, <cit.> find that, as HD 80606b's CH_4 is converted into CO, the abundance of water (H_2O) also decreases while the abundances of certain photochemical products like hydrogen cyanide (HCN) and acetylene (C_2H_2) increase well above what is expected under chemical equilibrium. Comparing condensation curves for various cloud species <cit.> with HD 80606b's temperature changes suggests that clouds may be present throughout much of the orbit.
This was explored by <cit.> using 3D GCMs, which demonstrate that clouds composed of MgSiO_3, MnS, and/or Na_2S likely persist even during periapse passage; these clouds may effectively raise the photosphere to lower pressures relative to that of a cloud-free atmosphere thereby changing the observed IR phase curve <cit.>. All of the theoretical predictions developed for eccentric hot Jupiters in general and for HD 80606b in particular depend on certain assumptions about the planet’s rotation period, internal heating, and atmospheric composition. The first empirical insights into HD 80606b's atmospheric dynamics and thermal properties were derived using the 30 hr 8.0 μ m and 80 hr 4.5 μ m partial phase curves obtained with Spitzer during the planet's periapse passage. <cit.> and <cit.> derived a radiative timescale of τ_ rad∼4.5 hrs near the IR photosphere at atmospheric pressures ∼10-100 mbar. No ringing effect is detected in the 80 hr 4.5 μ m planetary light curve <cit.>. This is consistent with the estimated rotation period of P_ rot=93_-35^+85 hr derived by <cit.> using an energy-conserving semi-analytic atmospheric model (here, P_ rot corresponds to the atmosphere's bulk rotation rate including winds and any underlying solid body rotation). HD 80606b's estimated τ_ rad and P_ rot values are somewhat surprising considering theoretical predictions in which (1) cloud-free GCMs computed for HD 80606b have τ_ rad∼8-12 hrs <cit.> and (2) P_ rot is expected to be ∼40 hr under the assumption that the planet rotates at the pseudo-synchronous rotation period <cit.>. While previous IR photometry of HD 80606b have provided important constraints on our understanding of the seasonal changes occurring in eccentric hot Jupiter atmospheres, wider wavelength coverage at higher spectroscopic resolutions is necessary in order to constrain predictions related to their chemical and thermal properties. The high mass of the planet makes HD 80606b a relatively poor target for atmospheric characterization via spectroscopy of the planet during transit. While there is potential evidence for an exosphere based on narrow-band transit spectroscopy <cit.>, eclipse observations in particular are necessary to learn more about the characteristics of HD 80606b's atmosphere. Here we present near-IR spectroscopic measurements of HD 80606b’s periapse passage—including the eclipse—obtained with JWST’s NIRSpec instrument. In Sect. <ref>, we describe the observations and the data reduction methods. In Sect. <ref>, we present our analysis of both the integrated white light curves and the spectroscopic light curves. The results of our analysis are presented in Sect. <ref> followed by a discussion and conclusions presented in Sections <ref> and <ref>. § OBSERVATIONS HD 80606 was observed using JWST's NIRSpec instrument in BOTS mode with the G395H/F290LP grating/filter combination as part of the GO-2488 program (PI:Sikora). A total of 13818 exposures were obtained over a 20.9 hr observing window from Nov. 1, 2022 at 06:25 UTC to Nov. 2, 2022 at 03:11 UTC. The observations were briefly interrupted during two ≈5 min intervals for High Gain Antenna (HGA) moves: once during the eclipse (BJD 2459885.164) and once approximately 1 hr after periastron (BJD 2459885.320). The SUB2048 subarray was read-out using the NRSRAPID read-out pattern and we used five groups for each exposure (corresponding to a 5.43 sec exposure time) in order to avoid saturation for this bright target (J=7.702 mag). 
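For context, the extreme variation in irradiation described in the Introduction follows directly from the orbital geometry. The short sketch below solves Kepler's equation for the star-planet separation and evaluates the corresponding zero-albedo, full-redistribution equilibrium temperature through one orbit; the stellar parameters and semi-major axis used here are illustrative values consistent with the quantities quoted in the text, not the adopted system parameters.

```python
import numpy as np

# Approximate, illustrative system parameters (not the adopted values)
P_orb = 111.4      # orbital period [days]
ecc   = 0.93       # eccentricity
a_au  = 0.455      # semi-major axis [au]: a(1-e) ~ 0.03 au, a(1+e) ~ 0.88 au
T_eff = 5575.0     # assumed stellar effective temperature [K]
R_star_au = 1.04 * 6.957e8 / 1.496e11   # assumed stellar radius, in au

def solve_kepler(M, e, tol=1e-10, max_iter=100):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = np.where(e < 0.8, M, np.pi * np.ones_like(M))  # safe starting guess
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

# Time relative to periastron passage [days]
t = np.linspace(-P_orb / 2.0, P_orb / 2.0, 2000)
M = np.mod(2.0 * np.pi * t / P_orb, 2.0 * np.pi)    # mean anomaly
E = solve_kepler(M, ecc)                            # eccentric anomaly
r = a_au * (1.0 - ecc * np.cos(E))                  # star-planet separation [au]

# Instantaneous equilibrium temperature (zero albedo, full redistribution)
T_eq = T_eff * np.sqrt(R_star_au / (2.0 * r))
print(f"r: {r.min():.3f}-{r.max():.3f} au, T_eq: {T_eq.min():.0f}-{T_eq.max():.0f} K")
```

With these illustrative inputs the separation spans roughly 0.03-0.88 au and the equilibrium temperature roughly 300-1500 K, matching the range quoted above.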
Since the data presented in this work correspond to a partial rather than a full phase curve, we do not have a clear baseline that can be used to robustly remove certain systematic trends that affect the data. Therefore, carrying out independent reductions is necessary to assess how the systematics depend on the reduction process. We carried out two entirely independent data reductions starting from the uncalibrated FITS files: one using the Eureka! Pipeline <cit.> and a second using a custom pipeline. We find that both reductions yield comparable results both in terms of the trends seen in the extracted planetary phase curve properties and in the systematic trends seen in the NRS1 and NRS2 measurements (presented in Sect. <ref>). We adopted the Eureka!-based reduction as the nominal one due to the lower scatter in the light curves and in the derived phase curve parameters associated with the spectral light curves.
§.§ Eureka! Reduction The Eureka! Pipeline <cit.> combines the first two stages of STScI's JWST Calibration Pipeline (jwst) <cit.> with a customized optimal spectral extraction routine. We used version 0.1 of the Eureka! Python package and version 1.6.0 of the jwst Python package. Calibration reference files were taken from context 1230 of version 11.16.12 of the Calibration Reference Data System (CRDS). As briefly described below, we largely adopted the default settings for stages 1-4 (defined in the .ecf control files) that are appropriate for NIRSpec G395H BOTS mode observations. In Stage 1 of the pipeline, the measured ramps are fit to get the count rates for each exposure. Bad pixels, which include those pixels identified using the `mask' reference FITS file from the CRDS or those pixels found to be saturated by the pipeline based on the NRS1 and NRS2 `saturation' reference FITS files, were masked. An average of 548 pixels per exposure (0.8 %) were masked for NRS1 and 572 pixels per exposure (0.9 %) were masked for NRS2. Group-level background subtraction was carried out using a first-order polynomial fit to the first and last 6 pixels in each column, where a 5σ threshold was adopted to remove outliers. We skipped the photometric calibration step in Stage 2 because doing so allows for a straightforward calculation of the photon noise limit from the electron counts. In Stage 3, Eureka! performs an optimal spectral extraction routine. For NRS1, the spectra were extracted across columns 501 through 1771, while for NRS2, we used columns 5 through 2040. Outlier rejection along the time axis was carried out for each pixel using a 5σ rejection threshold. A Gaussian fit to the source along each column was then used to fit and remove the trace. Background subtraction was carried out by removing the median flux calculated from pixels ≥7 pixels away from the source position while rejecting >5σ outliers. The spectral extraction used an aperture with a half-width of 5 pixels. Finally, Eureka!'s Stage 4 was used to generate both the white light curves and the spectral light curves. Integrations obtained near the two HGA moves show large deviations in flux; we therefore masked 318 integrations obtained within 14 min windows centered on these events. The spectral light curves were generated using 0.02 μm bin widths, resulting in 43 NRS1 light curves and 68 NRS2 light curves. The NRS1 and NRS2 white light curves have estimated root-mean-square (RMS) scatter of 155 ppm and 207 ppm, respectively. In Fig. <ref>, we show the flux obtained from the Eureka! pipeline normalized by the median spectrum.
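As a concrete illustration of the column-by-column background step described above, the following sketch fits a first-order polynomial to the first and last six rows of each detector column with iterative 5σ clipping and subtracts it. This is a simplified stand-in for the group-level background subtraction, not the actual Eureka! implementation; it ignores bad-pixel masks, data-quality flags, and the curvature of the spectral trace.

```python
import numpy as np

def subtract_column_background(image, n_edge=6, sigma=5.0, n_iter=3):
    """Fit and remove a first-order polynomial from each detector column,
    using only the first and last `n_edge` rows as background pixels and
    iteratively rejecting outliers beyond `sigma` standard deviations."""
    ny, nx = image.shape
    rows = np.arange(ny, dtype=float)
    bkg_rows = np.concatenate([rows[:n_edge], rows[-n_edge:]])
    cleaned = image.astype(float).copy()
    for j in range(nx):
        y = image[bkg_rows.astype(int), j].astype(float)
        good = np.isfinite(y)
        coeff = np.polyfit(bkg_rows[good], y[good], deg=1)
        for _ in range(n_iter):
            resid = y - np.polyval(coeff, bkg_rows)
            scatter = np.std(resid[good])
            if scatter == 0:
                break
            new_good = good & (np.abs(resid) < sigma * scatter)
            if new_good.sum() == good.sum():
                break
            good = new_good
            coeff = np.polyfit(bkg_rows[good], y[good], deg=1)
        cleaned[:, j] = image[:, j] - np.polyval(coeff, rows)
    return cleaned

# Example: remove a synthetic sloped background from a 32x2048 subarray frame
frame = np.random.normal(0.0, 1.0, (32, 2048)) + np.linspace(0.0, 2.0, 32)[:, None]
frame_clean = subtract_column_background(frame)
```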
The eclipse (occurring ≈2.6 hr before periapse) is visible in both the NRS1 and NRS2 detectors. The rise in the planet's flux following the eclipse is also clearly visible in the NRS2 detector but is somewhat obscured in NRS1 due to significant wavelength-dependent systematic trends, particularly at wavelengths <3.5 μm.
§.§ Custom Pipeline A fully custom, independent pipeline was written to validate the analysis from Sect. <ref>. The pipeline corrects for instrumental effects and extracts the spectro-photometry using difference imaging techniques. Non-linearity and superbias corrections are made using the same reference files as described in Sect. <ref>. Reference pixel corrections used simple means with 3σ outlier rejection. The pipeline starts with Stage 0 data products (e.g., "uncal.fits") and uses the saturation map reference files for NRS1 and NRS2 to flag any group during an exposure that exceeds saturation. Saturated group values for each pixel were excluded when integrating "up-the-ramp" to estimate the electron counts for each pixel in each integration. Superbias, reference pixel, and non-linearity corrections are then applied. Bad pixels were flagged through saturation or non-zero values of bit 0 from the data-quality flags. Bad pixels were replaced by the mean of a 3x3 box centred on the offending group value. The pipeline uses a two-step process to calculate count rates and to correct for "1/f" noise using difference images. The first step uses linear regression to estimate the residual bias (zpt) and the integrated electron count rate. Column and row means of the zpt values for each integration were computed and then used as an additional bias correction for each group image. A stacked median image for each group step was calculated using all valid integrations. This provides five 32x2048 deep stacks based on 13,818 integrations. Difference images for each group set are then used to correct for 1/f noise using column medians. Only pixels more than 9 pixels away from the spectral trace were used when calculating the median. The 1/f-corrected groups were then used to calculate count rates based on a robust mean of forward differences, with outliers excluded at the 2σ level. A median image was then used to calculate difference time-series images. Aperture photometry along the spectral trace with a 4-pixel aperture radius was used to extract spectra from both the difference and non-difference images. The median of the spectro-photometry from the non-difference images was used to estimate the photometric zero-point and thus the scaling needed to convert differential counts into relative spectro-photometry. We estimate 220 ppm and 242 ppm RMS scatter for the NRS1 and NRS2 white light curves obtained from this custom reduction pipeline, respectively; these correspond to 1.4x and 1.2x the RMS scatter obtained from the Eureka! reduction.
§ ANALYSIS
§.§ Analytic Light Curve Model We fitted the light curves obtained from the NRS1 and NRS2 detectors using an analytic model consisting of four components: (1) the planet flux (F_p), (2) the stellar flux (F_⋆), (3) a polynomial term (F_poly), and (4) a systematics term (F_sys), such that F_obs(t) = [F_⋆(t) + F_p(t)] F_sys(t) F_poly(t). For the planet flux, we adopted the asymmetric Lorentzian model proposed by <cit.> in their analysis of Spitzer phase curve measurements of the eccentric hot Jupiter HAT-P-2b.
The model has four parameters corresponding to the phase curve amplitude (c_1), which we define in terms of the mean stellar flux (F_0), the peak offset time (c_2), and the timescales over which the flux rises (c_3) and decays (c_4) with respect to the phase curve peak: F_p(t) = F_0 c_1 / [u(t)^2 + 1], where u(t) = (t - c_2)/c_3 for t < c_2 and u(t) = (t - c_2)/c_4 for t > c_2. The symmetric version of this model corresponds to the case in which c_3 = c_4. The eclipse was modeled using a dedicated eclipse-modeling Python package <cit.> and depends on the orbital period (P_orb), eccentricity (e), inclination angle (i), argument of periastron (ω), eclipse mid-point (t_e), the ratio of the semi-major axis to the stellar radius (a/R_⋆), and the ratio of the planetary radius to R_⋆ (R_p/R_⋆). Since the partial phase curve observations we obtained span only a small fraction of the 111.4 d orbit and do not include the transit, we opted to fix P_orb, e, i, ω, a/R_⋆, and R_p/R_⋆ at published values (see Table <ref>), which have been precisely measured using transit, eclipse, and radial velocity measurements <cit.>. The eclipse mid-point, however, was allowed to vary as a free parameter when fitting the integrated white light curves and was fixed at these derived values when fitting the spectral light curves. For the stellar flux, we considered two models: one in which the stellar flux is constant (F_⋆ = F_0) and one in which the stellar flux varies sinusoidally. The sinusoidal flux is defined as F_⋆(t) = F_0 + F_0 A_1 sin(2π[t f_1 + ϕ_1]), where A_1, f_1, and ϕ_1 are the amplitude, frequency, and phase. While A_1 and ϕ_1 were taken to be free parameters, the f_1 terms were fixed at the highest-amplitude frequencies identified in Lomb-Scargle periodograms <cit.>. These periodograms were calculated from the residuals associated with each light curve's best fit obtained for the F_⋆ = F_0 model (0.171 hr^-1 and 0.241 hr^-1 for the NRS1 and NRS2 light curves, respectively). The NRS1 measurements exhibit a systematic downward slope in flux between the start and end of the observing window that is strongly wavelength-dependent (Fig. <ref>). Similar slopes have previously been reported for NIRSpec time series observations <cit.>. We account for these variations by including a multiplicative polynomial term, F_poly(t) = 1 + Σ_n p_n (t - t_0)^n, where p_n are free parameters and t_0 is the approximate eclipse mid-point time, which we fix at BJD 2459885.165. No such systematic slope is visible by eye in the NRS2 white light curve; however, we include the F_poly term in the model selection process described below. The measured y positions of the centroid on the detector (i.e., in the spatial direction) that were obtained from the NRS1 and NRS2 reductions show obvious variability. The largest shift in the relative y positions occurs during the eclipse and coincides with the first HGA move. This jump is characterized by a sharp increase of ≈3 mpxl in both the NRS1 and NRS2 data. The centroid widths (σ_y) also show variations, most notably for the NRS1 measurements, which exhibit a linear increase in σ_y of ≈0.08 mpxl/hr. The centroid x positions and widths (i.e., in the dispersion direction) exhibit smaller shifts, with the largest being a 0.2 mpxl/hr linear drift associated with NRS2. The centroid positions/widths are plotted in Fig. <ref> in the Appendix. The flux measurements may be correlated with changes in x, y, σ_x, and σ_y due, in part, to intrapixel variations as found in Spitzer and HST time series observations <cit.>.
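For reference, the astrophysical flux components defined in the equations above can be written compactly as the following sketch. The eclipse component (computed with a dedicated eclipse-modeling package in the actual fits) and the systematics term are omitted, and all parameter values shown are placeholders rather than fitted quantities.

```python
import numpy as np

def planet_flux(t, F0, c1, c2, c3, c4):
    """Asymmetric Lorentzian phase-curve model: amplitude c1 (relative to the
    mean stellar flux F0), peak time c2, and rise (c3) / decay (c4) timescales."""
    u = np.where(t < c2, (t - c2) / c3, (t - c2) / c4)
    return F0 * c1 / (u**2 + 1.0)

def stellar_flux(t, F0, A1, f1, phi1):
    """Sinusoidally varying stellar flux, F0 * [1 + A1 * sin(2*pi*(t*f1 + phi1))]."""
    return F0 * (1.0 + A1 * np.sin(2.0 * np.pi * (t * f1 + phi1)))

def poly_trend(t, p, t0):
    """Multiplicative polynomial trend, 1 + sum_n p_n * (t - t0)**n."""
    dt = t - t0
    return 1.0 + sum(p_n * dt**(n + 1) for n, p_n in enumerate(p))

# Illustrative evaluation (placeholder parameter values, times in hours)
t = np.linspace(-12.0, 9.0, 1000)
Fp = planet_flux(t, F0=1.0, c1=800e-6, c2=1.0, c3=6.0, c4=9.0)
Fs = stellar_flux(t, F0=1.0, A1=30e-6, f1=0.24, phi1=0.0)
model = (Fs + Fp) * poly_trend(t, p=[-150e-6, 4e-6], t0=0.0)
```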
To account for these possible centroid-correlated variations, we include the F_sys term shown in Eqn. <ref>, F_sys(t) = 1 + c_x x(t) + c_σx σ_x(t) + c_y y(t) + c_σy σ_y(t), where c_x, c_y, c_σx, and c_σy are free parameters and x, y, σ_x, and σ_y are first normalized by subtracting the mean and dividing by the standard deviation. The best-fitting parameters and uncertainties for the analytic light curve modelling were derived using the emcee ensemble sampler Python package <cit.>. For each light curve, 24 to 48 walkers were initialized, generating chains ≳5x10^4 steps in length after discarding the first 5000 steps as burn-in. Sampling was continued until the number of steps exceeded 50 times the mean auto-correlation lengths. The R̂ statistic <cit.> was then used to check for convergence, with additional samples generated until R̂ < 1.01 for each parameter. For both the white light curve fits and the spectral light curve fits, we used uniform priors for all parameters. In all cases, we also included a `jitter' term (σ_Jit) that is added to the formal measurement uncertainties in quadrature, which accounts for additional white noise that may be present in the data.
§.§ Atmospheric retrievals We carried out several sets of atmospheric retrievals using two independent frameworks. Within one framework, we used forward models calculated with the code of <cit.>, in which the atmosphere is assumed to be in chemical equilibrium. The chemical composition of the atmosphere is determined by the bulk metallicity ([M/H]) and C/O ratio. The abundances are interpolated from a grid computed using the code of <cit.> and do not incorporate photochemical reactions. We included opacities from gas-phase H_2O, CO, CO_2, and CH_4, which are expected to be the predominant contributors to the total atomic and molecular opacity within the NRS1/NRS2 bandpasses. Continuum opacities associated with H_2-H_2 and H_2-He collision-induced absorption and H_2 and He Rayleigh scattering were also included. Clouds composed of MnS and/or MgSiO_3 could potentially be present during the observed phases <cit.>. In order to account for these species, we include a gray cloud deck in which the flux emitted by layers below the cloud deck pressure (P_cloud) is blocked at all wavelengths. The model atmosphere was defined using 100 layers logarithmically distributed from 100 bar to 1 μbar, with the temperature structure described using the four-parameter profile proposed by <cit.>, which is parameterized by the IR absorption coefficient (κ_IR), the ratio of optical to IR absorption coefficients (γ), the planet's interior temperature (T_int), and the irradiation temperature (T_irr). The second retrieval framework used forward models calculated with the code of <cit.>, in which we include all of the opacity sources used in the chemical-equilibrium retrievals listed above. In this case, the abundances are uniform with pressure and are allowed to vary freely. Posterior distributions were derived using emcee, in which 24 walkers were initialized and 10^5 samples per walker were generated, with the first 10^4 steps discarded as burn-in. After removing chains stuck in low-probability regions (between 0 and 5 chains), we obtained between 1.3x10^6 and 3.6x10^6 samples. Uniform priors were adopted for all parameters.
§ RESULTS
§.§ White light curves Each of the integrated NRS1 and NRS2 white light curves obtained from the Eureka! reduction was fitted using the modelling framework described in Sect. <ref>. BIC values were calculated for several variations of the general light curve model in order to select the statistically preferred model with the fewest parameters.
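As a reference for how these comparisons are made, the sketch below evaluates the BIC, k ln N - 2 ln L̂, from a Gaussian log-likelihood that includes the jitter term described above. The residuals used here are synthetic placeholders; the actual comparisons use the maximum-likelihood fits to the NRS1 and NRS2 light curves.

```python
import numpy as np

def gaussian_loglike(resid, err, jitter=0.0):
    """Gaussian log-likelihood with a white-noise 'jitter' term added in
    quadrature to the formal uncertainties."""
    var = err**2 + jitter**2
    return -0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

def bic(resid, err, n_free, jitter=0.0):
    """Bayesian Information Criterion: k * ln(N) - 2 * ln(L_max)."""
    return n_free * np.log(resid.size) - 2.0 * gaussian_loglike(resid, err, jitter)

# Toy comparison with synthetic residuals (illustration only)
rng = np.random.default_rng(1)
err = np.full(13500, 200e-6)
resid_simple = rng.normal(0.0, 205e-6, err.size)    # e.g., symmetric phase curve
resid_complex = rng.normal(0.0, 204e-6, err.size)   # e.g., asymmetric phase curve
delta_bic = bic(resid_complex, err, n_free=15) - bic(resid_simple, err, n_free=14)
# A negative delta_bic would favour the more complex model despite the extra parameter
```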
The model selection process involved first fitting the data using a constant F_⋆ with a symmetric or asymmetric planet phase curve (Eqn. <ref>) and with polynomial terms up to third order (Eqn. <ref>). We then add a sinusoidal F_⋆(t) to the identified best model and test whether the fit is sufficiently improved. All of the derived model parameters for both the NRS1 and NRS2 white light curves are listed in Table <ref>. §.§.§ NRS1 white light curve For NRS1, we first tested 6 constant F_⋆ models having a symmetric or asymmetric planet phase curve (Eqn. <ref>) and a polynomial of degree 1, 2, or 3 (Eqn. <ref>). We found that the symmetric phase curve model with a 2^ nd order polynomial term yielded the lowest BIC and the lowest χ^2 value. The model using a 1^ st order polynomial term (corresponding to the model with the fewest free parameters) exhibits a significantly higher BIC value (Δ BIC=+1182) and therefore was rejected. Next, we calculated a Lomb Scargle periodogram from the residuals of this nominal model, which show a maximum amplitude of 43 ppm at a frequency of 0.171 hr^-1 (the periodogram is plotted in Fig. <ref> in the Appendix). Including a sinusoidal F_⋆(t) (Eqn. <ref>) using the peak amplitude's frequency yielded a much lower BIC value compared to the constant F_⋆ case (Δ BIC=-85). The nominal model we adopted for the NRS1 white light curve therefore consists of a second order polynomial term and a sinusoidal F_⋆(t). It includes 14 free parameters (F_0, c_1, c_2, c_3, t_ e, p_1, p_2, c_x, c_y, c_σ_x, c_σ_y, A_1, ϕ_1, and σ_ Jit). The fit to the NRS1 white light curve using the nominal model is shown in Fig. <ref>. We find that the model achieves a precision of 1.72× the 84 ppm photon noise limit. We also fit the light curve obtained from the custom reduction pipeline (see Sect. <ref>) for which we obtain a lower precision of 2.62× the photon noise limit. In Table <ref>, we list the model parameters and 1σ uncertainties derived for the Eureka!-reduced white light curves. The systematics term included in the model shows a slight correlation between the measured flux and the centroid y position and width (c_y=-5.4±1.4 ppm and c_σ_y-2.5±1.5 ppm) while c_x and c_σ_x are consistent with zero within 1.4σ. We note the large downward slope (p_1=-154.27±0.36 ppm/hr) and the non-zero second order polynomial term (p_2=3.850±0.058 ppm/hr^2), which may bias the derived planetary phase curve. As discussed further in Sect. <ref> in the context of the spectral light curves, we derive similarly high magnitudes for p_1 and p_2 from an analysis of the publicly available NIRSpec/G395H transit observations of GJ486b <cit.>. §.§.§ NRS2 white light curve For NRS2, we first tested 8 constant F_⋆ models having a symmetric or asymmetric phase curve and a polynomial term of degree 0 (i.e., no polynomial), 1, 2, or 3. The lowest BIC value obtained is associated with a symmetric planet phase curve and a zeroth order polynomial term. This model also has the fewest number of free parameters and is therefore favored over the models using the asymmetric planet phase curve and the models that include higher-order polynomial terms. The periodogram calculated from the residuals exhibits a maximum amplitude of 42 ppm at a frequency of f_1=0.241 hr^-1 (see Fig. <ref>). The second step of the model selection process where we incorporate a sinusoidal F_⋆(t) with frequency f_1 yields a large change in the BIC value of Δ BIC=-116. 
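Before turning to the adopted NRS2 model, the two ingredients of the model selection step described above, the ΔBIC comparison and the residual periodogram used to fix f_1, can be sketched as a minimal illustration. Gaussian errors and astropy's Lomb-Scargle implementation are assumed, and all names are ours.

```python
import numpy as np
from astropy.timeseries import LombScargle  # assumed periodogram tool

def bic(chi2_value, n_params, n_points):
    """BIC under Gaussian errors: chi^2 + k ln(N); models are ranked by
    Delta-BIC, with more negative values favoring the added complexity."""
    return chi2_value + n_params * np.log(n_points)

def residual_peak_frequency(t_hr, residuals):
    """Frequency (1/hr) of the strongest Lomb-Scargle peak in the
    residuals of the nominal fit, used to fix f1 in F_star(t)."""
    frequency, power = LombScargle(t_hr, residuals).autopower()
    return frequency[np.argmax(power)]

# Example decision: adopt the sinusoidal-F_star model only if it lowers BIC.
# delta_bic = bic(chi2_sine, k_sine, n) - bic(chi2_const, k_const, n)
# adopt_sinusoid = delta_bic < 0
```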
Therefore, the nominal model we adopt for the NRS2 white light curve consists of a sinusoidal F_⋆(t) and no polynomial term. It includes 12 free parameters (all of the parameters listed above for the NRS1 model except for p_1 and p_2). The fit to the NRS2 white light curve is shown in Fig. <ref> and the derived parameters are listed in Table <ref>. We also list the parameters derived when including a second order polynomial term as well for comparison with the nominal NRS1 model. In this case, both p_1 and p_2 are found to be consistent with zero within 1.2σ. For the nominal NRS2 model, we obtain a precision of 1.92× the 107 ppm photon noise limit and 2.24× the photon noise for the light curve obtained from the custom reduction. For the NRS2 systematics term (Eqn. <ref>), all four coefficients (c_x, c_σ_x, c_y, and c_σ_y) are all consistent with zero within 1σ. Comparing the NRS2 and NRS1 phase curves, we find large differences with the former having a much higher amplitude (c_1), peak offset (c_2), and rise/decay timescale (c_3), which all differ by ≳5σ. The eclipse mid-point times differ slightly by about 2.3σ (HJD 2459885.16465±0.00020 and HJD 2459885.16419±0.00014 for NRS1 and NRS2, respectively). The NRS2 bandpass is comparable to the Spitzer 4.5 μ m bandpass allowing for rough comparisons to be made with the phase curve presented by <cit.>. These authors report an amplitude of 738±52 ppm and is therefore comparable to our value of 826.8±7.4 ppm. The Spitzer 4.5 μ m phase curve has a notably lower peak offset of between ≈-1 hr and 0 hr relative to periapse (for NRS2, we derive an offset of 0.988±0.039 hr). The timescale over which the planet flux decreases post-periapse is also noticeably higher for NRS2 relative to the Spitzer phase curve: between the time of peak flux and t-t_ p≈8 hr (near the end of our observing window), F_ p/F_⋆ decreases from ≈830 ppm to 450 ppm compared to a decrease of ≈740 ppm to 180 ppm for Spitzer. On the other hand, the rise in flux from t-t_ p≈-12 hr associated with the two phase curves are in close agreement. §.§ Spectral light curves Each of the spectral light curves (43 for NRS1 and 68 for NRS2) was fitted using the nominal models adopted for the white light curves (Sect. <ref>). We used the same modelling framework except for the fact that we (1) fixed the eclipse mid-points (t_e) at the values derived from each white light curve and (2) used the ϕ_1 posterior (i.e., the sinusoidal F_⋆ phase parameter) obtained from the white light curves as priors for the spectral light curve models. For comparison, we also fit the spectral light curves obtained from the custom data reduction pipeline (Sect. <ref>). The derived parameters associated with each of the NRS1 and NRS2 spectral light curves (and those derived for the white light curves) are plotted in Fig. <ref>. As expected based on the observed light curves shown in Fig. <ref>, the linear slope (p_1 in Eqn. <ref>) included in the NRS1 model's polynomial term is highly correlated with wavelength. The distribution of p_1(λ) is roughly Gaussian in shape with a maximum magnitude occurring at ≈3.2 μ m. The second-order polynomial coefficient (p_2) shows a characteristically similar wavelength-dependence with a peak occurring at the same wavelength. Essentially the same trends are also found when fitting the light curves reduced using the custom pipeline; however, small differences between the p_1 and p_2 values are apparent, particularly for λ<3.2 μ m. 
The highly wavelength-dependent systematics affecting our NRS1 observations are also shown to impact NIRSpec/G395H transit observations of GJ486b <cit.> (see their Fig. 1). Both data sets were obtained using a similar instrument configuration using low group numbers in order to avoid saturation (HD 80606 was observed using 5 groups while those of GJ486 used 3). We analyzed the out-of-transit baseline exposures from the publicly available GJ486 data in order to characterize the wavelength dependence of p_1 and p_2 (see the Appendix). This analysis yields a similar Gaussian-like trend in p_1(λ) while p_2 is nearly constant with a value of ≈50 ppm/hr^2 (Fig. <ref>). The phase curve amplitudes (c_1) show a nearly monotonic increase with wavelength from ∼400 ppm at 2.9 μ m to ∼1000 ppm at 5.1 μ m. Small decreases in c_1 are apparent at ≈3.0-3.5 μ m and at ≈3.2-5.1 μ m. Similar decreases are apparent in the peak offset (c_2), which range from ≈0-1.6 hr. Lastly, the rise/decay timescales (c_3) are nearly constant for both the NRS1 and NRS2 light curves, however, NRS1 exhibits a systematic shift downwards (c_3≈5 hr for NRS1 compared to ≈7.5 hr for NRS2). The custom reduction pipeline yields similar trends to those derived using the reduction. Both reductions show good agreement for the NRS1 and NRS2 phase curve parameters. §.§ Phase-resolved planet spectra We used the fitted spectral light curves to extract the planet-to-star flux contrast (F_ p/F_⋆) and the planet's brightness temperatures (T_ bright) at several phases in order to identify and characterize any time variability in the planet's emission spectrum. Eclipse depths were calculated from the mean F_ p/F_⋆ values associated with the phase curve model averaged over a 2 hr window centered on the eclipse mid-point. We then carried this out for nine additional phases—totalling four pre-eclipse phases and six post-eclipse phases. The F_ p/F_⋆ spectra are presented in Sect. <ref> alongside the atmospheric retrieval results. For reference, all ten spectra are shown in Fig. <ref> in the Appendix—we include both the nominal spectra obtained from the reduction and the spectra obtained from the custom reduction. Brightness temperatures were calculated for each wavelength and orbital phase using the F_ p/F_⋆ measurements. The stellar spectrum used to derive the planet's thermal emission (F_ p) was estimated using a PHOENIX model with T_ eff=5600 K, logg=4.5, and [M/H]=0.5. Instrumental broadening assuming R=2700 was applied to the model stellar spectrum, which was then interpolated onto the native NIRSpec resolution and binned using the 0.02 μ m bin widths adopted for the observed spectral light curves. The planet-to-star radius ratio used for the calculation of T_ bright was taken to be (R_ p/R_⋆)^2=0.01019±0.00023 <cit.>. The brightness temperatures derived for five of the ten phases spanning the observing window are shown in Fig. <ref>. Comparing the NRS1 and NRS2 spectra, it is apparent that some phases may be more impacted by an offset in the planet flux derived from each detector, particularly near the start and end of the observing window. This can likely be attributed to a degeneracy between the systematic NRS1 trends absorbed by the 2^ nd order polynomial term (Eqn. <ref>) and the phase curve rise/decay timescale (c_3) and peak offset (c_2) (Eqn. <ref>). Next, we proceeded to fit the F_ p/F_⋆ spectra for each phase assuming that the planet radiates as a blackbody. 
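Before describing those blackbody fits, the brightness-temperature calculation above can be illustrated: it amounts to inverting the Planck function at each wavelength after scaling the measured F_p/F_⋆ by the stellar model intensity and (R_p/R_⋆)^2. The sketch below uses our own names, and it assumes the PHOENIX spectrum is supplied as a surface brightness in the same per-steradian units as the Planck function.

```python
import numpy as np
from scipy.constants import h, c, k

def planck_lambda(wl_m, T):
    """Planck spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    return (2.0 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * k * T))

def brightness_temperature(fp_fs, wl_m, stellar_intensity, rprs2):
    """Invert Fp/Fs = rprs2 * B_lambda(T_bright) / I_star for T_bright.
    `stellar_intensity` must be in the same per-steradian units as
    planck_lambda; rprs2 = (Rp/Rstar)**2 (0.01019 in the text)."""
    b_planet = fp_fs * stellar_intensity / rprs2
    return (h * c / (wl_m * k)) / np.log1p(2.0 * h * c**2 / (wl_m**5 * b_planet))
```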
Best-fitting blackbody temperatures (and uncertainties) associated with the planet were estimated by minimizing the χ^2 values using the function from the Python package where the stellar flux is given by the PHOENIX stellar model noted above. Two fits were carried out using the -reduced light curves: one using only the NRS2 measurements, for which we find no obvious evidence of instrument-related biases, and a second using both detectors while including an additive flux term to the NRS1 measurements in order to account for possible detector offsets. For the NRS2-only fits, the derived blackbody temperatures increase from T_ bb=918±40 K at the start of the observations, to a peak value of 1328±30 K during the first post-periapse phase (t-t_ p=1.4 hr) before decreasing to 1118±40 K. The fits yield reduced χ^2 values ranging from χ^2_ red=0.96 to 1.1 for the first 5 phases and χ^2_ red=1.4 to 2.6 for the last 5 phases where the maximum value coincides with the t-t_ p=3.4 hr phase. Comparable but slightly lower temperatures are obtained for the fits carried out with both detectors: T_ bb ranges from 900±36 K to a maximum of 1331±28 K. A similar trend in χ^2_ red is obtained with the last 4 phases exhibiting the largest deviation from a blackbody based on the χ^2_ red=1.9 to 2.5. The best-fitting flux offsets vary from +20 to +31 ppm during the first 5 phases and from 0 to 52 ppm during the last 5 phases. In Fig. <ref>, the post-periapse phases show the largest deviations from a blackbody with the appearance of several potential absorption features. Within the NRS2 bandpass, a decrease in T_ bright is found at λ>4.3 μ m, which is qualitatively consistent with CO and CO_2 absorption <cit.>. Within the NRS1 bandpass, a decrease in T_ bright is found from 3.1-3.4 μ m, which may be due to absorption by CH_4 <cit.>. These features are discussed further below. §.§ Atmospheric retrievals We carried out the atmospheric retrievals described in Sect. <ref> for two cases: one using only the NRS2 F_ p/F_⋆ spectra and one using both NRS1 and NRS2 and with the inclusion of a flux offset (f_ NRS1), as done for the blackbody fits. BIC values were then calculated for the maximum a posteriori (MAP) solutions obtained from the chemical equilibrium retrievals and were compared with those of the blackbody fits (Sect. <ref>). We find that for the first five phases, the blackbody fits are strongly preferred while for the last five phases, Δ BIC ranges from -7 to -115 in favor of the retrievals. This is consistent with the change in χ^2 values associated with the blackbody fits that imply a poorer quality fit at later phases. For the NRS2-only case, the retrieval is strongly preferred only during the last four phases where Δ BIC ranges from -35 to -72. In Fig. <ref>, we show the results of the chemical equilibrium retrievals carried out with both detectors at the eclipse phase (t-t_ p=-2.6 hr) and at the second-last observed phase (t-t_ p=3.4 hr), which is the spectrum that yields the highest constraints on the retrieval parameters. The left columns compare the observed spectra with models generated from the retrieval posteriors and with the GCM-based predictions published by <cit.>. The GCM-based predictions assume a solar-metallicity atmosphere with a 100 K internal temperature. The right column shows the derived PT profile constraints along with the GCM PT profiles. A comparison of the observed spectra and derived PT profiles for all ten phases are also included in the Appendix in Figures <ref> and <ref>. 
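Returning briefly to the blackbody fits of the previous subsection, the fit with an additive NRS1 flux offset can be illustrated with a simple least-squares sketch. The text does not name the minimizer it used, so scipy.optimize.curve_fit is assumed here purely for illustration; the starting values and variable names are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit  # assumed minimizer, for illustration
from scipy.constants import h, c, k

def planck_lambda(wl_m, T):
    return (2.0 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * k * T))

def make_bb_model(stellar_intensity, rprs2, nrs1_mask):
    """Fp/Fs for a blackbody planet plus an additive offset applied only
    to the NRS1 points (nrs1_mask is a boolean array over wavelength)."""
    def model(wl_m, t_bb, f_nrs1):
        return (rprs2 * planck_lambda(wl_m, t_bb) / stellar_intensity
                + f_nrs1 * nrs1_mask)
    return model

# popt, pcov = curve_fit(make_bb_model(I_star, 0.01019, nrs1_mask), wl_m,
#                        fp_fs, sigma=fp_fs_err, absolute_sigma=True,
#                        p0=[1200.0, 0.0])
# t_bb, f_offset = popt
```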
We find that the PT profiles derived from the chemical equilibrium retrievals in the two cases noted above are consistent: no significant differences are apparent when fitting both the NRS1 and NRS2 spectra compared to when fitting only the NRS2 spectra. The photospheric temperatures at pressures of 1-10 mbar are generally well-constrained with the last three phases having the narrowest uncertainty bands. Comparing the derived PT profiles with those of the GCMs, we see that during the first five phases through the eclipse (-10.6≤ t-t_ p≤-2.6 hr), the retrieved profiles exhibit notably higher photospheric temperatures. Phases 6 and 7 (-0.6≤ t-t_ p≤1.4 hr) show good agreement while the GCM temperatures are higher during the last three phases. These differences are manifest in the spectra as a vertical shift across the observed bandpasses (Fig. <ref> in the Appendix). Additionally, the retrieved PT profiles do not show clear evidence of an inversion near the infrared photosphere as found in the GCMs and no obvious emission features are seen in the extracted NIRSpec spectra that would otherwise indicate an inversion. The posteriors for the pressure of the gray cloud deck included in the retrievals are poorly constrained during the first five phases. For the post-eclipse phases, we obtain 2σ lower limits on the cloud deck's altitude at pressure layers ≳0.3 bar. As demonstrated in Fig. <ref>, the increase in temperature occurring from the pre- to post-periapse phases is such that the photospheric temperatures reach the condensation point of MnS clouds (yellow dash-dotted line) <cit.> (discussed further in Sect. <ref>). We estimated the detection significance of H_2O, CO, CO_2, and CH_4 for the last five extracted phases (i.e., those phases for which we find statistically significant differences between the spectra obtained from the chemical equilibrium retrievals and the blackbody fits). This was done using a similar method presented by <cit.> in which we carry out additional chemical equilibrium retrievals using with each gas species removed one at a time. We used to generate 24 chains 2×10^4 steps in length each with the first 10^4 steps discarded as burn-in yielding 2.4×10^5 samples. The detection significance was then calculated from the difference in the median χ^2 values derived from these MCMC samples. The resulting significance levels are shown in Fig. <ref> for each molecule and for each phase. The same analysis was also carried out for the NRS2-only retrievals, which are overplotted in the same figure. Adopting a 3σ detection threshold, we find that when using both detectors, CH_4 is detected at all of the five post-periapse phases (3.7-4.8σ), CO is detected at a single phase (3.4σ), and H_2O is detected at a single phase (3.1σ). When including only NRS2, CO is the only species detected (3.4σ). In Fig. <ref>, the derived C/O ratio and metallicity ([M/H]) are plotted for the last five phases. In all instances, both parameters are consistent within their estimated 1σ uncertainties for both the retrievals done using NRS1 and NRS2 and for those that only included NRS2. The fact that comparable uncertainties are obtained with and without including the NRS1 data can be attributed to the flux offset term, which is correlated with the metallicity (see Fig. <ref> in the Appendix showing an example of the marginalized posteriors). 
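The leave-one-molecule-out detection significances quoted above are derived from differences in χ² between retrievals with and without a given species. The exact prescription is not spelled out in the text; one common convention, sketched below with our own names, converts that Δχ² to a Gaussian-equivalent significance through the χ² survival function (for one degree of freedom this reduces to √Δχ²).

```python
from scipy.stats import chi2, norm

def delta_chi2_to_sigma(chi2_without, chi2_with, dof=1):
    """Gaussian-equivalent significance of the fit degradation caused by
    removing one species (illustrative convention, not necessarily the
    exact prescription used in the text)."""
    dchi2 = max(chi2_without - chi2_with, 0.0)
    p_value = chi2.sf(dchi2, dof)   # probability of such a jump by chance
    return norm.isf(0.5 * p_value)  # two-sided Gaussian sigma

# Example: delta_chi2_to_sigma(120.0, 108.4) -> ~3.4 (sigma)
```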
The tightest constraints on C/O and [M/H] are obtained from the t-t_ p=3.4 hr phase, which corresponds to the only phase in which CO is detected; here, the NRS2-only retrieval yields C/O=0.58_-0.16^+0.09 and [M/H]=-0.11_-0.55^+0.48 while including NRS1 with an offset yields C/O=0.69±0.14 and [M/H]=0.40_-0.75^+0.91. In addition to the chemical equilibrium retrievals, we also carried out free retrievals in which the abundances of H_2O, CO, CO_2, and CH_4 are uniform throughout the atmosphere, as described in Sect. <ref>. We find no significant difference between the resulting MAP solutions derived for the last five phases compared to those obtained from the chemical equilibrium retrievals with both analyses having essentially the same reduced χ^2 values of 1.0-1.2. Therefore, we find no evidence of disequilibrium chemistry and no indications that the observed spectra are strongly impacted by photochemistry. § DISCUSSION §.§ Partial phase curves The analysis presented here demonstrates both the feasibility and challenges of using partial phase curve observations obtained with NIRSpec/G395H to study hot Jupiter atmospheres. The long-wavelength NRS2 detector (3.8-5.2 μ m), which is largely sensitive to CO, is not found to exhibit any obvious systematics that may potentially bias the inferred planetary signal. On the other hand, the short-wavelength NRS1 detector (2.8-3.7 μ m), which is largely sensitive to CH_4 and H_2O, does have systematics that manifest as strong, non-linear and wavelength-dependent slopes (Fig. <ref>). Based on the derived polynomial coefficients, these slopes are consistent for the two independent data reduction pipelines used in this work (see Fig. <ref>) suggesting that the trends cannot easily be removed at the reduction level and/or must be detrended using additional parameters not included in our light curve model. The small number of NIRSpec/G395H time series data sets that are publicly available suggest that bright targets observed with few group numbers like HD 80606b (J=7.7 mag) and GJ486b (J=7.2 mag) may be more strongly impacted by these systematics compared to dimmer targets observed with much higher group numbers such as WASP-39b (J=10.7 mag) <cit.>; therefore, partial phase curves of such bright targets using the NIRSpec/G395H instrument mode should be approached with caution until a robust method of removing the systematic trends is developed. Comparing the phase curve parameters derived from the NRS1 and NRS2 detectors suggests that our adopted method of detrending the NRS1 systematic slopes in order to accurately recover the planetary signal is effective but likely imperfect. For instance, while the peak offsets (c_2) do not exhibit an obvious jump between the two detectors, the NRS1 rise/decay timescale (c_3)—and, to a lesser extent, the phase curve amplitudes (c_1)—appear to show a systematic offset (Fig. <ref>). The c_3 parameter is likely correlated with the polynomial's 1^ st order coefficient (p_1) such that a steeper inferred slope may yield a higher c_3. This correlation may explain why the NRS1 spectra require a flux offset term in order to be more consistent with a blackbody spectrum. The flux offsets derived for the blackbody fits and the chemical equilibrium retrievals are ∼+30 ppm for the earlier/later phases and ∼0 to -60 ppm closer to the peak flux. 
We found that if the planet's phase curve is modified to account for the offsets derived from the retrievals, the inferred c_1 values are reduced by ≈50 ppm while the rise/decay timescales are increased by ≈1.7 hr thereby removing the apparent discrepancy between the NRS1 and NRS2 c_1 and c_3 values. We note that the 2^ nd order polynomial used to detrend the NRS1 systematics would likely be more difficult and more susceptible to biases if the planet phase curve is not symmetric (our analyses of the NRS1 and NRS2 light curves independently indicate a highly symmetric phase curve). This is apparent from several of the NRS1 spectral light curves in which, when fitting using the asymmetric phase curve model, we obtain a much higher c_4 value compared to the c_3 value. This is not the case for the NRS1 white light curve, which yields consistent c_3 and c_4 values (thus, the asymmetric model was rejected in favour of the symmetric model). §.§ Phase curve properties and clouds In Fig. <ref>, we compare the phase curve parameters derived from the NRS1 and NRS2 spectral light curves with those obtained by fitting the GCM phase curves computed by <cit.>. The GCM phase curves were fit using the asymmetric version of the analytic phase curve model (Eqn. <ref>) where c_1, c_3, and c_4 were included as free parameters and c_2 was fixed at the time of peak flux. We find that the phase curve amplitudes are generally in good agreement particularly in the case of NRS1. The measured NRS2 amplitudes are slightly lower than the GCM predictions, which may be partially attributed to the CO absorption features at λ>4.3 μ m seen in our spectra post-periapse. The observed peak offsets, which are expected to be sensitive to the planet's rotation period/wind speed (e.g., Fig. 13 of and Fig. 8 of ), are significantly lower than the GCMs (∼0.5-1.5 hr compared to ∼2.5-4.5 hr). The narrow drop in the offset near 3.3 μ m, which is seen in both the models and in the observations, is potentially attributed to methane in the atmosphere based on the fact that it mirrors the characteristic spectroscopic methane feature (e.g., see the GCM model plotted in Fig. <ref>). A similar decrease in the observed peak offset is also apparent within the CO absorption band at λ≳4.3 μ m. In both cases, these peak offset decreases may be caused by the fact that the molecular absorption bands probe higher in the atmosphere, a feature also found in HST and Spitzer phase curve observations of WASP-43b <cit.>. Unlike for the observed NRS1 and NRS2 phase curves, the GCM phase curves are highly asymmetric and exhibit much longer decay timescales. This is evident in Fig. <ref>, which shows brightness temperatures associated with the integrated NRS1 and NRS2 best-fitting phase curves. A shorter observed rise/decay timescale may indicate that HD 80606b has a shorter radiative timescale and/or a shorter rotation period than that associated with the GCM predictions published by <cit.>. A rotation period that is shorter than the assumed ≈40 hr pseudo-synchronous rotation period <cit.> (i.e., a higher apparent wind speed) will result in more of the planet's cooler side becoming visible towards the end of the observing window causing a more rapid cooling rate to be inferred. The more rapid decrease in flux relative to the <cit.> cloud-free GCM predictions may also be due to the advection of nightside clouds onto the dayside during the post-periapse phases, which would cause the observed flux to be suppressed. 
This scenario was proposed by <cit.> and <cit.> to explain the low brightness temperatures derived from HD 80606b's Spitzer 4.5 μ m phase curve. As noted in Sect. <ref>, the rate at which the Spitzer 4.5 μ m flux rises during the pre-periapse phase is consistent with the NRS2 white light curve while the post-periapse decrease in flux is notably slower compared to NRS2. This could be caused by differences in the distribution of the advected clouds between the two observing epochs. Based on our analysis, HD 80606b's emission spectrum appears to transition from one that is initially consistent with a blackbody to one that exhibits detectable molecular absorption features of CO and CH_4. It is plausible that this transition is caused by the rapid evaporation of clouds near the photosphere. This scenario is consistent with the fact that the transition occurs when the photospheric temperature near 10 mbar increases to ∼1200 K coinciding with the condensation point of MnS clouds <cit.>. Along with MgSiO_3 and Na_2S, MnS was previously identified as a potentially significant condensate in HD 80606b's atmosphere using GCMs post-processed to include clouds <cit.>. §.§ Temperature structure Ignoring the potential impact of clouds on the observed emission spectral features, the transition from a blackbody spectrum pre-periapse may simply be caused by a change in the temperature gradient near the photosphere. In this case, a (nearly) isothermal PT profile will produce weak or non-existent absorption features. We find that the derived PT profiles at all phases exhibit relatively low temperature gradients near the photosphere and we do not find clear evidence of a temperature inversion near the IR photosphere. This stems from the lack of obvious emission features appearing in any of the extracted emission spectra. The GCM-based predictions from <cit.> show a transient inversion forming near the IR photosphere that persists throughout the 21 hr observing window, which leads to the formation of subtle but detectable emission features (e.g., Fig. <ref>). <cit.>, <cit.>, and <cit.> make similar predictions based on 1D radiative transfer models, 3D GCMs, and 1D radiative-convective equilibrium models, respectively. §.§ Chemical composition The atmospheric retrievals that we carried out for the five post-periapse phases imply that HD 80606b's atmosphere has an approximately solar metallicity with [M/H]=-0.15±0.60—slightly below the host-star metallicity of 0.348±0.057 <cit.>—and a solar C/O=0.58±0.18. We might expect that, as the planet approaches its host star, CH_4 is converted into CO through thermochemical reactions in response to the increase in temperature within the atmosphere (see Fig. <ref> showing the derived PT profiles compared with the expected points along which CH_4 and CO are in equal abundance), however, in the case of HD 80606b, these reactions are expected to occur over timescales much longer than the orbital timescale <cit.>. The photochemical models generated for HD 80606b by <cit.> predict that the abundances of CH_4, H_2O, and CO undergo significant changes in response to the increase in incident UV flux that occurs during periapse passage. The resulting change in CH_4 and CO abundance, along with that of several photochemical byproducts such as C_2H_2 and HCN, can differ by orders of magnitude from chemical equilibrium predictions that neglect photochemistry. 
Using both detectors (and including a detector flux offset as a free parameter), CH_4 is detected in all five of the extracted post-periapse emission spectra while CO and H_2O are only detected at a single phase; when using only the NRS2 detector, which does not appear to be impacted by the significant systematics associated with NRS1, CO is the only molecule that is detected (Fig. <ref>). We find that the chemical equilibrium retrievals yield high-quality fits for all phases (χ^2_ red=1.05-1.24) and are statistically preferred over the free retrievals due to fewer free parameters. Therefore, we find no evidence that changes in the CH_4, H_2O, and CO abundances that may be taking place during periapse passage are photochemical rather than thermochemical in nature. We note that the 1D atmospheric retrievals we used to derive the temperature structure and chemical composition of HD 80606b's atmosphere do not include photochemistry and do not capture some of the complex dynamical effects that likely take place during periapse passage such as the predicted increase in zonal wind speeds <cit.>. 2.5D retrievals <cit.>, for instance, may provide a more accurate point of comparison for the published GCMs of HD 80606b <cit.> and for those generated in the future. § CONCLUSIONS We present the first partial phase curve observations of an exoplanet obtained with JWST. These NIRSpec/G395H measurements of HD 80606b's periapse passage reveal an atmosphere that undergoes significant temperature changes during the 21 hr observing window. This work demonstrates that partial phase curves with JWST can provide reliable data at twice the photon noise precision level. Time-dependent spectral changes in the NIRSpec/G395H bandpass are observed in the emission spectrum of the planet during this observation. Our analysis suggests that, prior to periapse, the atmospheric layers probed at these wavelengths are predominantly isothermal based on the fact that no spectral features are detected and the derived planet spectrum is consistent with that of a blackbody. During the post-periapse phases, we detect CH_4 and H_2O spectral absorption features at 4.8σ and 3.1σ, respectively (when considering both the NRS1 and NRS2 detectors and including a flux offset parameter for NRS1), which appear chemically mixed. Absorption from CO is detected at 3.4σ (with and without including NRS1). No significant spectral emission features are detected at any of the observed phases, which, along with the PT profiles derived from atmospheric retrievals, rules out the presence of a predicted temperature inversion <cit.>. Furthermore, we find that the post-periapse emission spectra in which molecular features are detected are consistent with thermochemical equilibrium models. Considering the significantly higher precision of JWST NIRSpec observations relative to the Spitzer 4.5 μ m and 8 μ m partial phase curves, additional measurements extending over a longer timespan may be capable of detecting a ringing effect caused by the planet's rotation/advection <cit.>. This would provide important and entirely unique constraints that are needed to accurately model the atmospheres of eccentric hot Jupiters like HD 80606b and hot Jupiters in general. Additional phase curve measurements of HD 80606b obtained at different wavelengths will also help to constrain the chemical and photochemical reactions that are likely occurring in the atmosphere as well as the composition and size of cloud particulates <cit.> <cit.>.
High-resolution near-IR observations have the potential to detect HCN <cit.> that is expected to be found at high altitudes in hot Jupiter atmospheres <cit.>. Therefore, future high-precision observations of HD 80606b and other eccentric hot Jupiters such as HAT-P-2b <cit.>, HD 17156b <cit.>, and XO-3b <cit.> using JWST or ground-based high-resolution spectrographs may provide important clues into the nature of hot Jupiter atmospheres and the impact of photochemistry and clouds. JTS and JL acknowledge support from NSF AAG award AST2009343. JTS acknowledges funding support from grant 22JWGO1-17 awarded by the CSA. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-03127 for JWST. These observations are associated with program #2488. KDC acknowledges that support for program #2488 was provided through a grant from the STScI under NASA contract NAS5-03127. J.M.D acknowledges support from the Amsterdam Academic Alliance (AAA) Program, and the European Research Council (ERC) European Union's Horizon 2020 research and innovation program (grant agreement no. 679633; Exo-Atmos). This work is part of the research program VIDI New Frontiers in Exoplanetary Climatology with project number 614.001.601, which is (partly) financed by the Dutch Research Council (NWO). We thank Everett Schlawin, Shang-Min Tsai, and Maria Steinrueck for helpful discussions that improved the analysis of the data and the interpretation of the results presented in this work. All the JWST data used in this paper can be found in MAST: [10.17909/yb30-hj11]http://dx.doi.org/10.17909/yb30-hj11. JWST (NIRSpec) <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, easyCHEM <cit.>, <cit.>. § APPENDIX <cit.> show that the two NIRSpec/G395H transit observations of GJ486b (GO-1981, PI:Stevenson) exhibit strong, wavelength-dependent slopes similar to what we find in our analysis of the NRS1 measurements of HD 80606b. Here we compare the linear and quadratic terms derived for the GJ486b and HD 80606b NRS1 data sets. We first reduced the publicly available G395H measurements of the two GJ486b transits using essentially the same control files used for the HD 80606b data. We then extracted 43 spectral light curves to match the wavelength bins used in our study and, for each light curve, we fit a second order polynomial to the out-of-transit baseline measurements. The 1^ st and 2^ nd order polynomial coefficients (p_1 and p_2) derived for the two data sets are shown in Fig. <ref>. Fitting a Gaussian function to the p_1(λ) values, we obtain a mean and standard deviation of 3.168±0.005 μ m and 0.122±0.007 μ m, respectively. Comparable values are found for the second observation. Comparing with the p_1(λ) trend found for the HD 80606b NRS1 measurements, we find that the Gaussian mean and width are in good agreement while the amplitude and zero point differ significantly. Unlike for the HD 80606b observations, the GJ486b observations do not show a wavelength dependence for p_2. Figure: Same as the right panels shown in Fig. <ref> but for all 10 phases.
http://arxiv.org/abs/2407.13019v1
20240717211433
MagAO-X Phase II Upgrades: Implementation and First On-Sky Results of a New Post-AO 1000 Actuator Deformable Mirror
[ "Jay K. Kueny", "Kyle Van Gorkom", "Maggie Kautz", "Sebastiaan Haffert", "Jared R. Males", "Alex Hedglen", "Laird Close", "Eden McEwen", "Jialin Li", "Joseph D. Long", "Warren Foster", "Logan Pearce", "Avalon McLeod", "Jhen Lumbres", "Olivier Guyon", "Joshua Liberman" ]
astro-ph.IM
[ "astro-ph.IM" ]
§ ABSTRACT MagAO-X is the extreme coronagraphic adaptive optics (AO) instrument for the 6.5-meter Magellan Clay telescope and is currently undergoing a comprehensive batch of upgrades. One innovation that the instrument features is a deformable mirror (DM) dedicated for non-common path aberration correction (NCPC) within the coronagraph arm. We recently upgraded the 97 actuator NCPC DM with a 1000 actuator Boston Micromachines Kilo-DM which serves to (1) correct non-common path aberrations which hamper performance at small inner-working angles, (2) facilitate focal-plane wavefront control algorithms (e.g., electric field conjugation) and (3) enable 10 kHz correction speeds (up from 2 kHz) to assist post-AO, real-time low-order wavefront control. We present details on the characterization and installation of this new DM on MagAO-X as part of our efforts to improve deep contrast performance for imaging circumstellar objects in reflected light. Pre-installation procedures included use of a Twyman-Green interferometer to build an interaction matrix for commanding the DM surface, in closed-loop, to a flat state for seamless integration into the instrument. With this new NCPC DM now installed, we report on-sky results from the MagAO-X observing run in March—May 2024 for the Focus Diversity Phase Retrieval and implicit Electric Field Conjugation algorithms for in-situ Strehl ratio optimization and quasistatic speckle removal, respectively. § INTRODUCTION MagAO-X <cit.> is the "extreme" coronagraphic adaptive optics (AO) instrument for the 6.5-meter Magellan Clay telescope (Figure <ref>) which is optimized to image at visible wavelengths. The instrument includes an array of deformable mirrors (DM) including: a high-stroke 97 actuator woofer ALPAO DM97, a 2040 actuator tweeter DM (Boston Micromachines; BMC), and an additional DM for non-common path correction (NCPC) in the coronagraph. The woofer-tweeter DM architecture<cit.> is driven by a ∼ 3 kHz transmissive pyramid wavefront sensor. The 50×50 2040-actuator tweeter provides high order wavefront correction with low-order modes offloaded to an 11×11 97-actuator woofer DM. One unique aspect of MagAO-X is its additional DM for NCPC and Low-Order Wavefront Sensing (LOWFS) within the coronagraph arm which we recently swapped with a 1000 actuator BMC Kilo-DM as part of the MagAO-X "Phase II" upgrades<cit.>. This new NCPC DM surface required characterization to confirm device quality and for commanding the reflective face sheet to a flat state for efficient installation. The main scientific motivation for the abovementioned Phase II upgrades is to significantly enhance MagAO-X's ability to image reflected starlight at close inner-working angles (IWAs). Currently, one of the main limitations for this mode of direct imaging is noise due to non-common path aberrations (NCPAs) which are instrumental in origin and originate downstream of the wavefront sensor (WFS) arm, so the AO system is unable to correct for them in typical configurations. Furthermore, the "quasi-static" speckles due to NCPAs are difficult to remove through conventional post-processing methods (e.g., angular differential imaging<cit.>) because they evolve too slowly to be averaged and too quickly to be removed using a single speckle pattern realization for a typical science observation.
After characterization and installation of the Kilo-DM, we used it to address these concerns. Specifically, this upgrade led to faster LOWFS correction speeds and facilitated the Focus-Diversity Phase Retrieval (FDPR) and implicit Electric Field Conjugation (iEFC) algorithms on-sky for in-situ Strehl ratio optimization and quasi-static speckle removal, respectively. § DM CHARACTERIZATION We replaced MagAO-X's NCPC DM as part of the Phase II upgrades. Namely, we swapped the old ALPAO DM97 with a new BMC Kilo-DM which features a 952-actuator microelectromechanical system (MEMS) grid arranged in a 32 × 32 layout and rated for 3.5 μm of peak-to-valley (PV) surface stroke. Before the DMs could be exchanged, the Kilo-DM needed to be characterized to avoid aberrating MagAO-X's point-spread function (PSF) and to maintain accurate wavefront control. §.§ Procedures We set up and aligned the DM on a test bench such that the surface could be measured in real time with a 4D Technology PhaseCam (Twyman-Green) interferometer (Figure <ref>) utilizing a HeNe laser source (λ = 632.8 nm). We used the BMC-supplied API to command the DM and the Python API provided by 4D Technology for interferometer operation, post-processing, and writing the measurements to disk. Because the diameter of the stock source beam was not large enough to measure the full test surface in a single exposure, we made use of additional lenses to expand the beam size. These expanding optics introduced low-order aberrations which were subtracted in post-processing through the use of a reference made from a 4” flat mirror (λ/20 PV; Figure <ref>). We built an interaction matrix to command the DM surface using the same general procedures established during past characterization of MagAO-X's tweeter and woofer DMs<cit.>. Namely, we first applied a global DM surface command at a voltage bias of 70% to maximize single-actuator stroke, then measured single-actuator influence functions (IFs) by recording positive and negative DM pokes with amplitude 0.16 μm (λ/4). We then computed the difference image to remove the surface form at its bias position; see Van Gorkom et al. <cit.> for a representative IF for a MEMS DM. To expedite measurements of these IFs, we built a Python-based pipeline to automate DM and interferometer operations supported by a simple file monitoring class. On both the DM and 4D computer, the monitor process passively watched for changes to an empty file before allowing a DM command or interferometer capture to take place. For either a successful DM command or interferometer measurement, the pipeline then updated the empty file on the conjugate computer and the process continued, in sync, until all measurements were written to disk. After capturing the IFs, we flattened and stacked the images to make the response matrix. Put mathematically, s⃗ = Fa⃗ where s⃗ is a vector portraying the form of the DM surface, F is the response matrix, and a⃗ is a vector of actuator commands. Furthermore, the backbone for the closed-loop flattening operation is projection of the measured DM surface onto the interaction matrix, which is given by the pseudo-inverse of F, to obtain an approximation in DM space: a⃗ = F^†s⃗ where F^† denotes the pseudo-inverted response matrix. To summarize, we assume that we can approximate an arbitrary surface shape s⃗ given a sufficient span of orthogonal IF measurements F. Since we measured the IFs of single-actuator pokes, no IF can be reconstructed from any combination of IFs from the other actuators, and thus this condition is satisfied.
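For concreteness, the poke-based construction of the response matrix F and the pseudo-inverse projection a⃗ = F^† s⃗ described above can be sketched as follows. The names are ours, and the regularization cutoff and loop gain are illustrative rather than the values used for the actual calibration.

```python
import numpy as np

def build_response_matrix(pos_surfaces, neg_surfaces, poke_amplitude):
    """Stack single-actuator influence functions into F, where s = F @ a.
    pos/neg_surfaces: (n_actuators, ny, nx) surface maps measured at
    +/- poke_amplitude about the 70% bias position."""
    n_act = pos_surfaces.shape[0]
    ifs = (pos_surfaces - neg_surfaces) / (2.0 * poke_amplitude)
    return ifs.reshape(n_act, -1).T          # shape (n_pixels, n_actuators)

def surface_to_commands(surface_map, F, rcond=1e-3):
    """Project a measured surface onto actuator space, a = F_pinv @ s."""
    F_pinv = np.linalg.pinv(F, rcond=rcond)  # rcond: illustrative cutoff
    return F_pinv @ surface_map.ravel()

# One closed-loop flattening step (illustrative gain of 0.5):
# a_next = a_now - 0.5 * surface_to_commands(measured_surface, F)
```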
Minimizing the surface rms about the mean, we used the abovementioned interaction matrix F^† to drive the surface to a flat state, in closed-loop, using the interferometer measurements (i.e., the WFS) to obtain s⃗. We found solutions to two types of flats: a "relaxed" flat where tip/tilt and defocus aberrations were fit and removed in post-processing and thus were ignored when evaluating the fit metric, and an "absolute" flat where only tip/tilt was ignored (Figure <ref>). The relaxed flat converged in 12 iterations and the absolute flat converged in 11. We chose the relaxed flat to maximize the available stroke across all actuators<cit.> and opted to instead compensate for the remaining defocus by translating MagAO-X's science and LOWFS cameras as well as the focal plane mask along the z-axis. The DM surface rms at the bias position was ∼47 nm and was reduced to ∼10 nm after the flattening procedure (Figure <ref>). §.§ Installation and Optimization After obtaining the best-fit command map for a relaxed flat, we installed the new NCPC DM in MagAO-X in late 2023 (the right panel of Figure <ref> shows the new DM installed in the instrument). We then aligned it in the lab using the internal telescope simulator and a commanded binary pattern marking the shape and position of the expected beam footprint on the NCPC DM. However, we observed some remaining static aberrations even with the best-fit flat command. To scrub out these residual aberrations, and for general utility, we constructed an orthogonal basis for use over the area of beam footprint using Zernike modes up to Z51. In practice, this basis is useful for manually dialing out or inducing specific aberrations at the detector focal plane or for use with, e.g., the "eye-doctor"<cit.> algorithm for automated instrument Strehl ratio optimization. Figure <ref> details the process we used to construct such a basis. We note that due to the highly elliptical illuminated area on the DM, the question of how best to control the actuators outside of this region is still being actively investigated. § ON-SKY PERFORMANCE MagAO-X went on-sky with the new NCPC DM during the observing block in March—May 2024, spearheading enhancements to MagAO-X's unique ability to do coronagraph-stage, real-time wavefront control. We had ample opportunity to test the upgraded configuration during engineering time and science observations; the results of these tests are briefly discussed in the next few subsections. §.§ NCPC and LOWFS NCPAs originate within the instrument due to the disconnect between the WFS and science arms. Since NCPAs are not sensed by the high-order WFS, they become a limiting factor for the contrast at the imaging plane. One way to fight this phenomenon is to add additional WFSs further along in the optical train. MagAO-X makes use of light rejected by the reflective coating of the Lyot focal plane mask for LOWFS (yellow beam in Figure <ref>) for NCPA correction <cit.>. In addition to a considerable increase in the number of modes LOWFS can correct for, the new Kilo-DM also increased the maximum closed-loop correction speeds (from 2 kHz to 10 kHz), increasing robustness to instrumental jitter. We plan to perform more comprehensive tests with 10 kHz correction speeds as well as measuring and calibrating mode linearity with LOWFS during the next MagAO-X observing block in Fall 2024.
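As a rough illustration of the modal-basis construction described in the Installation and Optimization subsection above, the sketch below builds a handful of low-order polynomial modes over an elliptical beam footprint and re-orthonormalizes them on that footprint. The actual basis uses Zernike modes up to Z51; the footprint geometry, mode count, and all names here are placeholders.

```python
import numpy as np

def footprint_modal_basis(ny, nx, center, semi_axes, n_modes=6):
    """Orthonormal low-order modes over an elliptical beam footprint
    (a stand-in for the Zernike basis up to Z51 described above)."""
    yy, xx = np.mgrid[0:ny, 0:nx]
    u = (xx - center[1]) / semi_axes[1]   # normalized footprint coordinates
    v = (yy - center[0]) / semi_axes[0]
    mask = (u**2 + v**2) <= 1.0
    u, v = u[mask], v[mask]
    # Piston, tip, tilt, defocus and two astigmatism-like terms.
    raw = np.stack([np.ones_like(u), u, v,
                    2.0 * (u**2 + v**2) - 1.0,
                    u**2 - v**2,
                    2.0 * u * v], axis=1)[:, :n_modes]
    modes, _ = np.linalg.qr(raw)          # re-orthonormalize on the mask
    return modes, mask

# A chosen amplitude of, e.g., the defocus mode (re-embedded on the full
# grid using `mask`) can then be mapped to actuator commands with the
# pseudo-inverted interaction matrix from the previous sketch.
```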
§.§ Mitigating quasi-static speckles with iEFC Quasistatic speckles are a source of noise that stems from NCPAs and are debilitating for AO imaging performance at close IWAs<cit.>. During the last MagAO-X observing run in 2024A, we demonstrated focal plane wavefront sensing, on-sky, using the iEFC algorithm<cit.> to mitigate this noise source. Briefly, this algorithm combines Electric Field Conjugation<cit.> with an empirical calibration step via the NCPC DM (i.e., no optical model is needed to reconstruct the electric field). With knowledge of the morphology of the electric field, the NCPC DM can then cancel it by applying the opposite electric field. Our on-sky iEFC efforts produced a rectangular dark hole of size 11 × 13 λ/D, which is a region of high contrast almost completely devoid of quasistatic speckles (Figure <ref>). These results will be described in detail in a later publication (Haffert et al., in prep). One avenue for future work is studying the effects of DM registration to the detector plane. §.§ In-situ Strehl ratio optimization with FDPR The FDPR algorithm<cit.> is one way to measure the spatial distributions of phase and amplitude at the pupil plane in an AO system. At a high level, FDPR attempts to minimize the error between PSF models and images at multiple planes of defocus to avoid solution degeneracies. We report successful on-sky use of closed-loop FDPR leading to drastic improvements in Strehl ratio as part of an initial calibration step before observing a given science target. To supply the algorithm with the necessary components, we use the NCPC DM to apply precise amounts of defocus to the PSF image, which is recorded on one of MagAO-X's science cameras. After the requested number of iterations is completed on a bright (V ≲ 5 mag) star, we use the NCPC DM command map that minimizes differences between the PSF images and models as the new flat command for any subsequent science observations in a nearby portion of the sky. Figure <ref> shows the degree of improvement for the Strehl ratio after 10 iterations of closed-loop FDPR while imaging through a narrow CH4 filter (875 nm) on the bright star β Corvis. We note, however, that FDPR makes instantaneous, band-limited measurements of the Strehl ratio, thus neglecting vibrations; the true Strehl ratio is likely lower by about 0.1 to 0.2. Currently, for on-sky use, the defocused PSF images are first averaged for ∼ 5 seconds to temper the effects of atmospheric turbulence before feeding them into the algorithm. There would be value in further study to determine optimal FDPR parameters based on the current observing conditions (e.g., how do we best average the PSF images over turbulence?). § SUMMARY AND FUTURE WORK We upgraded MagAO-X's NCPC DM to a BMC Kilo-DM as part of a comprehensive suite of ongoing upgrades<cit.> to be completed in the next couple of years. We detailed successful characterization of the device prior to installation within the instrument. We also shared our on-sky results using the new NCPC DM during the MagAO-X observing block in 2024A. The key takeaways from our on-sky tests with this new DM are listed as follows: * MagAO-X's unique ability to do coronagraph-stage wavefront control has been greatly improved. * We demonstrated quasi-static speckle removal in an 11 × 13 λ/D control radius "dark hole" without any loss of stroke from the 2040-actuator tweeter DM.
* The FDPR algorithm is an effective way to drastically improve Strehl ratio as part of an initial calibration step before observing a given science target. * LOWFS on MagAO-X is much improved with faster correction speeds, increasing robustness to instrumental jitter. We are thankful for support by the NSF MRI Award no. 1625441 (MagAO-X). This work and the rest of the MagAO-X Phase II upgrades are made possible by the generous support by the Heising-Simons Foundation.
http://arxiv.org/abs/2407.13454v1
20240718123155
Processing of hydrocarbon dust in star-forming galaxies revealed with AKARI
[ "Tsubasa Kondo", "Akino Kondo", "Katsuhiro L. Murata", "Takuma Kokusho", "Shinki Oyabu", "Toyoaki Suzuki", "Risako Katayama", "Hidehiro Kaneda" ]
astro-ph.GA
[ "astro-ph.GA" ]
t.kondo@u.phys.nagoya-u.ac.jp 1Graduate School of Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8602, Japan 2Okayama Observatory, Kyoto University, 3037-5 Honjo, Kamogata-cho, Asakuchi, Okayama, 719-0232, Japan 3Institute of Liberal Arts and Sciences, Tokushima University, 1-1 Minami-josanjima-cho, Tokushima-shi, Tokushima, 770-8502, Japan 4Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa, 252-5210 Japan infrared: galaxies — galaxies: star formation — galaxies: ISM Processing of hydrocarbon dust in star-forming galaxies revealed with AKARI Hidehiro Kaneda1 =============================================================================== § ABSTRACT Hydrocarbon dust is one of the dominant components of interstellar dust, which mainly consists of polycyclic aromatic hydrocarbons and aliphatic hydrocarbons. While hydrocarbon dust is thought to be processed in interstellar radiation fields or shocks, detailed processing mechanisms are not completely understood yet. We investigate the processing of hydrocarbon dust by analyzing the relation between the luminosities emitted by hydrocarbon dust and the total infrared luminosities (L_IR) for 138 star-forming galaxies at redshift z < 0.3. Using near-infrared 2.5–5 μ m spectra obtained with AKARI, we derived the luminosities of the aromatic hydrocarbon feature at 3.3 μ m (L_aromatic) and the aliphatic hydrocarbon feature at 3.4–3.6 μ m (L_aliphatic). We also derived L_IR and the radiation field strength by modeling the spectral energy distributions of the 138 galaxies with AKARI, WISE and IRAS photometry data. We find that galaxies with higher L_IR tend to exhibit lower L_aliphatic/L_aromatic ratios. Furthermore, we find that there is an anti-correlation between L_aliphatic/L_aromatic ratios and the radiation field strength, and also that the galaxies with low L_aliphatic/L_aromatic ratios are dominated by merger galaxies. These results support that hydrocarbon dust is processed through photodissociation in strong radiation fields and/or shocks during merging processes of galaxies; the L_aliphatic/L_aromatic ratio is likely to decrease in such harsh interstellar conditions since the aliphatic bonds are known to be chemically weaker than the aromatic bonds. § INTRODUCTION There are various types of interstellar dust with different sizes, compositions and structures. Hydrocarbon is one of the most important ingredients of interstellar dust, the presence of which is often identified through near- and mid-infrared (IR) spectral features due to vibrations of C–H or C–C bonds of aromatic or aliphatic hydrocarbons. Their smallest forms are known as polycyclic aromatic hydrocarbons (PAHs) or mixed aromatic-aliphatic organic nanoparticles (MAONs; <cit.>, k z2013). It has also been proposed that small hydrocarbon dust is mostly hydrogenated amorphous carbon materials (a-C:H; <cit.>), which are associated with aliphatic to aromatic transformation. Both aromatic and aliphatic hydrocarbons are mostly composed of carbon and hydrogen, and hydrocarbon dust is thought to contain 50–1000 carbon atoms (<cit.>). The primary difference between them is their structure; aromatic hydrocarbons contain benzene rings, while aliphatic hydrocarbons are organic chemical compounds which do not contain benzene rings. These hydrocarbons are excited by UV and optical radiation from stars and emit IR radiation in the near- to mid-IR wavelengths. 
The hydrocarbon emission is observed in a variety of objects and environments, including planetary nebulae, reflection nebulae, young stellar objects, star-forming galaxies and even quiescent galaxies (e.g., <cit.>; <cit.>; <cit.>; <cit.>; <cit.>). Aromatic hydrocarbons show strong emission features at 3.3, 6.2, 7.7, 8.6, 11.3 and 12.7 μ m (e.g., <cit.>), while aliphatic hydrocarbons show emission features at 3.4 and 6.9 μ m (e.g., <cit.>). The hydrocarbon features show wide diversities in the spectral position, shape and relative intensity. These diversities come from different properties of hydrocarbon dust, such as the chemical composition, ionization state and size distribution (e.g., <cit.>; <cit.>; <cit.>). Several past studies have reported the relationship between the hydrocarbon features and the interstellar conditions, and proposed the processing mechanism of hydrocarbon dust (e.g., <cit.>, <cit.>). For example, for the nearby edge-on starburst galaxy M82, <cit.> indicated that the aliphatic to aromatic flux ratio increases with the distance from the galactic center. They concluded that shocks by galactic superwinds produce small carbonaceous grains which emit the aliphatic features through fragmentation of larger carbonaceous grains, while dense molecular clouds protect aromatic hydrocarbons from the fragmentation near the galactic center region. <cit.> found that the aliphatic to aromatic intensity ratio decreases with the intensity of the radiation field in the reflection nebula NGC 7023, indicating that aliphatic C–H bonds are photodissociated by strong UV radiation. For hydrocarbon dust around young stars, <cit.> also found that the aliphatic C–H bonds are more easily destroyed than the aromatic C–H bonds by UV photons. In Galactic H II regions, <cit.> found that the ratio of the aromatic to the aliphatic hydrocarbon intensity decreases with the ionization degree of PAH that is affected by the UV radiation field, which is likely to be caused by the fact that aliphatic hydrocarbons are more fragile than aromatic hydrocarbons. As a result of the JWST (James Webb Space Telescope; <cit.>) observation of the nearby galaxy NGC 7469, <cit.> found that the ratio of the aromatic to the aliphatic intensity varies spatially in its star-forming ring. Toward the galactic nucleus of NGC 7469, the ratio shows a decreasing trend, which again indicates that the aliphatic hydrocarbons are preferentially destroyed by harder radiation fields around its active galactic nucleus (AGN). Hence it has been shown that the aliphatic to the aromatic flux ratios are significantly variable, indicating that hydrocarbon dust can be processed in interstellar radiation fields and/or shocks. Yet, there are few statistical studies on the relationships between the aliphatic and aromatic components for a large number of objects with different interstellar conditions. Among past studies closely related to the present one, <cit.> confirmed that the aromatic hydrocarbon to the total infrared luminosity ratios decrease with the total infrared luminosities for 101 star-forming galaxies at redshift z < 0.3. This trend was particularly seen in the ultra-luminous infrared galaxies (ULIRGs: L_IR > 10^12 L_⊙). They suggested that shocks induced by galaxy mergers might have destroyed aromatic hydrocarbons. Moreover, <cit.> presented evidence that aromatic hydrocarbons are destroyed not only by shocks but also by strong radiation fields, analyzing the physical properties of merger and non-merger galaxies. 
In this paper, we systematically explore the relation of the aliphatic and aromatic hydrocarbon dust features with physical properties of star-forming galaxies, using a large number of near-IR spectra (2.5–5 μ m) obtained with the IRC (Infrared Camera; <cit.>) aboard AKARI (<cit.>). § DATA ANALYSIS §.§ AKARI mid-IR excess sample Our sample consists of 230 galaxies, which are based on the AKARI mission programs, Mid-infrared Search for Active Galactic Nuclei (MSAGN: <cit.>) and Evolution of ULIRGs and AGNs (AGNUL: <cit.>, imanishi2010). These two programs performed near-IR spectroscopy at 2.5–5 μ m wavelengths with a spectral resolution of R ∼ 120 using AKARI/IRC (<cit.>). In <cit.>, 184 galaxies were selected according to the criterion of F(9 or 18 μ m)/F(K_s) > 2, where F(9 or 18 μ m) and F(K_s) are the flux densities at 9 or 18 μ m and in the K_s band obtained with AKARI and 2MASS, respectively. In this study, we re-evaluated F(9 or 18 μ m) with photometry using the latest version of the AKARI mid-IR all-sky map (<cit.>). As a result, the number of sample galaxies (230) has increased by 46 from the sample in <cit.>. These sample galaxies are mid-IR-excess sources, which are regarded as infrared galaxies with active star formation and/or AGNs. §.§ Near-IR spectral fitting We carry out model fitting to the AKARI/IRC 2.5–5 μ m spectra which include various spectral features of the interstellar medium, such as aromatic hydrocarbon, aliphatic hydrocarbon, H_2O ice, and atomic and molecular hydrogen. In order to study the hydrocarbon features, we model the spectra as follows: F_ν = e^-τ(F_continuum + F_aromatic + F_aliphatic), where τ is the optical depth for the H_2O ice absorption, and F_continuum, F_aromatic and F_aliphatic are the flux densities of the near-IR continuum, aromatic hydrocarbon feature at 3.3 μ m and aliphatic hydrocarbon features at 3.4–3.6 μ m, respectively. We assume the full screen geometry for the H_2O ice absorption. For the near-IR continuum, we use a power-law model, which represents thermal dust and stellar continua and is defined as F_continuum∝λ^Γ. We use a Drude profile (<cit.>) for the 3.3 μ m aromatic hydrocarbon feature, which is defined as F_aromatic = a b^2/[(λ / λ_r) - (λ_r / λ)]^2 + b^2, where λ_r, a and b are the center wavelength of the feature, the peak flux density at λ_r and the full width at half maximum (FWHM) divided by λ_r, respectively. We fix λ_r at 3.3 μ m in the rest frame. The aliphatic hydrocarbon features at 3.4–3.6 μ m are thought to be composed of several components. Here we assume 4 representative components, the center wavelengths of which are 3.41, 3.46, 3.51 and 3.56 μ m. We use a Lorentzian profile for the 3.41 μ m feature and Gaussian profiles for the 3.46, 3.51 and 3.56 μ m features (<cit.>; <cit.>). These models are selected depending on whether or not the emission features are spectrally resolved with the AKARI/IRC. We fix the widths of the 3.46, 3.51 and 3.56 μ m features at the instrumental resolution of 0.034 μ m. Finally, we use a Lorentzian profile for the H_2O ice absorption feature, the center wavelength of which is fixed at 3.05 μ m. With this fitting, we calculate the aromatic and aliphatic hydrocarbon luminosities (L_aromatic and L_aliphatic). §.§ Spectral energy distribution fitting We carry out the fitting of spectral energy distributions (SEDs), using near- to far-IR photometric data obtained with AKARI, WISE (Wide-field Infrared Survey Explorer; <cit.>) and IRAS (Infrared Astronomical Satellite; <cit.>). 
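Stepping back to the near-IR spectral model of Sect. 2.2 for a moment, its components (a power-law continuum, a Drude profile at 3.3 μm, a Lorentzian plus three fixed-width Gaussians for the aliphatic features, and a Lorentzian H_2O-ice optical depth) can be written down directly. The sketch below uses our own parameter names and treats the fixed 0.034 μm width as the Gaussian width parameter, which is an assumption on our part.

```python
import numpy as np

def drude(wl, peak, center, fwhm_frac):
    """Drude profile: a*b^2 / ((wl/center - center/wl)^2 + b^2)."""
    b = fwhm_frac
    return peak * b**2 / ((wl / center - center / wl)**2 + b**2)

def gaussian(wl, peak, center, sigma):
    return peak * np.exp(-0.5 * ((wl - center) / sigma)**2)

def lorentzian(wl, peak, center, gamma):
    return peak * gamma**2 / ((wl - center)**2 + gamma**2)

def nir_model(wl, cont_amp, gamma_slope, arom_peak, arom_fwhm_frac,
              aliph_peaks, aliph_341_gamma, tau_ice, ice_gamma):
    """F_nu = exp(-tau) * (continuum + 3.3 um aromatic + 3.4-3.6 um
    aliphatic features); aliph_peaks holds the four aliphatic amplitudes,
    and the 3.46/3.51/3.56 um widths are fixed at 0.034 um."""
    continuum = cont_amp * wl**gamma_slope
    aromatic = drude(wl, arom_peak, 3.3, arom_fwhm_frac)
    aliphatic = (lorentzian(wl, aliph_peaks[0], 3.41, aliph_341_gamma)
                 + gaussian(wl, aliph_peaks[1], 3.46, 0.034)
                 + gaussian(wl, aliph_peaks[2], 3.51, 0.034)
                 + gaussian(wl, aliph_peaks[3], 3.56, 0.034))
    tau = lorentzian(wl, tau_ice, 3.05, ice_gamma)
    return np.exp(-tau) * (continuum + aromatic + aliphatic)
```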
We use the same photometric data and SED model based on DustEM (<cit.>) as presented in <cit.>. The SED model consists of the following three dust components: PAHs, amorphous silicate (aSil) and hydrogenated amorphous carbon (amC). PAHs are composed of neutral and ionized PAHs. We fit the SEDs of the sample galaxies on the condition that the abundance ratio of the neutral to ionized PAHs is free. The amC is divided into large amC (LamC) and small amC (SamC) based on the dust size. In addition, we include two blackbody components with the temperatures of 600 K and 3000 K, which represent a hot dust and a stellar continuum, respectively. Finally, we calculate the total infrared luminosity (L_IR) and the radiation field strength (G_0) to estimate the physical environments of the sample galaxies. Here, G_0 is the far-UV (6.0–13.6 eV) radiation field strength normalized to the solar neighborhood value (<cit.>). §.§ The final sample We classify the sample galaxies into three categories according to the star formation activity and the presence of AGNs evaluated by the near-IR spectral fitting. If star formation is active in a galaxy, we detect a strong 3.3 μ m aromatic hydrocarbon feature. On the other hand, if there is an AGN in a galaxy, PAHs are destroyed in strong radiation fields near the AGN, and therefore the 3.3 μ m aromatic hydrocarbon feature is expected to be suppressed. Hence, we regard the equivalent width of the 3.3 μ m aromatic hydrocarbon feature, EW_3.3 μ m, as a tracer of the star formation activity. In this study, the star formation is considered to be active in the case of EW_3.3 μ m > 40 nm (<cit.>; <cit.>; <cit.>). Moreover, AGNs make near-IR continuum slopes steeper due to the contribution of a hot dust continuum. Therefore, we regard the near-IR continuum slope, Γ (F_continuum ∝ λ^Γ), as a standard of whether or not the contamination of AGNs is strong. In this study, the contamination of AGNs is considered to be strong in the case of Γ>1 (<cit.>; <cit.>). Consequently, the sample galaxies are classified into the following 3 categories: (1) star-forming galaxies (EW_3.3 μ m > 40 nm and Γ < 1), (2) composite galaxies which show both star-forming and AGN activities (EW_3.3 μ m > 40 nm and Γ > 1) and (3) AGN-dominated galaxies (EW_3.3 μ m < 40 nm and Γ > 1). Figure <ref> shows examples of the spectra thus categorized. Furthermore, we also classify the sample galaxies into another 3 categories according to their L_IR derived from the SED fitting. The galaxies are classified into the following 3 categories: (1) infrared galaxies (IRGs; L_IR < 10^11 L_⊙), (2) luminous infrared galaxies (LIRGs; 10^11 L_⊙ < L_IR < 10^12 L_⊙ ) and (3) ultra-luminous infrared galaxies (ULIRGs; L_IR > 10^12 L_⊙). Figure <ref> shows examples of the SED fitting results for star-forming galaxies belonging to these categories. Finally, our sample contains 138 star-forming galaxies, 21 composite galaxies and 19 AGN-dominated galaxies. We excluded the other 52 galaxies from our original sample of the 230 galaxies because they were not fitted well by the spectral model or the SED model on a χ^2 test with a 90 % confidence level. The spectral types and the luminosity classes are summarized in table <ref>. In this paper, we discuss the 138 star-forming galaxies, which consist of 15 IRGs, 74 LIRGs and 49 ULIRGs. § RESULTS We examine the relations between the hydrocarbon emission luminosities (L_aromatic and L_aliphatic) and the total infrared luminosities (L_IR) for the 138 star-forming galaxies. 
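As a compact restatement of the classification scheme above, the sketch below applies the quoted thresholds (EW_3.3 μ m = 40 nm, Γ = 1, and L_IR = 10^11 and 10^12 L_⊙); how galaxies falling outside the three spectral categories are handled is an assumption, since the text does not specify it.

```python
# Classification thresholds follow the text; the "unclassified" fallback is assumed.
def spectral_type(ew_33_nm, gamma):
    if ew_33_nm > 40 and gamma < 1:
        return "star-forming"
    if ew_33_nm > 40 and gamma > 1:
        return "composite"
    if ew_33_nm < 40 and gamma > 1:
        return "AGN-dominated"
    return "unclassified"

def luminosity_class(l_ir_lsun):
    if l_ir_lsun < 1e11:
        return "IRG"
    if l_ir_lsun < 1e12:
        return "LIRG"
    return "ULIRG"

# Example: a galaxy with EW_3.3 = 55 nm, Gamma = 0.4, L_IR = 5e11 L_sun
print(spectral_type(55.0, 0.4), luminosity_class(5e11))  # -> star-forming LIRG
```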
Figures <ref>a and <ref>b display L_aromatic/L_IR and L_aliphatic/L_IR, respectively, plotted against L_IR. As already reported in <cit.>, L_aromatic/L_IR significantly decreases with L_IR at high L_IR (≥ 10^11 L_⊙), while L_aromatic/L_IR shows a nearly constant value (L_aromatic/L_IR ∼ 10^-3) at low L_IR (< 10^11 L_⊙). As a new result, we find that L_aliphatic/L_IR also significantly decreases with L_IR, similarly to the trend of L_aromatic/L_IR. These results are reasonable for the interpretation that both aromatic and aliphatic hydrocarbons are of the same carriers for the band emission (<cit.>, k z2013). Figure <ref> shows the relationship between L_aliphatic/L_aromatic and L_IR. As can be seen in the figure, L_aliphatic/L_aromatic shows a decreasing trend with a large scatter at higher L_IR. For a robustness check of this trend, we carry out model fitting for the three spectra which are created by stacking the spectra at the wavelengths of 2.6–3.9 μ m in their rest frames for each luminosity class, normalizing the flux densities at 3.3 μ m, the peak wavelength of the aromatic hydrocarbon feature. Figure <ref> displays the result of fitting the spectra stacked for IRGs, LIRGs and ULIRGs, where the best-fit L_aliphatic/L_aromatic are 0.262 ± 0.012, 0.247 ± 0.007 and 0.111 ± 0.012 for IRGs, LIRGs and ULIRGs, respectively. As shown in figure <ref>, L_aliphatic/L_aromatic indeed decreases with the luminosity class, and thus it is likely that hydrocarbon dust tends to be more aromatic-rich in galaxies with higher L_IR. § DISCUSSION Below we discuss main causes to generate the trend of L_aliphatic/L_aromatic as seen in figure <ref>. §.§ The effect of the absorption feature due to carbonaceous dust First of all, in order to investigate whether or not the variations of L_aliphatic/L_aromatic are really intrinsic, we evaluate the effect of the absorption feature due to carbonaceous dust. For example, a significant fraction of dusty AGNs are known to exhibit the absorption feature at the wavelength of 3.4 μ m due to cooler carbonaceous dust (e.g., <cit.>, imanishi2010). If the absorption is significant in our sample, the tendency between L_aliphatic/L_aromatic and L_IR could be affected, since we do not consider the 3.4 μ m absorption feature in our spectral fitting. It is difficult to separately estimate the aliphatic hydrocarbon emission and the carbonaceous dust absorption in the spectral fitting. Instead, we assume that the optical depth of carbonaceous dust, τ_3.4 μ m, correlates with the optical depth of H_2O ice, τ_3.1 μ m, where we take τ_3.4 μ m/τ_3.1 μ m of 0.85 as derived from AKARI observations (<cit.>, imanishi2010), and estimate τ_3.4 μ m from the measured τ_3.1 μ m. Figure <ref> shows the relationship between L_aliphatic/L_aromatic thus corrected for the possible presence of the absorption feature and L_IR, from which we find that the absorption-corrected L_aliphatic/L_aromatic still decreases with L_IR significantly. Hence this result rules out a possibility that the carbonaceous dust absorption is the main cause of the decrease in L_aliphatic/L_aromatic, verifying that L_aliphatic/L_aromatic is intrinsically variable. §.§ The effect of photodissociation in strong radiation fields Here and hereafter we consider the processing mechanism of hydrocarbon dust to change L_aliphatic/L_aromatic. First, we evaluate the effect of strong radiation fields using G_0 derived from the SED fitting. Figure <ref> shows L_aliphatic/L_aromatic plotted against G_0. 
All the galaxies with very low L_aliphatic/L_aromatic (< 0.05) show G_0 larger than 30, indicating that there is some threshold on G_0 to significantly reduce L_aliphatic/L_aromatic. The threshold on G_0 suggests that the strong radiation field is likely to cause the decrease in L_aliphatic/L_aromatic, since the aliphatic bonds are known to be chemically weaker than the aromatic bonds for photodissociation. Yet there is a large scatter in L_aliphatic/L_aromatic at G_0 larger than 30 where some galaxies show relatively high L_aliphatic/L_aromatic. Hence we cannot explain the variation of L_aliphatic/L_aromatic by G_0 alone. §.§ The effect of shocks with galaxy mergers Next, we explore the relationship between L_aliphatic/L_aromatic and the galaxy merger. Most galaxies with high L_IR like ULIRGs are thought to have experienced galaxy mergers. The merger of gas-rich galaxies triggers starburst with large-scale shocks, considerably affecting the interstellar environments of the galaxies. It is generally known that the scale and the speed of the shocks increase at later stages of the galaxy merger (<cit.>). <cit.> investigated the presence and the stage of the galaxy merger for their sample of 55 star-forming galaxies, all of which are included in our galaxy sample and categorized as 15 early-stage merger galaxies, 20 late-stage merger galaxies and 20 non-merger galaxies. Figure <ref>a shows the relationship between L_aliphatic/L_aromatic and L_IR as classified with the merger stage. The average values of L_aliphatic/L_aromatic for non-merger galaxies, early-stage merger galaxies and late-stage merger galaxies are 0.306 ± 0.175, 0.263 ± 0.104 and 0.176 ± 0.144, respectively. According to a t-test with a 95% confidence level, there is a significant difference in the distribution of L_aliphatic/L_aromatic between the non-merger galaxies and the merger galaxies; the group of aliphatic hydrocarbons may selectively be removed in merger galaxies, causing a change in the structure of hydrocarbon dust. On the other hand, there is no significant difference in the distribution of L_aliphatic/L_aromatic between the early-stage merger galaxies and the late-stage merger galaxies, according to a t-test with a 95% confidence level. Therefore, although the galaxy merger is likely to be one of the main causes for the processing of hydrocarbon dust, its dependence on the merger stage is not clear. Figure <ref>b shows the relationship between L_aliphatic/L_aromatic and G_0 as classified with the merger stage. As can be seen in the figure, above the threshold of G_0 = 30, there is a clear difference in L_aliphatic/L_aromatic between the early-stage merger galaxies and the late-stage merger galaxies, although the statistics are rather poor. Only late-stage merger galaxies show very low L_aliphatic/L_aromatic (< 0.05) at high G_0 (> 30), and thus it is likely that strong shocks significantly affect the structure of hydrocarbon dust at the late stage of the galaxy merger. In order to further probe the environments of the merger galaxies, we examine the relationship between L_aliphatic/L_aromatic and [O I]λ6300Å/Hα, a shock tracer as used in <cit.>, in figure <ref>, where we find that L_aliphatic/L_aromatic is anti-correlated with [O I]/Hα for the merger galaxies with high G_0 (> 30). Therefore, the strength of the shock in the galaxy merger is indeed a likely cause to modify the structure of hydrocarbon dust. 
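For readers who want to reproduce the flavor of the comparison quoted above, the following hedged sketch runs a Welch two-sample t-test on the summary statistics; it assumes the quoted ± values are sample standard deviations (they could instead be standard errors), and it compares against the late-stage subsample rather than the full merger group used in the text.

```python
# t-test sketch from summary statistics (means, assumed std devs, group sizes from the text)
from scipy.stats import ttest_ind_from_stats

groups = {
    "non-merger":  (0.306, 0.175, 20),
    "early-stage": (0.263, 0.104, 15),
    "late-stage":  (0.176, 0.144, 20),
}

def compare(name_a, name_b):
    m1, s1, n1 = groups[name_a]
    m2, s2, n2 = groups[name_b]
    t, p = ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)
    print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.3f}")

compare("non-merger", "late-stage")
compare("early-stage", "late-stage")
```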
§.§ A possible contribution of AGNs in the star-forming galaxies Based on the near-IR continuum slope, Γ, which is expected to reflect the degree of the contribution from AGNs, we have removed AGNs (i.e., AGN-dominated and composite galaxies) from our sample. Nevertheless, the remaining star-forming galaxies, especially those with Γ near the threshold (Γ = 1) for the classification, may have a non-negligible contribution from relatively weak AGNs, which can still process and destroy the hydrocarbon dust. In order to check this possibility, we investigate the dependence of L_aliphatic/L_aromatic on Γ to evaluate a possible contribution of such AGNs in the star-forming galaxies. In figures <ref>a and <ref>b, we show the same plots as figures <ref> and <ref>, respectively, but color-coded with Γ, in which we confirm that there is no systematic difference in the L_aliphatic/L_aromatic–L_IR and L_aliphatic/L_aromatic–G_0 distributions with different Γ. Hence it is unlikely that our sample star-forming galaxies suffer from a significant contribution of weak AGNs. Moreover, we match our sample with the XMM-Newton X-ray source catalog (<cit.>) and find that 27 star-forming galaxies in our sample are detected in the X-ray. We use the information on the X-ray luminosity, L_X, and the X-ray hardness ratio, HR, the latter of which is defined as HR = (CR_2.0-12.0 keV - CR_0.2-2.0 keV)/(CR_2.0-12.0 keV + CR_0.2-2.0 keV), where CR_0.2-2.0 keV and CR_2.0-12.0 keV are the X-ray photon count rates of XMM-Newton/EPIC in the energy bands of 0.2–2.0 keV and 2.0–12.0 keV, respectively. Figure <ref> shows the relationships between L_aliphatic/L_aromatic and these X-ray parameters. In figure <ref>a, we find that L_aliphatic/L_aromatic is not correlated with L_X, confirming that the decrease in L_aliphatic/L_aromatic is not caused by AGNs, which are expected to have high L_X. On the other hand, figure <ref>b shows that L_aliphatic/L_aromatic is anti-correlated with the X-ray hardness ratio. Besides AGNs, supernovae in the starburst caused by the galaxy merger can also increase the X-ray hardness ratio in the star-forming galaxies, where shocks and hot plasma environments associated with young supernova remnants may have processed the hydrocarbon dust toward an aromatic-rich composition. § CONCLUSION Based on the luminosities of the aromatic hydrocarbon feature at 3.3 μ m (L_aromatic) and the aliphatic hydrocarbon feature at 3.4–3.6 μ m (L_aliphatic) estimated with the AKARI 2.5–5 μ m spectra, we have studied the relationships between L_aromatic, L_aliphatic and the total infrared luminosity, L_IR, for 138 star-forming galaxies. As a result, we find that galaxies with higher L_IR (i.e., LIRGs and ULIRGs) tend to exhibit lower L_aliphatic/L_aromatic ratios. The galaxies with low L_aliphatic/L_aromatic ratios are dominated by merger galaxies, showing strong radiation fields (G_0) estimated with the total infrared SED and strong shocks traced by [O I]/Hα. We search for any non-negligible effect of AGNs contaminating the star-forming galaxy sample and find no clear signature of such contamination, except for a significant anti-correlation of L_aliphatic/L_aromatic with the X-ray hardness ratio. The overall results indicate that hydrocarbon dust is destroyed through photodissociation in strong radiation fields and/or shocks during the merging processes of galaxies.
Since the aliphatic bonds are known to be chemically weaker than the aromatic bonds, the group of aliphatic hydrocarbons may selectively be removed, causing a change in the structure of hydrocarbon dust to the aromatic-rich. This research is based on observations with AKARI, a JAXA project with the participation of ESA. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration, and data products from the Infrared Astronomical Satellite (IRAS), which is a joint project of the US, UK and the Netherlands. This work is financially supported by JST SPRING, Grant Number JPMJSP2125. The principal author thanks the “THERS Make New Standards Program for the Next Generation Researchers.” [Acke et al.(2010)]acke2010 Acke, B., Bouwman, J., Juhász, A., Henning, T., van den Ancker, M. E., Meeus, G., Tielens, A. G. G. M., & Waters, L. B. F. M. 2010, , 718, 558. doi:10.1088/0004-637X/718/1/558 [Brandl et al.(2006)]brandl2006 Brandl, B. R., et al. 2006, , 653, 1129. doi:10.1086/508849 [Compiègne et al.(2011)]compiegne2011 Compiègne, M., et al. 2011, , 525, A103. doi:10.1051/0004-6361/201015292 [Draine & Li(2007)]d l2007 Draine, B. T. & Li, A. 2007, , 657, 810. doi:10.1086/511055 [Elbaz et al.(2011)]elbaz2011 Elbaz, D., et al. 2011, , 533, A119. doi:10.1051/0004-6361/201117239 [Galliano et al.(2008)]galliano2008 Galliano, F., Madden, S. C., Tielens, A. G. G. M., Peeters, E., & Jones A. P. 2008, , 679, 310. doi:10.1086/587051 [Gardner et al.(2023)]gardner2023 Gardner, J. P., Mather, J. C., Abbott, R., et al. 2023, , 135, 068001. doi:10.1088/1538-3873/acd1b5 [Habing(1968)]habing1968 Habing, H. J. 1968, , 19, 421 [Hemachandra et al.(2015)]hemahcandra2015 Hemachandra, D., et al. 2015, , 454, 818. doi:10.1093/mnras/stv2001 [Imanishi & Dudley(2000)]imanishi2000 Imanishi, M. & Dudley, C. C. 2000, , 545, 701. doi:10.1086/317863 [Imanishi et al.(2008)]imanishi2008 Imanishi M., Nakagawa T., Ohyama Y., Shirahata M., Wada T., Onaka T., Oi N., 2008, PASJ, 60, S489. doi:10.1093/pasj/60.sp2.S489 [Imanishi et al.(2010)]imanishi2010 Imanishi, M., Nakagawa, T., Shirahata M., Ohyama, Y., & Onaka, T. 2010, , 721, 1233. doi:10.1088/0004-637X/721/2/1233 [Ishihara et al.(2010)]ishihara2010 Ishihara, D., et al. 2010, , 514, A1. doi:10.1051/0004-6361/200913811 [Jones et al.(2017)]jones2017 Jones A. P., Köhler M., Ysard N., Bocchio M., & Verstraete L., 2017, A&A, 602, A46. doi:10.1051/0004-6361/201630225 [Kaneda et al.(2005)]kaneda2005 Kaneda, H., Onaka, T., & Sakon, I. 2005, , 632, L83. doi:10.1086/497913 [Kaneda et al.(2008)]kaneda2008 Kaneda, H., Onaka, T., Sakon, I., Kitayama, T., Okada, Y., & Suzuki, T. 2008, , 684, 270. doi:10.1086/590243 [Kondo et al.(2012)]kondo2012 Kondo, T., et al. 2012, , 751, L18. doi:10.1088/2041-8205/751/1/L18 [Kwok(2007)]kwok2007 Kwok, S. 2007, Physics and Chemistry of the Interstellar Medium by Sun Kwok. University Science Books, 2007. [Kwok & Zhang(2011)]k z2011 Kwok, S., & Zhang, Y. 2011, , 479, 80. doi:10.1038/nature10542 [Kwok & Zhang(2013)]k z2013 Kwok, S., & Zhang, Y. 2013, , 771, 5. doi:10.1088/0004-637X/771/1/5 [Lai et al.(2023)]lai2023 Lai, T. S.-Y., et al. 2023, , 957, L26. doi:10.3847/2041-8213/ad0387 [Li(2020)]li2020 Li, A. 2020, Nature Astronomy, 4, 339. doi:10.1038/s41550-020-1051-1 [Li & Draine(2001)]l d Li, A. & Draine, B. T. 2001, , 554, 778. 
doi:10.1086/323147 [Moorwood(1986)]moorwood1986 Moorwood, A. F. M. 1986, , 166, 4 [Mori et al.(2012)]mori2012 Mori, T. I., Sakon, I., Onaka, T., Kaneda, H., Umehat,a H., & Ohsawa, R. 2012, , 744, 68. doi:10.1088/0004-637X/744/1/68 [Mori et al.(2014)]mori2014 Mori, T. I., Onaka, T., Sakon, I., Ishihara, D., Shimonishi, T., Ohsawa, R., Bell, A. C. 2014, , 784, 53. doi:10.1088/0004-637X/784/1/53 [Murakami et al.(2007)]murakami2007 Murakami, H., et al. 2007, , 59, S369. doi:10.1093/pasj/59.sp2.S369 [Murata et al.(2017)]murata2017 Murata, K. L., Yamada, R., Oyabu, S., Kaneda, H., Ishihara, D., Yamagishi, M., Kokusho, T., & Takeuchi, T. T. 2017, , 472, 39. doi:10.1093/mnras/stx1902 [Neugebauer et al.(1984)]neugebauer1984 Neugebauer, G., et al. 1984, , 278, L1. doi:10.1086/184209 [Ohyama et al.(2007)]ohyama2007 Ohyama, Y., et al. 2007, , 59, S411. doi:10.1093/pasj/59.sp2.S411 [Onaka et al.(2007)]onaka2007 Onaka, T., et al. 2007, , 59, S401. doi:10.1093/pasj/59.sp2.S401 [Oyabu et al.(2011)]oyabu2011 Oyabu, S., et al. 2011, , 529, A122. doi:10.1051/0004-6361/201014221 [Peeters et al.(2002)]peeters2002 Peeters, E., Hony, S., Van Kerckhoven, C., Tielens, A. G. G. M., Allamandola, L. J., Hudgins, D. M., & Bauschlicher, C. W. 2002, , 390, 1089. doi:10.1051/0004-6361:20020773 [Pilleri et al.(2015)]pilleri2015 Pilleri, P., Joblin, C., Boulanger, F., & Onaka T. 2015, , 577, A16. doi:10.1051/0004-6361/201425590 [Rich et al.(2015)]rich2015 Rich, J. A., Kewley, L. J., & Dopita, M. A. 2015, , 221, 28. doi:10.1088/0067-0049/221/2/28 [Risaliti et al.(2006)]risaliti2006 Risaliti, G., et al. 2006, , 365, 303. doi:10.1111/j.1365-2966.2005.09715.x [Rosen et al.(2016)]rosen2016 Rosen, S. R., et al. 2016, , 590, A1. doi:10.1051/0004-6361/201526416 [Smith et al.(2007)]smith2007 Smith, J. D. T., et al. 2007, , 656, 770. doi:10.1086/510549 [Tielens(2008)]tilens2008 Tielens, A. G. G. M. 2008, , 46, 289. doi:10.1146/annurev.astro.46.060407.145211 [Wright et al.(2010)]wright2010 Wright, E. L., et al. 2010, , 140, 1868. doi:10.1088/0004-6256/140/6/1868 [Yamada et al.(2013)]yamada2013 Yamada, R., Oyabu, S., Kaneda, H., Yamagishi, M., Ishihara, D., Kim, J. H., & Im, M. 2013, , 65, 103. doi:10.1093/pasj/65.5.103 [Yamagishi et al.(2012)]yamagishi2012 Yamagishi, M., Kaneda, H., Ishihara D., Kondo, T., Onaka, T., Suzuki, T., & Minh, Y. C. 2012, , 541, A10. doi:10.1051/0004-6361/201218904
http://arxiv.org/abs/2407.13520v1
20240718135554
EaDeblur-GS: Event assisted 3D Deblur Reconstruction with Gaussian Splatting
[ "Yuchen Weng", "Zhengwen Shen", "Ruofan Chen", "Qi Wang", "Jun Wang" ]
cs.CV
[ "cs.CV" ]
EaDeblur-GS Y.Weng et al. China University of Mining and Technology, 221000 Xuzhou, China EaDeblur-GS: Event assisted 3D Deblur Reconstruction with Gaussian Splatting Yuchen Weng (0009-0005-4219-3108), ZhengWen Shen (0000-0001-8882-5960), RuoFan Chen (0009-0002-0382-7479), Qi Wang (0009-0003-1553-0493), Jun Wang (0000-0002-8926-156X, corresponding author) § ABSTRACT 3D deblurring reconstruction techniques have recently seen significant advancements with the development of Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS). Although these techniques can recover relatively clear 3D reconstructions from blurry image inputs, they still face limitations in handling severe blurring and complex camera motion. To address these issues, we propose Event-assisted 3D Deblur Reconstruction with Gaussian Splatting (EaDeblur-GS), which integrates event camera data to enhance the robustness of 3DGS against motion blur. By employing an Adaptive Deviation Estimator (ADE) network to estimate Gaussian center deviations and using novel loss functions, EaDeblur-GS achieves sharp 3D reconstructions in real time, demonstrating performance comparable to state-of-the-art methods. § INTRODUCTION Reconstructing 3D scenes and objects from images has long been a research hotspot in computer vision and computer graphics. The advent of Neural Radiance Fields (NeRF) <cit.> has brought revolutionary advancements in photo-realistic novel view synthesis. Recently, the introduction of 3D Gaussian Splatting (3DGS) <cit.> has revolutionized 3D scene representation using Gaussian ellipsoids, achieving high-quality 2D rendering and faster speeds. However, in real-world applications, factors like camera shake and shutter speed often lead to image blurriness and inaccurate camera pose estimation, which makes it challenging to learn a clear neural volumetric representation. Therefore, several methods have been developed to handle blurriness in NeRFs and 3DGS. Deblurring for NeRF was developed relatively early, with Deblur-NeRF <cit.> being the first framework to tackle this issue; it utilized an analysis-by-synthesis approach to recover sharp NeRFs from blurry inputs. MP-NeRF <cit.> further enhances this by introducing a multi-branch fusion network and prior-based learnable weights to handle extremely blurry or unevenly blurred images. However, NeRF-based methods consume extensive training and rendering time. Several methods based on 3DGS have been developed because of its advantages in rendering and training speed. For instance, Wenbo Chen et al. <cit.> proposed Deblur-GS, which models the problem of motion blur as a joint optimization involving camera trajectory and time sampling. B. Lee et al. <cit.> apply corrections to the rotation and scaling matrices of 3D Gaussians using a small MLP, enhancing the clarity of scenes reconstructed from blurred images. Nonetheless, these methods can only achieve clear 3D reconstruction results with mildly blurred input images. Consequently, additional data sources like event cameras have been introduced into 3D deblurring reconstruction. Event cameras, a class of bio-inspired sensors, offer high temporal resolution and are advantageous for motion deblurring. For NeRFs, EventNeRF <cit.> used event integration and color simulation for colored 3D representations, while Qi et al.
<cit.> combined blurred images with event streams using innovative loss functions and the Event Double Integral (EDI) approach. For 3DGS, Yu proposed EvaGaussian<cit.>, which enhanced 3DGS with assistance of event streams. Despite achieving excellent performance in 3D deblurring reconstruction, the aforementioned methods still have limitations. For instance, RGB single-modality deblurring 3DGS and NeRF are often effective only in mildly blurred or simple camera motion scenarios. When input images are severely blurred, these methods usually fail to reconstruct 3D objects and can only produce relatively clear 2D renderings from certain angles. On the other hand, techniques using event cameras for NeRF 3D reconstruction are limited by NeRF's training and rendering speed. Additionally, methods that incorporate event data into 3DGS deblurring reconstruction face challenges, such as inaccurate camera motion trajectory estimation. To overcome these limitations, we propose a novel integration of event streams with 3DGS, namely Event-assisted 3D Deblur Reconstruction with Gaussian Splatting(EaDeblur-GS), aiming to guide the learning of better 3D Gaussian representations and tackle issues arising from blurry inputs. Our contributions are summarized as follows: * We propose Event-assisted 3D Deblur Reconstruction with Gaussian Splatting (EaDeblur-GS), which incorporates blurry RGB images and event streams into Gaussian Splatting to recover sharp 3D representations. * We introduce a novel Adaptive Deviation Estimator(ADE) network to simulate the shaking motion during exposure accurately by estimating the deviations of Gaussians. * We comprehensively evaluate the proposed method and compare it with several baselines, demonstrating that our EaDeblur-GS achieves performance comparable to state-of-the-art methods while enabling real-time sharp image rendering. § METHOD As shown in Fig.<ref>, our method starts with the input of blurry RGB images and the corresponding event streams. We first employ the Event Double Integral (EDI) technique <cit.> to generate a set of latent sharp images. These images are then processed by COLMAP to achieve enhanced initial reconstruction and precise camera pose estimation. From this enhanced reconstruction, we create a set of 3D Gaussians. Subsequently, the positions of these Gaussians and the estimated camera poses are fed into our proposed ADE network to determine the positional deviations of the Gaussians. These adjusted 3D Gaussians are then projected onto each viewpoint, including corresponding latent viewpoints, to produce sharp image renderings. Furthermore, we integrate a Blurriness Loss to simulate the generation of authentic blurry images, and an Event Integration Loss to guide the Gaussian model in accurately capturing the true shapes of objects. This allows the model to learn precise 3D volume representations and achieve superior 3D reconstructions. The overview of our method is illustrated in Fig.<ref>. In the following sections, we detail how the ADE network estimates deviations, followed by an in-depth introduction to blurriness loss and event integration loss. §.§ Adaptive Deviation Estimator Motion blur caused by camera shake impedes sparse initial reconstruction due to unclear input images. To mitigate this issue, we employ the EDI method <cit.>, which combines blurry images and corresponding event streams. 
The EDI model transforms a blurry image I_blur into multiple sharp images I_0,⋯,I_k, under the assumption that the blurry image is the temporal average of latent sharp images, each represented by the accumulated events. Given a blurry image and its corresponding event bins {B_k}_k=1^b, the sharp image at the reference viewpoint, I_0, and each latent sharp image I_k can be expressed as: I_0 = (b+1) I_blur / [1 + e^Θ∑_i=1^1 B_i + ⋯ + e^Θ∑_i=1^b B_i], I_k = (b+1) I_blur e^Θ∑_i=1^k B_i / [1 + e^Θ∑_i=1^1 B_i + ⋯ + e^Θ∑_i=1^b B_i]. We then utilize COLMAP to estimate the poses of the EDI-processed sharp images, {𝐏_k}_k=0^b = COLMAP({I_k}_k=0^b), which also results in a more accurate initial sparse point cloud. Inspired by <cit.> and <cit.>, we represent latent poses as displacements of the Gaussian ellipsoid centers x_j. To estimate the deviations of the Gaussians in this context, we employ the Adaptive Deviation Estimator network, which consists of a small Multi-Layer Perceptron (MLP) ℱ_θ with three hidden linear layers and an embedding layer γ which transforms low-frequency positional information into high-frequency expressions. The ADE ℱ_θ takes the EDI-predicted poses {𝐏_k}_k=0^b and the original positions of the Gaussians x_j to estimate deviations: {δ x_j^(i)}_i=1^l = ℱ_θ(γ(x_j), γ(𝐏)), where l is the number of estimated latent poses and δ x_j^(i) denotes the i-th predicted position offset of the j-th Gaussian. We then obtain l extra sets of 3D Gaussians {{x̂_j^(i)}_i=1^l}_j=1^N_G by adjusting the positions of the original 3D Gaussians, where N_G is the number of Gaussians and x̂_j^(i) is the position of the j-th Gaussian shifted by the offset δ x_j^(i) scaled by a factor λ_p: x̂_j^(i) = x_j + λ_p δ x_j^(i). This method computes l different sets of 3D Gaussians for each viewpoint, which can be rasterized into l sharp latent images. Additionally, we rasterize an image from the original Gaussian positions as the image rendered at the original viewpoint, which yields the set {I_i}_i=0^l. During the forward rendering process, the MLP network and the multiple renderings are not necessary, ensuring a real-time inference speed comparable to that of the original 3D Gaussian Splatting. §.§ Loss Functions Blurriness loss. To model the motion blur process during the exposure time, we take the average of the rendered images as follows: Î_blur = 1/(b+1) ∑_i=0^b I_i, with I_i = Rasterize({G(x̂_j^(i))}_j=1^N_G), where I_i is a sharp image generated with the estimated corrections and Î_blur is the estimated blurry image. The blurriness loss is computed as the difference between the estimated blurry image and the input blurry image I_blur, combined with a D-SSIM loss as follows: L_blur = (1-λ_D-SSIM) ‖I_blur - Î_blur‖_1 + λ_D-SSIM L_D-SSIM, where we set λ_D-SSIM = 0.2 for all experiments. Event Integration Loss. Leveraging the high time-resolution event stream, we adopt an Event Integration Loss to guide the network in learning a fine-grained, sharp 3D reconstruction. We integrate the event polarities between two input frames as follows: 𝐄(t) = ∫_t_0^t_0+δ t 𝐞(t) dt, where δ t is the time interval between two input frames. The logarithmic difference between the last and the first of the rendered frames {I_i}_i=0^l gives the estimated event integration map 𝐄̃(t). The event integration loss is then: L_event = ‖𝐄(t) - 𝐄̃(t)‖_1. The final loss function is the combination of the blurriness loss and the event integration loss as follows: L = L_blur + λ_event L_event. § EXPERIMENTS §.§ Comparisons and Results To evaluate the effectiveness of using event data to aid in deblurring RGB images, we compare our method with the original Gaussian Splatting (GS), which utilizes blurry images as input and supervision.
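Looking back at the loss construction described above, here is a hedged numpy sketch; the rendered latent images and the observed event-integration map are assumed given, the D-SSIM term is treated as an externally computed scalar, the contrast threshold Θ is omitted from the estimated event map, and λ_event is a placeholder since its value is not quoted here.

```python
# Sketch of L = L_blur + lambda_event * L_event for pre-rendered latent frames.
import numpy as np

def blurriness_loss(latent_images, blurry_obs, dssim_term, lam_dssim=0.2):
    # L_blur = (1 - lam) * ||I_blur - I_blur_hat||_1 + lam * L_D-SSIM
    est_blur = np.mean(latent_images, axis=0)      # average of the rendered frames
    l1 = np.mean(np.abs(blurry_obs - est_blur))
    return (1.0 - lam_dssim) * l1 + lam_dssim * dssim_term

def event_integration_loss(latent_images, event_map_obs, eps=1e-6):
    # E_est: log difference between the last and first rendered frames
    est = np.log(latent_images[-1] + eps) - np.log(latent_images[0] + eps)
    return np.mean(np.abs(event_map_obs - est))

def total_loss(latent_images, blurry_obs, dssim_term, event_map_obs, lam_event=0.1):
    return (blurriness_loss(latent_images, blurry_obs, dssim_term)
            + lam_event * event_integration_loss(latent_images, event_map_obs))
```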
Since COLMAP fails with only blurry images, we estimate the initial point cloud using EDI-deblurred images. We also compare our method with Deblurring 3D Gaussian Splatting (Deblurring-GS) to highlight the advantages of incorporating event data. Additionally, we benchmark our approach against Event Enhanced Neural Radiance Fields from Blurry Images (E2NeRF), the state-of-the-art method that leverages event data for deblurring RGB images. Our evaluation metrics include Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Frames Per Second (FPS). All methods are tested on the synthetic data from E2NeRF, which includes perfectly sharp images at the same viewpoints as the blurry input images, together with the corresponding event streams. The implementation details are provided in the supplementary materials. The quantitative comparison results are presented in Table <ref>, and the qualitative comparison is also provided in the supplementary materials. Our results demonstrate that our method achieves performance comparable to the current state-of-the-art model in terms of PSNR and SSIM, while maintaining real-time rendering capabilities with high FPS. The advantage of using event data for deblurring RGB images is further evidenced by comparing Deblurring 3D Gaussian Splatting with our method, which shows a significant increase in PSNR. §.§ Ablation studies Blurriness loss and Event Integration Loss. To validate the effectiveness of the training losses, we conduct experiments on the "hotdog" scene from the E2NeRF dataset. The results in Tab. <ref> demonstrate that both proposed losses significantly improve the deblurring performance, with the event integration loss further slightly enhancing the sharp rendering quality. With/Without MLP Latent Pose Estimation. We conduct experiments to further analyze the contribution of the ADE module. Using initial latent poses estimated by COLMAP from EDI-preprocessed images, we compare results with and without the ADE network. As shown in Table <ref>, the ADE module improves all evaluation metrics, indicating its effectiveness in estimating latent poses during exposure. Further ablations are provided in the supplementary materials. § CONCLUSION In this paper, we introduce Event-Assisted 3D Deblur Gaussian Splatting (EaDeblur-GS), which integrates event data and RGB images to achieve sharp neural 3D representations. We propose an Adaptive Deviation Estimator network to estimate latent poses during exposure by computing Gaussian deviations. Our method, evaluated against state-of-the-art techniques and through extensive ablation studies, shows significant improvements over the original 3D Gaussian Splatting and performance comparable to Event-RGB deblur NeRF (E2NeRF).
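The evaluation metrics used above are standard; for completeness, a quick reference implementation of PSNR (with SSIM taken from scikit-image) is given below, assuming float images normalized to [0, 1].

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(img, ref, max_val=1.0):
    mse = np.mean((img - ref) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
print(f"PSNR = {psnr(noisy, ref):.1f} dB")
print(f"SSIM = {structural_similarity(noisy, ref, data_range=1.0):.3f}")
```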
http://arxiv.org/abs/2407.11929v1
20240716172312
Stimulated absorption of single gravitons: First light on quantum gravity
[ "Victoria Shenderov", "Mark Suppiah", "Thomas Beitel", "Germain Tobar", "Sreenath K. Manikandan", "Igor Pikovski" ]
gr-qc
[ "gr-qc", "hep-th", "quant-ph" ]
Department of Physics, Stevens Institute of Technology, Hoboken, NJ 07030, USA Cornell University, Ithaca, NY 14853, USA Department of Physics, Stevens Institute of Technology, Hoboken, NJ 07030, USA Massachusetts Institute of Technology, Cambridge, MA 02139, USA Department of Physics, Stevens Institute of Technology, Hoboken, NJ 07030, USA Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden Nordita, KTH Royal Institute of Technology and Stockholm University, SE-106 91 Stockholm, Sweden Corresponding author: pikovski@stevens.edu Department of Physics, Stevens Institute of Technology, Hoboken, NJ 07030, USA Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden § ABSTRACT In a recent work we showed that the detection of the exchange of a single graviton between a massive quantum resonator and a gravitational wave can be achieved. Key to this ability are the experimental progress in preparing and measuring massive resonators in the quantum regime, and the correlation with independent LIGO detections of gravitational waves that induce stimulated absorption. But do stimulated single-graviton processes imply the quantization of gravity? Here we analyze this question and make a historic analogy to the early days of quantum theory. We discuss in what ways such experiments can indeed probe key features of the quantized interaction between gravity and matter, and outline five experimental tests. This capability would open the first window into experimental exploration of quantum gravity. Based on essay written for the Gravity Research Foundation 2024 Awards for Essays on Gravitation. Stimulated absorption of single gravitons: First light on quantum gravity Igor Pikovski July 22, 2024 =========================================================================== Introduction. The quantization of gravity implies the existence of the graviton. In practice, a graviton constitutes an indivisible quantum of energy E=hf that forms gravitational waves of frequency f, and where h is Planck's constant. While gravitational waves have now been directly confirmed with LIGO from numerous compact binary mergers <cit.>, such waves are highly energetic and deeply in the classical regime. Nevertheless, it was recently shown by some of us that the detection of a single graviton is possible and realistic with near-future quantum technology <cit.>. One key element is that due to the very small scattering cross-section, of order σ∼ l_p^2 <cit.>, where l_p ≈ 10^-35 m is the Planck length, only very few gravitons from such a classical wave actually interact with the detector. It is therefore possible to design a resonant system that would absorb only a single graviton from a passing gravitational wave. Detecting this single event also requires the ability to prepare the quantum ground state of a massive resonator, time-continuous sensing of quantum jumps in discrete energy steps in the detector, and the cross-correlation to LIGO events to confirm absorption from the independently measured gravitational wave (it turns out neutron-star-mergers <cit.> are the best candidates). While challenging, these capabilities are realistically achievable, and thus single-graviton detection can be realized with kg-scale resonators <cit.>, potentially augmented with lower-mass transducers for quantum sensing <cit.>. In essence, such a detection would confirm the stimulated absorption of a single graviton. 
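A couple of the numbers behind these statements can be checked with elementary constants; the sketch below evaluates the energy of a single graviton at a freely chosen kHz-band frequency and the Planck-area scale that sets the scattering cross-section.

```python
import math

h, hbar = 6.626e-34, 1.055e-34   # Planck constants [J s]
G, c = 6.674e-11, 2.998e8        # Newton constant, speed of light (SI)

f = 2.0e3                         # assumed gravitational-wave frequency [Hz]
E_graviton = h * f                # E = h f
l_p = math.sqrt(hbar * G / c**3)  # Planck length

print(f"E_graviton at {f:.0f} Hz: {E_graviton:.2e} J")
print(f"l_p = {l_p:.2e} m,  l_p^2 = {l_p**2:.2e} m^2")
```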
But how much can we learn about quantum gravity from such a stimulated event, extracted from a very energetic classical wave? This is analogous to the photoelectric effect, where stimulated absorption of photons was observed <cit.>. In 1905, Einstein proposed the quantization of light and applied this hypothesis to explain the photoelectric effect <cit.>, which sparked the development of quantum theory. After much resistance, the quantum of light was fully accepted in the mid 1920ies. Today, however, it is well-known that stimulated emission such as in the photoelectric effect can be described with semi-classical optics <cit.>. Remarkably, a wide range of quantum phenomena can be explained through semi-classical models, in which matter is quantized but radiation is treated classically, including spontaneous emission, the Lamb shift or vacuum polarization <cit.>. A rigid, modern benchmark for the existence of photons is thus the measurement of quantum statistics of light, which can indicate the quantum nature of the radiation field through sub-Poissonian statistics or a vanishing intensity-intensity correlation function g^(2)(0), because of the indivisibility of a single photon energy <cit.>. An even more stringent benchmark is the characterization of the quantum state of the incoming radiation, with quantum features typically attributed to a negative Wigner function rather than a negative Glauber-Sudarshan-P-function that is sufficient for non-classical statistics <cit.>, or even the violation of Bell's inequalities as the ultimate test of non-classicality <cit.>. Given these modern developments, it has been a natural question whether tests of quantum gravity can demonstrate such benchmarks. However, even tests of quantum statistics are far outside the reach of experiments <cit.>. In this essay, instead, we argue that a more modest approach that relies only on stimulated processes can still experimentally reveal a great deal about the possible quantization of gravity. We first briefly review our results <cit.> that show how single gravitons can be detected through stimulated absorption, and then discuss how this can be used to infer and test aspects of quantum gravity. We draw a historic analogy to the early days of quantum mechanics <cit.>, when the existence of the photon was accepted long before the development of full QED, let alone modern quantum optical tests. We carefully revisit the arguments that led to the acceptance of the photon, after tremendous resistance. We then apply these insights for tests of single-graviton exchange as we proposed, discussing what aspects of quantum gravity can be probed with such capabilities and similar experiments. Our results show that important foundational questions about the quantized interaction between matter and gravity can be probed, offering critical experimental input on quantum gravity. Single-graviton detection. Whether a graviton can be detected even in principle has been a fascinating question for a long time. The conventional answer has been that this is nearly impossible. 
Weinberg <cit.> (and later Boughn and Rothman <cit.>) for example, showed that atomic transitions involving gravitons would proceed roughly only once every 10^40 s, while Dyson argued that LIGO would need a 37 orders of magnitude improvement in sensitivity to detect single gravitons[Dyson estimates the number of gravitons in a typical gravitational wave by dividing the wave's energy density ρ_E= c^2/32 π G h^2 ν^2 by the energy density of a single graviton with E= ħν in a cubic box of size c/ν.] <cit.>, both vastly impractical. We revisited this question <cit.>, and as summarized below, we show that inferring energy exchanges at the level of a single graviton is much more practical than previously thought. Before outlining this result, we wish to briefly comment on why neither LIGO nor individual atoms are good probes for inferring energy exchanges at the level of a single graviton. Firstly, LIGO measures displacements with very high precision, but this is not the best quantum mechanical observable to address the question of energy quantization. To infer energy exchanges in discrete amounts, one should think of a sensor that is sensitive to discrete changes in energy eigenstates, rather than continuous position. Secondly, atomic systems have traditionally been used to test quantum mechanics, but today quantum experiments are no longer confined to the atomic realm. For the interaction with gravity, the mass quadrupole is the relevant charge, thus scaling up to massive systems can be of great advantage. Finally, it is not necessary to improve the sensitivity to correspond to a single graviton strain amplitude for inferring the existence of gravitons. While low intensity measurements are crucial for many modern table top experiments and quantum information tasks, most quantum experiments in fact include bright sources. A historically important example is the photoelectric effect in which bright light shining on metal surfaces excites electrons, from which Einstein inferred the existence of a light-quantum. Dyson in fact also considered stimulated absorption for gravitons <cit.>, but in his scenarios it remained impractical. Instead, we looked at several modifications to previous consideration, which enable single graviton detection. One key element is the use of a collective mode in a massive quantum system, as opposed to a single atom. Today such massive systems can be brought into quantum states, and quantum mechanical effects are explored on macro-scales <cit.>. That would amount to the use of a quantum version of classic Weber-bar detectors. But it is also necessary to combine it with appropriate quantum sensing capabilities to monitor discrete changes in energy, just as for an atom <cit.>, as opposed to the conventional transduced position measurements. Notably there are still Weber bars in operation which look for classical gravitational wave signals, mostly at high frequencies <cit.>, but traditional designs can operate below kHz-frequencies <cit.>. Finally, we circumvent the limitation of not being able to create gravitational waves by correlating the signal to independent LIGO detections: this provides heralded detection with independent confirmation of the amplitude and frequency of the incoming gravitational wave. However, because occurrence of such events is a priori unknown, a continuous quantum sensing scheme is required <cit.>. 
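As a numerical companion to the footnoted Dyson estimate, the snippet below counts gravitons in a (c/ν)^3 box for a typical LIGO-band strain; the footnote's conventions are used as written, order-unity factors (time averaging, angular versus ordinary frequency) are ignored, and the strain amplitude and frequency are illustrative choices.

```python
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
h_amp, nu = 1e-21, 100.0                       # assumed strain amplitude and frequency [Hz]

rho_E = c**2 / (32 * math.pi * G) * h_amp**2 * nu**2   # wave energy density [J/m^3]
box_volume = (c / nu)**3                                # box of size c/nu
n_gravitons = rho_E * box_volume / (hbar * nu)          # graviton energy ~ hbar*nu
print(f"rho_E ~ {rho_E:.1e} J/m^3, gravitons per box ~ {n_gravitons:.1e}")
```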
The starting point of our analysis is deriving the interaction Hamiltonian for a 1D collective quantum mode interacting with the gravitational wave, which is <cit.> H_int = -(1/2) h_μν T^μν = M L ḧ(t) ∑_l=1,3,... [(-1)^((l+1)/2)/(π^2 l^2)] ζ_l - [M ḧ(t)/8] ∑_l ζ_l^2. Here ζ_l are the collective acoustic modes (labeled by l), and h(t) is the generally chirping gravitational wave. The quadrupole moments of each atom are oscillations about their mean position, which add up to produce the collective coupling. The leading contribution to the interaction increases with the total mass M of the system and its length L. The collective enhancement of each acoustic mode dominates over the second term, the tidal force on each atom from the gravitational wave as considered by Weinberg and others <cit.>, which becomes a sub-leading correction. Using this interaction Hamiltonian one can compute the absorption and emission processes. While spontaneous emission remains untenable, one can achieve stimulated emission rates as high as 1 Hz: for example, with an 1800 kg Al cylinder stimulated by a gravitational wave with amplitude h = 5 × 10^-22. Thus in each detection event, only a few gravitons are exchanged. Importantly, we can solve the dynamics exactly, and find that an initial ground state of the massive system evolves to |0⟩→|β(t) e^-i ω t⟩ with the coherent state amplitude |β(t)| = √(M L^2/(π^4 ωħ)) | ∫_0^t ds ḧ(s) e^-i ω s| = √(M L^2/(π^4 ωħ)) χ(h(t), ω, t). From this, we can deduce the probability of detecting an energy exchange at the level of a single graviton when measured in the energy (Fock) basis: P_0 → 1 = | ⟨1|β(t) e^-i ω t⟩|^2 = | β(t) |^2 e^-| β(t) |^2. The maximum probability of a single graviton absorption is thus P_0 → 1^(max) = 1/e ≈ 37% for | β(t) | = 1. From this we obtain the optimal mass required for a bar detector to infer a single graviton exchange event, which, expressed in terms of the speed of sound v_s, is given by M_opt = ħπ^2 ω^3/(v_s^2 χ^2(h(t), ω, t)). The exact mass depends on the gravitational wave profile through χ as given in eq. (<ref>), which can either be found from available data, or by an analytic approximation for binary mergers <cit.>. For sources like the GW170817 neutron star - neutron star merger as detected by LIGO <cit.>, this yields an ideal mass on the order of M_opt ≈ 20 kg for beryllium. Thus ground-state-cooled kg-scale detectors can be used to observe the absorption of a single graviton, if their energy levels are monitored through continuous quantum measurement. Such monitoring could also be implemented at much lower masses through transducers <cit.>. As a simple example, for a monochromatic incident wave whose angular frequency is detuned from the detector's resonant frequency by Δ, and for Δ≪ω, the excitation probability becomes P_0 → 1 = [h^2 M L^2 ω^3 t^2/(4 ħπ^4)] sinc^2(t Δ /2). In the long-time limit and on resonance, this yields the usual Fermi's golden rule. This expression highlights the direct correspondence to the main features of the photoelectric effect: the probability of excitation, or the number of absorption events, increases with the intensity h^2, but the energy of the transition is completely determined by the resonance condition Δ≈ 0. The energy changes in discrete amounts at or near the resonant frequency. This is the well-known dependence on frequency and independence of intensity familiar from the photoelectric effect.
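To get a feel for the magnitudes involved, the sketch below evaluates the exact Poissonian probability P_0 → 1 = |β|^2 e^-|β|^2, identifying |β|^2 on resonance with the perturbative expression quoted above; only the mass and strain amplitude are taken from the text, while the detector length, resonance frequency and integration time are illustrative assumptions.

```python
import math

hbar = 1.055e-34
M, h_amp = 1800.0, 5e-22       # Al cylinder mass [kg] and strain amplitude (from the text)
L, f, t = 2.0, 1.0e3, 1e-2     # assumed length [m], resonance frequency [Hz], duration [s]
omega = 2.0 * math.pi * f

# On resonance (Delta = 0): |beta|^2 = h^2 M L^2 omega^3 t^2 / (4 hbar pi^4)
beta_sq = h_amp**2 * M * L**2 * omega**3 * t**2 / (4.0 * hbar * math.pi**4)
p_01 = beta_sq * math.exp(-beta_sq)
print(f"|beta|^2 = {beta_sq:.3f},  P(0 -> 1) = {p_01:.3f}")
```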
What is missing is the ability to measure the matter excitations through the voltage – instead one can monitor the quantum jumps directly through time-continuous quantum non-demolition measurements of energy <cit.> and thus deduce the exchange of a single graviton. While challenging, energy measurements in massive quantum systems, albeit at lower masses, have been demonstrated <cit.>. Thus in summary, a single graviton detection is possible. But apart from improved technology (quantum sensing in low-noise massive resonators and ground state cooling), it relies on stimulated emission from incoming gravitational waves. It thus cannot serve as a proof of the quantization of gravity – stimulated processes can be described in the semi-classical limit. Nevertheless it can provide indirect evidence of the quantum nature of gravity. Clearly the exchange of quanta implies the change of energy in discrete amounts from the gravitational wave, thus by energy conservation assumed to hold for each individual event, one can deduce that indeed gravitons are absorbed. But it is also useful to revisit the historic arguments for the acceptance of quantum theory. This analogy is very instructive, as the experiments on gravity we envision today are in many ways technologically constrained to indirect hints of quantumness as the early tests of quantum phenomena. Thus, rather than using modern benchmarks for non-classicality, here we focus on the historic arguments and experiments around quanta of light. While it took decades for light quanta as predicted by Einstein to be accepted, no doubt remained even long before quantum optical tests were devised. Today, a direct link to quantum gravity can be made: our work showed that the exchange of quanta between matter and gravitational waves can be observed, but a probe of the quantum statistics or even the quantum state of the gravitational field seems out of reach. Revisiting the historic arguments that conclusively showed the existence of a photon, despite immense resistance in the physics community, is thus a blueprint to explore questions about quantum gravity with similarly limited experimental capabilities in the near future. Historic perspective. Historically, Planck's black-body calculation in 1900 showed the discrete exchange of energy between light and matter, in packages of E=h f <cit.>. Nevertheless, it was only Einstein that took the leap in 1905 to postulate that light actually consists of such discrete energy quanta, and that it is not simply limited to an anomalous interaction. Einstein realized that Planck's law required the modification of equipartition as predicted by classical electromagnetic theory, which would have led to what is today known as the Rayleigh-Jeans black-body behavior. Instead, he focused on the experimentally verified Wien's distribution law for high frequencies of the black-body spectrum. He realized that this law – the high-frequency limit of Planck's distribution – can be derived as if radiation behaved thermodynamically as ideal point particles, each with energy E=hf. This yields the correct change in entropy as function of volume, in direct analogy to an ideal Boltzmann gas, thus suggesting a gas of particles of light. He thus made the leap from the hypothesis of a quantized interaction between light and matter, to postulating that light itself behaves as if consisting of quanta <cit.>. Einstein applied his hypothesis of quantized interaction to the photoelectric effect. 
Nevertheless, despite the success of his predictions, the theory of quantized light was not accepted for over 20 years. Even after his famous measurements of Planck's constant through Einstein's formula for the photoelectric effect in 1916 <cit.>, Millikan still considered Einstein's quantized radiation hypothesis “untenable” and even “reckless” <cit.>. Thus, while the quantized exchange of energy was gradually accepted, the leap to a quantization of the free radiation field was a much bigger challenge. What then, beyond the photoelectric effect but long before quantum optical tests, lead to the acceptance of quantized radiation? And what lessons can we draw for the gravitational case? The developments after 1905 largely focused on the quantization of matter and the behavior of solids: Einstein realized that his quantum hypothesis, if applied to matter alone just as for light, implies a specific heat of materials that reproduced experiments at the time, but gave new predictions at low temperatures <cit.>. Such experiments started to be realized in the early 1910s and confirmed Einstein's predictions, which drew attention to his quantum theory. Much of the work then focused on developing a more detailed quantum theory for matter, most notably Bohr's atomic model. Nevertheless, a quantum theory for radiation seemed unnecessary to most, including Planck, Millikan and even Bohr himself. Even Einstein had his doubts, but by 1917 was fully convinced about the reality of light quanta. Two developments were key in the final acceptance of quantized radiation. Firstly, Einstein realized in 1917 that the quantized momentum also plays a key role <cit.>. Light must also carry the discrete momentum p=hf/c, in addition to discrete energy E=hf. He realized this property when considering energy fluctuations in the context of molecular equilibrium under radiation pressure, in connection also with his famous result on the spontaneous and stimulated emission and absorption coefficients A_21, B_21, and B_12. The direct experimental confirmation of quantized momentum came through the observation of the Compton effect <cit.>, which is inconsistent with scattering of a classical wave. Secondly, alternative views have become increasingly nonviable. Early attempts to explain the photoelectric effect without photons, such as Lenard's triggering hypothesis <cit.>, were dismissed due to experimental contradictions to such models. It was realized by Einstein and others, that if giving up energy conservation, one can explain the photoelectric effect statistically without light quanta but that this is undesirable[At the first 1911 Solvay conference, Einstein remakred: “One can choose between the [quantum] structure of radiation and the negation of an absolute validity of the energy conservation law. [...] We will agree that the energy principle should be retained.” <cit.>]. This in fact remains a feature of the modern quantum optical semi-classical limit, where energy is not conserved for individual absorption or emission events <cit.>. In our proposal to detect single gravitons <cit.>, we argue precisely on these grounds of energy conservation that single graviton exchange can be inferred, when single quantum jumps are recorded. However, it is possible to envision energy-non-conserving alternatives. A famous one in the context of electromagnetic radiation was put forward by Bohr, Kramers and Slater (BKS) in 1924 <cit.>, as a means to avoid the need for photons. 
It attempted to keep quantized matter without quantizing the field, conserving energy and momentum on average, but violating these conservation laws for the individual exchange of quanta between field and matter. It is in this spirit similar to today's semi-classical limit. But this theory was quickly dismissed experimentally through Bothe's and Geiger's coincidence measurements between scattered photons and electrons, showing event-by-event conservation of energy and momentum <cit.>. Thus, from limited but diverse experimental input and without direct observation, it was possible to confirm the existence of quanta of light. Some lingering doubts remained, which today we might call “loopholes”. For example by Erwin Schrödinger <cit.>, who considered the possibility of a classical electromagnetic wave which would Bragg-scatter from matter-waves to explain the Compton effect, and keeping the spirit of BKS theory by abandoning conservation laws for single events. This eventually motivated modern quantum optical coincidence experiments demonstrating sub-Poissonian statistics <cit.>, anti-bunching <cit.> and Hong-Ou-Mandel interference <cit.>, putting most reservations to rest. Nevertheless, it should be stressed that long before tests of quantum statistics or QED, quantization of radiation was confirmed to a high degree. While semi-classical models were able to account for some experimental features, they generally failed to capture different observations <cit.>. This is the benchmark we wish to apply to the gravitational case today. Tests with single-graviton interactions. From the above historical developments and considerations, we thus motivate the following 5 tests of the quantum interaction between matter and gravitons, that are enabled by the setup we proposed: (i) Is the energy content in the graviton the same as in a photon? In other words, is Planck's constant that governs the exchange of energy with gravity the same as for all other interactions, ħ_G = ħ? One can for example imagine a natural number k such that ħ_G = k ħ, and thus the energy exchange with matter proceeds as E = k h f. One would then expect not the transition to the first excited state, but to the k-th excited state. We note that this would not violate any current experimental data. Similarly, for a general ħ_G ≠ħ, one can monitor for seemingly off-resonant excitations for which the frequencies between wave and matter would match to f_G = ħ/ħ_G f. (ii) Is ħ_G universal, or does it depend on the properties of matter and gravitational waves? This would replicate some of the early tests of quantum theory, measuring ħ for various materials. It is conceivable that the quantized interaction with gravity would follow different rules than other interactions. Such non-universality for the gravitational interaction would of course also imply (i) above, since ħ is known to be universal. (iii) Is the probability of stimulated absorption the same as the probability of stimulated emission? This would amount to testing the gravitational Einstein coefficients, and whether B_12^(G) = B_21^(G). Such a test would require the preparation of the resonator not just in the ground state, but also in the first excited state, monitoring quantum jumps to the ground state due to stimulated emission of gravitons. (iv) Does the absorption of gravitons follow the quadrupole-formula as expected? 
Making use of independent LIGO detections, one can test if the probability of single graviton events is indeed as expected from a quadrupolar interaction with the collective mode, as given by equation (<ref>). This would provide experimental input on possible scalar or vector components of the interaction at the quantum level. (v) Do gravitational waves also carry quantized momentum p = h f/c? This, for example, is central to quantum noise from gravitons in interferometers <cit.>, but such fluctuations remain very difficult to observe. However, if such observations of graviton noise or imprints of gravitons on the CMB <cit.> could be made, in combination with tests of the E=h f relation as we proposed, they would firmly establish a particle picture as they test complementary aspects of quantization. Together such tests could therefore provide a bona-fide test of quantum gravity. This list is by no means exhaustive. For example, one can also consider tests of the wave-particle duality through statistical properties, as evident in Einstein's early works <cit.>. These examples show the potential of a single-graviton-detector as we proposed for fundamental explorations of the interaction between matter and gravity at the single graviton level. Thus, while such experiments with stimulated emission cannot serve as definite proofs, they clearly provide a wide range of opportunities to test aspects of quantum gravity. While we do not anticipate deviations from expected physics, it would provide empirical experimental evidence where none exists so far, and there is always room for surprises (sometimes leading to unexpected explanations for other open problems in physics). Furthermore, together with tests of gravitational shot noise from fluctuations in interferometers or the CMB, they would rule out most semi-classical models which otherwise might explain each effect in isolation, and thus can provide very strong evidence of quantization. Conclusions. We conclude this essay with a positive outlook on experimental tests of quantum gravity. Similar to the development of the early quantum theory of the 20th century, the mid 21st century will likely see first experimental demonstrations of the quantized interaction between gravity and matter, such as through our proposed single graviton exchange <cit.>. Other experimental proposals exist as well that probe other aspects of quantum features, such as testing how quantum superpositions source Newtonian gravity and cause entanglement <cit.>, or testing speculative phenomenological models <cit.>. Here we have argued that a single graviton detector would provide a complementary approach and offer a range of opportunities to test the very basic principles of quantized interaction between gravitational waves and matter, in close analogy to historic experiments that established the existence of the photon in the early 20th century. We are thus hopeful that there are now realistic paths for experimental input to quantum gravity from laboratory experiments. We thank Frank Wilczek for discussions and Isabella Marotta for her assistance in reviewing historic works on the photoelectric effect. This work was supported by the National Science Foundation under grant 2239498. G. T. acknowledges support from the General Sir John Monash Foundation. S.K.M. was supported by the Wallenberg Initiative on Networks and Quantum Information (WINQ).
http://arxiv.org/abs/2407.13085v1
20240718011819
On the Hardy-Hénon heat equation with an inverse square potential
[ "Divyang G. Bhimani", "Saikatul Haque", "Masahiro Ikeda" ]
math.AP
[ "math.AP", "35K05, 35K67, 35B30, 35K57" ]
§ ABSTRACT We study the Cauchy problem for the Hardy-Hénon parabolic equation with an inverse square potential, namely, ∂_tu -Δ u+a|x|^-2 u= |x|^γ F_α(u), where a≥-(d-2/2)^2, γ∈ℝ, α>1 and F_α(u)=μ |u|^α-1u, μ|u|^α or μ u^α, μ∈{-1,0,1}. We establish sharp fixed-time decay estimates for the heat semigroup e^-t (-Δ + a|x|^-2) in weighted Lebesgue spaces, which is of independent interest. As an application, we establish: * Local well-posedness (LWP) in scale subcritical and critical weighted Lebesgue spaces. * Small data global existence in critical weighted Lebesgue spaces. * Under certain conditions on γ and α, we show that the local solution cannot be extended to a global one for certain initial data in the subcritical regime. Thus, finite time blow-up in the subcritical Lebesgue space norm is exhibited. * We also demonstrate nonexistence of local positive weak solutions (and hence failure of LWP) in the supercritical case for α>1+(2+γ)/d, the Fujita exponent. This indicates that subcriticality or criticality is necessary in the first point above. In summary, we establish a sharp dissipative estimate and address short- and long-time behaviors of solutions. In particular, we complement several classical results and shed new light on the dynamics of the considered equation. § INTRODUCTION §.§ Fixed-time estimates for heat semigroup e^-t (-Δ + a |x|^-2) Consider the linear heat equation associated with the inverse square potential, namely ∂_t u(t,x) + ℒ_au(t,x)=0 u(0,x)= u_0(x) (t,x) ∈ℝ^+ ×ℝ^d, where u(t,x)∈ℂ. In this paper, we assume that a≥ a_*:=-(d-2/2)^2, d≥ 2, unless it is explicitly specified. The Schrödinger operator with inverse square potential ℒ_a=-Δ+ a |x|^-2 is initially defined with domain C^∞(ℝ^d∖{0}). It is then extended as an unbounded operator in the weighted Lebesgue space L_s^q(ℝ^d) that generates a positive semigroup {e^-tℒ_a}_t≥0 provided 1<q<∞ and σ_-<d/q+s<σ_++2, where σ_-, σ_+, defined by σ_∓=σ_∓(d,a):=d-2/2∓1/2√((d-2)^2+4a), are the roots of s^2 - (d - 2)s - a = 0, see <cit.>. Here, the weighted Lebesgue space L_s^q(ℝ^d) is defined by the norm f_L^q_s: = |·|^sf_L^q (s∈ℝ). The study of ℒ_a is motivated by physics and mathematics, spanning areas such as combustion theory, the Dirac equation with Coulomb potential, quantum mechanics and the study of perturbations of classical space-time metrics. See e.g. <cit.> and the references therein. The aim of this article is to understand the dynamics of solutions of the Hardy-Hénon heat equations (<ref>) and (<ref>) when a singular potential is present, in light of the research programme initiated by Zhang <cit.>, Pinsky <cit.>, Ioku et al. in <cit.>, Ishige <cit.> and Ishige-Kawakami in <cit.>, and Bhimani-Haque <cit.> (cf. <cit.>). We also note that there is an extensive literature on the Hardy-Hénon heat equation without potential, i.e. (<ref>) with a=0; we refer to the recent work of Chikami et al. in <cit.> and the references therein, see also Remark <ref>. We begin by stating our dissipative estimates in weighted Lebesgue spaces in the following theorem. Let σ_-,σ_+ be as defined in (<ref>). Let s_1,s_2∈ℝ and q_1,q_2∈ (1,∞). Then e^-tℒ_af_L^q_2_s_2≤ Ct^-d/2(1/q_1-1/q_2)-s_1-s_2/2f_L^q_1_s_1 ∀ t>0, ∀ f∈ L^q_1_s_1(ℝ^d) if and only if σ_-<d/q_2+s_2≤d/q_1+s_1<σ_++2, and s_2≤ s_1. Theorem <ref> deserves several comments. 
* The case a=0: In this case e^-tℒ_0f=e^tΔf=k_t∗ f (where k_t:=t^-d/2exp(-|·|^2/4t)) and σ_-=0,σ_++2=d. * subcase s_1,s_2=0: The sufficiency part (<ref>) is a consequence of Young's convolution inequality. See <cit.>. This argument holds even if we replace strict inequities in (<ref>) by equalities and thus q_1,q_2 can take the extreme values 1,∞. * subcase s_1 or s_2∈∖{0}: For q_1≤ q_2, this is due to Chikami-Ikeda-Taniguchi <cit.>. Theorem <ref> removes the assumption q_1≤ q_2 in <cit.>. * The case a∈ [a_*,∞): * subcase s_1,s_2=0: In this subcase, the sufficiency part (<ref>) is due to Ioku-Metafune-Sobajima-Spina <cit.>. However, their method of proof is different than ours, which rely on embedding theorems and interpolation techniques. The * subcase s_1 or s_2∈∖{ 0 }: In this case, both necessity and sufficiency part of Theorem <ref> is new. This is the main contribution of this article. * The power of t in (<ref>) is optimal which follow by a standard scaling argument, see Lemma <ref>. * Using Symmetry (in x,y variable) of heat kernel g_a(t,x,y) (see Subsection <ref>) associated with the operator e^-t, it follows by duality and the relation σ_++2=d-σ_- that (<ref>) holds for (q_1,s_1,q_2,s_2) if and only if (<ref>) holds for (q_2',-s_2,q_1',-s_1) (here q_j' is the Hölder conjugate of q_j). * For s_1=-σ_-, Theorem <ref> holds even for end point cases q_1∈{1,∞} (hence allowing equality in the last strict inequality in (<ref>)). For s_2=σ_-, Theorem <ref> holds even for end point cases q_2∈{1,∞} (hence allowing equality in the first strict inequality in (<ref>)). * It is indispensable to consider weighted Lebesuge spaces in Theorem <ref> in order to treat Hénon potential |x|^γ (γ>0) while establishing well-posedness for (<ref>). §.§ Hardy-Hénon equations (HHE) with inverse-square potential We consider (<ref>) with an inhomogeneous power type nonlinearity: ∂_tu(t,x)+ ℒ_au(t,x)=|x|^γF_α(u(t,x)) u(x, 0)= u_0(x) (t,x) ∈ [0,T) ×, where γ∈, T∈ (0,∞], and α>1 and u(x,t)∈ℝ or u(x,t)∈ℂ. We assume that the non-linearity function F_α:→ satisfies the following conditions: |F_α(z)-F_α(w)|≤ C_0(|z|^α-1+|w|^α-1)|z-w| for z, w ∈ℂ F_α(0)=0. The typical examples of F_α would be F_α(z)=μ |z|^α-1z, μ |z|^α or μ z^α (μ∈ℝ). The potential |x|^γ is called Hénon type if γ>0 and is called Sobolev type if γ<0. The equation (<ref>) with γ<0 is known as a Hardy parabolic equation, while that with γ>0 is known as a Hénon parabolic equation. Equation (<ref>) is called Hardy-Hénon parabolic equation with an inverse square potential. The elliptic part of (<ref>) when a=0, i.e. -Δ u= |x|^γ|u|^α-1u was proposed by Hénon <cit.> as a model to study the rotating stellar systems and has been extensively studied in scientific community, see e.g. <cit.>. The equation (<ref>) is invariant under the following scale transformation: u_λ(t,x) := λ^2+γ/α-1u(λ^2 t, λ x), λ>0. More precisely, if u is a solution to (<ref>), then so is u_λ with the rescaled initial data λ^2+γ/α-1u_0(λ x). Then the following identity holds u_λ(0)_L^q_s = λ^-s+ 2+γ/α-1- d/qu_0_L^q_s, λ>0. Hence, if q and s satisfy s + d/q = 2+γ/α-1, then the identity u_λ(0)_L_s^q = u_0_L_s^q holds for any λ>0, i.e., the norm u_λ(0)_L^q_s is invariant with respect to λ. Denote τ= τ (q,s,d):=s + d/q and τ_c=τ_c(γ,α):=2+γ/α-1. We say Cauchy problem (<ref>) scale L_s^q- subcritical if τ<τ_c critical if τ=τ_c supercritical if τ>τ_c . For τ=τ_c, we get s=2+γ/α-1- d/q=:s_c(q,γ,α,d) (often denoted by s_c for shorthand). 
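As a concrete illustration of the exponents just introduced, the following short helper is a sketch of our own (not part of the original analysis; the function and variable names, and the example parameter values, are illustrative choices made here). It computes σ_∓, τ=s+d/q, τ_c=(2+γ)/(α-1) and s_c, and reports whether L^q_s is scale subcritical, critical or supercritical. It does not check the additional hypotheses of the well-posedness theorems below (e.g. σ_-<τ<σ_++2), which must be verified separately.

```python
# Illustrative helper (assumption: a >= -((d-2)/2)^2 so the discriminant is nonnegative).
# sigma_-, sigma_+ are the roots of s^2 - (d-2)s - a = 0; tau = s + d/q,
# tau_c = (2+gamma)/(alpha-1); s_c = tau_c - d/q is the critical weight.
import math

def classify(d, a, gamma, alpha, q, s):
    disc = (d - 2)**2 + 4*a
    sigma_minus = ((d - 2) - math.sqrt(disc)) / 2
    sigma_plus  = ((d - 2) + math.sqrt(disc)) / 2
    tau   = s + d / q
    tau_c = (2 + gamma) / (alpha - 1)
    s_c   = tau_c - d / q
    if math.isclose(tau, tau_c):
        regime = "critical"
    elif tau < tau_c:
        regime = "subcritical"
    else:
        regime = "supercritical"
    return {"sigma_-": sigma_minus, "sigma_+": sigma_plus,
            "tau": tau, "tau_c": tau_c, "s_c": s_c, "regime": regime}

# Example (our own choice of parameters): d = 3, a = 1, Henon weight gamma = 1,
# alpha = 3, data in unweighted L^2, i.e. (q, s) = (2, 0).
print(classify(d=3, a=1.0, gamma=1.0, alpha=3.0, q=2.0, s=0.0))
```

For instance, with d=3, a=1, γ=1, α=3 and (q,s)=(2,0) one lands exactly on the critical line τ=τ_c=3/2, with s_c=0.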
In particular, when s=s_c=0, γ≥-2, we have q=q_c:=d(α-1)/2+γ=d/τ_c. So L^d(α-1)/2+γ() is the critical Lebesgue space without weight. We recall the notion of well-posedness in the sense of Hadamard. Let T∈(0,∞], s∈ℝ and 1≤ q ≤∞. – We say that u is an L_s^q-integral solution on [0,T) to (<ref>) if u∈ C([0,T); L^q_s(^d)) and satisfies u(t)=e^-tℒ_au_0+∫_0^te^-(t-τ)ℒ_a[|·|^γF_α(u(τ))]dτ for any t∈ [0,T). The maximum of such T is denoted by T_m. – Let X,Y ⊂𝒮'() be Banach spaces. Then (<ref>) is called locally well-posed (in short LWP) from X to Y if, for each bounded B⊂ X, there exist T>0 and a Banach space X_T↪ C([0,T], Y) so that (a) for all u_0∈ B, (<ref>) has a unique integral solution u∈ X_T (b) u_0↦ u is continuous from (B, ·_X) to C([0,T],Y). If X=Y we say (<ref>) is locally well-posed in X. If T=∞, then we say (<ref>) is globally well-posed in X. We briefly recall some history on several facets of (<ref>). We define the Fujita exponent by α_F=α_F(d, γ,a)= 1+(2+γ)^+/σ_++2 which is known to divide existence from nonexistence of positive global solutions. * By taking a=γ=0 and F_α(z)=z^α in (<ref>), we get the classical heat equation ∂_tu - Δ u= u^α, u(0)=u_0. We recall the following known results for (<ref>): * Let q_c be as in Remark <ref>. If q≥ q_c and q>1, or q>q_c and q≥ 1, Weissler <cit.> proved the existence of a unique local solution u ∈ C([0, T), L^q()) ∩ L^∞_loc (0, T], L^∞()). Later on, Brezis-Cazenave <cit.> proved the unconditional uniqueness of Weissler's solutions. * If q<q_c, there are indications that there exists no (local) solution in any reasonable weak sense, see <cit.>. Moreover, it is known that uniqueness is lost for the initial data u_0=0 and for 1+ 1/d< q < d+2/d-2, see <cit.>. * Fujita <cit.> proved that, for 1< α <α_F(d, 0,0), (<ref>) has no global solution (i.e. every solution blows up in finite time in the L^∞-norm), whereas for α> α_F(d, 0,0) the classical solution is global for small data. * Taking a=0, F_α(z)=z|z|^α-1 in (<ref>), we get the classical Hardy-Hénon heat equation ∂_tu- Δ u= |x|^γ u |u|^α-1, u(0)=u_0. In this case, Chikami et al. in <cit.> introduced the weighted Lebesgue space L^q_s() to treat the potential |x|^γ, and established well-posedness results. Later, Chikami et al. in <cit.> generalized these results to weighted Lorentz spaces. In this paper, we establish analogues of these results in the presence of the potential, i.e. for (<ref>) with a≠ 0, under relaxed conditions on the other parameters γ,α,q,s. See Remarks <ref> and <ref> below. * Several authors considered (<ref>) with some mild restrictions on the external potential: ∂_t u -Δ u -V (x)u= b(x)u^α, u(0)=u_0, and showed a sharp contrast between the existence of classical global solutions and finite time blow-up in the L^∞-norm by finding the appropriate Fujita exponent. We recall some of these results here: * Let V(x)=a/|x|^2 and b ∈ C^β() (β∈(0,1]) with b(x)∼|x|^γ for large |x|. In this case, for 1<α≤α_F(γ,d,a), Pinsky <cit.> proved that (<ref>) does not possess a global solution for any u_0>0, and established classical global solutions for α> α_F(γ, d). See <cit.>, <cit.>. * Let d≥ 3, α= α_F(d,0) or 1, and V(x)=a/1+|x|^b (b>0) in (<ref>). In this case, Zhang <cit.> found Fujita exponents under certain conditions on a,b. Later, Ishige <cit.> considered d≥ 2 and the potential V(x)= a/|x|^2 with a>0 and b=1, and determined the Fujita exponent α_F(d,0,a). See also the recent work of Ishige and Kawakami in <cit.>. §.§ Dynamics of HHE with inverse square potential We are now ready to state our well-posedness result in the following theorem. 
Let q∈(1,∞) and σ_-,σ_+ be as defined in (<ref>). Let γ∈ (-2,∞) if a≤ 0 if a> 0 and α satisfies α∈ (1 ,1+γ+2/σ_-) if a≤ 0 (1+max(γ+2/σ_-, 0),∞) if a>0 . Let s≥γ/α-1, s>σ_--d/α and τ,τ_c be as in (<ref>) and satisfy σ_-<τ<σ_++2 and τ≤τ_c. Then the Cauchy problem (<ref>) is locally well-posed in L_s^q(^d), and for the critical case we also have small data global existence. In the subcritical case, if we impose the further restriction q>α and τ< σ_++2+γ/α, then one has uniqueness in C([0,T_m),L_s^q()). Theorem <ref> is new for a≠ 0 and γ>0. Up to now, well-posedness of (<ref>) with γ>0 was not known in the mere L^q-spaces, but only in weighted L^q_s-spaces. See Remark <ref>(<ref>). We prove Theorem <ref> via a fixed point argument. To this end, the main new ingredient required is our fixed-time estimate established in Theorem <ref>. We have several comments on Theorem <ref>. – Theorem <ref> recovers the results mentioned in Remark <ref>(<ref>) and is the main part of the detailed well-posedness result, Theorem <ref>. – For a=0 and τ=τ_c, we have from (<ref>) that τ_c<d⟺α> α_F. In this case, Theorem <ref>, along with Theorem <ref> below, recovers <cit.>, removes the assumption q≥α and allows s=γ/α-1. See Remark <ref>(<ref>). – For a=0, Theorem <ref> eliminates the technical hypothesis (1.13) and the condition α>α_F from <cit.> in the subcritical case. – Assume s=0,γ<0. Then for a=0 Theorem <ref> recovers <cit.> and for a≠0 Theorem <ref> recovers <cit.>. – For V(x)= a_*/|x|^2 and d≥ 3 in (<ref>), Ioku and Ogawa <cit.> proved small data global existence for 1+ 4/d+2 < α < 1+ 4/d-2. Theorem <ref> relaxes this assumption and proves the result for any α>α_F (note that α_F<1+ 4/d+2 for d≥2). See Remark <ref>. – In the subcritical case with assumption (<ref>), Theorem <ref> shows uniqueness of the solution in C([0,T_m),L_s^q()), while <cit.> established uniqueness for (<ref>) only in a proper subset of C([0,T_m),L_s^q()). See Remark <ref>. – For detailed comments on the hypotheses of Theorem <ref>, see Remarks <ref>, <ref>, <ref>. We now strengthen and complement Theorem <ref> by establishing the following result. Assume that τ≤τ_c. Let d,γ, α,q,s be as in Theorem <ref> (so local wellposedness for (<ref>) holds). Let F_α satisfy F_α(z)=z^α for z≥0[for example F_α(z)=μ|z|^α-1z, μ|z|^α or μ z^α]. Further assume d+γ< α d if a=0 α (d-2) if a≠0 . Then there exists initial data u_0∈ L_s^q() such that T_m(u_0)<∞. Moreover, if τ<τ_c, one has a unique blow-up solution to (<ref>) with initial data u_0 in the following sense: there exists a unique solution u of (<ref>) defined on [0, T_m) such that T_m< ∞ and lim_t↑ T_mu(t)_L_s^q=∞. We have several comments on Theorem <ref>. – For the critical case τ=τ_c, a similar blowup happens in a Kato norm: if T_m<∞, one would have u_𝒦_k,s^p,q(T_m)=∞ for a certain choice of (k,p). See Section <ref> for the definition of the Kato norm. – Take γ=s=0 in (<ref>), so that τ <τ_c⇔ q> d(α-1)/2. Weissler <cit.> established a blow-up solution for (<ref>) in L^q(ℝ^d). Theorem <ref> is compatible with this classical result. – For V(x)= a_*/|x|^2, u_0∈ L^d(α-1)/2(ℝ^d) with α≤ 1+ 4/d+2, Ioku and Ogawa <cit.> pointed out that (<ref>) has a blow-up solution in finite time in the L^∞-norm. However, we are not aware of any previous results on finite time blow-up in the L^q_s-norm for a, s, γ≠ 0 and q≠∞. Thus Theorem <ref> is new. – Assume d≥3, 1+γ+2/d-2<α<1+γ+2/σ_- for a≤0 1+γ/d<α<∞ for a=0 1+max(γ+2/σ_-,γ+2/d-2)<α<∞ for a>0 and the hypothesis on γ,q,s from Theorem <ref>. Let F(z)= |z|^α or |z|^α-1z or z^α. 
Then Theorem <ref> reveals that, there exists data in L_s^q() such that the local solution established in Theorem <ref> cannot be extend to global in time. In the critical case, it also says that small data assumption in Theorem <ref> is essentially optimal to establish global existence. Let u_0∈ L^1_loc(^d), then we say a function u is a weak solution to (<ref>) if u∈ L^α( (0,T), (L^α_γ/α)_loc(^d)) and satisfies the equation (<ref>) in the distributional sense, i.e. ∫_^d u(T',x) η (T',x) dx-∫_^d u_0(x) η (0,x) dx = ∫_[0,T']×^d u(t ,x)(∂_tη+Δη-a|x|^-2η) (t ,x) + |x|^γ F_α(u(t, x)) η(t,x) dx dt for all T'∈ [0,T] and for all η∈ C^1,2([0,T]×^d) such that suppη(t, ·) is compact. The time T is said to be the maximal existence time, which is denoted by T_m^w, if the weak solution cannot be extended beyond [0,T). Proceeding as <cit.> it follows that L_s^q-integral solutions are weak solution. In that case T_m≤ T_m^w. We shall now turn our attention to supercritical case. In this case, we show that there exists positive initial data in L^q_s() that do not generate a (weak) local solution to (<ref>). Specifically, we have the following theorem. Let d∈ℕ, a,γ∈ℝ, α satisfy (<ref>) and α>α_F(d,γ,0)=1+(2+γ)^+/d. Assume that F_α satisfies F_α(z)=z^α for z≥0, q∈ [1,∞], s∈. Let τ,τ_c be as in (<ref>) and satisfy τ< τ_c. Then there exists an initial data u_0 ∈ L^q_s (^d) such that (<ref>) with u(0)=u_0 has no positive local weak solution. – For a=0=γ, Theorem <ref> recovers results mentioned in Remark <ref>(<ref>). – For a=0,γ>-2, condition α> α_F(d,γ,0) (<ref>) implies d+γ< α d in (<ref>). Thus, in this case, Theorem <ref> recovers <cit.>. – Theorem <ref> implies failure of LWP in super-critical case. Theorem <ref> tells if α satisfies (<ref>) then the sub criticality or criticality condition is necessary in Theorem <ref>. The paper is organized as follows. In Section <ref>, we gather some general tools which will be used later. In Section <ref>, we prove Theorem <ref>. In Section <ref> we establish wellposedness results. In Section <ref>, we prove Theorems <ref> and <ref>. § PRELIMINARIES Notations: The symbol α∧β means min(α,β) whereas α∨β mean max(α,β). By a^+ we denote a∨0. The notation A ≲ B means A ≤ cB for some universal constant c > 0. By A≳ B we mean B≲ A. By A∼ B we mean A≲ B and A≳ B. We shortly denote unweighed Lebesgue space norm by f_L^p= f_p. The Schwartz space is denoted by 𝒮(ℝ^d), and the space of tempered distributions is denoted by 𝒮'(ℝ^d). For s∈ and q∈ [1,∞], we introduce the weighted local Lebesgue space L^q_s,loc(^d) given by L^q_s,loc(^d):={ f ∈ L^0(^d) ; f|_K∈ L^q_s(^d), ∀ K⊂^d, K compact} where L^0() is the set of measurable functions on . §.§ Lorentz space The Lorentz space is the space of all complex-valued measurable functions f such that f_L^p,q()<∞ where f_L^p,q() is defined by f_L^p,q():=p^1/q tμ{|f|>t}^1/p_L^q((0,∞),dt/t) with 0<p<∞, 0<q≤∞ and μ denotes the Lebesgue measure on . Therefore f_p,q:= f_L^p,q()= p^1/q(∫_0^∞ t^q-1μ{|f|>t}^q/pdt)^1/q for q<∞ sup_t>0tμ{|f|>t}^1/p for q=∞. Let us gather some useful results on Lorentz spaces relevant to subsequent our proofs. Let 1≤ p≤∞, 1≤ q_1,q_2≤∞. Then * f_p,p∼ f_p, the usual Lebesgue p-norm. * f_p,q_2≲ f_p,q_1 if q_1≥ q_2. * |·|^-b∈ L^d/b,∞() for b>0. We have the following inequalities in Lorentz spaces: * (Hölder's inequality) Let 1/r=1/r_0+1/r_1∈[0,1) and s≥1 is such that 1/s≤1/s_0+1/s_1. Then fg_L^r,s≤ r' f_L^r_0,s_0 g_L^r_1,s_1. * (Young's inequality) Let 1/r=1/r_0+1/r_1-1∈(0,1] and s≥1 is such that 1/s≤1/s_0+1/s_1. 
Then f∗ g_L^r,s≤ 3r f_L^r_0,s_0 g_L^r_1,s_1. §.§ Heat kernel estimate Let g_a be the symmetric (in x,y variable) heat kernel associated with the operator , i.e. e^-t ℒ_a f(x)= ∫_ℝ^d g_a(t,x, y) f(y) dy (t>0) see <cit.>. Then we have the following bounds for g_a: [see Theorem 6.2 in <cit.>] Let σ_-,σ_+ be as defined in (<ref>). Let d≥2, a≥ a_*. Then there exist c_1,c_2>0 such that for any t>0 and x,y∈\{0}, the following estimate holds: (1∨√(t)/|x|)^σ_-(1∨√(t)/|y|)^σ_- t^-d/2e^-|x-y|^2/c_1t≲ g_a(t,x,y) ≲(1∨√(t)/|x|)^σ_-(1∨√(t)/|y|)^σ_- t^-d/2e^-|x-y|^2/c_2t. § DISSIPATIVE ESTIMATES IN WEIGHTED LEBESGUE SPACES In order to prove Theorem <ref>, we first show it is enough to prove for t=1 (Lemma 3.1), then using a duality (Lemma 3.2) we show it is enough to prove for s_1≥0. Then we crucially use a known heat kernel estimate (Theorem <ref>) to achieve the desired result. Let 1≤ q_1,q_2 ≤∞, and s_1,s_2∈ℝ. Then e^-ℒ_a is bounded from L^q_1_s_1(ℝ^d) into L^q_2_s_2(ℝ^d) if and only if e^-tℒ_a is bounded from L^q_1_s_1(ℝ^d) into L^q_2_s_2(ℝ^d) with e^-tℒ_a_L^q_1_s_1→ L^q_2_s_2 = t ^-d/2 (1/q_1 - 1/q_2) - s_1 - s_2/2e^-ℒ_a_L^q_1_s_1→ L^q_2_s_2 for any t>0. It is enough to show (<ref>) if e^-ℒ_a is bounded from L^q_1_s_1(ℝ^d) into L^q_2_s_2(ℝ^d), since the converse is trivial. The proof is based on the scaling argument. Let f ∈ L^q_1_s_1(ℝ^d). Since (e^-tℒ_a f)(x) = (e^-ℒ_a (f(t^1/2·)))(t^-1/2 x), (e^-ℒ_a f)(x) = (e^-tℒ_a (f(t^-1/2·)))(t^1/2 x), for t>0 and x∈ℝ^d, we have e^-tℒ_a f_L^q_2_s_2≤ t ^-d/2 (1/q_1 - 1/q_2) - s_1 - s_2/2e^-ℒ_a_L^q_1_s_1→ L^q_2_s_2f_L^q_1_s_1, e^-ℒ_a f_L^q_2_s_2≤ t ^d/2 (1/q_1 - 1/q_2) + s_1 - s_2/2e^-tℒ_a_L^q_1_s_1→ L^q_2_s_2f_L^q_1_s_1. Hence, (<ref>) is proved. Let q_1,q_2∈(1,∞) and s_1,s_2∈ and A={x∈:|x|≥1}. Let k(x,y)=k(y,x) for x,y∈ A and for x∈ A set Tf(x)=∫_A k(x,y)f(y)dy. Then Tf_L^q_2_s_2(A)≤ Cf_L^q_1_s_1(A) for all f if and only if Tf_L^q_1'_-s_1(A)≤ Cf_L^q_2'_-s_2(A) for all f. Note that Tf_L^q_1'_-s_1(A) = sup_g_L_s_1^q_1≤1|∫_A∫_Ak(x,y)f(y)dyg(x)dx| = sup_g_L_s_1^q_1≤1|∫_A∫_Ak(x,y)g(x)dxf(y)dy | = sup_g_L_s_1^q_1≤1|∫_A(Tg)(y)f(y)dy | ≤sup_g_L_s_1^q_1≤1Tg_L_s_2^q_2f_L_-s_2^q_2' ≤ csup_g_L_s_1^q_1≤1g_L_s_1^q_1f_L_-s_2^q_2'=cf_L_-s_2^q_2' (A) This completes the proof. Assume that (<ref>) and (<ref>) hold. In view of Lemma <ref> it is enough to prove the case t=1 i.e. e^-ℒ_af_L^q_2_s_2≲f_L^q_1_s_1. For x∈ and f∈ L_s_1^q_1() applying Theorem <ref> we achieve |e^- ℒ_af(x)| ≲ (1∨1/|x|)^σ_-∫_^d(1∨1/|y|)^σ_-G(x-y)|f(y)|dy. where G(x):=e^-|x|^2/c_2 with c_2 as in Theorem <ref>. Set 1_≥ 1(x):= 0 for |x|< 1 1 for |x|≥ 1 and 1_<1:=1-1_≥1. Then using e^-ℒ_af=1_≥1e^-ℒ_af+1_<1e^-ℒ_af and (<ref>) we have e^-ℒ_af_L^q_2_s_2 ≤ e^-ℒ_af(x)_L^q_2_s_2(|x|≥ 1)+e^-ℒ_af(x)_L^q_2_s_2(|x|<1) ≲ ∫_^d(1∨1/|y|)^σ_-G(x-y)|f(y)|dy_L^q_2_s_2(|x|≥ 1) + |x|^-σ_-∫_^d(1∨1/|y|)^σ_-G(x-y)|f(y)|dy_L^q_2_s_2(|x|<1). Splitting the integrations in y variable we obtain e^-ℒ_af_L^q_2_s_2 ≲ ∫_|y|≥1G(x-y)|f(y)|dy_L^q_2_s_2(|x|≥ 1) + ∫_|y|<1|y|^-σ_-G(x-y)|f(y)|dy_L^q_2_s_2(|x|≥ 1) + |x|^-σ_-∫_|y|≥2G(x-y)|f(y)|dy_L^q_2_s_2(|x|<1) + |x|^-σ_-∫_|y|<2|y|^-σ_-G(x-y)|f(y)|dy_L^q_2_s_2(|x|<1)=:I+II+III+IV. Now we show that each of these terms is dominated by f_L^q_1_s_1 which would prove (<ref>) to conclude the proof. 
Estimate for IV: Using boundedness of G and changing the order of integration and Hölder's inequality we obtain IV ≲ |x|^-σ_-∫_|y|<2|y|^-σ_-|f(y)|dy_L^q_2_s_2(|x|<1) = |x|^-σ_-_L^q_2_s_2(|x|<1)∫_|y|<2|y|^-σ_-|f(y)|dy = |x|^-σ_-_L^q_2_s_2(|x|<1)∫_|y|<2|y|^-σ_–s_1|y|^s_1|f(y)|dy ≤ |x|^-σ_-_L^q_2_s_2(|x|<1)|y|^-σ_–s_1_L^q_1'(|y|<2)|y|^s_1f(y)_q_1 ≲ f_L^q_1_s_1 where in the last step we have used the hypothesis (s_2-σ_-)q_2+d>0 ⟺ σ_-<s_2+d/q_2, (-σ_–s_1)q_1'+d>0 ⟺ s_1+d/q_1<d-σ_-=σ_++2. Estimate for III: Note that for |x|<1, |y|≥2 we have |x-y|≥ |y|-|x|≥1/2|y|, and as G is radially decreasing we have G(x-y)≤ G(y/2) therefore III ≤ |x|^-σ_-∫_|y|≥2G(y/2)|f(y)|dy_L^q_2_s_2(|x|<1) = |x|^-σ_-_L^q_2_s_2(|x|<1)∫_|y|≥2G(y/2)|y|^-s_1|y|^s_1|f(y)|dy ≤ |x|^-σ_-_L^q_2_s_2(|x|<1)G(y/2)|y|^-s_1_L^q_1'(|y|≥2)|y|^s_1f(y)_q_1 ≲ f_L^q_1_s_1 where in the last step we have used the hypothesis σ_-<s_2+d/q_2 as in the estimate for IV and the fact that G is Schwartz class function. Estimate for II: We claim that |x|^s_2G(x-y)_L^q_2(|x|≥1)≲ 1 uniformly for all |y|<1. In fact when s_2≤0 we have |x|^s_2G(x-y)_L^q_2(|x|≥1)≤ G(x-y)_L^q_2(|x|≥1)≤ G_q_2 for all y. On the other hand when s_2>0, using |x|^s_2≲ |x-y|^s_2+|y|^s_2, for |y|<1 we have |x|^s_2G(x-y)_L^q_2(|x|≥1) ≲ |x-y|^s_2G(x-y)_L^q_2(|x|≥1)+ |y|^s_2G(x-y)_L^q_2(|x|≥1) ≤ |·|^s_2G_q_2+G(x-y)_L^q_2(|x|≥1) ≤ |·|^s_2G_q_2+G_q_2. This proves the claim. Then II = |x|^s_2∫_|y|<1|y|^-σ_-G(x-y)|f(y)|dy_L^q_2(|x|≥ 1) ≤ ∫_|y|<1|x|^s_2G(x-y)_L^q_2(|x|≥ 1)|y|^-σ_- |f(y)|dy ≲ ∫_|y|<1|y|^-σ_–s_1|y|^s_1 |f(y)|dy ≤ |y|^-σ_–s_1_L^q_1'(|y|<1)|y|^s_1f(y)_q_1 ≲ f_L^q_1_s_1 using the claim above and the hypothesis s_1+d/q_1<σ_++2 as in the estimate for IV. Estimate for I: Let us treat I case by case. Case s_1=s_2=0 By hypothesis 1/p:=1+1/q_2-1/q_1∈[0,1]. Then using Young's inequality I= ∫_|y|≥1G(x-y)|f(y)|dy_L^q_2(|x|≥ 1)≤G∗ (1_≥1|f|)_q_2≤G_pf_q_1≲ f_L^q_1_0. Case 0=s_2<s_1 If 1/q_2=1/q_1+s_1/d, then using Young's and Holder's inequalities in Lorentz spaces i.e. Lemma <ref> we have I ≲ G_1,q_21_≥1f_q_2,∞ ≲ |·|^-s_1_d/s_1,∞|·|^s_1f_q_1,∞ ≲ f_L^q_1_s_1. If 1/q_2< 1/q_1+s_1/d, then by using Lemma <ref> (<ref>) we choose 1/p_0,1/p_1,1/p_2∈[0,1] so that 1+1/q_2=1/p_0+1/p_1 1/p_1=1/p_2+1/q_1, 1/p_2<s_1/d. and then using Young's and Holder's inequalities we achieve I ≤ G_p_01_≥1f_p_1 ≲ 1_≥1|·|^-s_1_p_3|·|^s_1f_q_1 ≲ f_L^q_1_s_1. Case 0<s_2=s_1 Using |x|^s_2≲ |x-y|^s_2+|y|^s_2 and Young's inequality I = |x|^s_2∫_|y|≥1G(x-y)|f(y)|dy_L^q_2(|x|≥ 1) ≲ ∫_|y|≥1|x-y|^s_2G(x-y)|f(y)|dy_q_2+∫_|y|≥1G(x-y)|y|^s_2|f(y)|dy_q_2 = (|·|^s_2G)∗ (1_≥1|f|)_q_2+G∗ (1_≥1|·|^s_2|f|)_q_2:=Ia+Ib Note that we have d/q_2< d/q_1+s_1 as s_2>0 and (<ref>) is assumed. Then by choosing 1/p_0,1/p_1,1/p_2∈[0,1] satisfying (<ref>) and using Young's and Holder's inequalities we achieve Ia ≤ |·|^s_2G_p_01_≥1f_p_1 ≲ 1_≥1|·|^-s_1_p_2|·|^s_1f_q_1 ≲ f_L^q_1_s_1. By hypothesis 1/p:=1+1/q_2-1/q_2∈[0,1] then Ib ≤ G_p1_≥1|·|^s_2f_q_1≲ f_L^q_1_s_1. Case 0<s_2< s_1 Since d/q_2< d/q_1+s_1 we proceed as in above case and prove the estimate for Ia. Now with the assumption s_2<s_1 using Lemma <ref> (<ref>) we choose 1/p_3,1/p_4,1/p_5∈[0,1] satisfying 1+1/q_2=1/p_3+1/p_4, 1/p_4=1/p_5+1/q_2, 1/p_5<s_1-s_2/d and obtain Ib ≤ G_p_31_≥1|·|^s_2f_p_4 ≲ 1_≥1|·|^s_2-s_1_p_5|·|^s_1f_q_1 ≲ f_L^q_1_s_1. 
Case s_2<0< s_1 If d/q_2+s_2< d/q_1+s_1, by Lemma <ref> (<ref>) we choose 1/p_6,⋯,1/p_10∈[0,1] satisfying 1/q_2=1/p_6+1/p_7, 1+1/p_7=1/p_8+1/p_9, 1/p_9=1/p_10+1/q_1, 1/p_6<-s_2/d, 1/p_10<s_1/d so that I = |x|^s_2∫_|y|≥1G(x-y)|f(y)|dy_L^q_2(|x|≥ 1) ≤ |x|^s_2_L^p_6(|x|≥1)G∗ (1_≥1|f|)_L^p_7(|x|≥ 1) ≤ |x|^s_2_L^p_6(|x|≥1)G_p_81_≥1f_p_9 ≤ |x|^s_2_L^p_6(|x|≥1)G_p_8|y|^-s_1_L^p_10(|y|≥1)|·|^s_1f_q_1 ≲f_L^q_1_s_1. If d/q_2+s_2=d/q_1+s_1, then we claim 0<-s_2/d<1. We need to show -s_2<d i.e. s_2>-d. Infact if s_2≤- d, then d/q_1+s_1=d/q_2+s_2≤d/q_2-d<0, a contradiction as q_1,s_1>0. Next we claim 0<1/q_1+s_1/d<1. This is because 0<1/q_1+s_1/d=1/q_2+s_2/d<1/q_2<1 using s_2<0. Above claim shows 1/p_12:=s_1/d+1/q_1∈(0,1), then we have 1/q_2=1/-d/s_2+1/p_12. Therefore I = |x|^s_2∫_|y|≥1G(x-y)|f(y)|dy_L^q_2(|x|≥ 1) ≤ |x|^s_2_d/-s_2,∞G∗ (1_≥1|f|)_p_12,q_2 ≤ |x|^s_2_d/-s_2,∞G_1,q_21_≥1f_p_12,∞ ≤ |x|^s_2_d/-s_2,∞G_1,q_2|y|^-s_1_d/s_1,∞|·|^s_1f_q_1,∞ ≲f_L^q_1_s_1. Case s_2≤ s_1≤0 Follows from duality Lemma <ref> and the above cases. This completes the proof. Assume that (<ref>) hold. Let g=exp(-|·|^2/c_1) where c_1 as in Theorem <ref>. Necessity of σ_-<s_2+d/q_2, s_1+d/q_1<σ_++2: Let f be supported in B(0,1) and equal to |·|^θ in B(0,1/2) with θ>max(-s_1-d/q_1,σ_–d,0). Then f∈ L_s_1^q_1() and hence by hypothesis (<ref>), we have e^-f∈ L_s_2^q_2(). On the other hand for |x|≤1 [e^-f](x) ≥ ∫_|y|≤1/2g_a(1,x,y)f(y)dy ≳ |x|^-σ_-∫_|y|≤1/2|y|^-σ_-+θ g(x-y)dy∼ |x|^-σ_-. where we have used Theorem <ref> in the second step and θ>σ_–d in the last step. Since e^-f∈ L_s_2^q_2(), we must have σ_-<s_2+d/q_2. Using symmetry of heat kernel see (<ref>) in Remark <ref>. it follows that s_1+d/q_1<σ_++2. This proof is a major modification made to <cit.> where q_1=q_2, s_1=s_2=0 was treated. Necessity of s_2+d/q_2≤ s_1+d/q_1: Let 0 ≠ f∈ L^2∩ L_s_1^q_1. If s_2+d/q_2> s_1+d/q_1, then using (<ref>), we have e^-tℒ_af→0 in L^q_2_s_2 (and hence pointwise a.e.) as t→0. Since f∈ L^2, using semigroup property, we have e^-tℒ_af→ f in L^2 as t→0. Thus f=0 which is a contraction. Necessity of s_2≤ s_1: We prove this by modifying the proof in case a=0 in <cit.>. Let φ∈ L_s_1^q_1 be a smooth non-negative function with support in B(0,1) and take f_τ=φ(·-τ x_0) with |x_0|=1. Then for τ>2 and |x|≥1 [e^-f_τ](x) ≥ ∫_|y|≥1g_a(1,x,y)f_τ(y)dy ≳ ∫_|y|≥1 g(x-y)f_τ(y)dy = ∫ g(x-y)f_τ(y)dy=(g∗ f_τ) (x)=(g∗φ)(·-τ x_0) where we have used Theorem <ref> in the second step, the fact B(0,1)∩ ( f_τ)=∅ in the third step. Now |·|^s_2(g∗φ)(·-τ x_0)_q_2=|·+τ x_0|^s_2(g∗φ)_q_2=τ^s_2|·/τ+ x_0|^s_2(g∗φ)_q_2 and |·|^s_1f_q_1=τ^s_1|·/τ+ x_0|^s_1φ_q_1. Therefore for τ>2 we have from (<ref>) that τ^s_2-s_1|·/τ+ x_0|^s_2(g∗φ)_q_2≲|·/τ+ x_0|^s_1`φ_q_1 but |·/τ+ x_0|^s_2(g∗φ)_q_2→g∗φ_q_2 and |·/τ+ x_0|^s_1`φ_q_1→φ_q_1 as τ→∞. Therefore we must have s_2≤ s_1. There exists p_0,⋯,p_10∈[1,∞] so that * if 0< s_1, d/q_2< d/q_1+s_1 hold, then (<ref>) is satisfied, * if s_2< s_1 holds, then (<ref>) is satisfied, * if s_2<0< s_1, d/q_2+s_2< d/q_1+s_1 hold, then (<ref>) is satisfied. (<ref>) Choose 1/p_0∈(max(1/q_2,1+1/q_2-1/q_1-s_1/d),min(1,1+1/q_2-1/q_1)). The last interval in nonempty as q_1,q_2∈(1,∞), s_1>0 and d/q_2< d/q_1+s_1. Now set 1/p_1=1+1/q_2-1/p_0, 1/p_2=1+1/q_2-1/p_0-1/q_1 then (<ref>) is satisfied. (<ref>) Proof is similar to (<ref>), only s_1 is replaced by s_1-s_2. Choose 1/p_3∈(max(1/q_2,1+1/q_2-1/q_1-s_1-s_2/d),min(1,1+1/q_2-1/q_1)). The last interval in nonempty as q_1,q_2∈[1,∞], s_1-s_2>0 and 1/q_2< 1/q_1+s_1-s_2/d. 
Now set 1/p_4=1+1/q_2-1/p_3, 1/p_5=1+1/q_2-1/p_3-1/q_1 then (<ref>) is satisfied. (<ref>) Note that 1+1/q_2+s_2/d-1/q_1-s_1/d<1, then choose 1/p_8∈(max(1+1/q_2+s_2/d-1/q_1-s_1/d,1-1/q_1-s_1/d,1/q_2+s_2/d,0),min(1+1/q_2-1/q_1,1)). Then choose 1/p_7∈(max(1/q_2+s_2/d,1/p_8+1/q_1-1,0),min(1/p_8+1/q_1+s_1/d-1,1/q_2,1/p_8)). Set 1/p_6=1/q_2-1/p_7, 1/p_9=1+1/p_7-1/p_8, 1/p_10=1+1/p_7-1/p_8-1/q_1 so that equalities in (<ref>) are satisfied. § LOCAL AND SMALL DATA GLOBAL WELL-POSEDNESS In this section we prove the well-posedness in critical and subcritical case i.e. when τ≤τ_c (recall that τ=d/q+s and τ_c=2+γ/α-1). In order to prove Theorem <ref>, we introduce the Kato space depending on four parameters (p,q,k,s). Let k,s∈ and p,q∈ [1,∞], set β=β(d,k,s,p,q):=1/2(s+d/q-k-d/p). Then the Kato space 𝒦^p,q_k,s(T) is defined by 𝒦^p,q_k,s(T) :={u:[0,T)→ L_k^p(): u_𝒦^p, q_k,s(T') <∞ for any T' ∈ (0,T)} endowed with the norm u_𝒦^p,q_k,s(T) :=sup_0≤ t< Tt^βu(t)_L^p_k. In <cit.>, Kato space with three parameter was used. This is basically 𝒦^q,q_k,s(T) when one puts p=q in Definition <ref>. This restriction didn't allow authors in <cit.> to consider the case 1≤ q<α. By Theorem <ref>, we immediately get the following result (in fact these results are equivalent): Let k,s∈ and p,q∈ (1,∞). Then e^-tℒ_af_𝒦^p,q_k,s≤ Cf_L^q_s, ∀ f∈ L^q_s(^d) if and only if k≤ s and σ_-< d/p+k≤d/q+s<σ_++2. Recall that by solution we meant integral solution and therefore, we introduce a nonlinear mapping 𝒥 given by 𝒥_φ[u](t):=e^-tℒ_aφ+∫_0^te^-(t-τ)ℒ_aF_α(u(τ))dτ. A fixed point of this map would essentially be a solution to (<ref>). Next using Lemma <ref>, we establish the nonlinear estimates in Kato spaces with appropriate conditions on the parameters. Let α>1, γ∈ satisfy (<ref>) , (<ref>). Let s∈ and q∈ (1,∞) satisfy τ=s+d/q≤τ_c= 2+γ/α-1. Let k,p satisfy γ/α-1≤ k, α<p<∞ s+γ/α≤ k σ_-<k+d/p<σ_++2+γ/α 1/α(d/q+s+γ)<d/p+k≤τ if τ<τ_c d/p+k<τ if τ=τ_c . Then for any u,v∈𝒦^p,q_k,s(T) we have 𝒥_φ[u]-𝒥_φ[v]_𝒦^p,q_k,s(T) 𝒥_φ[u]-𝒥_φ[v]_𝒦^q,q_s,s(T)≲ T^α-1/2(τ_c-τ)(u_𝒦^p,q_k,s(T)^α-1+v_𝒦^p,q_k,s(T)^α-1)u-v_𝒦^p,q_k,s(T). Note that v_𝒦^q,q_s,s(T)=sup_0≤ t<Tv(t)_L_s^q. First inequality in (<ref>), last inequality in (<ref>) and (<ref>) imposes the condition σ_-<τ_c. This is equivalent with σ_-<2+γ/α-1⟺α<1+2+γ/σ_- if σ_->0 0<2+γ if σ_-=0 α>1+2+γ/σ_- if σ_-<0, which is confirmed by (<ref>), (<ref>) (using the fact σ_->0⇔ a<0 and σ_-<0 ⇔ a>0 and σ_-=0 if a=0). Note that (<ref>) imposes the condition σ_-<σ_++2+γ/α⟺α<σ_+/σ_-+2+γ/σ_- if σ_->0 0<σ_++2+γ if σ_-=0 α>σ_+/σ_-+2+γ/σ_- if σ_-<0 and this is implied by (<ref>) as σ_+/σ_->1 for σ_->0 and σ_+/σ_-<0 for σ_-<0. Before proving Proposition <ref>, we prove a technical lemma as an application of Theorem <ref>. Assume α≥1 and let p∈(α,∞), r∈(1,∞), l,k∈ and σ_-<d/r+l, d/p+k<σ_++2+γ/α, γ≤α k- l+min(α d/p-d/r,0). then for t>0 and φ,ψ∈ L_k^p() we have e^-t[|·|^γ|{φ|^α-1φ)-|ψ|^α-1ψ}]_L_l^r≲ t^-d /2(α/p-1/r)-α k-l-γ/2(φ_L^p_k^α-1+ψ_L^p_k^α-1)φ-ψ_L^p_k. Note that (<ref>) is equivalent with σ_-<d/r+l≤d/p/α+α k-γ<σ_++2, l≤α k-γ. By Theorem <ref>, with s_2=l, s_1=α k-γ , q_2=r, q_1=p/α we obtain e^-t[|·|^γ|{φ|^α-1φ)-|ψ|^α-1ψ}]_L_l^r ≲ t^-d /2(α/p-1/r)-α k-γ-l/2|·|^γ(|φ|^α-1φ-|ψ|^α-1ψ)_L_α k-γ^p/α = t^-d /2(α/p-1/r)-α k-l-γ/2|·|^α k(|φ|^α-1+|ψ|^α-1)|φ-ψ|_p/α = t^-d /2(α/p-1/r)-α k-l-γ/2[(|·|^ k|φ|)^α-1+(|·|^k|ψ|)^α-1][|·|^k(φ-ψ)]_p/α. 
By using α/p=α-1/p+1/p and Holder' inequality, the above quantity is dominated by t^-d /2(α/p-1/r)-α k-l-γ/2(|·|^k|φ|)^α-1+(|·|^k|ψ|)^α-1_L^p/α-1|·|^k(φ-ψ)_L^p ≲ t^-d /2(α/p-1/r)-α k-l-γ/2(φ_L_k^p^α-1+ψ_L_k^p^α-1)φ-ψ_L_k^p which completes the proof. Let us first establish two claims: Claim I: Let β=β(d,k,s,p,q) be as in Definition <ref>. Then βα<1 Proof of Claim I: Note that s+ d/q=τ≤τ_c= 2+γ/α-1 implies (s+d/q)α-2≤ s+d/q+γ. First inequality in (<ref>) says s+d/q+γ<(d/p+k)α. Thus (s+d/q)α-2<(d/p+k)α⟺[d/q+s-d/p-k]α<2⟺(<ref>). Claim II: d/2(α/p-1/q)+α k-γ-s/2<1 Proof of Claim II: For the subcritical case τ<τ_c we have Proof of claim: d/2(α/p-1/q)+α k-γ-s/2 ≤ d/2(α/p-1/p)+α k-γ-k/2 = 1/2(d/p+k)(α-1)-γ/2 ≤ 1/2(d/q+s)(α-1)-γ/2<1; where in the first and third inequalities we used d/p+k≤τ=d/q+s and in the last step we used τ<τ_c. Proof for the ease τ=τ_c, we only need to make the first, third nonstrict inequalities by strict inequalities (using d/p+k<τ=d/q+s) and last strict inequality by equality (using τ=τ_c). This proves Claim II. Now note that (<ref>), (<ref>) implies (<ref>) for (p, r,l,s)=(p,p,k,k). By Lemma <ref> with (p, r,l,s)=(p,p,k,k) and (<ref>) we have 𝒥_φ[u]-𝒥_φ[v]_L^p_k ≲ ∫_0^te^-(t-τ)ℒ_a[|x|^γ(|u|^α-1u-|v|^α-1v)(τ)]_L^p_kdτ ≲ ∫_0^t(t-τ)^-d(α-1)/2p-1/2{(α-1)k-γ}(u(τ)_L^p_k^α-1+v(τ)_L^p_k^α-1)u(τ)-v(τ)_L^p_kdτ ≲ (u_𝒦^p,q_k,s(T)^α-1+v_𝒦^p,q_k,s(T)^α-1)u-v_𝒦^p,q_k,s(T)∫_0^t(t-τ)^-d(α-1)/2p-1/2{(α-1)k-γ}τ^-βαdτ, where the last inequality is due to the fact u,u∈𝒦^p,q_k,s(T). Recall τ_c= 2+γ/α-1 and B(x,y):=∫_0^1τ^x-1(1-τ)^y-1dτ is convergent if x,y>0. Taking (<ref>), (<ref>), (<ref>) into account, note that the last time-integral in (<ref>) is bounded by t^1-d(α-1)/2p-1/2{(α-1)k-γ}-αβ∫_0^1(1-τ)^-d(α-1)/2p-1/2{(α-1)k-γ}τ^-αβ dτ = t^(α-1)/2(τ_c-τ)t^-βB((α-1)/2(τ_c-d/p-k),1-αβ)< ∞. This together with (<ref>) implies the first part of the result. Note that (<ref>), (<ref>) implies (<ref>) for (p, r,l,s)=(p,q,k,s). So by Lemma <ref> with (p, r,l,s)=(p,q,k,s), we have 𝒥_φ[u](t)-𝒥_φ[v](t)_L^q_s ≲ ∫_0^te^-(t-τ)ℒ_a[|·|^-γ(|u|^α-1u-|v|^α-1v)(τ)]_L^q_s dτ ≲ ∫_0^t(t-τ)^-d/2(α/p-1/q)-α k-γ-s/2(u(τ)_L^p_k^α-1+v(τ)_L^p_k^α-1)u(τ)-v(τ)_L^p_kdτ ≲ (u_𝒦^p,q_k,s(T)^α-1+v_𝒦^p,q_k,s(T)^α-1)u-v_𝒦^p,q_k,s(T)∫_0^t(t-τ)^-d/2(α/p-1/q)-α k-γ-s/2τ^-αβdτ. The last integral is bounded by t^1-d/2(α/p-1/q)-α k-γ-s/2-αβ∫_0^1(1-τ)^-d/2(α/p-1/q)-α k-γ-s/2τ^-αβdτ = t^(α-1)/2(τ_c-τ)∫_0^1(1-τ)^-d/2(α/p-1/q)-α k-γ-s/2τ^-αβdτ, which is finite in view of (<ref>) and (<ref>). Now (<ref>) and (<ref>) implies the second part of the result. * Condition (<ref>) and last inequality in (<ref>) are used to make sure the beta functions B(x,y) is finite for various choices of x,y. * Conditions in (<ref>), (<ref>) (<ref>) are used to invoke Lemma <ref> with (p, r,l,s)=(p,p,k,k) and with (p, r,l,s)=(p,q,k,s). In the next result, we prove that there exists parameter p,k such that (<ref>) in Lemma <ref> and (<ref>), (<ref>), (<ref>) in Proposition <ref> are satisfied. Assume (<ref>), (<ref>). Let γ/α-1≤ s, σ_–d/α<s and q∈ (1,∞) satisfy σ_-<d/q+s<σ_++2. Then there exist k∈ and p∈ (α,∞) satisfying hypothesis (<ref>) of Lemma <ref>, and hypotheses (<ref>), (<ref>), (<ref>) of Proposition <ref>. If we further assume τ<τ_c, (<ref>), we can choose p=q and k=s. We need σ_-<d/p+k<d/q+s<-γ+(d/p+k)α<σ_++2, and s+γ/α≤ k≤ s. Now (<ref>) follows if we chosse d/p+k so that max(σ_-,τ+γ/α)<d/p+k<min(σ_++2+γ/α,τ). Choose k such that max(σ_--d/α,s+γ/α)<k<min(σ_++2+γ/α,s) so that (<ref>) is satisfied. 
Then choose p so that max(σ_–k,τ+γ/α-k,0)<d /p<min(σ_++2+γ/α-k,d/q+s-k,d/α) which is possible as σ_-<σ_++2+γ/α as a consequence of (<ref>). This completes the proof. The furthermore more part is clear. As we are done with linear estimate Lemma <ref> and nonlinear estimate <ref> and existence of parameter p,k we are in a position to prove the following well-posedness result which implies Theorem <ref>. Let α>1, γ∈ satisfy (<ref>) , (<ref>). Let s∈, q∈ (1,∞) satisfy the subcriticality condition defined in (<ref>) and γ/α-1≤ s, σ_–d/α<s. Let k∈ and p∈ (α,∞) satisfy hypothesis (<ref>) of Lemma <ref>, and hypotheses (<ref>), (<ref>), (<ref>) of Proposition <ref>. Then the Cauchy problem (<ref>) is locally well-posed in L^q_s(^d) for arbitrary data u_0∈ L_s^q(^d). More precisely, the following assertions hold. * (Existence) For any u_0 ∈ L^q_s(^d), there exist a positive number T and an L^q_s(^d)-integral solution u to (<ref>) satisfying u_𝒦^p,q_k,s(T)≤ 2 e^-tℒ_a u_0_𝒦^p,q_k,s(T). Moreover, the solution can be extended to the maximal interval [0,T_m). * (Uniqueness in 𝒦^p,q_k,s(T)) Let T>0. If u, v ∈𝒦^p,q_k,s(T) satisfy (<ref>) with u(0) = v(0)=u_0, then u=v on [0,T]. * (Continuous dependence on initial data) For any initial data φ and ψ in L^q_s(^d), let T(φ) and T(ψ) be the corresponding existence time given by part (<ref>). Then there exists a constant C depending on φ and ψ such that the corresponding solutions u and v satisfy u-v_L^∞(0,T;L^q_s) ∩𝒦^p,q_k,s(T)≤ C T^α-1/2(τ_c-τ)u_0-v_0_L^q_s for T< min{T(u_0), T(v_0)}. * (Blow-up criterion in subcritical case τ<τ_c)) If T_m<∞, then lim_t↑ T_mu(t)_L^q_s=∞. Moreover, the following lower bound of blow-up rate holds: there exists a positive constant C independent of t such that u(t)_L^q_s≳(T_m - t)^-α-1/2(τ_c-τ) for t∈ (0,T_m). * (Blow-up criterion in critical case τ=τ_c) If u is an L^q_s(^d)-integral solution constructed in the assertion (<ref>) and T_m<∞, then u_𝒦^p,q_k,s(T_m)=∞. * (Small data global existence in critical case τ=τ_c) There exists ϵ_0>0 depending only on d,γ,α,q and s such that if u_0 ∈𝒮'(^d) satisfies e^-tℒ_au_0_𝒦^p,q_k,s<ϵ_0 (or u_0_L_s^q<ϵ_0 in view of Lemma <ref>), then T_m=∞ and u_𝒦^p,q_k,s≤ 2ϵ_0. Existence in Kato space 𝒦_k,s^p,q(T): Define B_M^T:={u∈𝒦_k,s^p,q(T) : u_𝒦_k,s^p,q(T)≤ M} with the metric d(u,v)=: u-v_𝒦_k,s^p,q(T). Then by Lemma <ref>, Proposition <ref>, for u,v∈ B_M^T we have 𝒥_u_0[u]_𝒦_k,s^p,q(T) ≤ e^-tu_0_𝒦_k,s^p,q(T)+cT^α-1/2(τ_c-τ)M^α and 𝒥_u_0[u]-𝒥_v_0[v]_𝒦_k,s^p,q(T) ≤ e^-t(u_0-v_0)_𝒦_k,s^p,q(T)+cT^α-1/2(τ_c-τ)M^α-1u-v_𝒦^p,q_k,s(T) Subcritical case τ<τ_c: Using (<ref>), (<ref>) and choosing M=2e^-tu_0_𝒦_k,s^p,q(T) and T>0 small enough so that cT^α-1/2(τ_c-τ)M^α-1≤1/2, we find 𝒥_u_0 is a contraction in B_M^T sub-critical case (and hence we have existence of unique solution u∈ B_M^T). This proves (<ref>), (<ref>). Critical case τ=τ_c: Note that using a density argument we have lim_T→0e^-tu_0_𝒦_k,s^p,q(T)=0. Thus we choose T>0 so that M:=2e^-tu_0_𝒦_k,s^p,q(T) and cM^α-1<1/2 where c as in (<ref>). Then by using (<ref>), (<ref>) for u,v∈ B_M^T we have 𝒥_u_0[u]_𝒦_k,s^p,q(T) ≤ e^-tu_0_𝒦_k,s^p,q(T)+cM^α≤M/2+M/2= M and 𝒥_u_0[u]-𝒥_v_0[v]_𝒦_k,s^p,q(T) ≤ e^-t(u_0-v_0)_𝒦_k,s^p,q(T)+1/2u-v_𝒦^p,q_k,s(T) Thus 𝒥_u_0 is a contraction in B_M^T. This proves (<ref>). Solution is in C([0,T),L_s^q()): Using Lemma <ref>, Proposition <ref> 𝒥_u_0[u](t)_L^q_s ≲ u_0_L^q_s+M^αT^α-1/2(τ_c-τ) and 𝒥_u_0[u](t)-𝒥_v_0[v](t)_L^q_s ≲ u_0-v_0_L^q_s+M^α-1u-v_𝒦^p,q_k,s(T)T^α-1/2(τ_c-τ). 
Since 𝒥_u_0[u]=u, solution is indeed in L^∞([0,T);L_s^q()) rest of the results follows as in classical case. Uniqueness, continuous dependency, blow-up, small data global existence are usual as in classical case. The hypothesis (<ref>), (<ref>) are to make a integral functional map contraction in Kato spaces. On the other hand conditions on α in (<ref>) are to make sure there exists τ satisfying (<ref>). The result follows from Theorem <ref> and Lemma <ref>. For the furthermore part, as in Lemma <ref> we are choosing p=q,k=s, the uniqueness part follows from the uniqueness of fixed point in B_T^M⊂𝒦^q,q_s,s(T) and Remark <ref>. In <cit.>, Kato space 𝒦^q,q_s,s(T) was not used and hence the they did not achieve uniqueness in mere C([0,T_m),L^q_s()). § FINITE TIME BLOW-UP AND NONEXISTENCE RESULTS In this section we establish that in the sub-critical and critical case there exists initial data for which solution established by Theorem <ref> cannot be extended globally in time. Then blow-up alternative (see Theorem <ref>) implies solution must blow-up in finite time. On the other hand for super-critical case, we shall prove that there exists data such that no local weak (hence integral) solution exists. Before proving the above two we establish the following important lemma which will be used in both he proofs. Assume (<ref>). Let u be a non-negative weak solution on [0,T) to (<ref>) with initial data u_0. Let ϕ∈ C^∞_0(ℝ^d,[0,1]) be such that ϕ=1 on B_1/2 and supported in B_1. Then for l≥max(3,2α/α-1) we have ∫_|x|<√(T) u_0(x) ϕ^l(x/√(T)) dx ≲ T^- 2+γ/2(α-1) + d/2. Let ψ_T(t,x)=η(t/T)ϕ(x/√(T)). where η∈ C^∞_0(,[0,1]) is such that η=1 on B_1/2 and supported in B_1. We note that for l≥3 we have ψ_T^l∈ C^1,2([0,T)×^d) and the estimate ∂_t ψ_T ^l(t,x)|+|Δψ_T^l(t,x)| ≲ T^-1ψ_T^l-2(t,x) ≲ T^-1ψ_T^l/α(t,x) by choosing l≥2α/α-1⟺l/α≤ l-2. We define a function I:[0,T)→_≥ 0 given by I(T):=∫_[0,T)×{|x|<√(T)}|x|^γ u(t,x)^α ψ_T^l(t,x) dtdx. We note that I(T)<∞, since u∈ L_t^α(0,T;L^α_γ/α,loc(^d)). By using the weak form (<ref>), non-negativity of u, the above estimate (<ref>), Hölder's inequality and Young's inequality, the estimates hold: I(T) + ∫_|x|<√(T) u_0(x) ϕ^l(x/√(T)) dx = |∫_[0,T)×{|x|<√(T)}u(∂_t ψ_T^l + Δψ_T^l+a|x|^-2ψ_T^l ) dt dx | ≤ ∫_[0,T)×{|x|<√(T)}(CT^-1+|a||x|^-2)|u| ψ_T^l/α dtdx ≤ CI(T)^1/αK(T)^1/α' ≤ 1/2I(T)+CK(T), where 1= 1/α + 1/α', i.e., α'=α/α-1 and K(T) is defined by K(T):=∫_[0,T)×{|x|<√(T)}{T^-α'|x|^-γα'/α+|a|^α'|x|^-(2+γ/α)α'}dxdt∼ T^-2+γ/2(α-1)+d/2. The last equality holds only when (<ref>) holds. Now from (<ref>), we have ∫_|x|<√(T) u_0(x) ϕ^l(x/√(T)) dx≲ K(T)∼ T^- 2+γ/2(α-1) + d/2 which completes the proof. §.§ Finite time blow-up in critical and subcritical case The proof of this theorem is based on the arguments of <cit.> where lifespan of solution for nonlinear Schrödinger equation is studied. Let λ>0 be a parameter. We take an initial data u_0 as λ f, where f:^d→_≥ 0 is given by f(x) := |x|^-β |x|≤ 1, 0 otherwise with β satisfying β< min{s+ d/q,d}. Then we see u_0∈ L^q_s (^d) and hence by Theorem <ref>, we can define the maximal existence time T_m=T_m(u_0)=T_m(λ f). Moreover the solution with initial data λ f would be nonnegative as heat kernel is so. 
Since T_m(λ f)≤ T_m^w(λ f), it follows from a change of variable and then Lemma <ref> that for any 0<T<T_m(λ f) λ T^d-β/2∫_|y|<1/√(T)|y|^-βϕ^l(y)dy = λ∫_|x|<1|x|^-βϕ^l(x/√(T))dx ≤ λ∫_|x|^-βϕ^l(x/√(T))dx = λ∫_|x|<√(T)|x|^-βϕ^l(x/√(T))dx≤ C T^- 2+γ/2(α-1) + d/2 which implies λ≤ CL_T^-1 T^β/2- 2+γ/2(α-1) where L_T=∫_|y|<1/√(T)|y|^-βϕ^l(y)dy. Claim: There exists λ_0 such that if λ>λ_0, then T_m(λ f)≤ 4. Indeed, on the contrary we assume that T_m(λ_j f)>4 for a sequence λ_j→∞. Since β<d, we have L_T<∞. The following estimates hold: λ_j ≤ CL_4^-1 4^β/2- 2+γ/2(α-1)<∞ which a contradiction and hence the claim is established. Let λ>λ_0 and 0<T<T_m(λ f)≤4 then again using (<ref>) λ≤ CL_T^-1T^β/2- 2+γ/2(α-1)≤ CL_4^-1T^β/2- 2+γ/2(α-1) as L_T is decreasing in T. By (<ref>) and the fact τ≤τ_c we have κ:= 2+γ/2(α-1)-β/2>0 and so for all T∈(0,T_m(λ f) T≤ cλ^-1/κ which implies T_m(λ f)≤ cλ^-1/κ. Then the result follows from blowup criterion in Theorem <ref>(<ref>). First point in Remark <ref> follows from Theorem <ref>(<ref>). §.§ Nonexistece of weak solution in the supercritical case In this subsection we give a proof of Theorem <ref>. We only give a sketch of the proof. For the details, we refer to <cit.> where nonlinear Schrödinger equation is studied. Let T∈ (0,1). Suppose that the conclusion of Theorem <ref> does not hold. Then there exists a positive weak solution u on [0,T) to (<ref>) (See Definition <ref>) with any initial data u_0 in particular for f given by (<ref>) with β satisfying 2+γ/α-1<β< min{s+ d/q,d}. Note that such choice is possible as τ>τ_c and (<ref>) i.e. α>α_F(d,γ). Now (<ref>) implies u_0 ∈ L^q_s (^d)∩ L^1_loc(^d). For T<1 we have using Lemma <ref> ∫_|x|<√(T) u_0(x) ϕ^l(x/√(T)) dx = T^-β-d/2∫_|y|<1 |y|^-βϕ^l(y) dx = C T^-β-d/2. Combining Lemma <ref> and (<ref>), we obtain 0< C ≤ T^β/2 - 2+γ/2(α-1)→ 0 as T→ 0 which leads to a contradiction, as β satisfies β/2 - 2+γ/2(α-1) >0 i.e. β > 2+γ/α-1. This completes the proof. Acknowledgement: S Haque is thankful to DST–INSPIRE (DST/INSPIRE/04/2022/001457) & USIEF–Fulbright-Nehru fellowship for financial support. S Haque is also thankful to Harish-Chandra Research Institute & University of California, Los Angeles for their excellent research facilities. siam
http://arxiv.org/abs/2407.13405v1
20240718111538
Unveiling the nonlinear dynamics of a rolling axion during inflation
[ "Angelo Caravano", "Marco Peloso" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-th" ]
http://arxiv.org/abs/2407.12450v1
20240717095730
Interim report for the International Muon Collider Collaboration (IMCC)
[ "C. Accettura", "S. Adrian", "R. Agarwal", "C. Ahdida", "C. Aimé", "A. Aksoy", "G. L. Alberghi", "S. Alden", "N. Amapane", "D. Amorim", "P. Andreetto", "F. Anulli", "R. Appleby", "A. Apresyan", "P. Asadi", "M. Attia Mahmoud", "B. Auchmann", "J. Back", "A. Badea", "K. J. Bae", "E. J. Bahng", "L. Balconi", "F. Balli", "L. Bandiera", "C. Barbagallo", "R. Barlow", "C. Bartoli", "N. Bartosik", "E. Barzi", "F. Batsch", "M. Bauce", "M. Begel", "J. S. Berg", "A. Bersani", "A. Bertarelli", "F. Bertinelli", "A. Bertolin", "P. Bhat", "C. Bianchi", "M. Bianco", "W. Bishop", "K. Black", "F. Boattini", "A. Bogacz", "M. Bonesini", "B. Bordini", "P. Borges de Sousa", "S. Bottaro", "L. Bottura", "S. Boyd", "M. Breschi", "F. Broggi", "M. Brunoldi", "X. Buffat", "L. Buonincontri", "P. N. Burrows", "G. C. Burt", "D. Buttazzo", "B. Caiffi", "S. Calatroni", "M. Calviani", "S. Calzaferri", "D. Calzolari", "C. Cantone", "R. Capdevilla", "C. Carli", "C. Carrelli", "F. Casaburo", "M. Casarsa", "L. Castelli", "M. G. Catanesi", "L. Cavallucci", "G. Cavoto", "F. G. Celiberto", "L. Celona", "A. Cemmi", "S. Ceravolo", "A. Cerri", "F. Cerutti", "G. Cesarini", "C. Cesarotti", "A. Chancé", "N. Charitonidis", "M. Chiesa", "P. Chiggiato", "V. L. Ciccarella", "P. Cioli Puviani", "A. Colaleo", "F. Colao", "F. Collamati", "M. Costa", "N. Craig", "D. Curtin", "L. D'Angelo", "G. Da Molin", "H. Damerau", "S. Dasu", "J. de Blas", "S. De Curtis", "H. De Gersem", "T. Del Moro", "J. -P. Delahaye", "D. Denisov", "H. Denizli", "R. Dermisek", "P. Desiré Valdor", "C. Desponds", "L. Di Luzio", "E. Di Meco", "K. F. Di Petrillo", "I. Di Sarcina", "E. Diociaiuti", "T. Dorigo", "K. Dreimanis", "T. du Pree", "T. Edgecock", "S. Fabbri", "M. Fabbrichesi", "S. Farinon", "G. Ferrand", "J. A. Ferreira Somoza", "M. Fieg", "F. Filthaut", "P. Fox", "R. Franceschini", "R. Franqueira Ximenes", "M. Gallinaro", "M. Garcia-Sciveres", "L. Garcia-Tabares", "R. Gargiulo", "C. Garion", "M. V. Garzelli", "M. Gast", "C. E. Gerber", "L. Giambastiani", "A. Gianelle", "E. Gianfelice-Wendt", "S. Gibson", "S. Gilardoni", "D. A. Giove", "V. Giovinco", "C. Giraldin", "A. Glioti", "A. Gorzawski", "M. Greco", "C. Grojean", "A. Grudiev", "E. Gschwendtner", "E. Gueli", "N. Guilhaudin", "C. Han", "T. Han", "J. M. Hauptman", "M. Herndon", "A. D. Hillier", "M. Hillman", "T. R. Holmes", "S. Homiller", "S. Jana", "S. Jindariani", "S. Johannesson", "B. Johnson", "O. R. Jones", "P. -B. Jurj", "Y. Kahn", "R. Kamath", "A. Kario", "I. Karpov", "D. Kelliher", "W. Kilian", "R. Kitano", "F. Kling", "A. Kolehmainen", "K. C. Kong", "J. Kosse", "G. Krintiras", "K. Krizka", "N. Kumar", "E. Kvikne", "R. Kyle", "E. Laface", "K. Lane", "A. Latina", "A. Lechner", "J. Lee", "L. Lee", "S. W. Lee", "T. Lefevre", "E. Leonardi", "G. Lerner", "P. Li", "Q. Li", "T. Li", "W. Li", "R. Li Voti", "M. Lindroos", "R. Lipton", "D. Liu", "M. Liu", "Z. Liu", "A. Lombardi", "S. Lomte", "K. Long", "L. Longo", "J. Lorenzo", "R. Losito", "I. Low", "X. Lu", "D. Lucchesi", "T. Luo", "A. Lupato", "E. Métral", "K. Mękała", "Y. Ma", "J. M. Mańczak", "S. Machida", "T. Madlener", "L. Magaletti", "M. Maggi", "H. Mainaud Durand", "F. Maltoni", "M. Mandurrino", "C. Marchand", "F. Mariani", "S. Marin", "S. Mariotto", "S. Martin-Haugh", "M. R. Masullo", "G. S. Mauro", "A. Mazzolari", "B. Mele", "F. Meloni", "X. Meng", "M. Mentink", "R. Miceli", "N. Milas", "A. Mohammadi", "D. Moll", "A. Montella", "M. Morandin", "M. Morrone", "T. Mulder", "R. Musenich", "M. Nardecchia", "F. Nardi", "D. Neuffer", "D. Newbold", "D. 
Novelli", "M. Olvegård", "Y. Onel", "D. Orestano", "J. Osborne", "S. Otten", "Y. M. Oviedo Torres", "D. Paesani", "S. Pagan Griso", "D. Pagani", "K. Pal", "M. Palmer", "A. Pampaloni", "P. Panci", "P. Pani", "Y. Papaphilippou", "R. Paparella", "P. Paradisi", "A. Passeri", "N. Pastrone", "A. Pellecchia", "F. Piccinini", "H. Piekarz", "T. Pieloni", "J. Plouin", "A. Portone", "K. Potamianos", "J. Potdevin", "S. Prestemon", "T. Puig", "J. Qiang", "L. Quettier", "T. R. Rabemananjara", "E. Radicioni", "R. Radogna", "I. C. Rago", "A. Ratkus", "E. Resseguie", "J. Reuter", "P. L. Ribani", "C. Riccardi", "S. Ricciardi", "T. Robens", "Y. Robert", "C. Roger", "J. Rojo", "M. Romagnoni", "K. Ronald", "B. Rosser", "C. Rossi", "L. Rossi", "L. Rozanov", "M. Ruhdorfer", "R. Ruiz", "F. S. Queiroz", "S. Saini", "F. Sala", "C. Salierno", "T. Salmi", "P. Salvini", "E. Salvioni", "N. Sammut", "C. Santini", "A. Saputi", "I. Sarra", "G. Scarantino", "H. Schneider-Muntau", "D. Schulte", "J. Scifo", "T. Sen", "C. Senatore", "A. Senol", "D. Sertore", "L. Sestini", "R. C. Silva Rêgo", "F. M. Simone", "K. Skoufaris", "G. Sorbello", "M. Sorbi", "S. Sorti", "L. Soubirou", "D. Spataro", "A. Stamerra", "S. Stapnes", "G. Stark", "M. Statera", "B. M. Stechauner", "S. Su", "W. Su", "X. Sun", "A. Sytov", "J. Tang", "J. Tang", "R. Taylor", "H. Ten Kate", "P. Testoni", "L. S. Thiele", "R. Tomas Garcia", "M. Topp- Mugglestone", "T. Torims", "R. Torre", "L. T. Tortora", "S. Trifinopoulos", "S. -A. Udongwo", "I. Vai", "R. U. Valente", "U. van Rienen", "R. van Weelderen", "M. Vanwelde", "G. Velev", "R. Venditti", "A. Vendrasco", "A. Verna", "A. Verweij", "P. Verwilligen", "Y. Villamzar", "L. Vittorio", "P. Vitulo", "I. Vojskovic", "D. Wang", "L. -T. Wang", "X. Wang", "M. Wendt", "M. Widorski", "M. Wozniak", "Y. Wu", "A. Wulzer", "K. Xie", "Y. Yang", "Y. C. Yap", "K. Yonehara", "H. D. Yoo", "Z. You", "M. Zanetti", "A. Zaza", "L. Zhang", "R. Zhu", "A. Zlobin", "D. Zuliani", "J. F. Zurita" ]
physics.acc-ph
[ "physics.acc-ph", "hep-ex" ]
§ ABSTRACT This document summarises the International Muon Collider Collaboration (IMCC) progress and status of the Muon Collider R&D programme.
http://arxiv.org/abs/2407.12947v1
20240717182930
The impact of higher derivative corrections to General Relativity on black hole mergers
[ "João M. Dias", "Antonia M. Frassino", "David C. Lopes", "Valentin D. Paccoia", "Jorge V. Rocha" ]
gr-qc
[ "gr-qc" ]
^1 Departamento de Física, Instituto Superior Técnico – IST, Universidade de Lisboa - UL, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal ^2 Departamento de Física y Matemáticas, Universidad de Alcalá, Campus universitario 28805, Alcalá de Henares (Madrid), Spain ^3 Departament de Física Quàntica i Astrofísica, Institut de Ciències del Cosmos, Universitat de Barcelona, Martí i Franquès 1, E-08028 Barcelona, Spain ^4 Dipartimento di Fisica e Geologia, Università degli Studi di Perugia, Via A. Pascoli, 06123, Perugia, Italy ^5 Departamento de Matemática, ISCTE – Instituto Universitário de Lisboa, Avenida das Forças Armadas, 1649-026 Lisboa, Portugal ^6 Instituto de Telecomunicações–IUL, Avenida das Forças Armadas, 1649-026 Lisboa, Portugal § ABSTRACT The merging of two black holes is a notoriously difficult process to describe exactly. Nevertheless, the hindrances posed by gravity's nonlinearity can be circumvented by focusing on the strict extreme mass ratio limit, in which one of the black holes is infinitely larger than the other. Such an approach has been developed by Emparan and Martínez and applied within General Relativity to investigate the time evolution of event horizons melding, using nothing but elementary concepts in gravitational physics and simple integrations of geodesics. We apply this strategy to study black hole mergers in higher derivative gravity, in order to assess how the defining characteristics of the fusion process change as the gravitational theory is modified. We adopt the case of Einsteinian cubic gravity for concreteness, and determine how the mergers' duration and the relative area increment change as the theory's single coupling parameter is varied. The impact of higher derivative corrections to General Relativity on black hole mergers João M. Dias,^1 Antonia M. Frassino,^2,3 David C. Lopes,^1 Valentin D. Paccoia,^4 and Jorge V. Rocha^5,1,6 July 22, 2024 ============================================================================================================= § INTRODUCTION The remarkable scientific advances of the past seven decades, both at the theoretical and technological level, have enabled the routine detection of gravitational waves (GW) <cit.>, beginning with the early work of Pirani <cit.>. The waveforms analyzed so far validate General Relativity (GR) as an excellent approximation to the gravitational interactions we observe in nature. Yet, GR falls short of providing a complete description of quantum gravity. Therefore, modifications of GR are warranted <cit.>. A multitude of viable modifications of GR have been significantly constrained, mostly from weak field tests (e.g. Solar system and binary pulsar constraints <cit.>, or limits derived from the propagation speed of GWs <cit.>). Nevertheless, the biggest departures from GR are expected in the strong field regime, which is naturally probed in the process of compact binary mergers. Gravitational waveforms from such events have characteristic profiles that are typically described as the sequence of an inspiral phase, a coalescence phase, and a final ringdown phase. Theoretical methods have been developed to address each of these stages in order to construct the GW templates necessary to identify and properly characterize detected signals <cit.>. 
While the inspiral and ringdown phases of a GW signal are traditionally investigated with perturbative techniques [For instance, the post-Newtonian approach is commonly used to study the inspiral phase, whereas the quasinormal mode analysis is well-suited to address the ringdown phase.], which can be adapted to modified theories of gravity, the study of the highly nonlinear coalescence phase typically relies on numerical relativity or self-force methods, depending on the hierarchy of masses involved. For the study of mergers in modified gravity, this state of affairs is somewhat restraining, given that little is known about well-posedness and suitable formulations of the equations of motion for numerical simulations in extensions of GR <cit.>. Simultaneously, the self-force program has been developed to a great extent heavily based on GR <cit.>. It is important to stress that the nonlinear coalescence phase generates the largest amplitudes of the GW signal and is therefore the best place to probe strong gravity effects. In this paper, we present, as proof of principle, a study of the full nonlinear merger between black holes in modified gravity employing a technique previously developed by Emparan and Martínez <cit.>. The approach we will adopt allows us to circumvent the aforementioned challenges faced by numerical relativity, by relying also on a perturbative approach — namely we shall rely on the extreme mass ratio (EMR) limit, just like in the self-force program. Therefore, this study is most relevant for future detections with LISA <cit.>. The main difference with respect to the self-force framework is that we will be interested in resolving scales of the order of the small object. A compelling class of viable extensions of Einstein gravity is comprised by higher derivative gravity, where the Einstein-Hilbert action is complemented with diffeomorphism-invariant terms that involve more than two derivatives of the metric. Generically, the resulting equations of motion end up being differential equations of order higher than two, with the notable exception of Lovelock theories <cit.>. Higher curvature corrections to GR naturally appear in the context of low-energy effective theory <cit.> and are also motivated by ultraviolet completions of GR  <cit.>. In addition, modified gravity theories have been proposed to accommodate cosmological data <cit.>. Given the long-standing interest in higher derivative gravity, there is a vast literature about black hole solutions in these theories. Aspects of static spherically symmetric solutions of higher derivative gravity have been studied in <cit.>. A generalization of Lovelock gravity with the addition of interactions cubic in the curvature in spacetime dimensions D≥5 has been of great interest in the context of AdS/CFT <cit.>, while solutions in Einstein gravity with quadratic curvature terms in the action have been studied in <cit.>. For our case study, we will adopt Einsteinian cubic gravity (ECG), which is one such higher curvature theory <cit.>. It stands out as the most general diffeomorphism-invariant metric theory whose Lagrangian is constructed out of products of the Riemann tensor up to cubic order, with the property that its linearized spectrum on maximally symmetric backgrounds exactly matches that of GR, and whose coefficients in the Lagrangian are dimension-independent. 
Our choice of Einsteinian cubic gravity among all higher curvature gravity theories is dictated by two criteria: (i) its astrophysical viability, tied to the fact that it matches the linearized Einstein equations around flat space, and (ii) its simplicity in several aspects. This simplicity is manifest in the fact that ECG only has one additional dimensionful coupling constant, λ, and that static and spherically symmetric solutions are determined by a single ordinary differential equation <cit.>. Hence, ECG affords a concrete and controllable setting in which the effects of higher derivative corrections can be probed while considering simple static and spherically symmetric BHs. Since ECG is a viable and well-motivated theory of gravity, it is important to test some of its predictions against observations; for instance, such a confrontation has made it possible to place upper bounds on the coupling λ by analyzing the deviation it would cause on BH shadows <cit.>. We could have equally considered quadratic gravity, but: (i) its static and spherically symmetric solutions are determined by a coupled system of ODEs; (ii) apart from a (thermodynamically unstable) branch of solutions, the static spherically symmetric BHs are exactly the same as in GR, i.e. the Schwarzschild spacetime; (iii) purely quadratic terms in the Lagrangian can be eliminated by a field redefinition <cit.> (at the expense of introducing non-minimal couplings with matter fields and higher dimension operators in the spirit of effective field theories). The goal of this paper is to quantify the influence of higher curvature corrections on extreme mass ratio BH mergers. Namely, we aim to determine how the duration of the process and the increase in the relative area vary as functions of the ECG coupling constant. The duration of a binary merger is a quantity of primary relevance for gravitational waves, so it is expected that our results can be translated into constraints on λ from gravitational wave detections. To characterize the extreme mass ratio binary merger we will resort to the ray-tracing techniques first developed in this context in Ref. <cit.> to study the fusion of two BHs, the smaller one being of Schwarzschild form, and later employed in <cit.> to analyze a variety of situations. To apply this program in a modified gravity setup (respecting the equivalence principle) we only need to have closed form solutions for the spacetime describing a static black hole in the theory considered. Exact analytical solutions in ECG are out of reach, so we will resort to an analytic approximation scheme. The method of <cit.> yields quantitative information about some of the effects of modifications of GR on the binary merger, namely on the evolution of the event horizon. However, we should remark that at this stage the effects on the GW signal are assessed only qualitatively. The technique used requires further development (in progress) in order to make quantitative predictions about GW signals. The paper is organized as follows: In Section <ref> we describe black holes in ECG and the determination of such solutions by using the continued fraction method. In Section <ref> we apply the Emparan-Martínez technique to study BH mergers with the previously determined background. Finally, in Section <ref> we present our discussion and conclusions. 
§ BLACK HOLES IN EINSTEINIAN CUBIC GRAVITY Four-dimensional ECG is defined by the following Lagrangian density, L = 1/16π G(R - λ G^2 P + O) , where G is the Newton gravitational constant and λ is the additional coupling constant of the theory. It generalizes the usual Einstein-Hilbert Lagrangian, 16π G L = R, where R stands for the Ricci scalar, by adding a certain combination of cubic curvature terms, defined by P ≡ 12 R_a^c_b^d R_c^e_d^f R_e^a_f^b + R_ab^cdR_cd^efR_ef^ab -12 R_abcd R^ac R^bd +8 R_a^b R_b^c R_c^a . The remaining invariant O in Eq. (<ref>) gathers terms that are topological in nature, as well as terms that do not contribute to the equations of motion when restricting to a spherically symmetric line element, as we will do. For our purposes, we can simply drop this term from the Lagrangian. We will assume λ≥0 throughout. This ensures that the theory has a unique vacuum <cit.> and that the static and spherically symmetric BHs we shall study are regular on and outside the horizon <cit.>. The generalized Einstein equations are derived from the Lagrangian (<ref>), yielding E_ab≡ E_acdeR_b^cde - 1/2g_ab L - 2∇^c∇^d E_acdb = 0 , where E^abcd≡∂ L / ∂ R_abcd. Static and spherically symmetric black hole solutions of ECG are characterized by a line element of general form <cit.> ds^2 = - f(r) dt^2 + f(r)^-1dr^2 + r^2 dΩ^2 . This is the exact same form as that of Schwarzschild. However, in ECG the blackening factor f(r) is not simply a polynomial in inverse powers of the radial coordinate, r. Instead, the field equations (<ref>) demand that it must satisfy a nonlinear ordinary differential equation, -(f-1)r - G^2λ[ 4f'^3 +12f'^2/r - 24f(f-1) f'/r^2 - 12f f”(f'-2(f-1)/r) ] = 2GM . A spacetime of the form (<ref>) with a metric function f obeying this sole equation of motion automatically solves all the components of the generalized Einstein equations (<ref>)—of which equation (<ref>) is in fact a first integral. From this point of view, the parameter M is just a constant of integration, but physically it corresponds to the gravitational mass of the spacetime. It is easy to see that the Schwarzschild solution is recovered when the coupling constant is set to λ=0. However, for λ≠0 closed form solutions of equation (<ref>) are unfortunately out of reach. Therefore, one must either resort to numerical integration or to approximation schemes. From now on we will work with natural units, for which G=1. Let us first briefly describe the typical numerical integration approach. An asymptotic analysis of (<ref>) shows that two kinds of corrections to the Schwarzschild solution f_0(r)= 1-2M/r are turned on when λ≠0 <cit.>. In addition to perturbative corrections in λ, which appear multiplying (high order) inverse powers of r, there are two non-perturbative corrections, schematically of the form ∼exp(±r^5/2/√(λ)). Upon demanding the absence of the growing exponential term, we are left with a general asymptotically flat solution that is controlled by a single free parameter, namely the coefficient of the decaying exponential. An equivalent analysis can be performed at the other `end' of the black hole exterior: the horizon radius, r_h, which is the outermost root of the blackening factor, f(r_h)=0. This value of r happens to be a singular point of the differential equation (<ref>). Nevertheless, by assuming regularity of f and performing a series expansion around such a point, one can solve (<ref>) order by order in powers of (r-r_h). 
It turns out that all the coefficients of this series can be determined as a function of the second Taylor coefficient. Therefore, the behavior of the solution around the horizon is also controlled by a single free parameter. A global black hole solution that is regular on and outside the horizon can then be obtained by `shooting' from the two extremes and imposing continuity and differentiability of the function at some intermediate radius. These two conditions completely fix the two free parameters of the local solutions near the horizon and in the asymptotic region, leaving a unique black hole solution for each value of λ. In practice, what we do is to start the integration from the horizon and vary the free parameter controlling the behavior around r_h so as to maximize the value of r at which the numerical solution diverges due to the growing exponential mentioned above. Ideally, this would happen at r=+∞. While the above approach is well-defined and can be employed, in principle, to obtain asymptotically flat black holes, in practice the results have inherent shortcomings: the validity of the solution thus obtained is limited to a relatively short radial interval. (More than 10 times the horizon radius is already challenging to achieve.) This is due to our inability to fix the free parameter at the horizon with infinite precision, as well as the unavoidable numerical inaccuracies in the integration scheme. Both of these effects inevitably activate the growing exponential term present in the general asymptotic solution, rendering the numerical solution singular —and definitely not asymptotically flat. An alternative approach, which better suits our purposes, is the use of the continued fraction method to approximate a true solution of (<ref>). This idea was first exploited in the present context in Ref. <cit.> and has the advantage of producing good approximations that are valid on the entire range r≥ r_h. This is extremely important for our intents, since we want to integrate expressions that depend on the blackening factor f(r) over a range that extends far beyond the horizon radius of the small black hole. Thus, we now review how the continued fraction method is applied within this setting. To begin with, it is convenient to define a new coordinate x=1-r_h/r, which maps the exterior of the black hole, r ∈ [r_h,∞), to the finite interval x∈[0,1). A suitable ansatz for f that smoothly interpolates between zero at the horizon r=r_h and unity at r→∞ can be written as f(x) = x [ 1 - ε(1- x) + (b_0-ε)(1-x)^2 + B(x)(1- x)^3 ], where B(x) is expressed as a continued fraction, B(x) = b_1/1+b_2 x/1+b_3 x/1+… . The asymptotic behavior at x=1 fixes the coefficients ε, b_0 and b_1 to be ε = 2M/r_h-1 , b_0 = 0 , b_1 = 2r_h^2 κ_g - 3 r_h + 4M/r_h . where κ_g = f'(r_h)/2 represents the black hole's surface gravity. While the coefficient b_2 cannot be determined from a near-horizon analysis, all the higher order coefficients b_n, with n>2, get fixed once λ/r_h^4 and b_2 are selected. Indeed, note that the dimensionless quantities M/r_h and κ_g r_h are uniquely determined for each choice of λ/r_h^4 by <cit.>κ_g r_h = 1/1+√(1+48 λ /r_h^4) , 2M/r_h = 1-16λ/r_h^45+3√(1+48 λ /r_h^4)/(1+√(1+48 λ /r_h^4))^3 . The coefficient b_2 can be determined in two different ways, as discussed in <cit.>. One approach is to truncate the continued fraction (<ref>) at some order n, so that b_n+1(b_2)=0, which can be solved for b_2. This has some drawbacks. 
The value of b_2 depends on the truncation order and it is not clear that it converges at higher orders. Furthermore, the solution to the equation b_n+1(b_2)=0 is typically not unique. Another approach is to numerically solve the differential equation (<ref>), starting from a point near the horizon, r=r_h(1+ϵ), with ϵ≪ 1, where we impose the following initial conditions: f(r_h(1+ϵ))=2 κ_g r_hϵ + ∑_j=2^n a_j (r_h ϵ)^j , f'(r_h(1+ϵ))=2 κ_g + ∑_j=2^n j a_j (r_h ϵ)^j-1 . By comparing Eq. (<ref>) with a similar series expansion of Eq. (<ref>) around the horizon, we obtain the coefficient b_2, uniquely defined by the parameter a_2: b_2=(6r_h -6M-8 κ_g r_h^2 - r_h^3 a_2)/(4M-3r_h+ 2 κ_g r_h^2) , whereas the a_j's with j>2 are recursively obtained as a function of a_2. The value of a_2 that needs to be plugged into Eq. (<ref>) is uniquely determined by requiring that the numerical solution for f(r) is asymptotically flat. In practice, this can never be achieved, as the finite precision of the integration scheme inevitably triggers the exponentially growing mode at a given radius r_s(a_2) which depends on the choice of a_2. Thus, the desired value of a_2 is a_2 = arg max_α r_s(α), i.e. the one that maximizes the domain in which the numerical solution is valid. As an example, in Fig. <ref> we present the solution for the blackening factor f obtained by numerically integrating the differential equation (<ref>) for a specific value of λ/r_h^4, together with its asymptotic expansion and the continued fraction approximation. By construction, the numerical solution vanishes at the horizon, and for somewhat larger radii it shows regular and monotonic behavior. However, at a certain radius r_s the numerical solution blows up, signaling the dominance of the exponentially growing mode of the asymptotic solution [Note that r_s/r_h tends to be larger for larger values of λ/r_h^4, and so the numerical solution is valid in a wider range as we increase the coupling parameter. This is because, as we increase λ/r_h^4, we decrease the exponent of the asymptotic behavior ∼exp((r/r_h)^5/2/√(λ/r_h^4)), which delays its contribution to the numerical solution.]. As can be seen from the continuous line in Fig. <ref>, the continued fraction expansion (<ref>) smoothly transitions between the near-horizon solution and the asymptotic solution, appropriately describing f(r) in the whole radial domain. Fig. <ref> shows Δ f, the difference between the numerical solution and the continued fraction approximation, as a function of the radius, for different truncation orders of the continued fraction expansion. For sufficiently large radii, Δ f becomes large, but this effect is entirely due to the (unphysical) divergence of the numerical solution. Therefore, we are specifically interested in the behavior of Δ f only for radii smaller than the value of r/r_h where this function attains its minimum (in Fig. <ref>, this is indicated with solid lines). We observe that, within this range of radii, |Δ f| decreases as we increase the order of the expansion. In particular, the ninth-order continued fraction yields |min{Δ f}| ∼ 10^-3. The blackening factor displays the same qualitative behavior for all values of the coupling constant λ, with small quantitative changes that will impact the null geodesics that we compute in Sec. <ref>. This is illustrated in Fig. <ref>, where we plot the profile f(r) for three different values of λ. The behavior of the function f(r) with λ is very non-trivial. 
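Before examining that behavior, it may help to see the shooting step above in compact form. The following Python sketch is purely illustrative and is not the authors' code: the near-horizon series is truncated at second order, the blow-up radius is detected with a crude threshold, and a_2 is found by a simple grid scan; the value of λ/r_h^4 and the scan range are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

def horizon_data(lam):
    """kappa_g*r_h and M/r_h for a given lambda/r_h^4 (units with r_h = 1), from the text."""
    s = np.sqrt(1.0 + 48.0 * lam)
    kappa = 1.0 / (1.0 + s)                                        # kappa_g * r_h
    twoM = 1.0 - 16.0 * lam * (5.0 + 3.0 * s) / (1.0 + s) ** 3     # 2M / r_h
    return kappa, 0.5 * twoM

def rhs(r, y, lam, M):
    """First-order system for (f, f'), obtained by solving the ECG equation for f'' (lam > 0)."""
    f, fp = y
    A = 4 * fp**3 + 12 * fp**2 / r - 24 * f * (f - 1) * fp / r**2
    B = fp - 2 * (f - 1) / r
    fpp = (2 * M + (f - 1) * r + lam * A) / (12 * lam * f * B)
    return [fp, fpp]

def blowup_radius(a2, lam, eps=1e-4, r_max=60.0, f_max=5.0):
    """Integrate outwards from r_h(1+eps); return the radius where |f| exceeds f_max."""
    kappa, M = horizon_data(lam)
    f0 = 2 * kappa * eps + a2 * eps**2        # near-horizon series, truncated at j = 2
    fp0 = 2 * kappa + 2 * a2 * eps
    def blow(r, y, *args):
        return abs(y[0]) - f_max
    blow.terminal = True
    sol = solve_ivp(rhs, (1.0 + eps, r_max), [f0, fp0],
                    args=(lam, M), rtol=1e-10, atol=1e-12, events=blow)
    return sol.t[-1]

lam = 0.02                                    # lambda / r_h^4 (illustrative value)
a2_grid = np.linspace(-2.0, 0.0, 201)         # crude scan range (illustrative)
a2_best = max(a2_grid, key=lambda a2: blowup_radius(a2, lam))
print(a2_best, blowup_radius(a2_best, lam))
```

With r_h = 1 and λ > 0, the a_2 selected in this way plays the role of the free near-horizon parameter discussed above; in practice one would refine both the scan and the truncation order.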
The surface gravity (<ref>) (being the derivative of f(r)) decreases with increasing λ, so that the gravitational field, in the near-horizon region, is weaker for larger λ. Far from the black hole horizon, the gravitational field is well approximated by the Newtonian potential. Given that the mass decreases with increasing values of λ, this implies that the gravitational field in the asymptotic region becomes weaker for larger values of λ. By continuity there is an intermediate range of radii in which the gravitational field becomes stronger with increasing λ. A final comment is in order. The appearance of the coupling constant λ introduces a further mass scale, λ^-1/4 G^-1/2, in addition to the physical mass M of a given black hole solution. One consequence is that BHs in ECG do not in general satisfy r_h=2GM: the relation between the horizon radius and the mass is affected by the coupling constant. When performing calculations, and displaying our results, it is therefore distinct to fix M=1 or r_h=1. Although one may always convert between the two normalizations, expressing results in units of the mass has the advantage that it is coordinate independent. Nevertheless, to compare the behavior of geodesics we find it much more convenient to fix r_h=1 for all the BHs we consider (with different couplings λ). We have therefore normalized our black hole background to have r_h=1, irrespective of λ, as done in Figs. <ref> and <ref>. § BLACK HOLE MERGERS IN ECG §.§ Generators of the merging horizon We are interested in determining a certain class of null geodesics in the background of an ECG black hole, which will form the event horizon generators. At late times we want this congruence of geodesics to approach a null plane, each individual light ray being parallel to all others. Let x^μ=(t,r,θ,ϕ) be the spacetime coordinates of one such geodesic, and denote by κ the affine parameter used to distinguish points along the geodesics. The geodesic equation reads d^2 x^μ/dκ^2 + Γ^μ_αβd x^α/dκd x^β/dκ = 0 , where Γ^μ_αβ stands for the Christoffel symbols. Since the background is spherically symmetric, we may orient the coordinate system in such a way that our geodesic remains always on the `equatorial' plane. I.e., we may choose, without loss of generality, to fix the polar coordinate to θ(κ)=π/2, the geodesic equation guaranteeing that the geodesic experiences no polar acceleration, d^2θ/dκ^2=0. The remaining coordinates of the geodesic are determined by the other components of Eq. (<ref>), which yield second order differential equations. However, there is a shortcut which immediately outputs first order equations. The fact that ∂/∂ t and ∂/∂ϕ are Killing vectors of the background spacetime allows one to directly derive expressions for two components of the 4-velocity vector: d t/dκ = E/f(r) , d ϕ/dκ = -L/r^2 . Here, E and L represent the energy and the angular momentum associated with these geodesics. Imposing then the light-like condition for the null rays, g_μνẋ^μẋ^ν=0, immediately yields the remaining component of the 4-velocity vector, d r/dκ = ±√(E^2 - L^2 f(r)/r^2) . It is advantageous to use the radial coordinate r to describe the null generators of the horizon. To that end, we express the azimuthal angle ϕ and the time coordinate t both as functions of r: d ϕ/d r = ∓1/r^2 √(1/q^2 - f(r)/r^2) , d t/d r = ±1/q f(r) √(1/q^2 - f(r)/r^2) , where q:= L/E is the impact parameter, which will be used to distinguish between the different null generators. 
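As a rough illustration of how these equations are used, the sketch below integrates the azimuthal equation inwards from a large radius, imposing the large-radius behaviour ϕ ≈ q/r that is spelled out in the next paragraph. It is only a schematic stand-in for the actual computation: the Schwarzschild blackening factor with r_h = 1 replaces the ECG profile of the previous section, the time coordinate is omitted, and the values of q and the starting radius are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f_schw(r, M=0.5):
    """Blackening factor; Schwarzschild with r_h = 1 as a simple stand-in for the ECG f(r)."""
    return 1.0 - 2.0 * M / r

def dphi_dr(r, y, q, f):
    """dphi/dr along a horizon generator; the minus sign makes phi grow as the ray moves
    inwards, consistent with phi ~ q/r at large radius."""
    arg = 1.0 / q**2 - f(r) / r**2
    return [-1.0 / (r**2 * np.sqrt(max(arg, 1e-30)))]

def trace_generator(q, f=f_schw, r_start=200.0, r_min=1.0 + 1e-6):
    """Integrate phi(r) inwards from a large radius, stopping at a turning point
    (argument of the square root -> 0) or just outside the small BH horizon."""
    def turning(r, y, *args):
        return 1.0 / q**2 - f(r) / r**2
    turning.terminal = True
    sol = solve_ivp(dphi_dr, (r_start, r_min), [q / r_start],
                    args=(q, f), rtol=1e-10, atol=1e-12, events=turning)
    return sol.t, sol.y[0]          # radii and phi along the generator

# Example: a few generators labelled by their impact parameter
for q in (1.0, 3.0, 5.0):
    r, phi = trace_generator(q)
    print(q, r[-1], phi[-1])
```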
The equations above, in this form, make it manifest that the geodesics have a turning point (a point at which r remains momentarily constant along the geodesic, while ϕ and t continue varying) at an r-coordinate, r_t, such that f(r_t)/r_t^2 = 1/q^2 . Moreover, given that the blackening factor f is a monotonically increasing function that asymptotes to 1, it is clear that geodesics with sufficiently small impact parameter will not possess any turning point: these are the generators that connect (in the far past) with the horizon of the small black hole, or those that are born on the `small BH side' of the caustic line (see later description of Fig. <ref>). In order to determine the event horizon of the extreme mass ratio merger, we must then compute the full development of a null geodesic congruence that asymptotes a null plane at late times (being `null' implies that the radial coordinate must diverge when t→∞). This is done by integrating Eqs. (<ref>) backwards in r for a variety of null geodesics, which are labelled by their impact parameter q. The initial condition for each of these integrations is set by the requirement that the integral curves all approach the same null plane, which amounts to demanding that <cit.>ϕ_q(r→∞) = q/r + O(r^-3) , t_q(r→∞) = r + 2M ln(r/2M) + O(r^-1) . The numerical integration is performed until the null geodesic either: (a) reaches the small black hole horizon, r=1, which can only happen when t→ -∞. These geodesics will be referred to as the non-caustic generators of the event horizon. (b) hits the caustic line at ϕ=π, at which moment we immediately stop the integration. These geodesics constitute the caustic generators of the event horizon. (c) goes off to a large radius after having gone through a point of closest approach to the small black hole. This can happen only if the geodesic bends away from the central object. This phenomenon does not occur with the kind of backgrounds we are considering, but it is a feature that appears in more general situations, for example when the background spacetime is taken to be an over-extreme Reissner-Nordström naked singularity. The caustic line physically represents a collection of points in the spacetime from where more than one light ray can be emitted —in different directions— to reach the same asymptotic null plane. More details about the non-smooth features of the event horizon evolution during the merging of the two black holes in the EMR can be found in Refs. <cit.>. The distinction between the caustic and non-caustic generators is illustrated in Fig. <ref>, which shows a projection of the event horizon generators on the (x,z) spatial plane, with x=rsinϕ and z=rcosϕ, for a specific value of the coupling constant. As can be seen, there is a critical generator with impact parameter q=q_c that separates the non-caustic generators, with q ≤ q_c, from the caustic generators, with q>q_c. The latter geodesics can be further differentiated: generators with q_c < q < q_* approach the caustic line from the side of the small black hole (i.e. they bend towards the small black hole), while generators with q>q_* have a turning point, approaching the caustic line from the side of the large black hole (i.e. they escape the small black hole's pull). Since a turning point is a singular point of Eqs. (<ref>), the numerical integration halts for generators with q>q_*, but that does not mean the geodesic is incomplete. 
It is just a consequence of using the radial coordinate as the integration variable instead of the affine parameter. There is, however, a simple way to circumvent this problem. Because the background is spherically symmetric, such a geodesic must be symmetric around a turning point. Therefore one may just perform “half” of the integration and then glue the other symmetric half by imposing continuity of the geodesic functions and their derivatives <cit.>. Figure <ref> shows some horizon generators for the same values of λ adopted in Fig. <ref>. This is done by keeping fixed r_h=1. As λ is increased, geodesics with the same impact parameter q are less deflected towards the small black hole. Naively, this should come as no surprise, given that both the mass and the surface gravity of the black hole decrease with the growth of the coupling parameter (see Eqs. (<ref>) and (<ref>)). Consider now the generator with q=q_*, as we vary λ. This generator originates from the crest of the caustic, at (t,r,ϕ)=(t_*,r_*,π)<cit.>. The value of r_* gives a measure of the maximum distortion of the small black hole horizon during the merger. Note that the impact parameter q_* decreases with λ, but r_* does not vary monotonically. In particular, there exists a specific value of λ for which the distortion of the small black hole is minimized. The reason for such non-monotonicity stems from the non-trivial behavior of the blackening factor along the generators' trajectory, as a function of the coupling. Each of these null geodesics crosses a very wide range of radii along their path, r∈[r_*,+∞), and the blackening factor f(r) in different regions (asymptotic, intermediate, and near-horizon) shows different trends with λ, as exemplified in Fig. <ref>. Technically, r_* is determined by solving a coupled system of equations, f(r_*) q_*^2 = r_*^2 , ϕ(r_*)=π , but the latter is actually an integral equation, taking into account Eq. (<ref>). Therefore, the whole worldline of the geodesic is relevant for the calculation: the problem is non-local. Even knowing how the blackening factor f(r) changes with the coupling —as λ is increased, f becomes flatter for sufficiently large r/r_h but steeper sufficiently close to the horizon—, it is difficult to anticipate the qualitative dependence of r_* on λ. This illustrates how challenging it is to predict these merger features, and it highlights the importance of developing these methods to determine the evolution of the horizon in black hole mergers beyond General Relativity. §.§ Extracting merger properties The computations described in the previous subsection allow us to conduct a quantitative analysis of the effects of modifying General Relativity by the inclusion of higher derivative terms in the Lagrangian — in the present case, these terms are cubic in the Riemann tensor. Specifically, for each value of the coupling constant λ the merging horizons can be determined. From that, we extract defining characteristics of the merger, such as the duration of the process and the relative increase in the area of the event horizon. Finally, we assess how each of these properties varies with λ. Recall that the maximum distortion of the event horizon is determined by the generator with q=q_*, which reaches the caustic at r=r_* and t=t_*, the pinch-on instant. Figure <ref> shows how r_* varies non-monotonically as a function of the theory's coupling constant, λ. In the range of the coupling parameters analyzed, λ/r_h^4 = 0.0066 yields the value for which the distortion is minimized. 
At its minimum, r_*=1.71270 r_h. As argued in the last subsection, this non-monotonicity stems from the non-trivial behavior of the blackening factor along the generators' trajectory, as a function of the coupling. We note, however, that if we had fixed the mass M instead of r_h, r_* would be monotonically increasing as a function of λ/M^4. We can also calculate the duration of the merger, Δ_*, as the retarded time elapsed between the pinch-on instant, t_*, and the asymptotic retarded time of the central generator with q=0 <cit.>. The results are presented in Figure <ref>, and similar to what happened with r_*, there is a value of the coupling constant, λ/r_h^4=0.0121, which minimizes the duration of the merger, in which case Δ_*=5.67917 r_h. This is not surprising: intuitively, if the small black hole is less distorted, the fused system also takes less time to relax down to a planar horizon. What stands out the most is that the value of λ for which r_* is minimized is not the same value that minimizes Δ_*. This discrepancy arises from the interplay between the non-trivial behavior of r_* and t_*, both as functions of λ. We may also consider, as in Ref. <cit.>, the area growth of the event horizon. At early times the event horizon is disconnected, and the small black hole has the initial area 𝒜_in=4π r_h^2. At late times the two initial components of the horizon have fused together and its generators were either already present at the infinite past or they originated from the caustic line. One can think of the generators with q>q_* as entering the event horizon from the large BH side. Conversely, generators with q<q_* either entered the horizon from the small BH side or else asymptote its apparent horizon in the far past (if q≤ q_c). Considering the contribution of all these generators from the small BH side, they form a disk of area π q_*^2 at future null infinity, so that, during the process, the area increases by Δ𝒜_smallBH=[q_*^2/(4r_h^2)-1]𝒜_in. Figure <ref> illustrates the variation of this quantity with λ. Consistent with the previous results, the area increment of the small black hole also shows a non-monotonic behavior with λ. It is of obvious interest to compare our results with the case of GR, which is recovered in the limit λ→0. For the Schwarzschild black hole in GR, Emparan and Martínez <cit.> obtained r_*=1.76031 r_h, Δ_*=5.94676 r_h and Δ𝒜_smallBH/(4π r_h^2) = 0.79356. As shown in Figs. <ref>, <ref> and <ref>, we have computed these quantities for 26 different values of λ, and fitted the data with second-order Padé approximants. The resulting fitting functions extrapolated to λ=0 match the Emparan-Martínez values with high precision, yielding errors of 0.13%, 0.04% and 0.92%, respectively. In other words, we find that in Einsteinian cubic gravity, and for the range of coupling constants studied, the fusion of an EMR binary can happen up to 4.5% quicker than in GR, and the small black hole can be distorted up to 2.7% less, with an area increment that can be up to 26.7% smaller. A summary of these results is presented in Table <ref>. To illustrate how the merger duration depends on the coupling constant, we computed several time slices for the same three λ values considered in Fig. <ref>, overlaying them in Fig. <ref>. § DISCUSSION AND CONCLUSION In this paper, we have analyzed the merging of two black holes beyond General Relativity, in the extreme-mass ratio regime. 
The EMR limit allows one to compute the evolution of the coalescing event horizon by null ray-tracing in the background provided by the small black hole. Any extension of GR leading to modifications of the BH geometry, while leaving it asymptotically flat, will produce quantitative changes on the merger process. Since we explore the EMR regime and resolve the scales comparable to the smaller object, this analysis is particularly well-suited to address ultraviolet modifications of GR. Among the higher derivative theories considered in the literature, we focused on Einsteinian cubic gravity. Besides being astrophysically viable and theoretically well-motivated, this theory only depends on a single coupling constant, λ, offering therefore a quite manageable parameter space. A previous study <cit.> placed an observational constraint on the coupling constant, namely λ < 4.57×10^22 M_⊙^4. In the present paper, we focused on the effects produced when λ≲ M^4, where M denotes the mass of the small BH. Therefore, taking the small object in our study to be a solar mass BH is well within the viable regime for λ. On the other hand, considering a supermassive BH would already be in tension with the observational constraint. The type of analysis conducted herein, when complemented with the associated computation of gravitational wave emission and its confrontation with observations, has the potential to set stronger constraints. To this end it will be instrumental to include corrections beyond the strict EMR limit, to allow the spacetime to be dynamical. This is currently under study. Our investigation was devoted to a modest range of the dimensionless coupling λ/M^4 ≲ 1: this is exactly the regime in which interesting effects are observed. As the coupling constant is increased starting from zero (i.e., the GR limit), the duration of the merger and the area increment both display non-monotonic behavior: they first decrease to what seems to be an absolute minimum, and from there on they steadily increase. Our numerical results for significantly larger λ are not conclusive, but they favor a linear growth trend. Interestingly, the critical value of λ leading to the shortest merging time is different from the one giving rise to the minimum area increment (see Figs. <ref> and <ref>). The duration Δ_* of the horizon fusion gives a measure of how long the merger phase lasts in the GW signal generated from an EMR black hole coalescence. Our results mean that there is a specific value of λ/M^4 for which the merger process is the quickest possible. Compared to GR, at this threshold value of λ the merger proceeds in a time 4.5% quicker. Along the same lines, one is tempted to establish a connection between the increase in the event horizon area and the entropy growth of the binary. Recall that in ECG the entropy of the spherically symmetric BH is known analytically <cit.>, even though the spacetime is determined numerically. Dialing up λ starting from zero, the relative change in area initially decreases and note that the difference with respect to GR can be quite significant — more than 25%! Thus, one expects a comparably significant modification in the entropy growth. Once again, it would be interesting to go beyond the strict EMR limit in order to include gravitational radiation and then use the above ideas to estimate the energy emitted in GWs during such mergers. 
In conclusion, we have proposed a means to probe generally covariant theories of gravity (including a class of modifications of GR) in the highly non-linear merger regime. This offers a way to tackle —with relatively simple analytic techniques— a problem that typically requires demanding numerical simulations and years-long efforts to adapt perturbative methods to modified gravity. The results presented can serve as a benchmark for upcoming numerical simulations. AMF acknowledges the support of the MICINN grants PID2019-105614GB-C22, AGAUR grant 2017-SGR 754, PID2022-136224NB-C22 funded by MCIN/AEI/ 10.13039/501100011033/FEDER, UE, and State Research Agency of MICINN through the ‘Unit of Excellence Maria de Maeztu 2020-2023’ award to the Institute of Cosmos Sciences (CEX2019-000918-M). DCL and JVR acknowledge support from FCT under Project No. 2022.08368.PTDC. JVR also acknowledges support from FCT under Project No. CERN/FIS-PAR/0023/2019. AMF and JVR would also like to thank the organizers of the workshop “Gravity: Challenges beyond General Relativity”, held in Barcelona on May 22-24, 2024, for interesting discussions that took place at that event.
http://arxiv.org/abs/2407.13716v1
20240718171227
Understanding Christensen-Sinclair factorization via semidefinite programming
[ "Francisco Escudero-Gutiérrez" ]
math.OA
[ "math.OA", "math.FA", "math.OC" ]
http://arxiv.org/abs/2407.12618v1
20240717144447
A Brief Review of Quantum Machine Learning for Financial Services
[ "Mina Doosti", "Petros Wallden", "Conor Brian Hamill", "Robert Hankache", "Oliver Thomson Brown", "Chris Heunen" ]
quant-ph
[ "quant-ph", "cs.CE" ]
Mina Doosti^1 (mdoosti@ed.ac.uk), Petros Wallden^1, Conor Brian Hamill^2, Robert Hankache^2, Oliver Thomson Brown^3, Chris Heunen^1 [1]School of Informatics, Quantum Software Lab, University of Edinburgh, United Kingdom [2]Data Science & Innovation, NatWest Group, London, United Kingdom [3]EPCC, Quantum Software Lab, University of Edinburgh, United Kingdom A Brief Review of Quantum Machine Learning for Financial Services =================================================================== § ABSTRACT This review paper examines state-of-the-art algorithms and techniques in quantum machine learning with potential applications in finance. We discuss QML techniques in supervised learning tasks, such as Quantum Variational Classifiers, Quantum Kernel Estimation, and Quantum Neural Networks (QNNs), along with quantum generative AI techniques like Quantum Transformers and Quantum Graph Neural Networks (QGNNs). The financial applications considered include risk management, credit scoring, fraud detection, and stock price prediction. We also provide an overview of the challenges, potential, and limitations of QML, both in these specific areas and more broadly across the field. We hope that this can serve as a quick guide for data scientists, professionals in the financial sector, and enthusiasts in this area to understand why quantum computing and QML in particular could be interesting to explore in their field of expertise. § INTRODUCTION Quantum computing, as a revolutionary and fundamentally different paradigm in computation, has significantly impacted diverse domains, including machine learning and data science. As machine learning continues to spearhead technological advancements in data science, the convergence of quantum computing and machine learning has garnered considerable interest, with the aspiration of transcending the boundaries of classical machine learning methodologies. The inception of Quantum Machine Learning (QML) dates back to the mid-1990s, initially exploring the concepts of quantum learning theory <cit.>. However, it was not until roughly 15 years ago, with the seminal paper by Harrow, Hassidim, and Lloyd <cit.>, that the field gained significant attention. This pivotal work laid the foundation for numerous studies in both supervised and unsupervised learning domains <cit.>. Leveraging non-classical properties such as entanglement, superposition, and interference, quantum algorithms offer the potential for substantial speedups, sometimes exponential, over their classical counterparts. While such speedups have been demonstrated for certain specific problems, achieving them in the realm of data science, and moreover for industry applications, remains an ongoing challenge with much active research in QML attempting to address it. In this concise review, instead of a broad examination of the field of quantum machine learning (as done in <cit.>), we shift our focus to techniques and algorithms that parallel classical methods utilized in finance. We aim to highlight the potential, promises, and challenges of applying QML within the financial sector. This review can also serve as a guide for financial professionals, both data scientists and management, offering an understanding of the latest advancements in QML relevant to their sector. In doing so, it can aid decision-making processes, enhance the adoption of cutting-edge technologies, and lead to a deeper and hype-free appreciation of how QML can improve financial services. 
Broadly categorized, QML encompasses three areas based on the nature of the algorithm and data: quantum algorithms on classical data, classical algorithms on quantum data and quantum algorithms on quantum data. Our review is centred on the first category, where classical data interfaces with quantum algorithms, as it aligns with the applications in finance, such as credit scoring, risk management, stock price prediction, and fraud detection. Within this scope, we explore quantum counterparts to Gradient Boosted Tree techniques for supervised learning, Generative AI models, and Graph Neural Networks, touching upon the most recent advancements in these topics. First, we review some of the main and most widely used techniques from classical machine learning in financial applications. Then, we provide a brief, high-level overview of the promises and limitations of quantum machine learning, offering insights into the current landscape and prospects of this rapidly evolving field. Drawing insights from classical techniques, we select three categories of QML algorithms and review them with the desired applications in finance in mind. Finally, we conclude the review with some discussions. § APPLICATIONS OF CLASSICAL MACHINE LEARNING IN FINANCE In this section, we briefly review the various ways classical machine learning techniques are employed in financial services. These applications range from credit scoring and risk management to fraud detection, stock price prediction, and personalization through recommendation systems. Each of these areas benefits from the advanced capabilities of machine learning to improve accuracy, efficiency, and effectiveness. Below, we delve into specific techniques and their impacts in these key financial domains. * Credit scoring: Estimating the risk associated with lending to a particular individual or business is a key challenge in financial services. The measure of risk associated with selling loans typically informs the price offered to an individual or business and is a key component in ensuring that a financial institution operates safely and sustainably. Traditionally, credit risk modelling has been undertaken in banks using linear algorithms like logistic regression <cit.>. However, the availability of large datasets and computing power has enabled the adoption of more modern machine learning and deep learning techniques. Studies comparing these modern techniques to traditional algorithms have found that deep learning outperforms logistic regression modelling, with gradient-boosted trees often identifying customer risk most accurately <cit.>. Furthermore, graph convolutional networks have been shown to improve credit risk estimation by extracting features representing a company’s financial environment <cit.>, indicating these algorithms can provide a more holistic view of a business to estimate credit risk. * Risk management and compliance: Lenders are obliged to operate within an established risk appetite and meet compliance requirements. Estimating the portfolio risk is a key challenge for investment banks. Previous work has compared several different methods for estimating the value-at-risk in a stock portfolio, with the gradient-boosted algorithm Adaboost the most performant <cit.>. Recently, the ability of Generative AI to create human-like text has been adopted by several banks to ensure their communications comply with regulatory frameworks <cit.>. 
As this technology matures, its usage is expected to become increasingly widespread, with the potential for substantially streamlining compliance processes. * Fraud detection: The automated detection of fraudulent transactions and behaviours allows lenders to reduce financial losses while protecting customers and building trust, without the cost and inaccuracy that manual analysis of transactions can entail. Popular algorithms that show strong performance in predicting fraud include support vector machines (SVMs) <cit.>, boosted trees <cit.>, and artificial neural networks <cit.>. * Stock price prediction: Forecasting of future stock prices is a crucial but challenging task in financial services, informing investment strategies and risk management. Deep-learning-based techniques including the use of a Long Short-Term Memory (LSTM) network can capture complex relationships and outperform traditional forecasting methods like ARIMA <cit.>. Additionally, techniques like Generative Adversarial Networks (GANs), which generate new data, can be used to effectively classify and forecast time series <cit.>. * Personalisation and recommendation systems: Machine learning has allowed different types of financial services to provide an increasingly personalised experience to customers, a vital aspect in an industry that consists of long-lasting and product-centric relationships between lenders and customers. Personalised contact marketing campaigns can be supported by the estimation of customer lifetime value (CLV), a metric of the total profit a lender may make from a customer over the course of their relationship. Machine learning allows more accurate predictions of CLV, enabling the marketing of relevant financial products to customers <cit.>. Recommendation of personal portfolios has been shown to be improved by the adoption of graph networks, in addition to a transformer-based neural network architecture to capture the relationship between financial products and how users behave over time <cit.>. § GENERAL PROMISES AND LIMITATIONS OF QML Before getting into specific quantum machine learning techniques, let us first give a high-level summary of what can or cannot be expected from QML in general, by listing the most important promises and limitations.[Disclaimer: This is an opinionated list by the authors, adopted from many different studies and discussions in the field, and it is by no means a comprehensive list. We refer to the following valuable QML reviews for similar arguments and more details <cit.>.] Promises of Quantum Machine Learning: * Efficient Training with Quantum-Aware Optimizers: QML offers the promise of efficient training through the development of quantum-aware optimizers that adapt to the unique characteristics of quantum computing, potentially outperforming classical optimization methods. * Improved Generalisation: Recent studies suggest that QML models have the potential to achieve good generalisation even with a small amount of training data, offering the prospect of accurate predictions on previously unseen data. * Quantum Advantage in Specific Tasks: QML holds the potential for achieving a quantum advantage in tasks like hidden parameter extraction from quantum data, generative modelling, and discovering quantum error correcting codes, particularly when dealing with quantum-mechanical processes. In addition, for some specific tasks QML can provide provable guarantees that have no counterpart in classical machine learning. 
* Improved Accuracy: Quantum machine learning techniques offer the potential to achieve higher accuracy compared to classical models in various tasks such as classification, regression, and generative modelling, by leveraging quantum-enhanced feature spaces and complex data relationships. This is perhaps one of the most important advantages of using quantum models, even when they will not offer any speedups. These advancements can lead to more reliable predictions and better decision-making in many fields. * Quantum Advantage Beyond Speedup: Even though quantum algorithms have often been judged by whether or not they achieve a quantum speedup over a classical analogue, QML presents a unique opportunity to consider different figures of merit where quantum computing can provide an advantage, for instance privacy. * Transition to Fault-Tolerant Quantum Computing Era: As quantum computers evolve into the fault-tolerant era, QML is expected to become even more useful, leveraging the capabilities of error-corrected quantum hardware to learn, infer, and make predictions directly from quantum data. Limitations of Quantum Machine Learning: * Data Uploading on Quantum Memory: While QML models show promise with quantum data, the process of uploading the encoded classical data into quantum states on quantum registers and memories, namely QRAM, remains a significant challenge, especially when large datasets are involved. * Challenges in Quantum Training Landscape: Many QML algorithms encounter challenges in training, such as local minima and Barren plateaus, which are analogous to the issue of vanishing gradients in classical ML. However, the additional complexity of the quantum landscape, coupled with the presence of hardware noise, amplifies these issues. Additionally, QML models tend to exploit the full space of quantum states in order to be more expressive, which often leads to the emergence of Barren plateaus. [Quantum states are described in a vector space known as Hilbert space, which expands exponentially with system size. To fully utilize the power of quantum systems, models tend to access all operations in this space, a property often referred to as expressibility. However, these highly expressive cases are the ones that often face vanishing gradient issues and training difficulties referred to as Barren plateaus (see <cit.>).] Moreover, since techniques that have demonstrated effectiveness in classical ML have not yet been fully adapted and studied in the quantum setting, there is hope that future research will discover innovative approaches to mitigate these limitations. * Impact of Quantum Noise: Hardware noise in quantum computations poses a critical challenge for QML in the pre-quantum-error-correction era, affecting all aspects of model performance and requiring strategies such as error mitigation techniques or noise-resilient model design to address it. * Uncertainty Regarding Quantum Advantage: While exponential quantum advantage holds promise for specific tasks, there remains uncertainty regarding its realisation, especially in scenarios involving purely classical data, where achieving exponential advantages may not be feasible. As research in QML progresses, it becomes increasingly evident that there are instances where the likelihood of attaining a quantum advantage over classical algorithms is minimal or nonexistent <cit.>. It is crucial to acknowledge that quantum advantage is not universally attainable or practical for all tasks. 
However, it is also important to note that classical ML models often lack provable performance guarantees or even theoretical explainability of their performance, yet remain useful in countless practical scenarios. For this reason, discarding QML algorithms solely due to the lack of an exponential advantage might be too harsh a verdict. It is worth noting that the criteria for demonstrating quantum computational supremacy, where rigorous proofs are needed, differ from those for demonstrating that a (quantum) model performs better than its classical competitors. [We also refer to <cit.> for further arguments on why quantum advantage might not be the right figure of merit for QML.] § QML ALGORITHMS FOR SUPERVISED LEARNING TASKS In the realm of supervised learning tasks, where models learn from labelled data to make predictions or decisions, several QML approaches and architectures have been proposed in the literature, some of which show promising results for outperforming classical approaches. This section explores the application of quantum algorithms and techniques to address supervised learning tasks, with a focus on methods and techniques that are often used in data science and finance, such as Gradient Boosted Trees (GBT) and Quantum Neural Networks (QNNs). §.§ Quantum analogue for GBT? Gradient Boosted Trees (GBT) is a machine learning technique based on decision trees, in which combining multiple weak learners (typically shallow decision trees) creates a strong predictive model. GBT can be used for both classification and regression tasks, making it a versatile tool in supervised learning. It is worth mentioning that a quantum decision tree has been studied in <cit.>, where quantum entropy is leveraged for node splitting and quantum fidelity for data clustering, while Grover's search algorithm <cit.> is used for searching over the tree to enhance efficiency. However, the topic has not been further explored in quantum machine learning. As a result, due to the lack of an exact quantum equivalent technique, in this section we focus on techniques relevant to the applications for which GBT is used. While GBT does not have an exact quantum equivalent to date, QML offers several alternative methods for enhancing supervised learning. The main parallel lies in enhancing supervised learning tasks through quantum-enhanced feature spaces. These quantum algorithms aim to expand the feature-space representation into higher dimensions, potentially capturing complex data relationships, and thereby improving task accuracy, which is the focus of the next section. §.§ Supervised learning with quantum-enhanced feature spaces The work of <cit.> introduces two quantum algorithms for near-term quantum devices: the Quantum Variational Classifier, resembling conventional SVMs, and Quantum Kernel Estimation, for optimizing a classical SVM on a quantum computer. Both utilize the quantum state space as a feature space and exploit feature maps that are hard to simulate classically, following the concept of quantum SVMs introduced in <cit.>. The study also explores applications for noisy intermediate-scale quantum computers in machine learning. The Quantum Variational Classifier operates in four steps: mapping classical data to a quantum state, applying a short-depth quantum circuit, performing binary measurements, and using a decision rule based on the empirical distribution of the outcomes. 
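To make the four steps concrete, here is a minimal, self-contained sketch that emulates such a classifier on a 2-qubit statevector simulator in plain numpy. It is only schematic and is not the circuit of <cit.>: the feature map, the single variational layer, the toy dataset and the finite-difference training loop are simplifications chosen for brevity (on hardware one would estimate expectations from repeated shots and use, e.g., the parameter-shift rule).

```python
import numpy as np

def ry(t):
    """Single-qubit RY rotation."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
Z_ON_QUBIT0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))    # observable Z (x) I

def expectation(x, theta):
    """Steps 1-3: encode x, apply a short trainable layer, compute <Z> on qubit 0."""
    state = np.zeros(4)
    state[0] = 1.0                                         # start in |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state            # step 1: feature map
    state = CNOT @ state                                   #         entangle
    state = np.kron(ry(theta[0]), ry(theta[1])) @ state    # step 2: variational layer
    state = CNOT @ state
    return float(state @ Z_ON_QUBIT0 @ state)              # step 3: exact expectation value

def predict(x, theta):
    """Step 4: decision rule based on the sign of the measured expectation."""
    return 1 if expectation(x, theta) >= 0 else -1

# Toy training data: two features per sample, labels in {-1, +1}
X = np.array([[0.2, 0.1], [0.3, 0.4], [2.6, 2.9], [2.8, 2.5]])
y = np.array([1, 1, -1, -1])

def loss(theta):
    return np.mean([(expectation(x, theta) - t) ** 2 for x, t in zip(X, y)])

theta, lr, eps = np.array([0.1, -0.2]), 0.5, 1e-4
for _ in range(200):                                       # crude finite-difference descent
    grad = np.array([(loss(theta + eps * e) - loss(theta - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    theta -= lr * grad

print([predict(x, theta) for x in X], "loss:", round(loss(theta), 4))
```

The actual experiments of <cit.>, summarized next, run the corresponding circuits on quantum hardware rather than on a simulator.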
Its experimental results show a classification success rate close to 100% for circuit depths [Circuit depth in quantum computing refers to the number of sequential layers of quantum gates (over one or several qubits) required to implement a quantum algorithm or circuit. It is related to the time complexity of the circuit and is a crucial factor in assessing the computational resources needed for quantum computations. A deeper circuit often implies a longer execution time, during which the qubits might decohere and lose their useful quantum information, hence long-depth quantum circuits are more challenging to implement on quantum hardware.] greater than 1, with optimal performance at depth 4, even considering decoherence effects. The classifier is trained on three datasets per depth and performs 20 classifications per trained set, using 20,000 shots [In quantum computing experiments, "shots" refer to the number of times a quantum circuit is executed at the end of which a fixed quantum measurement is repeated. Each shot represents an independent execution or measurement of the quantum system, providing statistical information about the outcomes of the experiment. By averaging the results over multiple shots, one can obtain an estimate of probabilities or expectation values associated with quantum states or operations, which is often necessary given the probabilistic nature of quantum mechanics.] for running classification experiments compared to 2,000 for training. The Quantum Kernel Estimation algorithm estimates the SVM kernel on a quantum computer for both training and classification phases. The quantum computer initially estimates the kernel for all pairs of training samples. In the classification phase, the quantum computer is used to estimate the kernel for a new data point using the support vectors obtained from the optimization, providing sufficient information to construct the complete SVM classifier. The estimation of inner products for the kernel involves direct estimation from transition amplitudes, utilizing feature map circuits. The transition probability is estimated by measuring the final state in the usual computational basis multiple times. It achieves high classification success, with two test sets reaching 100% and a third averaging 94.75% success [For more details, see FIG. S7 (page 22) in  <cit.>. Accuracy and experimental data are provided, however, no comparison with classical is given, most probably since the experimental accuracy was very high.]. This method's effectiveness is underlined by its ability to maintain the positive semidefiniteness of the kernel despite sampling errors. Further studies, such as <cit.> and <cit.>, delve deeper into these quantum approaches. Particularly, <cit.> benchmarks the circuit-centric quantum classifier against classical techniques, testing it on datasets like CANCER, SONAR, WINE, SEMEION, and MNIST256. Despite fewer parameters, the quantum classifier often outperforms or matches classical models like MLPshal and MLPdeep, although it shows overfitting in some cases, suggesting the need for improved regularisation techniques. §.§.§ Provable QML algorithms in this area Most notable recent developments in this area (since 2023) include <cit.>, demonstrating the quantum advantage of variational quantum classifiers and quantum kernel SVMs in solving certain hard problems in complexity theory [To be specific, the problem studied in this paper is PROMISEBQP-complete.]. 
This implies potential efficiency in Bounded-Error Quantum Polynomial-Time decision problems. Another significant contribution is <cit.>, which highlights the efficiency of quantum SVMs. Using quantum circuits to define kernel functions, it achieves a provable exponential speedup over classical algorithms for specific datasets. The complexity of solving the dual formulation is estimated at O(M^4.67/ϵ^2) quantum circuit evaluations, and empirical analysis suggests addressing the kernelized primal problem in O(min{M^2/ϵ^6, 1/ϵ^10}) evaluations, where M denotes the size of the data set, and ϵ the solution accuracy compared to the ideal result from exact expectation values, which is only obtainable in theory. This work also presents a variational approximation to quantum SVMs, showing improved scaling in heuristic training. §.§ Quantum Neural Networks Many QML models rely fundamentally on Parameterized Quantum Circuits (PQCs) as a vital component. These circuits consist of a series of parameterised unitary gates acting on quantum states which encode the classical data <cit.>. The parameterized circuits include quantum gates that contain free parameters (θ) that are adjusted through training to address specific problems. PQCs can be viewed as a conceptual quantum analogue of neural networks, and it has been shown that classical feedforward neural networks can be formally embedded into PQCs <cit.>. In the QML literature, certain types of PQCs are often referred to as QNNs, and sometimes QNNs are considered as a subclass of Variational Quantum Algorithms (VQA) <cit.>. In practical terms, the term QNN is utilized whenever a PQC is applied in a data science context (mostly when the data is classical and the algorithm is quantum). Specifically related to supervised classification tasks, a QNN aims to map states from different classes to distinguishable regions within the Hilbert space. QNNs can be realised in different architectures. As a simple categorisation, one can consider three different cases in which, from layer to layer, the number of qubits in the model is increased, preserved, or decreased (as in dissipative QNNs or convolutional QNNs). Dissipative QNNs generalize the classical feedforward network <cit.>, as mentioned above. In a standard QNN architecture, the classical data is encoded into quantum states and sent to a QNN where the quantum circuit is applied and at the end all of the qubits are measured. This is an example of a QNN architecture where the number of qubits is preserved. The convolutional QNN, studied in <cit.> and demonstrated to have good classification performance, is an example of a QNN architecture where the number of qubits is reduced, since qubits are measured and discarded in each layer to reduce the dimension of the data while preserving its relevant features. Also, the trainability and capacities of QNNs have been studied in <cit.>, where implementation on real hardware has also been demonstrated. In general, QNNs and their different architectures are one of the main active areas of research in QML <cit.>. §.§ QML techniques related to credit scoring and risk management In terms of applications for finance, recent studies have utilized quantum machine learning (QML) for credit scoring and financial risk measurement. First, we introduce the paper <cit.>, where a quantum-enhanced machine learning method for predicting credit ratings has been proposed [Here, “quantum-enhanced" specifically indicates that the (classical) classifier is trained with a quantum adiabatic algorithm. 
The method referred to as QBoost developed in <cit.>. However the term “quantum-enhanced" is used in different meanings in the literature, including hybrid (quantum-classical) algorithms, and quantum-inspired classical algorithms.]. Implemented on a neutral atom quantum processor with up to 60 qubits, this model shows promising results, with competitive performance, improved interpretability, and training times comparable to state-of-the-art random forest models. In <cit.>, authors explore quantum machine learning for enhancing financial forecasting. They incorporate classical and quantum Determinantal Point Processes in Random Forest models, achieving nearly 6% improvement in precision, compared to the classical random forest. Additionally, they design quantum neural network architectures with orthogonal and compound layers for credit risk assessment. These models match classical performance but with significantly fewer parameters, demonstrating the efficiency of quantum methods in machine learning. Lastly, <cit.> examines the broader potential of quantum computing in financial risk management, focusing on Value-at-Risk and Potential Future Exposure. The study acknowledges the feasibility of conceptual solutions and small-scale circuits but also highlights challenges in real-life applications, such as the need for increased hardware capacity (more qubits) and quantum noise mitigation. In this area of applications, one of the first works is by <cit.>, where the authors demonstrated translating these challenges into a quadratic unconstrained binary optimization (QUBO) problem, solvable by quantum annealers. This approach was tested on a quantum simulator, indicating the potential of quantum annealers in optimizing credit analysis features (further detailed in <cit.>). § QUANTUM COMPUTING FOR GENERATIVE AI Generative AI or deep Generative Learning Models (GLMs) have revolutionized the landscape of artificial intelligence and have been successfully applied to various tasks <cit.>. With classical GLMs approaching the computational limitations according to Moore's law <cit.>, and the unique abilities of quantum computers in handling large dimensionalities and producing complicated probability distributions, Quantum Generative AI (QGenAI) represent a promising field for enhancing computational efficiency and overcoming these barriers and paving the way for the next generation of artificial intelligence. Several research directions have been proposed in this domain which we can broadly categorise as Quantum circuit Born machine (QCBM) <cit.>, quantum Boltzmann machine (QBM) <cit.>, quantum generative adversarial network (QGAN) <cit.>, and Quantum Transformers <cit.>. Given that classical transformers are state-of-the-art technology for many applications of interest, in this section, we mainly focus on their quantum analogue and we will only briefly mention other techniques. §.§ Quantum Transformers The key component of the transformer, is the attention mechanism, as introduced by <cit.>. The process involves transforming input data (of dimension n × d), denoted as X ∈ℝ^n × d, into n patches with dimensions of d. The trainable weight matrix from the initial fully connected layer is denoted as V. The core of the attention mechanism is represented by the attention coefficients which assign weights to each patch concerning every other patch (through the equation Aij = x^T_i W x_j, where W is the new trainable weight matrix). A quantum transformer also focuses on the attention mechanism. 
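For reference, the classical attention computation that the quantum circuits discussed next are designed to reproduce or replace can be written in a few lines. The sketch below is our own illustration with arbitrary random weights and a scaled softmax added for numerical stability; it is not taken from the cited work.

import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 16                     # n patches of dimension d, as in the text
X = rng.normal(size=(n, d))      # input patches x_1, ..., x_n
V = rng.normal(size=(d, d))      # trainable weight matrix of the first fully connected layer
W = rng.normal(size=(d, d))      # trainable weight matrix for the attention scores

A = (X @ W @ X.T) / np.sqrt(d)   # A[i, j] = x_i^T W x_j (scaled for stability)
A = A - A.max(axis=1, keepdims=True)
A = np.exp(A) / np.exp(A).sum(axis=1, keepdims=True)   # row-wise softmax
out = A @ (X @ V.T)              # each output patch is an attention-weighted mix of V x_j
print(out.shape)                 # (n, d)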
In a recent work <cit.>, three types of quantum transformers have been introduced, building upon previous related works <cit.> and applied to visual tasks for benchmarking: Orthogonal Patch-wise Neural Network, Quantum Orthogonal Transformer and Qunatum Compound Transformer. The first one implements a trivial attention mechanism where each patch pays attention only to itself. The second one is designed to be a close quantum analogue of the classical approach and capture both a linear fully connected layer, and the attention matrix to capture the interaction between patches. The third one, distinct from the other two, harnesses the power of quantum superposition in loading data and is inspired by the MLPMixer architecture <cit.>. The compound transformer introduces a unified operation that can replace the former ones and facilitates information exchange between patches without relying on convolution or attention mechanisms. Now let us have a brief overview of how each of these quantum transformers work. The Orthogonal Patch-wise Neural Network simplifies the attention mechanism, where as mentioned each patch only pays attention to itself. i.e. each input patch is multiplied by the same trainable matrix V, with one circuit per patch. The quantum circuit employs a quantum orthogonal layer to perform this multiplication. The output of each circuit is a quantum state encoding V x_i, a vector retrieved through tomography [tomography refers to the process of measuring multiple copies of a quantum state to retrieve its classical description.]. Interestingly, the tomography procedure avoids exponential complexity as it deals with states of linear size in this case. A data loader with N = d qubits is required, and computational complexity is given as O(log(d)), with trainable parameters O(d log(d)) and fixed parameters requiring d - 1. This quantum transformer is designed to achieve attention without the intricate computations associated with classical attention mechanisms. The Quantum Orthogonal Transformer operates by calculating each attention coefficient A_ij by loading x_j into the circuit with a vector loader, followed by a trainable quantum orthogonal layer W, resulting in the vector W x_j. Subsequently, an inverse data loader of x_i is applied, creating a state where the probability of measuring 1 on the first qubit is exactly the desired attention |x^T_i W x_j|^2 = A_ij^2. The positive coefficients for A, are learned during training. Then a post-processing with softmax is applied to obtain each coefficient, and these components are classically combined. The computational complexity of this quantum circuit is similar to the Orthogonal Patch-wise Neural Network. The Quantum Compound Transformer represents a shift towards a more native quantum approach. In this transformer, all patches are loaded as a quantum state with the same weight, in superposition, and then processed by an orthogonal layer. This orthogonal layer simultaneously extracts features from each patch and re-weights them. The output is computed as a weighted sum of the features extracted from all patches. The process is thus simplified by using one operation instead of calculating separate weight matrices. The quantum circuit which executes one attention layer of this transformer employs a matrix data loader for the input matrix X and a quantum orthogonal layer for V, applied on both registers. 
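A rough classical emulation of what these three layers compute, ignoring the data loaders, tomography, and hardware-level details, might look as follows. Randomly initialised orthogonal matrices stand in for trained quantum orthogonal layers, and every name and dimension here is an assumption chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.normal(size=(n, d))                    # n patches of dimension d

# Orthogonal matrices standing in for trained quantum orthogonal layers.
V, _ = np.linalg.qr(rng.normal(size=(d, d)))
W, _ = np.linalg.qr(rng.normal(size=(d, d)))

# 1) Orthogonal patch-wise neural network: each patch attends only to itself;
#    row i encodes V x_i, the vector read out by tomography.
patchwise_out = X @ V.T

# 2) Quantum orthogonal transformer: attention scores |x_i^T W x_j|^2, obtained on
#    hardware as the probability of measuring 1 on the first qubit, then combined
#    classically after normalisation.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit vectors (amplitude encodings)
A = (Xn @ W @ Xn.T) ** 2
A = A / A.sum(axis=1, keepdims=True)
ortho_out = A @ patchwise_out

# 3) Compound transformer: a single operation that mixes and re-weights all patches
#    loaded together in superposition.
M, _ = np.linalg.qr(rng.normal(size=(n, n)))
compound_out = M @ X @ V.T

print(patchwise_out.shape, ortho_out.shape, compound_out.shape)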
In summary, the Quantum Compound Transformer handles feature extraction and weighting in a single operation and provides a computational complexity of O(log(n) + n log(d) + log(n + d)). We refer to Table 2 in <cit.>, for a comprehensive comparison between these quantum transformers and state-of-the-art classical approach. The authors also provide simulations and experiments to benchmark their proposed quantum methods for medical image classification tasks. The experiments were performed on the MedMNIST dataset, which comprises 12 preprocessed, two-dimensional medical image datasets. Besides the three quantum transformers, two baseline methods Vision Transformer and Orthogonal Fully-Connected Neural Network (OrthoFNN) were also included for comparison. The results show that the Vision Transformer, Quantum Orthogonal Transformer, and Quantum Compound Transformer outperformed Orthogonal Fully-Connected and Orthogonal Patch-wise neural networks across all 12 MedMNIST tasks. But more importantly, quantum architectures achieved comparable accuracy to classical benchmarks while utilizing fewer trainable parameters. The resource analysis for simulated quantum circuits revealed promising results for the Compound Transformer which required 80 trainable parameters compared to 512 (2d^2, d=16) for the Classical Vision Transformer. Circuit depth and the number of distinct circuits were provided for each quantum architecture. We also refer to Table 5 in <cit.>, for the exact number of qubits, circuit depth and parameters for each. The work also includes experiments on IBM Quantum devices that demonstrated competitive accuracy levels for quantum transformers compared to classical benchmarks (however slightly lower). Hence this work shows an important potential for quantum transformers compared to their classical counterparts in terms of improving accuracy and lowering the number of required training parameters. Additionally, a recent preprint <cit.> investigates the integration of fault-tolerant quantum computing into transformer architectures by employing techniques from the quantum signal processing framework and quantum singular value transformation to construct efficient quantum subroutines for key transformer components, and claims to achieve a quadratic advantage under specific parameter regimes. Finally, very recently, a new architecture for quantum transformed, namely Quixer has been introduced in <cit.>, which utilises the Linear Combination of Unitaries <cit.> and Quantum Singular Value Transform primitives as building blocks. The model is also applied to a small practical language modelling task, and resource estimation has been provided [Disclaimer: We do not compare the architecture of <cit.> with the one presented in more detail from the work of <cit.> or able to provide much more details, primarily because the new paper was accessible quite recently after the main body of this review had been completed.]. Another remarkable and rather different study related to quantum transformers is presented in <cit.> and investigates the comparative expressive power of quantum contextual recurrent neural networks (CRNNs) and classical linear recurrent neural networks (LRNNs) for sequence-to-sequence learning tasks. The authors demonstrate that CRNNs, incorporating quantum contextuality—a fundamental feature of quantum systems—exhibit a quadratic memory separation advantage over classical models, and indicate a practical advantage for generalisation of smaller models. 
Intuitively, quantum contextuality is similar to linguistic contextuality where the meaning of a word depends on the sentence context determined by the other words. Taking inspiration from this similarity, the authors evaluate their quantum model against state-of-the-art classical models on a standard Spanish-to-English translation task. Comparative performance analysis with LRNN, GRU RNN, classical transformer, and Gaussian model illustrates the quantum advantage, showing better performance across all model sizes. This work highlights intriguing and less explored directions to explore the potential advantage of QGenAI algorithms, by diving into more foundational quantum properties. §.§ Other QGenAI techniques Quantum Circuit Born Machine (QCBM) introduced by Benedetti et al. <cit.>, serves as the quantum counterpart to classical Stochastic Neural Networks (SNN). In QCBM, randomness originates from intrinsic quantum mechanical properties rather than being sampled after each layer. The main idea is to use a Quantum Neural Network (QNN) to generate a tunable discrete probability distribution that approximates a target distribution. This is achieved by leveraging a Parametrized Quantum Circuit (PQC) to manipulate an initial quantum state and generate the desired distribution. The performance of QCBM heavily relies on the topology of the entangling layers in PQCs, with high performance achieved when the topology matches the hardware architectures. QCBM has been applied to various generative learning tasks <cit.>. Applications of QCBM extend to finance, where it has been used for learning empirical financial data distributions <cit.>, generating synthetic financial data <cit.>, and learning joint distributions <cit.>, showing heuristic advantages over classical models, particularly as problem scale increases <cit.>. Quantum Boltzmann Machine (QBM) is a quantum version of the classical Boltzmann machine (BM), proposed by Amin et al. <cit.>. It leverages quantum devices to prepare the Boltzmann distribution, which estimates discrete target distributions. In QBM, qubits replace the units in BMs, and the energy term in classical BMs' Hamiltonian is replaced by a quantum Hamiltonian. For instance, a transverse-field Ising model is commonly used, with the Hamiltonian formulated to include trainable parameters <cit.>. QBM aims to minimize the negative log-likelihood of the target distribution by updating its parameters through optimization methods. An important aspect of QBM is its training process, which involves calculating gradients using both positive and negative phases. The positive phase refers to the Boltzmann average, while the negative phase sampling is NP-hard due to the exponential complexity of exact diagonalisation <cit.>. Several approximation methods, such as quantum Monte Carlo techniques, have been proposed to address this challenge <cit.>. QBM has been applied to simulate probability distributions, achieve better trainability than classical Boltzmann machines, and has been applied to many areas including language processing <cit.>, and finance <cit.>. QBMs are promising techniques in QGenAI and are a subject of ongoing active research in this domain. Quantum Generative Adversarial Network (QGAN) is a novel approach in QGenAI initially conceptualized by Lloyd and Weedbrook <cit.>. QGANs follow the framework of classical GANs, employing a generator and a discriminator to engage in a two-player minimax game. 
However, QGANs differ in how these components are constructed, often utilizing QNNs instead of classical deep neural networks. QGANs can estimate both discrete and continuous distributions, offering potential computational advantages over classical GANs due to their utilisation of quantum resources <cit.>. Different types of QGANs have been explored for various tasks, including quantum chemistry calculations <cit.>, image generation <cit.>, and finance <cit.>. § QML ALGORITHMS FOR GRAPH-RELATED PROBLEMS In finance, Graph Data Science methods, encompassing topological analysis and algorithms for centrality, similarity, and anomaly detection, provide crucial insights into complex financial networks. Graph Neural Networks (GNNs) further extend these insights by leveraging graph data for supervised learning tasks such as fraud detection and stock price prediction. Within the quantum realm, efforts have emerged to explore the capabilities of quantum computing and quantum machine learning, notably through Quantum Graph Neural Networks (QGNNs), a subset of Quantum Neural Networks (QNNs) which has been discussed earlier. The first work in this area was by Verdon et al. <cit.> introducing the QGNN algorithm as a quantum graph classification model, aiming to harness QML techniques for graph-structured data and derive node representations in a quantum Hilbert space. The algorithm encodes the graph structure in a quadratic Hamiltonian and employs a parameterized quantum circuit, along with standard QNN techniques, to extract relevant graph information tailored to the application of interest. One notable application proposed in the paper involves employing the Quantum Graph Convolutional Network (QGCNN) for graph isomorphism classification, which is also used as a benchmark for evaluating the representation power of classical graph neural networks <cit.>. The implementation of QGCNN involves utilizing the QGCNN ansatz with single-qubit precision encoding, applied in parallel to the structures of two given graphs (G1 and G2). By sampling eigenvalues of the relevant coupling Hamiltonian on both graphs and conducting standard basis measurements, a set of `energies' of this Hamiltonian is obtained for classification purposes. The dataset used for training comprises graphs sampled uniformly at random from a specific distribution for a fixed number of nodes. The training process involves optimizing the networks using a Nelder-Mead optimization algorithm. Despite the relatively small sample size, the experiments demonstrate promising accuracy for classification, underscoring the potential efficacy of QGCNNs. Follow-up studies include <cit.>, addressing certain limitations in existing Graph Neural Networks (GNNs). This paper introduces the Equivalent Quantum Graph Circuit (EQGC). Their approach involves capturing permutation-invariant topologies of input graphs (albeit constrained by linearly scaling qubit requirements) limiting its applicability to small-scale synthetic datasets. The next important work is the Graph Quantum Neural Tangent Kernel (GraphQNTK) <cit.> that combines ideas from GNN and Graph Kernels (GK) <cit.>. The work introduces a hybrid quantum-classical method where a quantum algorithm estimates the neural tangent kernel of the underlying classical graph neural network, effectively emulating an infinite-width GNN. They have also introduced a quantum attention mechanism to better capture the semantic similarity information of graph nodes into the model. 
This work stands out compared to the previous ones due to its capability to handle more realistic graph data, in terms of size[An open-source repository of their code can be found at https://github.com/abel1231/graphQNTKhttps://github.com/abel1231/graphQNTK.]. The authors compare their model with state-of-the-art GKs and GNNs such as WL kernel, AWL, RetGK, GNTK, WWL, GCN, PatchySAN, GCKN, GraphSAGE, and GIN (we refer to Table 2 in <cit.> for the comparison). Common challenges in QGNN are the absence of general methods for mapping data from Euclidean space into a quantum Hilbert space, as well as the fact that they typically accommodate low-dimensional data. These two challenges have been addressed in a recent pre-print by Ai et al. <cit.> where they propose another hybrid quantum-classical algorithm namely Egograph-based Quantum Graph Neural Network (egoQGNN) for graph classification. Compared to the previous works, egoQGNN uses a hierarchical architecture, and a proposed decompositional processing strategy to capture information within k-hop neighbours of a node, and as a result claims to better handle real-world datasets, in addition to coming together with the proof of the classification capability and the first theoretical framework to describe the data mapping. This work also provides simulations and heuristic performance and comparison to classical counterparts. § CONCLUSION AND DISCUSSIONS This short review paper outlines QML algorithms with promising potential applications in finance. We explored quantum enhancements to classical machine learning techniques, focusing on supervised learning tasks, Generative AI models, and Graph Neural Networks. Our analysis highlights the potential of QML in tasks such as credit scoring, risk management, stock price predictions, and fraud detection within the financial sector. Upon reviewing our findings, it's evident that certain areas, algorithms, and techniques within QML offer greater promise and utility for finance, both in the near and long term. In the near term, Quantum Variational Classifier and Quantum Kernel Estimation algorithms emerge as realistic options for application on Noisy Intermediate-Scale Quantum (NISQ) devices, showing potential for improvements in tasks like credit scoring and risk management. While some existing algorithms demonstrate high classification success rates, further experimentation on real hardware is necessary to validate their efficacy compared to classical methods. Even with the lack of provable exponential quantum advantage in this area, given their hybrid nature and the fact that these algorithms are not as expensive and resource-intensive as the fault-tolerant ones, adoption of these techniques could prove beneficial, particularly if they offer significant improvements in precision or other metrics. Looking ahead to the long term, Quantum Neural Networks present compelling opportunities for long-term advancements in financial analysis, especially given that they appear to be relevant in all three general areas discussed in this paper (Supervised learning, GenAI, and Graph-related problems). Given their ability to capture complex data relationships and simplify feature extraction, architectures like the Quantum Compound Transformer show promise for enhancing models used in financial forecasting and decision-making. 
By leveraging these unique features of quantum computing and combining them with techniques from the quantum signal processing framework, these approaches aim to overcome the computational limitations of current and future classical ML. However, they should perhaps be viewed as a longer-term investment, given the lack of sufficient research and experiments demonstrating their real-life applicability, which remains out of reach of current and near-term quantum devices. Moreover, Quantum Graph Neural Networks (QGNNs) hold promise for innovation in finance, particularly in tasks such as fraud detection and stock price prediction. Ongoing research addressing challenges like data mapping and scalability suggests significant long-term potential for revolutionizing graph-related problems in finance. However, it is currently unclear whether real-life applications in this area can be achieved in the near term. Despite the opportunities presented by QML, it is crucial to acknowledge its challenges and limitations, such as efficient data uploading and training difficulties in the quantum landscape. Looking forward, the future of QML holds promise for transformative advancements in machine learning and data science. By addressing existing challenges and fully harnessing quantum technologies, we can unlock new frontiers in computational science and drive innovation across diverse industries. Additionally, the study of quantum algorithms serves as inspiration for creating novel algorithms in classical machine learning, perhaps giving rise to new generations of quantum-inspired algorithms, highlighting the value of QML research from different application perspectives, irrespective of the technological challenges posed by quantum computing hardware. Acknowledgements: The authors acknowledge the support received from the strategic partnership between NatWest Group and the University of Edinburgh, particularly Raad Khraishi and Aidan McFadden, who provided insights on the applicability of machine learning techniques in practice within financial services, and Paul Carter, Phoebe Choi and Kostas Kavoussanakis, who managed this engagement.
http://arxiv.org/abs/2407.13732v1
20240718173503
Realizable $H$-Consistent and Bayes-Consistent Loss Functions for Learning to Defer
[ "Anqi Mao", "Mehryar Mohri", "Yutao Zhong" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [ July 22, 2024 ================================================================================================= § ABSTRACT We present a comprehensive study of surrogate loss functions for learning to defer. We introduce a broad family of surrogate losses, parameterized by a non-increasing function Ψ, and establish their realizable -consistency under mild conditions. For cost functions based on classification error, we further show that these losses admit -consistency bounds when the hypothesis set is symmetric and complete, a property satisfied by common neural network and linear function hypothesis sets. Our results also resolve an open question raised in previous work <cit.> by proving the realizable -consistency and Bayes-consistency of a specific surrogate loss. Furthermore, we identify choices of Ψ that lead to -consistent surrogate losses for any general cost function, thus achieving Bayes-consistency, realizable -consistency, and -consistency bounds simultaneously. We also investigate the relationship between -consistency bounds and realizable -consistency in learning to defer, highlighting key differences from standard classification. Finally, we empirically evaluate our proposed surrogate losses and compare them with existing baselines. § INTRODUCTION In many practical scenarios, combining expert insights with established models can yield significant enhancements. These experts can be human domain specialists or more complex, albeit resource-intensive, models. For example, modern language and dialogue models are prone to producing hallucinations or inaccurate information. The quality of their responses can be significantly enhanced by delegating uncertain predictions to more specialized or advanced pre-trained models. This problem is particularly crucial for large language models (LLMs), as noted in <cit.>. The same principle applies to other generative systems, like those for images or videos, and to learning models in diverse applications such as image classification, annotation, and speech recognition. Thus, the task of learning to defer (L2D) with experts has become increasingly critical across a wide array of applications. Directly optimizing the deferral loss function, which is the target loss in L2D, is computationally intractable for many choices of the hypothesis set. Therefore, a common approach is to optimize a surrogate loss that facilitates the optimization of the deferral loss function. Recent work in L2D has proposed several surrogate losses <cit.> and studied their consistency guarantees, including Bayes-consistency, realizable -consistency, and -consistency bounds (see definitions in Section <ref>). In particular, <cit.> proposed the first Bayes-consistent surrogate loss by generalizing the cross-entropy loss for L2D. <cit.> proposed an alternative Bayes-consistent surrogate loss by generalizing the one-versus-all loss for L2D. <cit.> showed that these surrogate losses are not realizable -consistent. They proposed an alternative surrogate loss that is realizable -consistent, but they were unable to prove or disprove whether the proposed surrogate loss is Bayes-consistent. All the surrogate losses mentioned above and their consistency guarantees hold only for cost functions based on classification error. <cit.> generalized the surrogate loss in <cit.> to incorporate general cost functions and any multi-class surrogate losses. 
They provided -consistency bounds for the novel family of surrogate losses, offering a stronger guarantee than Bayes-consistency. However, none of these surrogate losses satisfies all these guarantees simultaneously. In particular, a recent AISTATS notable award paper by <cit.> left open the problem of finding surrogate losses that are both Bayes-consistent and realizable -consistent when the cost function for the expert is its classification error. The problem becomes even more challenging when considering more general and realistic cost functions. We present a comprehensive analysis of surrogate loss functions for L2D. Our contributions address the limitations of previous approaches and provide a unified framework for designing surrogate losses with strong theoretical guarantees. In Section <ref>, we first introduce a broad family of surrogate losses for L2D, derived from first principles (Section <ref>). This family is parameterized by a non-increasing function Ψ, which provides some flexibility in tailoring the loss function to specific requirements. We establish that under mild conditions on Ψ, these surrogate losses achieve realizable -consistency, a key guarantee for many applications (Section <ref>). Next, for cost functions based on classification error, we further establish that our surrogate loss functions admit -consistency bounds when the hypothesis set is symmetric and complete (Section <ref>). This result holds for commonly used neural network and linear function hypothesis sets, further strengthening the applicability of our results. Additionally, our results resolve an open question raised by <cit.> by proving the realizable -consistency and Bayes-consistency of their proposed surrogate loss, which the authors had left as an open question (Section <ref>). In Section <ref>, we further identify specific choices of Ψ, such as the one corresponding to the mean absolute error loss, that lead to -consistent surrogate losses for any general cost function. These loss functions are adapted to general cost functions and benefit from Bayes-consistency (Section <ref>), realizable -consistency, and -consistency bounds simultaneously. In Section <ref>, we also study the relationship between -consistency bounds and realizable -consistency in the context of L2D, highlighting key distinctions from the standard classification setting. Finally, we further report the results of experiments with our new surrogate losses and their comparison with the baselines in different settings (Section <ref>). We discuss the related work in Section <ref> and then begin with the preliminaries in Section <ref>. § RELATED WORK The approach of single-stage learning to defer, where a predictor and a deferral function are trained together, was pioneered by <cit.> and further developed in subsequent studies on abstention, where the cost is constant <cit.> and on deferral, where the cost can vary depending on the instance and the label <cit.>. In this approach, the deferral function determines whether to defer to an expert for each input. This approach has been shown to be superior to confidence-based approaches, where the decision to abstain or defer is based solely on the magnitude of the predictor’s value <cit.>; and to selective classification approaches, where the selection rate is fixed and a cost function modeled by an expert cannot be taken into account <cit.>. <cit.> initiated the learning to defer (L2D) problem scenario, which integrates human expert decisions into the cost function. 
This approach has been further explored in subsequent studies <cit.>. <cit.> introduced the first Bayes-consistent surrogate loss for L2D, which was further refined in <cit.>. <cit.> proposed an alternative Bayes-consistent surrogate loss, the one-versus-all loss, which was later examined within a broader family of loss functions <cit.>. <cit.> proposed an asymmetric softmax function, which can induce a valid probability estimator for learning to defer. <cit.> showed that the surrogate losses in <cit.> are not realizable -consistent. They proposed an alternative surrogate loss that is realizable -consistent, but they were unable to prove or disprove whether the proposed surrogate loss is Bayes-consistent. All the surrogate losses mentioned above and their consistency guarantees hold only for cost functions based on classification error. <cit.> generalized the surrogate loss in <cit.> to incorporate general cost functions and any multi-class surrogate losses. They provided -consistency bounds for the novel family of surrogate losses, offering a stronger guarantee than Bayes-consistency. Additional studies have focused on post-hoc methods, with <cit.> suggesting an alternative optimization technique between the predictor and rejector, <cit.> offering corrections for underfitting surrogate losses <cit.>, and <cit.> providing a unifying post-processing framework for multi-objective L2D based on a generalization of the Neyman-Pearson Lemma <cit.>. The L2D framework or variations thereof have found applications in diverse scenarios, spanning regression, reinforcement learning, and human-in-the-loop systems, among others <cit.>. More recently, the problem of learning to defer with multiple experts has been analyzed in several publications <cit.>. Meanwhile, <cit.> also proposed a two-stage learning to defer framework. They introduced two-stage surrogate losses that are both Bayes-consistent and realizable -consistent with constant costs. However, realizable -consistency does not hold for cost functions based on classification error. As with <cit.>, our work focuses on the single-stage and single-expert setting, and we plan to explore a similar approach in a multi-expert/two-stage setting in the future. § PRELIMINARIES We start with the definitions and notations used in the learning-to-defer scenario considered in this paper. We will then introduce consistency guarantees, including Bayes consistency, Realizable -consistency, and -consistency bounds. Finally, we will review existing consistent surrogate losses for L2D. §.§ Learning to defer: problem setup Let be an input space and = [n] := *1, …, n be the label space in the standard multi-class classification setting. We study the learning to defer (L2D) scenario, where a learner can either predict a label from or defer to an expert. To model this, we introduce an augmented label space = *1, …, n, n + 1, where the label n + 1 corresponds to deferral. An expert is a fixed predictor ×→. The goal of L2D is to select a predictor h out of a hypothesis set of functions mapping from × to with small expected deferral loss. Let (x) denote the prediction of h on input x ∈, defined as (x) = _y ∈ h(x, y), that is the label in the augmented label space with the highest score, with an arbitrary but fixed deterministic strategy for breaking ties. Then, the deferral loss function is defined as follows: ∀ (x, y) ∈×, (h, x, y) = 1_(x) ≠ y 1_(x) ∈ [n] + c(x, y) 1_(x) = n + 1, where c(x, y) is the the cost of deferring on input x with true label y. 
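As a concrete reference, the deferral loss can be evaluated directly from a score vector over the augmented label set. The sketch below is our own illustration and uses 0-indexed labels with the deferral option stored last, rather than the paper's 1-based convention.

import numpy as np

def deferral_loss(scores, y, cost):
    # scores: length n + 1, with the deferral option as the last entry.
    # cost:   c(x, y), the cost of deferring on this example.
    pred = int(np.argmax(scores))          # h(x), with a fixed tie-breaking rule
    if pred == len(scores) - 1:            # deferral option selected
        return cost
    return float(pred != y)                # standard zero-one loss otherwise

# Three labels plus a deferral score; the expert is assumed correct here (cost 0).
print(deferral_loss(np.array([0.1, 0.4, 0.2, 0.9]), y=1, cost=0.0))  # defers     -> 0.0
print(deferral_loss(np.array([0.1, 0.4, 0.2, 0.3]), y=1, cost=0.0))  # predicts 1 -> 0.0
print(deferral_loss(np.array([0.8, 0.4, 0.2, 0.3]), y=1, cost=0.0))  # predicts 0 -> 1.0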
If the deferral option is selected, that is (x) = n + 1, the deferral cost c(x, y) is incurred. Otherwise, the prediction of h is within the standard label space, (x) ∈ [n], and the loss incurred coincides with the standard zero-one classification loss, 1_(x) ≠ y. The choice of the cost function c is flexible. For example, the cost can be defined as the expert's classification error: c(x, y) = 1_(x) ≠ y, as in previous work <cit.>. Here, (x) = _y∈ [n](x,y) is the prediction made by the expert . More generally, it can incorporate the inference cost for the expert <cit.>: c(x, y) = α 1_(x) ≠ y + β, with α, β > 0. We assume, without loss of generality, that the cost is bounded by 1: 0 ≤ c(x, y) ≤ 1, which can be achieved through normalization in practice. §.§ Consistency guarantees Directly optimizing the deferral loss function, which is the target loss in L2D, is generally computationally intractable for for complex hypothesis sets . Therefore, a common approach is to optimize a surrogate loss that facilitates the optimization of the deferral loss function. A natural learning guarantee for such surrogate losses is Bayes-consistency <cit.>: A surrogate loss is Bayes-consistent with respect to , if minimizing the surrogate loss over the family of all measurable functions leads to the minimization of the deferral loss: lim_n →∞_ (h_n) - ^*_(_all) = 0 lim_n →∞_(h_n) - ^*_(_all) = 0. Here, given a distribution over × and a loss function ××→, we denote by _ (h) the generalization error of a hypothesis h ∈, _(h) = _(x, y) ∼[(h, x, y)], and by ^*_() the best-in-class generalization error, ^*_() = inf_h ∈_(h). Bayes-consistency assumes that the optimization occurs over the family of all measurable functions, _all. However, in practice, the hypothesis set of interest is typically a restricted one, such as a family of neural networks. Therefore, a hypothesis-dependent learning guarantee, such as -consistency bounds <cit.> (see also <cit.>) and realizable -consistency <cit.>, is more informative and relevant. Realizable -consistency, defined as follows, requires that a minimizer of the surrogate loss over the given hypothesis set also minimizes the target loss, provided that the underlying distribution is realizable. A surrogate loss is realizable -consistent with respect to , if for any distribution over which there exists a predictor h^* ∈ achieving zero deferral loss, _(h^*) = 0, minimizing the surrogate loss also leads to a zero-error solution: ĥ∈_h ∈_(h) _(ĥ) = 0. Note that realizable -consistency does not imply Bayes-consistency, even if we set = _all in Definition <ref>, since Bayes-consistency requires that the relationship holds for all distributions, not just realizable ones. -consistency bounds, on the other hand, always imply Bayes-consistency. Given a hypothesis set , a surrogate loss admits an -consistency bound, if for some non-decreasing concave function Γ_+→_+ with Γ(0) = 0, a bound of the following form holds for any hypothesis h∈ and any distribution: _(h) - ^*_()+_() ≤Γ*_(h) - ^*_()+_(), where _() is the minimizability gap, defined as the difference between the best-in-class generalization error and the expected pointwise infimum loss: _() = ^*_() - 𝔼_x* inf_h∈_y|x*(h, x, y). The minimizability gap can be upper-bounded by the approximation error and vanishes when = _all <cit.>. Thus, an -consistency bound implies Bayes-consistency. 
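As a reference point for these consistency notions, the pointwise minimiser of the deferral loss over all measurable rules follows directly from the definitions: predict the most likely label unless the conditional expected cost of deferring is smaller. The sketch below is our own illustration with 0-indexed labels, not code from the cited works.

import numpy as np

def bayes_optimal_decision(p_y_given_x, expected_cost):
    # Pointwise minimiser of the deferral loss over all measurable rules:
    # predict argmax_y p(y|x) unless deferring has smaller conditional risk,
    # i.e. E_{y|x}[c(x, y)] < 1 - max_y p(y|x).
    best_label = int(np.argmax(p_y_given_x))
    predict_risk = 1.0 - float(p_y_given_x[best_label])
    if expected_cost < predict_risk:
        return "defer", expected_cost
    return best_label, predict_risk

# An uncertain point where deferring to a strong expert wins, and a confident one where it does not.
print(bayes_optimal_decision(np.array([0.40, 0.35, 0.25]), expected_cost=0.10))  # ('defer', 0.1)
print(bayes_optimal_decision(np.array([0.90, 0.05, 0.05]), expected_cost=0.20))  # (0, ~0.1)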
The relationship between the two hypothesis-dependent learning guarantees—realizable -consistency and -consistency bounds—depends on the target loss adopted in the specific learning scenario. In Section <ref>, we will demonstrate that in the standard multi-class classification setting, an -consistency bound is a stronger notion than realizable -consistency. However, in L2D, these guarantees do not imply one another. §.§ Existing surrogate losses Here, we will review several consistent surrogate losses used in L2D. For convenience, we use c(x, y) = 1_(x) ≠ y to denote the cost when it specifically represents the expert's classification error, and use c(x, y) when it represents a general cost function. <cit.> proposed the first Bayes-consistent surrogate loss by generalizing the cross-entropy loss for L2D, with cost functions based on classification error, which is defined as _CE(h, x, y) = - log*e^h(x, y)/∑_y' ∈ e^h(x, y') - (1 - c(x, y)) log*e^h(x, n + 1)/∑_y' ∈ e^h(x, y'). <cit.> proposed an alternative one-vs-all surrogates loss with cost functions based on expert's classification error, that is Bayes-consistent as well: _OvA(h, x, y) = Φ (h(x, y)) +∑_y' ∈ y'≠ yΦ (- h(x, y')) + *1 - c(x, y)*Φ(h(x, n + 1)) -Φ(-h(x, n + 1)), where Φ is a strictly proper binary composite loss <cit.>, such as the logistic loss t ↦log(1 + e^-t). _CE and _OvA are not realizable -consistent. Instead, <cit.> proposed the following loss function that is realizable -consistent when is closed under scaling: _RS(h, x, y) = - 2 log*e^h(x, y) + (1 - c(x, y)) e^h(x, n + 1)/∑_y' ∈ e^h(x, y'). However, they were unable to prove or disprove whether the surrogate loss _RS is Bayes-consistent. All the surrogate losses mentioned above and their consistency guarantees hold only for cost functions based on the classification error: c(x, y) = 1_(x) ≠ y. <cit.> generalized the surrogate loss _CE to incorporate general cost functions and any multi-class surrogate losses: _general(h, x, y) = ℓ(h, x, y) + (1 - c(x, y)) ℓ(h, x, n + 1). Here, ℓ is a Bayes-consistent surrogate loss for the multi-class zero-one loss over the augmented label set . In particular, ℓ can be chosen as a comp-sum loss <cit.>, for example, the generalized cross entropy loss (see Section <ref>). As shown by <cit.>, _general benefits from -consistency bounds, which implies its Bayes-consistency. § NOVEL SURROGATE LOSSES In this section, we introduce a new family of surrogate losses for L2D that benefit from Bayes-consistency, realizable -consistency and -consistency bounds, starting from first principles. §.§ Derivation from first principles Observe that for any (x, y) ∈×, we have 1_(x) = n + 1 = 1_(x) ≠ y 1_(x) = n + 1, since (x) = n + 1 implies (x) ≠ y. Thus, using additionally 1_(x) ∈ [n] = 1_(x) ≠ n + 1, the deferral loss can be rewritten as follows for all (x, y) ∈×: (h, x, y) = 1_(x) ≠ y 1_(x) ∈ [n] + c(x, y) 1_(x) = n + 1 = 1_(x) ≠ y 1_(x) ≠ n + 1 + c(x, y) 1_(x) ≠ y 1_(x) = n + 1 = 1_(x) ≠ y 1_(x) ≠ n + 1 + c(x, y) 1_(x) ≠ y (1 - 1_(x) ≠ n + 1) = c(x, y) 1_(x) ≠ y + *1 - c(x, y) 1_(x) ≠ y ∧(x) ≠ n + 1. (h, x, y) = 1_(x) ≠ y 1_(x) ∈ [n] + c(x, y) 1_(x) = n + 1 = c(x, y) 1_(x) ≠ y + *1 - c(x, y) 1_(x) ≠ y 1_(x) ≠ n + 1 = c(x, y) 1_(x) ≠ y + *1 - c(x, y) 1_(x) ≠ y ∧(x) ≠ n + 1, where the second equality is derived by analyzing three cases: (x) = y, (x) = n + 1 and (x) ≠ y, (x) ≠ n + 1 and the third inequality by using 1_A 1_B = 1_A ∩ B. 
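A quick Monte-Carlo sanity check of this rewriting (our own, with 0-indexed labels and the deferral option last) confirms that the two expressions of the deferral loss agree for arbitrary scores and costs:

import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # labels 0..3; index 4 plays the role of n + 1
ok = True
for _ in range(10_000):
    h = rng.normal(size=n + 1)
    y = int(rng.integers(n))
    c = float(rng.uniform())
    pred = int(np.argmax(h))
    lhs = (pred != y) * (pred != n) + c * (pred == n)                 # original definition
    rhs = c * (pred != y) + (1 - c) * ((pred != y) and (pred != n))   # rewritten form
    ok &= bool(np.isclose(lhs, rhs))
print("rewriting verified on all samples:", ok)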
Thus, in particular, when c(x, y) = 1, the loss incurred is the standard 1_(x) ≠ y and, when c(x, y) = 0, the loss is 1_(x) ≠ y ∧(x) ≠ n + 1 = 1_(x) ≠ y ∧(x) ∈ [n]. Intuitively, this means that when c(x, y) = 1, indicating that the expert incurs an error, the model learns to predict the true label ((x) = y) to incur zero loss. When c(x, y) = 0, meaning the expert is accurate, the model learns to either predict the true label ((x) = y) or defer the prediction to the expert ((x) = n + 1), both of which result in zero loss. Next, we will derive the new surrogate losses for L2D by replacing the indicator functions in (<ref>) with smooth loss functions. The first indicator function 1_(x) ≠ y is just the multi-class zero-one loss. Thus, a natural choice is to replace it with a surrogate loss in standard multi-class classification. We will specifically consider the family of comp-sum losses <cit.>, defined as follows for any (h, x, y) ∈××: ℓ_comp(h, x, y) = Ψ*e^h(x, y)/∑_y' ∈ e^h(x, y'), where Ψ [0, 1] →_+∪+∞ is a non-increasing function. For example, by taking Ψ(t) = -log(t), 1/q*1 - t^q with q∈ (0, 1), 1 - t, we obtain the logistic loss <cit.>,the sum exponential loss <cit.>, the generalized cross entropy loss <cit.>, and the mean absolute error loss <cit.>, respectively: Logistic loss: ℓ_log(h, x, y) = - log*e^h(x, y)/∑_y' ∈ e^h(x, y') Generalized cross entropy loss: ℓ_gce(h, x, y) = 1/q*1 - *e^h(x,y)/∑_y'∈ e^h(x,y')^q Mean absolute error loss: ℓ_mae(h, x, y) = 1 - e^h(x, y)/∑_y' ∈ e^h(x, y'). For any (h, x, y) ∈××, the confidence margin ρ_h(x, y) is defined by ρ_h(x, y) = h(x, y) - max_y' ∈, y' ≠ y h(x, y'). Thus, the second indicator function 1_(x) ≠ y ∧(x) ≠ n + 1 can be expressed as follows in terms of the confidence margin: 1_(x) ≠ y ∧(x) ≠ n + 1 = 1_*h(x, y) ≤max_y' ∈, y' ≠ y h(x, y')∧*h(x, n + 1) ≤max_y' ∈, y' ≠ n + 1 h(x, y') = 1_*ρ_h(x, y) ≤ 0∧*ρ_h(x, n + 1) ≤ 0 = 1_max*ρ_h(x, y), ρ_h(x, n + 1)≤ 0. Note that the first indicator function can also be written in terms of margin: 1_(x) ≠ y = 1_ρ_h(x, y) ≤ 0. Unlike the first indicator function, which presses h(x, y) to be the largest score among , that is the margin ρ_h(x, y) to be positive, the second indicator function only enforces h(x, y) or h(x, n + 1) to be the largest score among , that is the maximum of two margins, max*ρ_h(x, y), ρ_h(x, n + 1), to be positive. This condition can be further strengthened by requiring the sum of two margins, ρ_h(x, y) + ρ_h(x, n + 1), to be positive. In view of this observation, we adopt the following modified comp-sum surrogate loss for the second indicator function: ℓ_comp(h, x, y) = Ψ*e^h(x, y) + e^h(x, n + 1)/∑_y' ∈ e^h(x, y'), where Ψ [0, 1] →_+∪+∞ is a non-increasing function. In other words, ℓ_comp replaces the term e^h(x, y) in the softmax function in ℓ_comp with the sum e^h(x, y) + e^h(x, n + 1). The effect is to encourage the sum of the two margins, ρ_h(x, y) + ρ_h(x, n + 1), to be positive, rather than just the single margin ρ_h(x, y). Following this principle, we derive the following expression for a new family of surrogate losses, _RL2D, dubbed realizable L2D: _RL2D(h, x, y) = c(x, y) ℓ_comp(h, x, y) + *1 - c(x, y)ℓ_comp(h, x, y). For the choices of Ψ(t) = -log(t), 1/q*1 - t^q with q∈ (0, 1) and 1 - t, we obtain the new surrogate losses for L2D in Table <ref>. In the next sections, we will prove both realizable -consistency guarantees and -consistency bounds for this family of surrogate losses, which imply their excess error bounds and Bayes-consistency as well. 
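The new family is straightforward to implement. The sketch below, our own illustration with 0-indexed labels and the deferral score last, evaluates the proposed surrogate for the three choices of Psi listed above.

import numpy as np

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

def rl2d_loss(h, y, cost, psi):
    # Cost-weighted comp-sum loss on the true label, plus a (1 - cost)-weighted
    # comp-sum loss on the joint softmax mass of the true label and the
    # deferral option (last index).
    p = softmax(h)
    return cost * psi(p[y]) + (1.0 - cost) * psi(p[y] + p[-1])

psi_log = lambda t: -np.log(t)                 # logistic / cross-entropy choice
psi_gce = lambda t, q=0.7: (1.0 - t ** q) / q  # generalized cross entropy
psi_mae = lambda t: 1.0 - t                    # mean absolute error (handles general costs)

h = np.array([1.0, 2.0, 0.5, 1.5])             # scores for 3 labels + deferral
for psi in (psi_log, psi_gce, psi_mae):
    print(rl2d_loss(h, y=1, cost=1.0, psi=psi), rl2d_loss(h, y=1, cost=0.0, psi=psi))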
§.§ Realizable -consistency Here, we show that _RL2D is realizable -consistent with respect to . We say that a hypothesis set is closed under scaling if, h ∈α h ∈ for any α∈. theoremRealizableConsistencyNew Assume that is closed under scaling. Suppose that Ψ is non-increasing, Ψ*2/3 > 0 and lim_t → 1Ψ(t) = 0. Then, the surrogate loss _RL2D is realizable -consistent with respect to . The proof, detailed in Appendix <ref>, begins by establishing an upper bound on the deferral loss in terms of the comp-sum loss: ≤_RL2D/Ψ*2/3. Letting ĥ be the minimizer of _RL2D and α be any real number, we then show that _(ĥ) ≤1/Ψ*2/3__RL2D(α h^*). The generalization error is then split by conditioning on whether h^*(x) is the deferral class (n + 1) or not. Finally, we demonstrate that each conditional term converges to zero as α tends to +∞, and apply the monotone convergence theorem to complete the proof. §.§ -Consistency bounds Here, we show that _RL2D admits an -consistency bound with respect to , which implies its Bayes-consistency as well. We say that a hypothesis set is symmetric if there exists a family of functions f mapping from to such that **h(x, 1), …, h(x, n + 1) h ∈ = **f_1(x),…, f_n + 1(x) f_1, …, f_n + 1∈, for any x ∈. We say that a hypothesis set is complete if for any (x, y) ∈×, the set of scores generated by it spans across the real numbers: *h(x, y) | h ∈ =. Common neural network and linear function hypothesis sets are all symmetric and complete. We first consider the case where the cost is expert's classification error. theoremHConsistencyNew Assume that is symmetric and complete and that c(x, y) = 1_(x) ≠ y. Then, for all h ∈ and any distribution, the following -consistency bound holds: _(h) - _() + _() ≤Γ*__RL2D(h) - __RL2D() + __RL2D(), where Γ(t) = √(2t) when Ψ(t) = -log(t) and Γ(t) = √(2 (n + 1)^q t) when Ψ(t) = 1/q*1 - t^q with q∈ (0, 1). The proof, detailed in Appendix <ref> and <ref>, establishes strong consistency guarantees for our new surrogate loss _RL2D (Theorem <ref>). We first introduce y_max = _y ∈ p(x, y), the label with the highest conditional probability. We then show that for any hypothesis h and input x, if y_max is not the predicted label h_max, the conditional error of h is lower bounded by a modified hypothesis h (obtained by swapping the scores of y_max and h_max). Next, for hypotheses where y_max = h_max, we lower bound their conditional regret in terms of the conditional regret of the deferral loss using a new hypothesis h_μ. This proof is novel and significantly different from existing approaches for establishing -consistency bounds in either the standard or deferral settings <cit.>. The next result further shows that when Ψ(t) = 1 - t, our surrogate losses benefit from -consistency bounds for any general cost function. theoremHConsistencyMAE Assume that is symmetric and complete. Suppose that Ψ(t) = 1 - t. Then, for all h ∈ and any distribution, the following -consistency bounds hold: _(h) - _() + _() ≤ (n + 1) *__RL2D(h) - __RL2D() + __RL2D(). The proof is included in Appendix <ref>. Theorem <ref> provides stronger consistency guarantees for our new surrogate loss _RL2D with Ψ(t) = 1 - t since it holds for any general cost function. The proof idea is similar to that of Theorem <ref>, albeit with more cases to analyze due to the general cost function. This occurs when lower bounding the conditional regret of a hypothesis h, which satisfies y_max = h_max, in terms of the conditional regret of the deferral loss by introducing a new hypothesis h_μ. 
The additional cases necessitate a more stringent condition for the guarantee, such that the functions Ψ(t) = -log(t) and Ψ(t) = 1/q(1 - t^q) do not apply. §.§ Excess error bounds and Bayes-consistency For the family of all measurable functions = _all, the minimizability gaps vanish. In this case, Theorems <ref> and <ref> imply the following excess error bounds and Bayes-consistency guarantees. corollaryExcessBoundNew Suppose that c(x, y) = 1_(x) ≠ y. For all h ∈_all and any distribution, the following excess error bounds hold: _(h) - _(_all) ≤Γ*__RL2D(h) - __RL2D(_all), where Γ(t) = √(2t) when Ψ(t) = -log(t) and Γ(t) = √(2 (n + 1)^q t) when Ψ(t) = 1/q*1 - t^q with q∈ (0, 1). Furthermore, the surrogate loss _RL2D is Bayes-consistent with respect to in these cases. corollaryExcessBoundNewMAE Suppose that Ψ(t) = 1 - t. For all h ∈_all and any distribution, the following excess error bounds hold: _(h) - _(_all) ≤ (n + 1) *__RL2D(h) - __RL2D(_all). Furthermore, the surrogate loss _RL2D is Bayes-consistent with respect to in this case. Therefore, Theorem <ref> and Corollary <ref> show that _RL2D is both realizable -consistent and Bayes-consistent with respect to . This solves the open problem raised by <cit.>. In particular, for cost functions based on classification error, c(x, y) = 1_(x) ≠ y, our surrogate loss _RL2D with Ψ(t) = -log(t) coincides with the surrogate loss _RS in <cit.>, modulo a constant. This affirmatively answers the question of whether their surrogate loss is Bayes-consistent when c(x, y) = 1_(x) ≠ y. However, their surrogate loss cannot be shown to be Bayes-consistent for a general cost function. In contrast, our surrogate losses _RL2D with Ψ(t) = 1 - t are adaptable to general cost functions and benefit from both -consistency bounds and realizable -consistency guarantees. We also provide a more general family of comp-sum loss functions with Ψ(t) = 1/q(1 - t^q) that benefit from both -consistency bounds and realizable -consistency when c(x, y) = 1_(x) ≠ y. § RELATIONSHIP BETWEEN -CONSISTENCY BOUNDS AND REALIZABLE -CONSISTENCY Here, we discuss the relationship between -consistency bounds and realizable -consistency. First, realizable -consistency does not imply -consistency bounds, since -consistency bounds require that the relationship holds for all distributions, not just realizable ones. Moreover, -consistency bounds provide non-asymptotic guarantees, while realizable -consistency provides only asymptotic guarantees. Second, -consistency bounds imply realizable -consistency in the standard multi-class classification setting. This is because minimizability gaps vanish under the realizable assumption in standard case. In particular, for comp-sum losses, the following holds (see Appendix <ref> for proof). theoremMinimizability Assume that there exists a zero error solution h^* ∈ with _ℓ_0-1(h^*) = 0 and is closed under scaling. Assume that lim_t → 1Ψ(t) = 0. Then, the minimizability gap of comp-sum loss ℓ_comp vanishes: _ℓ_comp() = 0. However, in the deferral setting, this relationship no longer holds: -consistency bounds cannot imply realizable -consistency. In particular, <cit.> showed that _CE benefits from -consistency bounds, while <cit.> showed that it is not realizable -consistent. The loss function in <cit.> is not Bayes-consistent, and thus does not have -consistency bound guarantees, but is actually realizable -consistent <cit.>. § EXPERIMENTS In this section, we empirically evaluate our proposed surrogate losses and compare them with existing baselines. 
Experimental settings. We follow the setting of <cit.> and conduct experiments on a synthetic dataset: Mixture-of-Gaussians <cit.>, and three real-world datasets: CIFAR-10H <cit.>, HateSpeech <cit.>, and COMPASS <cit.>. For the Mixture-of-Gaussians, we adopt the exact realizable setting from <cit.>, which is realizable by linear functions: there exists a linear hypothesis h^* ∈ achieving zero deferral loss, _(h^*) = 0. For these three datasets, we adopt the same model class as that in <cit.>. Each dataset is randomly split into 70%, 10%, and 20% for training, validation, and testing, respectively. As with <cit.>, we choose the cost function to be the expert's classification error: c(x, y) = 1_(x) ≠ y. We compare our surrogate to four baselines as described in Section <ref>: the cross-entropy surrogate _CE from <cit.>, the one-vs-all surrogate _OvA from <cit.>, the realizable surrogate _RS from <cit.>, and the general surrogate _general from <cit.>. For _OvA, we choose Φ as the logistic loss, following <cit.>. For _general, we choose ℓ as the generalized cross entropy loss with q = 0.7, following <cit.>. For our Realizable L2D surrogate _RL2D, we consider two choices: ℓ as the generalized cross entropy loss with q = 0.7, following <cit.>, and ℓ as the mean absolute error loss (q = 1). Among these, _CE, _OvA and _general are Bayes-consistent but not realizable -consistent; _RS, _RL2D with q = 0.7 and _RL2D with q = 1 are both Bayes-consistent and realizable -consistent, as shown in Sections <ref> and <ref>. Note that in this case, _RS is a special case of _RL2D when Ψ is chosen as t ↦ -log(t). We use the same optimizer, learning rate, and number of epochs as chosen in <cit.>, and we select the model that achieves the highest system accuracy, that is average *1 - (h, x, y), on a validation set. Evaluation. For the three real-world datasets, we report the system accuracy, that is average value of *1 - (h, x, y) on the test data. For completeness, we also include the accepted accuracy, that is the average value of *1_(x) ≠ y 1_(x) ∈ [n]. This metric considers only incorrect predictions ((x) ≠ y) and measures the fraction of those where the system's output ((x)) falls within the valid range of possible outputs ([n]). We also report the coverage, that is the average value of *1_(x) ∈ [n] on the test set, or the fraction of test instances where the system's prediction falls within the valid range ([n]). For each metric, we average results over three runs and report the mean accuracy along with the standard deviation for both our proposed methods and the baseline approaches. For the realizable Mixture-of-Gaussians, we plot the system accuracy of various methods on a held-out test dataset consisting of 5,000 points as we increase the size of the training data. Results. Figures 1 show that for the realizable synthetic dataset, _RS, _RL2D with q = 0.7, and _RL2D with q = 1 are able to achieve a near zero-error solution, while _CE, _OvA, and _general fail to find a near zero-error solution. This verifies our realizable -consistency results in Section <ref>. Table <ref> shows that for the real-world datasets, _RL2D with q = 0.7, and _RL2D with q = 1 either outperform or are comparable to the best baseline in terms of system accuracy on each dataset. This performance is supported by our -consistency bounds and Bayes-consistency results for our Realizable L2D surrogate with respect to the deferral loss , as shown in Sections <ref> and <ref>. 
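For concreteness, the test-time metrics described above can be computed as in the following sketch. The helper is our own, uses 0-indexed labels with the deferral score last, takes accepted accuracy to be accuracy on the non-deferred examples, and feeds in random scores with a synthetic 90%-accurate expert as placeholders rather than the paper's trained models.

import numpy as np

def l2d_metrics(scores, expert_preds, y):
    # scores: (m, n + 1) with the deferral score last; expert_preds, y: length-m label arrays.
    pred = scores.argmax(axis=1)
    defer = pred == scores.shape[1] - 1
    final = np.where(defer, expert_preds, pred)              # system output
    system_acc = float((final == y).mean())
    coverage = float((~defer).mean())                        # fraction not deferred
    accepted_acc = float((pred[~defer] == y[~defer]).mean()) if coverage > 0 else float("nan")
    return system_acc, coverage, accepted_acc

rng = np.random.default_rng(0)
m, n = 200, 3
scores = rng.normal(size=(m, n + 1))                          # placeholder model scores
y = rng.integers(n, size=m)
expert = np.where(rng.uniform(size=m) < 0.9, y, rng.integers(n, size=m))  # synthetic expert
print(l2d_metrics(scores, expert, y))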
Table <ref> also shows that _RL2D achieves reasonable coverage and acceptable accuracy. The system accuracy, coverage, and standard deviations of the baselines match those in <cit.>. Moreover, _RS, _RL2D with q = 0.7, and _RL2D with q = 1 perform differently across various datasets: _RL2D with q = 0.7 outperforms the others on HateSpeech and CIFAR-10H, while _RL2D with q = 1 outperforms the others on COMPASS. Note that in this case, _RS is a special case of _RL2D when Ψ is chosen as t ↦ -log(t). These results show that Realizable L2D can benefit from the flexibility in the choice of Ψ. § CONCLUSION We introduced a broad family of surrogate losses and algorithms for learning to defer, parameterized by a non-increasing function. We established their realizable -consistency properties under mild conditions and proved that several of these surrogate losses benefit from -consistency bounds for cost functions based on classification error and general cost functions, which also imply their Bayes-consistency. This research not only resolves an open question posed in previous work but also lays the groundwork for comparing various consistency notions in learning to defer and standard classification. Looking forward, our approach offers a promising avenue for analyzing multi-expert and two-stage settings. toc § PROOF OF REALIZABLE H-CONSISTENCY * We first prove that for every (h, x, y) ∈××, the following inequality holds: (h, x , y) ≤_RL2D(h, x, y)/Ψ*2/3. We will analyze case by case. * Case I: If (x) ∈ [n] (deferral does not occur): * If 1_(x) ≠ y = 1, then we must have (h, x , y) = 1, e^h(x, y)/∑_y' ∈ e^h(x, y')≤1/2, e^h(x, y) + e^h(x, n + 1)/∑_y' ∈ e^h(x, y')≤2/3 _RL2D(h, x, y) ≥ c(x, y) Ψ*1/2 + *1 - c(x, y)Ψ*2/3≥Ψ*2/3 (h, x , y). * If 1_(x) ≠ y = 0, then we must have _RL2D(h, x, y) ≥ 0 = (h, x , y). * Case II: If (x) = n + 1 (deferral occurs): then we must have (h, x , y) = c(x, y), e^h(x, y)/∑_y' ∈ e^h(x, y')≤1/2 _RL2D(h, x, y) ≥ c(x, y) Ψ*1/2≥Ψ*2/3 (h, x , y). This concludes that (h, x , y) ≤_RL2D(h, x, y)/Ψ*2/3. Next, we prove that _RL2D is realizable -consistent under the assumptions. Consider a distribution and an expert under which there exists a zero error solution h^* ∈ with _(h^*) = 0. Let ĥ be the minimizer of the surrogate loss: ĥ∈_h ∈__RL2D(h). Let α be any real number. Then, the following inequality holds: _(ĥ) ≤1/Ψ*2/3__RL2D(ĥ) ≤1/Ψ*2/3_RL2D ≤1/Ψ*2/3__RL2D(α h^*) ĥ∈_h ∈__RL2D(h) and is closed under scaling = 1/Ψ*2/3*_RL2D(α h^*, x, y) |^*(x) = n + 1ℙ*^*(x) = n + 1 + 1/Ψ*2/3*_RL2D(α h^*, x, y) |^*(x) ∈ [n]ℙ*^*(x) ∈ [n]. For the first term conditional on ^*(x) = n + 1, we must have h^*(x, n + 1) > max_y ∈ h^*(x, y) and c(x, y) = 0 since the data is realizable. Therefore, lim_α→∞*_RL2D(α h^*, x, y) |^*(x) = n + 1ℙ*^*(x) = n + 1 = lim_α→∞[]Ψ*e^α h^*(x, y) + e^α h^*(x, n + 1)/∑_y' ∈ e^α h^*(x, y')|^*(x) = n + 1ℙ*^*(x) = n + 1 = *0 |^*(x) = n + 1ℙ*^*(x) = n + 1lim_t → 1Ψ(t) = 0 and monotone convergence theorem = 0. For the second term conditional on ^*(x) ∈ [n], we must have h^*(x, y) > max_y' ∈, y' ≠ y h(x, y') since the data is realizable. Therefore, lim_α→∞*_RL2D(α h^*, x, y) |^*(x) ∈ [n]ℙ*^*(x) ∈ [n] = lim_α→∞[]c(x, y) Ψ*e^α h^*(x, y)/∑_y' ∈ e^α h^*(x, y') + *1 - c(x, y)Ψ*e^α h^*(x, y) + e^α h^*(x, n + 1)/∑_y' ∈ e^α h^*(x, y')|^*(x) ∈ [n]ℙ*^*(x) ∈ [n] = *0 |^*(x) ∈ [n]ℙ*^*(x) ∈ [n]lim_t → 1Ψ(t) = 0 and monotone convergence theorem = 0. Combining the two analyses, we conclude that _(ĥ) = 0 and thus _RL2D is realizable -consistent with respect to . 
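The pointwise inequality established at the start of this proof is also easy to check numerically. The following Monte-Carlo sketch (our own, 0-indexed, deferral score last) evaluates both sides on random scores, labels, and costs for Psi(t) = -log t; it is a sanity check, not a substitute for the case analysis above.

import numpy as np

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

def deferral_loss(h, y, cost):
    pred = int(np.argmax(h))
    return cost if pred == len(h) - 1 else float(pred != y)

def rl2d_loss(h, y, cost, psi):
    p = softmax(h)
    return cost * psi(p[y]) + (1.0 - cost) * psi(p[y] + p[-1])

psi = lambda t: -np.log(t)                  # any non-increasing psi with psi(2/3) > 0
rng = np.random.default_rng(0)
ok = True
for _ in range(10_000):
    h = rng.normal(size=5)                  # 4 labels + deferral score
    y = int(rng.integers(4))
    c = float(rng.uniform())
    ok &= deferral_loss(h, y, c) <= rl2d_loss(h, y, c, psi) / psi(2.0 / 3.0) + 1e-9
print("pointwise bound held on all samples:", ok)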
§ PROOF OF H-CONSISTENCY BOUNDS Before delving into the proof, we first establish some essential notation and definitions. Let represent a deferral surrogate loss and denote a hypothesis set. We define the conditional error as _(h, x) = _y | x*(h, x, y), the best-in-class conditional error as ^*_*, x = inf_ h ∈_(h, x), and the conditional regret as Δ_, *h, x = _(h, x) - ^*_*, x. We proceed to present a general theorem demonstrating that, to establish -consistency bounds (<ref>) with a concave function Γ, it suffices to lower bound the conditional regret of the surrogate loss by that of the deferral loss, using the same function Γ. theoremTool-Gamma If the following holds for all h ∈ and x ∈, for some concave function Γ: Δ_, (h, x) ≤Γ*Δ_, (h, x), then, for all hypotheses h ∈ and for any distribution, _(h)- ^*_() + _() ≤Γ*_(h) - ^*_() + _(). We can express the expectations of the conditional regrets for and as follows: _x*Δ_, (h, x) = _(h) - ^*_() + _() _x*Δ_, (h, x) = _(h) - ^*_() + _(). Then, by using (<ref>) and taking the expectation, we obtain: _(h) - ^*_() + _() = _x*Δ_, (h, x) ≤_x*Γ*Δ_, (h, x)Eq. (<ref>) ≤Γ*_x*Δ_, (h, x)concavity of Γ = Γ*_(h) - ^*_() + _(). Thus, the proof is complete. Next, to prove -consistency bounds using Theorem <ref>, we will characterize the conditional regret of the deferral loss in the following section. §.§ Auxiliary lemma To simplify the presentation, we introduce the following notation. For any y ∈, define p(x, y) = ℙ(Y = y | X = x) as the conditional probability that Y = y given X = x. For brevity, we will omit the dependency on x in our notation. We denote by h_y = h(x, y) for any y ∈. We also denote by p_y = p(x, y) and q_y = p(x, y) c(x, y) for any y ∈, and p_n + 1 = ∑_y ∈ p(x, y) (1 - c(x, y)). Note that p(x, y) *1 - c(x, y) = p_y - q_y, ∀ y ∈. Let p_ = p_(x) = p_(x) (x) ∈ [n] p_n + 1 (x) = n + 1.. Let y_max = _y ∈ p_y and h_max = _y ∈ h_y. Note that both y_max and h_max are in the label space , while (x) is in the augmented label space . We characterize the conditional regret of the deferral loss as follows. Assume that is symmetric and complete. Then, the conditional regret of the deferral loss can be expressed as follows: Δ_, (h, x) = max*p_y_max, p_n + 1 - p_. We can write the conditional error of the deferral loss as follows: _(h, x) = ∑_y ∈ p(x, y) (h, x, y) = ∑_y ∈ p(x, y) 1_(x) ≠ y 1_(x) ∈ [n] + ∑_y ∈ p(x, y) c(x, y) 1_(x) = n + 1 = (1 - p_(x))1_(x) ∈ [n] + *1 - p_n + 1 1_(x) = n + 1 = 1 - p_. Since is symmetric and complete, for any x ∈, *(x) h ∈ =. Then, the best-in-class conditional error of can be expressed as follows: ^*_*, x = inf_ h ∈_(h, x) = 1 - max*p_n + 1, p_y_max Therefore, Δ_, (h, x) = _(h, x) - ^*_*, x = max*p_y_max, p_n + 1 - p_. Next, we will present the proofs separately in the following sections, by lower bounding the conditional regret of the surrogate loss by that of the deferral loss using Lemma <ref>. §.§ Psi theoremHConsistencyNewMae Assume that is symmetric and complete. Then, for all h ∈ and any distribution, the following -consistency bound holds: _(h) - _() + _() ≤ n *__RL2D(h) - __RL2D() + __RL2D(). We can write the conditional error of the surrogate loss as follows: __RL2D(h, x) = ∑_y ∈ p(x, y) _RL2D(h, x, y) = ∑_y ∈ p(x, y) c(x, y) *1 - e^h(x, y)/∑_y' ∈ e^h(x, y') + ∑_y ∈ p(x, y) *1 - c(x, y)*1 - e^h(x, y) + e^h(x, n + 1)/∑_y' ∈ e^h(x, y') = ∑_y ∈ q_y *1 - e^h_y/∑_y' ∈ e^h_y' + ∑_y ∈*p_y - q_y*1 - e^h_y + e^h_n + 1/∑_y' ∈ e^h_y'. 
By Lemma <ref>, the conditional regret of the deferral loss can be expressed as Δ_, (h, x) = max*p_y_max, p_n + 1 - p_. Next, we will show that the conditional regret of the surrogate loss can be lower bounded as follows: Δ__RL2D, (h, x) = __RL2D(h) - ^*__RL2D() ≥1/n + 1*Δ_, (h, x). We first prove that for any hypothesis h and x ∈, if y_max≠ h_max, then the conditional error of h can be lower bounded by that of h, which satisfies that h(x, y) = h_h_max y = y_max h_y_max y = h_max h_y otherwise.. Indeed, __RL2D(h) - __RL2D( h) = q_y_max*1 - e^h_y_max/∑_y' ∈ e^h_y' + *p_y_max - q_y_max*1 - e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y' +q_h_max*1 - e^h_h_max/∑_y' ∈ e^h_y' + *p_h_max - q_h_max*1 - e^h_h_max + e^h_n + 1/∑_y' ∈ e^h_y' - q_y_max*1 - e^h_h_max/∑_y' ∈ e^h_y' - *p_y_max - q_y_max*1 - e^h_h_max + e^h_n + 1/∑_y' ∈ e^h_y' -q_h_max*1 - e^h_y_max/∑_y' ∈ e^h_y' - *p_h_max - q_h_max*1 - e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y' =1/∑_y' ∈ e^h_y'* p_y_max - p_h_max*e^h_h_max - e^h_y_max≥ 0. Therefore, we only need to lower bound the conditional regret of hypothesis h satisfying y_max = h_max. Next, we will analyze case by case. Note that when (p_y_max - p_n + 1) (h_y_max - h_n + 1) > 0, we have Δ_, (h, x) = max*p_y_max, p_n + 1 - p_ = 0. * Case I: If p_y_max - p_n + 1≥ 0 and h_y_max - h_n + 1≤ 0: we define a new hypothesis h_μ such that h_μ(x, y) = log*e^h_n + 1 + μ y = y_max log*e^h_y_max - μ y = n + 1 h(x, y) otherwise., where e^h_y_max≥μ≥ 0. Then, we can lower bound the conditional regret of _RL2D by using Δ__RL2D, (h, x) ≥__RL2D(h) - ^*__RL2D(h_μ) for any e^h_y_max≥μ≥ 0: Δ__RL2D, (h, x) ≥sup_e^h_y_max≥μ≥ 0*__RL2D(h) - ^*__RL2D(h_μ) ≥sup_e^h_y_max≥μ≥ 0[]q_y_max*1 - e^h_y_max/∑_y' ∈ e^h_y' + *p_y_max - q_y_max*1 - e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y' + ∑_y' ∈, y' ≠ y_max*p_y' - q_y'*1 - e^h_y' + e^h_n + 1/∑_y' ∈ e^h_y' - q_y_max*1 - e^h_n + 1 + μ/∑_y' ∈ e^h_y' - *p_y_max - q_y_max*1 - e^h_n + 1 + e^h_h_max/∑_y' ∈ e^h_y' - ∑_y' ∈, y' ≠ y_max*p_y' - q_y'*1 - e^h_y' + e^h_y_max - μ/∑_y' ∈ e^h_y' = 1/∑_y' ∈ e^h_y'sup_e^h_y_max≥μ≥ 0*q_y_max*e^h_n + 1 + μ - e^h_y_max + *p_n + 1 - p_y_max + q_y_max*e^h_y_max - μ - e^h_n + 1 = *p_y_max - p_n + 1e^h_n + 1/∑_y' ∈ e^h_y'μ = e^h_y_max achieves the maximum ≥1/n + 1*p_y_max - p_n + 1by the assumption h_n + 1≥ h_y_max = h_h_max = 1/n + 1*Δ_, (h, x)by the assumption p_y_max≥ p_n + 1 and h_y_max - h_n + 1≤ 0 * Case II: If p_y_max - p_n + 1≤ 0 and h_y_max - h_n + 1≥ 0: we define a new hypothesis h_μ such that h_μ(x, y) = log*e^h_n + 1 - μ y = y_max log*e^h_y_max + μ y = n + 1 h(x, y) otherwise., where e^h_n + 1≥μ≥ 0. Then, we can lower bound the conditional regret of _RL2D by using Δ__RL2D, (h, x) ≥__RL2D(h) - ^*__RL2D(h_μ) for any e^h_n + 1≥μ≥ 0: Δ__RL2D, (h, x) ≥sup_e^h_n + 1≥μ≥ 0*__RL2D(h) - ^*__RL2D(h_μ) ≥sup_e^h_n + 1≥μ≥ 0[]q_y_max*1 - e^h_y_max/∑_y' ∈ e^h_y' + *p_y_max - q_y_max*1 - e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y' + ∑_y' ∈, y' ≠ y_max*p_y' - q_y'*1 - e^h_y' + e^h_n + 1/∑_y' ∈ e^h_y' - q_y_max*1 - e^h_n + 1 - μ/∑_y' ∈ e^h_y' - *p_y_max - q_y_max*1 - e^h_n + 1 + e^h_h_max/∑_y' ∈ e^h_y' - ∑_y' ∈, y' ≠ y_max*p_y' - q_y'*1 - e^h_y' + e^h_y_max + μ/∑_y' ∈ e^h_y' = 1/∑_y' ∈ e^h_y'sup_e^h_n + 1≥μ≥ 0*q_y_max*e^h_n + 1 - μ - e^h_y_max + *p_n + 1 - p_y_max + q_y_max*e^h_y_max + μ - e^h_n + 1 = *p_n + 1- p_y_maxe^h_y_max/∑_y' ∈ e^h_y'μ = e^h_n + 1 achieves the maximum ≥1/n + 1*p_n + 1- p_y_maxby the assumption h_h_max = h_y_max≥ h_n + 1 = 1/n + 1*Δ_, (h, x)by the assumption p_n + 1≥ p_y_max and h_y_max - h_n + 1≥ 0 This proves the inequality (<ref>). 
By Theorem <ref>, we complete the proof. §.§ Psi theoremHConsistencyNewLog Assume that is symmetric and complete. Assume that c(x, y) = 1_(x) ≠ y. Then, for all h ∈ and any distribution, the following -consistency bound holds: _(h) - _() + _() ≤ 2 √(__RL2D(h) - __RL2D() + __RL2D()). We can write the conditional error of the surrogate loss as follows: __RL2D(h, x) = ∑_y ∈ p(x, y) _RL2D(h, x, y) = -∑_y ∈ p(x, y) c(x, y) log*e^h(x, y)/∑_y' ∈ e^h(x, y') - ∑_y ∈ p(x, y) *1 - c(x, y)log*e^h(x, y) + e^h(x, n + 1)/∑_y' ∈ e^h(x, y') = -∑_y ∈ q_y log*e^h_y/∑_y' ∈ e^h_y' - ∑_y ∈*p_y - q_ylog*e^h_y + e^h_n + 1/∑_y' ∈ e^h_y'. By Lemma <ref>, the conditional regret of the deferral loss can be expressed as Δ_, (h, x) = max*p_y_max, p_n + 1 - p_. Next, we will show that the conditional regret of the surrogate loss can be lower bounded as follows: Δ__RL2D, (h, x) = __RL2D(h) - ^*__RL2D() ≥1/2*Δ_, (h, x) ^2. We first consider the case where (x) ≠ y_max. Otherwise, it would be straightforward to see that the bound holds. In the case where (x) ≠ y_max, we have q_y_max = p_y_max. We first prove that for any hypothesis h and x ∈, if y_max≠ h_max, then the conditional error of h can be lower bounded by that of h, which satisfies that h(x, y) = h_h_max y = y_max h_y_max y = h_max h_y otherwise.. Indeed, __RL2D(h) - __RL2D( h) = -q_y_maxlog*e^h_y_max/∑_y' ∈ e^h_y' - *p_y_max - q_y_maxlog*e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y' -q_h_maxlog*e^h_h_max/∑_y' ∈ e^h_y' - *p_h_max - q_h_maxlog*e^h_h_max + e^h_n + 1/∑_y' ∈ e^h_y' + q_y_maxlog*e^h_h_max/∑_y' ∈ e^h_y' + *p_y_max - q_y_maxlog*e^h_h_max + e^h_n + 1/∑_y' ∈ e^h_y' + q_h_maxlog*e^h_y_max/∑_y' ∈ e^h_y' + *p_h_max - q_h_maxlog*e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y' = * q_y_max - q_h_maxlog*e^h_h_max/e^h_y_max + * p_y_max - q_y_max - p_h_max + q_h_maxlog*e^h_h_max + e^h_n + 1/e^h_y_max + e^h_n + 1 ≥* p_y_max - p_h_maxlog*e^h_h_max + e^h_n + 1/e^h_y_max + e^h_n + 1 ≥ 0. Therefore, we only need to lower bound the conditional regret of hypothesis h satisfying y_max = h_max. Since c(x, y) = 1_(x) ≠ y, we have p_y_max≥ p_n + 1 = p_(x). Note that when (p_y_max - p_n + 1) (h_y_max - h_n + 1) > 0, we have Δ_, (h, x) = max*p_y_max, p_n + 1 - p_ = 0. When h_y_max - h_n + 1≤ 0, we define a new hypothesis h_μ such that h_μ(x, y) = log*e^h_n + 1 + μ y = y_max log*e^h_y_max - μ y = n + 1 h(x, y) otherwise., where e^h_y_max - e^h_n + 1≤μ≤ e^h_y_max. Then, we can lower bound the conditional regret of _RL2D by using Δ__RL2D, (h, x) ≥__RL2D(h) - ^*__RL2D(h_μ) for any e^h_y_max - e^h_n + 1≤μ≤ e^h_y_max: Δ__RL2D, (h, x) ≥sup_e^h_y_max≥μ≥ e^h_y_max - e^h_n + 1*__RL2D(h) - ^*__RL2D(h_μ) ≥sup_e^h_y_max≥μ≥ e^h_y_max - e^h_n + 1[]-q_y_maxlog*e^h_y_max/∑_y' ∈ e^h_y' - *p_y_max - q_y_maxlog*e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y' - ∑_y' ∈, y' ≠ y_max*p_y' - q_y'log*e^h_y' + e^h_n + 1/∑_y' ∈ e^h_y' + q_y_maxlog*e^h_n + 1 + μ/∑_y' ∈ e^h_y' + *p_y_max - q_y_maxlog*e^h_n + 1 + e^h_y_max/∑_y' ∈ e^h_y' + ∑_y' ∈, y' ≠ y_max*p_y' - q_y'log*e^h_y' + e^h_y_max - μ/∑_y' ∈ e^h_y' = sup_e^h_y_max≥μ≥ e^h_y_max - e^h_n + 1*q_y_maxloge^h_n + 1 + μ/e^h_y_max + ∑_y' ∈, y' ≠ y_max*p_y' - q_y'loge^h_y' + e^h_y_max - μ/e^h_y' + e^h_n + 1 e^h_y_max - e^h_n + 1≤μ≤ e^h_y_max ≥sup_e^h_y_max≥μ≥ e^h_y_max - e^h_n + 1*q_y_maxloge^h_n + 1 + μ/e^h_y_max + ∑_y' ∈, y' ≠ y_max*p_y' - q_y'loge^h_y_max - μ/e^h_n + 1 = sup_e^h_y_max≥μ≥ e^h_y_max - e^h_n + 1*q_y_maxloge^h_n + 1 + μ/e^h_y_max + *p_n + 1 - *p_y_max - q_y_maxloge^h_y_max - μ/e^h_n + 1. 
By differentiating with respect to μ, we obtain that μ = q_y_max e^h_y_max - *p_n + 1 - *p_y_max - q_y_max e^h_n + 1/q_y_max + *p_n + 1 - *p_y_max - q_y_max achieves the maximum. Plugging it into the expression, we have Δ__RL2D, (h, x) ≥ q_y_maxlog**e^h_y_max + e^ h_n + 1q_y_max/e^h_y_max*q_y_max + *p_n + 1 - *p_y_max - q_y_max + *p_n + 1 - *p_y_max - q_y_maxlog**e^h_y_max + e^ h_n + 1*p_n + 1 - *p_y_max - q_y_max/ e^ h_n + 1*q_y_max+*p_n + 1 - *p_y_max - q_y_max. This can be further lower bounded by taking the minimum over h ∈, where the minimum is attained when e^h_y_max = e^ h_n + 1 Therefore, Δ__RL2D, (h, x) ≥ q_y_maxlog*2q_y_max/q_y_max + *p_n + 1 - *p_y_max - q_y_max + *p_n + 1 - *p_y_max - q_y_maxlog*2*p_n + 1 - *p_y_max - q_y_max/ q_y_max+*p_n + 1 - *p_y_max - q_y_max. By applying Pinsker’s inequality <cit.>, we obtain Δ__RL2D, (h, x) ≥*q_y_max+p_n + 1 - *p_y_max - q_y_max ×1/2**q_y_max/q_y_max+p_n + 1 - *p_y_max - q_y_max-1/2+*p_n + 1 - *p_y_max - q_y_max/q_y_max+p_n + 1 - *p_y_max - q_y_max-1/2^2 ≥1/2*p_y_max - p_n + 1^2/q_y_max+p_n + 1 - *p_y_max - q_y_max ≥1/2*p_y_max - p_n + 1^2 q_y_max+p_n + 1 - *p_y_max - q_y_max≤ 1 = 1/2*Δ_, (h, x)^2 by the assumption p_y_max≥ p_n + 1 and h_y_max≤ h_n + 1 This proves the inequality (<ref>). In the case where (x) = y_max, we have p_n + 1 = p_y_max. By Lemma <ref>, the conditional regret of the deferral loss can be expressed as Δ_, (h, x) = p_n + 1 - p_. If (x) = n + 1, then we have Δ_, (h, x) = 0. Otherwise, when (x) ≠ n + 1, we can proceed in the similar way as above, by defining a new hypothesis h_μ such that h_μ(x, y) = log*e^h_n + 1 + μ y = (x) log*e^h_(x) - μ y = n + 1 h(x, y) otherwise. Then, we can lower bound the conditional regret of _RL2D by using Δ__RL2D, (h, x) ≥__RL2D(h) - ^*__RL2D(h_μ), by applying the same derivation as above, modulo replacing y_max with (x). This leads to the inequality (<ref>) as well. By Theorem <ref>, we complete the proof. §.§ Psi theoremHConsistencyNewGCE Assume that is symmetric and complete. Assume that c(x, y) = 1_(x) ≠ y. Then, for all h ∈ and any distribution, the following -consistency bound holds: _(h) - _() + _() ≤ 2 √((n + 1)^α*__RL2D(h) - __RL2D() + __RL2D()). We can write the conditional error of the surrogate loss as follows: __RL2D(h, x) = ∑_y ∈ p(x, y) _RL2D(h, x, y) = 1/q∑_y ∈ p(x, y) c(x, y) *1- *e^h(x, y)/∑_y' ∈ e^h(x, y')^q + 1/q∑_y ∈ p(x, y) *1 - c(x, y)*1 - *e^h(x, y) + e^h(x, n + 1)/∑_y' ∈ e^h(x, y')^q = 1/q∑_y ∈ q_y* 1- *e^h_y/∑_y' ∈ e^h_y'^q + 1/q∑_y ∈*p_y - q_y* 1- *e^h_y + e^h_n + 1/∑_y' ∈ e^h_y'^q. By Lemma <ref>, the conditional regret of the deferral loss can be expressed as Δ_, (h, x) = max*p_y_max, p_n + 1 - p_. Next, we will show that the conditional regret of the surrogate loss can be lower bounded as follows: Δ__RL2D, (h, x) = __RL2D(h) - ^*__RL2D() ≥1/2(n + 1)^q*Δ_, (h, x) ^2. We first consider the case where (x) ≠ y_max. Otherwise, it would be straightforward to see that the bound holds. In the case where (x) ≠ y_max, we have q_y_max = p_y_max. We first prove that for any hypothesis h and x ∈, if y_max≠ h_max, then the conditional error of h can be lower bounded by that of h, which satisfies that h(x, y) = h_h_max y = y_max h_y_max y = h_max h_y otherwise.. 
Indeed, q *__RL2D(h) - __RL2D( h) = q_y_max*1 -*e^h_y_max/∑_y' ∈ e^h_y'^q + *p_y_max - q_y_max*1 - *e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y'^q + q_h_max*1 - *e^h_h_max/∑_y' ∈ e^h_y'^q + *p_h_max - q_h_max*1 - *e^h_h_max + e^h_n + 1/∑_y' ∈ e^h_y'^q - q_y_max*1 - *e^h_h_max/∑_y' ∈ e^h_y'^q - *p_y_max - q_y_max*1 - *e^h_h_max + e^h_n + 1/∑_y' ∈ e^h_y'^q - q_h_max*1 - *e^h_y_max/∑_y' ∈ e^h_y'^q + *p_h_max - q_h_max*1 - *e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y'^q = * q_y_max - q_h_max**1 -*e^h_y_max/∑_y' ∈ e^h_y'^q - *1 - *e^h_h_max/∑_y' ∈ e^h_y'^q + * p_y_max - q_y_max - p_h_max + q_h_max**1 - *e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y'^q - *1 - *e^h_h_max + e^h_n + 1/∑_y' ∈ e^h_y'^q ≥* p_y_max - p_h_max**1 - *e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y'^q - *1 - *e^h_h_max + e^h_n + 1/∑_y' ∈ e^h_y'^q ≥ 0. Therefore, we only need to lower bound the conditional regret of hypothesis h satisfying y_max = h_max. Since c(x, y) = 1_(x) ≠ y, we have p_y_max≥ p_n + 1 = p_(x). Note that when (p_y_max - p_n + 1) (h_y_max - h_n + 1) > 0, we have Δ_, (h, x) = max*p_y_max, p_n + 1 - p_ = 0. When h_y_max - h_n + 1≤ 0, we define a new hypothesis h_μ such that h_μ(x, y) = log*e^h_n + 1 + μ y = y_max log*e^h_y_max - μ y = n + 1 h(x, y) otherwise., where e^h_y_max - e^h_n + 1≤μ≤ e^h_y_max. Then, we can lower bound the conditional regret of hypothesis h by using Δ__RL2D, (h, x) ≥__RL2D(h) - ^*__RL2D(h_μ) for any e^h_y_max - e^h_n + 1≤μ≤ e^h_y_max: Δ__RL2D, (h, x) ≥sup_e^h_y_max≥μ≥ e^h_y_max - e^h_n + 1*__RL2D(h) - ^*__RL2D(h_μ) ≥1/qsup_e^h_y_max≥μ≥ e^h_y_max - e^h_n + 1[]q_y_max*1 - *e^h_y_max/∑_y' ∈ e^h_y'^q + *p_y_max - q_y_max*1 - *e^h_y_max + e^h_n + 1/∑_y' ∈ e^h_y'^q + ∑_y' ∈, y' ≠ y_max*p_y' - q_y'*1 - *e^h_y' + e^h_n + 1/∑_y' ∈ e^h_y'^q - q_y_max*1 - *e^h_n + 1 + μ/∑_y' ∈ e^h_y'^q - *p_y_max - q_y_max*1 - *e^h_n + 1 + e^h_y_max/∑_y' ∈ e^h_y'^q - ∑_y' ∈, y' ≠ y_max*p_y' - q_y'*1 - *e^h_y' + e^h_y_max - μ/∑_y' ∈ e^h_y'^q ≥1/qsup_e^h_y_max≥μ≥ e^h_y_max - e^h_n + 1[]q_y_max*1 - *e^h_y_max/∑_y' ∈ e^h_y'^q + ∑_y' ∈, y' ≠ y_max*p_y' - q_y'*1 - *e^h_n + 1/∑_y' ∈ e^h_y'^q - q_y_max*1 - *e^h_n + 1 + μ/∑_y' ∈ e^h_y'^q - ∑_y' ∈, y' ≠ y_max*p_y' - q_y'*1 - *e^h_y_max - μ/∑_y' ∈ e^h_y'^qe^h_y_max - e^h_n + 1≤μ≤ e^h_y_max = 1/qsup_e^h_y_max≥μ≥ e^h_y_max - e^h_n + 1[]q_y_max*1 - *e^h_y_max/∑_y' ∈ e^h_y'^q + *p_n + 1 - *p_y_max - q_y_max*1 - *e^h_n + 1/∑_y' ∈ e^h_y'^q - q_y_max*1 - *e^h_n + 1 + μ/∑_y' ∈ e^h_y'^q - *p_n + 1 - *p_y_max - q_y_max*1 - *e^h_y_max - μ/∑_y' ∈ e^h_y'^q By differentiating with respect to μ, we obtain that μ = *p_n + 1 - *p_y_max - q_y_max^1/q - 1 e^h_y_max - *q_y_max^1/q - 1 e^h_n + 1/*q_y_max^1/q - 1 + *p_n + 1 - *p_y_max - q_y_max^1/q - 1 achieves the maximum. Plugging it into the expression, we have Δ__RL2D, (h, x) ≥1/q[] - q_y_max*e^h_y_max/∑_y' ∈ e^h_y'^q - *p_n + 1 - *p_y_max - q_y_max*e^h_n + 1/∑_y' ∈ e^h_y'^q + q_y_max**e^ h_y_max + e^ h_n + 1*p_n + 1 - *p_y_max - q_y_max^1/q - 1/∑_y' ∈ e^h_y'*q_y_max^1/q - 1+*p_n + 1 - *p_y_max - q_y_max^1/q - 1 ^q + *p_n + 1 - *p_y_max - q_y_max**e^ h_y_max + e^ h_n + 1q_y_max^1/q - 1/∑_y' ∈ e^h_y'*q_y_max^1/q - 1 + *p_n + 1 - *p_y_max - q_y_max^1/q - 1^q This can be further lower bounded by taking the minimum over h ∈, where the minimum is attained when e^h_n + 1 = e^h_y_max = e^h_y for all y ∈. 
Therefore, Δ__RL2D, (h, x) ≥ 2/(n + 1)^q **q_y_max^1/1- q+*p_n + 1 - *p_y_max - q_y_max^1/1- q/2^1- q - p_n + 1 - p_y_max/2 minimum is attained when e^ h_n + 1 = e ^ h_y_max = e^h_y, ∀ y ∈ ≥1/2 (n + 1)^q *p_y_max - p_n + 1^2 q_y_max+*p_n + 1 - *p_y_max - q_y_max≤ 1 and by analyzing the Taylor expansion = 1/2 (n + 1)^q *Δ_, (h, x)^2 p_y_max≥ p_n + 1 and h_y_max≤ h_n + 1 This proves the inequality (<ref>). In the case where (x) = y_max, we have p_n + 1 = p_y_max. By Lemma <ref>, the conditional regret of the deferral loss can be expressed as Δ_, (h, x) = p_n + 1 - p_. If (x) = n + 1, then we have Δ_, (h, x) = 0. Otherwise, when (x) ≠ n + 1, we can proceed in the similar way as above, by defining a new hypothesis h_μ such that h_μ(x, y) = log*e^h_n + 1 + μ y = (x) log*e^h_(x) - μ y = n + 1 h(x, y) otherwise. Then, we can lower bound the conditional regret of _RL2D by using Δ__RL2D, (h, x) ≥__RL2D(h) - ^*__RL2D(h_μ), by applying the same derivation as above, modulo replacing y_max with (x). This leads to the inequality (<ref>) as well. By Theorem <ref>, we complete the proof. § PROOF OF THEOREM <REF> * By definition and the Lebesgue dominated convergence theorem, we have _ℓ_comp() ≤^*_ℓ_comp() ≤lim_α→∞*Ψ*e^α h^*(x, y)/∑_y' ∈ e^α h^*(x, y') = 0. This completes the proof. § FUTURE WORK While we presented a comprehensive study of surrogate loss functions for learning to defer, our work focused on the standard single-expert and single-stage setting, aligning with previous work <cit.>. However, an interesting direction is to extend our approach to multi-expert <cit.> and two-stage settings <cit.>, which we have left for future work.
http://arxiv.org/abs/2407.12228v1
20240717005333
Variational approach to light-matter interaction: Bridging quantum and semiclassical limits
[ "Yiying Yan", "Zhiguo Lü", "JunYan Luo" ]
quant-ph
[ "quant-ph" ]
yiyingyan@zust.edu.cn Department of Physics, School of Science, Zhejiang University of Science and Technology, Hangzhou 310023, China zglv@sjtu.edu.cn Key Laboratory of Artificial Structures and Quantum Control (Ministry of Education), School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China Department of Physics, School of Science, Zhejiang University of Science and Technology, Hangzhou 310023, China § ABSTRACT We present a time-dependent variational approach with the multiple Davydov D_2 trial state to simulate the dynamics of light-matter systems when the field is in a coherent state with an arbitrary finite mean photon number. The variational approach captures not only the system dynamics but also the field dynamics and is applicable to a variety of quantum models of light-matter interaction such as the Jaynes-Cummings model, Rabi model, and Dicke model, and is feasible to tackle the multimode quantized fields. By comparison of the variational and semiclassical dynamics of both the system and field, we illustrate that the variational dynamics from the quantum models agrees with those from the corresponding semiclassical models as long as the mean number of photons is sufficiently large. Moreover, we illustrate that in the crossover between the quantum and semiclassical limits, the quantum corrections lead to the collapse of the oscillations in dynamics, which is absent in the semiclassical models. The variational approach provides a unified treatment of light-matter interaction from the quantum to the semiclassical limit. Variational approach to light-matter interaction: Bridging quantum and semiclassical limits JunYan Luo July 22, 2024 =========================================================================================== § INTRODUCTION Light-matter interaction between quantum systems and coherent fields plays a fundamental role in quantum optics <cit.>, which enables the control of the quantum system by the field or vice versa and is highly relevant to the realization of quantum technologies such as quantum information processing <cit.>, quantum sensing <cit.>, quantum metrology <cit.>, quantum battery <cit.> etc. Theoretically, there exist two kinds of descriptions of the light-matter interaction. One is based on the full quantum description with a quantized field. The other is based on the semiclassical description where the field is treated classically. Two paradigmatic models for such theories are quantum and semiclassical Rabi models <cit.>. Of particular interest, the study of these basic models has recieved renewed attention in artificial atoms. Both theoretical and experimental studies on the quantum Rabi model have been extended to the so-called ultrastrong coupling regime <cit.>, where the coupling constant becomes a considerable fraction of the field frequency. For the semiclassical Rabi model, a strong driving with the Rabi frequency being comparable to the transition frequency of the qubit can be achieved <cit.>. Such a strong coupling (driving) regime has potential applications in the realization of ultrafast quantum gates <cit.>. Provided the field is initially in a coherent state, the quantum and semiclassical descriptions of light-matter interaction become consistent with each other in the semiclassical limit, that is, the mean photon number of the field tends to infinity and the coupling constant tends to zero while their product remains constant <cit.>. 
However, in realistic situations, the quantum system can only interact with a finite number of photons other than an infinite number of photons. A question naturally raises that under which condition a finite number of photons cause a negligible deviation between the quantum and semiclassical models. Moreover, how quantum corrections in the presence of a finite number of photons contribute to the dynamics remains, to the best of our knowledge, barely explored, which may be helpful in understanding particularly the crossover between the quantum and semiclassical limit. Since fields are introduced in different ways in the quantum and semiclassical Hamiltonians, the light-matter interactions in the quantum and semiclassical limit are typically treated by different theoretical approaches. For instance, there are a variety of theoretical methods developed for the quantum Rabi model and its variants, e.g., von Vleck perturbation <cit.>, unitary transformations <cit.>, variational approach <cit.>, extended coherent states <cit.>, etc. However, the Floquet theory mainly applies to the semiclassical models <cit.>. In contrast to the full quantum models, the field dynamics is ignored in the semiclassical models. Very recently, a so-called photon-resolved Floquet theory has been proposed to study the field dynamics of semiclassical models <cit.>. However, such an approach is established in the semiclassical limit and thus is inapplicable in the crossover between the quantum and semiclassical limit. In this work, we present a unified numerical method that combines two sequential unitary transformations with a time-dependent variational approach equipped with the multiple Davydov ansatz <cit.> to simulate the dynamics of light-matter systems from quantum to semiclassical limit when the field is initially in a coherent state. One advantage of the variational approach is that the multiple Davydov ansatz uses the coherent states as the bases, which effectively prevents the exponential increase in the size of the Hilbert space due to the increase in the number of modes, and thus is applicable to a multimode case. More importantly, the variational approach simultaneously captures the dynamics of both the quantum system and fields. Particularly, we illustrate that the statistical characteristic function of the field can be computed, which allows us to calculate the mean value and variance of the photon numbers of the field as well as the photon-number distribution. We also show how the field dynamics can be calculated from semiclassical models. We apply the present approaches to a variety of light-matter systems including the Jaynes-Cummings (JC) model, the Rabi model, and the Dicke model. By comparing the variational dynamics from the full quantum models with that from the semiclassical models, we examine the consistency between the quantum and semiclassical models in the presence of a large number of photons and illustrate the role of quantum corrections in the quantum-semiclassical crossover. The rest of the manuscript is organized as follows. In Sec. <ref>, we present the variational approach to treat the light-matter interaction in the presence of large mean photon numbers. We also show how to calculate the field dynamics in the semiclassical model. In Sec. <ref>, we apply the variational approach and semiclassical approach to several light-matter systems and calculate the system and field dynamics. In Sec. <ref>, we draw the conclusions. 
§ MODEL AND METHODS The full quantum description of the interaction between a quantum system and a multimode coherent field can be described by the following Hamiltonian (we set ħ=1 throughout this work) H=H_ S+∑_k=1^Nω_kb_k^†b_k+∑_k=1^Ng_k/2(b_k^†V_k+b_kV_k^†), where H_ S is the free Hamiltonian of the quantum system and V_k is the interaction operator acting on the Hilbert space of the quantum system. b_k (b_k^†) is the annihilation (creation) operator of the bosonic field with frequency ω_k. g_k is the coupling constant. N is the total number of modes. Note that either V_k≠ V_k^† or V_k=V_k^† is possible, corresponding to a rotating-wave approximation (RWA) and non-RWA Hamiltonian, respectively. The time evolution of the composite system is governed by the time-dependent Schrödinger equation, id/dt|Ψ(t)⟩=H|Ψ(t)⟩, where |Ψ(t)⟩ is the state of the total system. In this work, we consider a factorized initial state of the total system |Ψ(0)⟩=|ψ(0)⟩⊗|α⃗⟩, where |ψ(0)⟩ is an initial state of the quantum system, and |α⃗⟩ is a multimode coherent state and reads |α⃗⟩ = |α_1⟩⊗|α_2⟩⊗⋯⊗|α_N⟩ ≡ exp(∑_k=1^Nα_kb_k^†- h.c.)|0⟩≡ D(α⃗)|0⟩, where |0⟩ is a multimode vacuum state, D(α⃗) is a displacement operator, and α_k≡|α_k|e^-iϕ_k is a complex number with the modulus |α_k| and phase ϕ_k. §.§ Time-dependent variational approach To achieve a manageable numerical simulation, we convert the time-evolution problem with the initial coherent state of a large number of photons into a new time-evolution problem with an initial vacuum state. This can be achieved by two sequential unitary transformations. First, we transform the time-dependent Schrödinger equation into the interaction picture governed by the free Hamiltonian of the field H_ F=∑_k=1^Nω_kb_k^†b_k. In doing so, we have a new time-evolution problem: id/dt|Ψ̃(t)⟩=H̃(t)|Ψ̃(t)⟩, where the Hamiltonian becomes time dependent H̃(t)=H_ S+∑_k=1^Ng_k/2(V_k^†b_ke^-iω_kt+V_kb_k^†e^iω_kt), and the transformed wave function is related to the original one via |Ψ̃(t)⟩=exp(iH_ Ft)|Ψ(t)⟩. The initial state in the transformed frame remains the same as that in the original one |Ψ̃(0)⟩=|Ψ(0)⟩. To proceed, we apply a displacement transformation on the equation of motion and the initial state, yielding id/dt|Ψ^'(t)⟩=H^'(t)|Ψ^'(t)⟩, where the displaced wave function is related to the original one via |Ψ^'(t)⟩=D^†(α⃗)exp(iH_ Ft)|Ψ(t)⟩, and the transformed Hamiltonian is given by H^'(t) = D^†(α⃗)H̃(t)D(α⃗) = H_ S(t)+∑_k=1^Ng_k/2(V_k^†b_ke^-iω_kt+V_kb_k^†e^iω_kt), where H_ S(t)=H_ S+∑_k=1^NΩ_k/2[V_ke^i(ω_kt+ϕ_k)+V_k^†e^-i(ω_kt+ϕ_k)], Ω_k=|α_k|g_k. In doing so, we have the following initial state |Ψ^'(0)⟩=D^†(α⃗)|Ψ(0)⟩=|ψ(0)⟩|0⟩, where the field is initially in the vacuum state in the transformed frame. Note that in the semiclassical limit of g_k→0 and α_k→∞ but |α_k|g_k=Ω_k, the full quantum Hamiltonian of light-matter interaction becomes the semiclassical Hamiltonian, i.e., H^'(t)→ H_ S(t) <cit.>. We now use a time-dependent variational approach to compute the dynamics described by Eq. (<ref>). The variational approach is based on the Dirac-Frenkel time-dependent variational principle and the multiple Davydov D_2 ansatz. The former allows us to calculate the optimal solution to the time-dependent Schrödinger equation with a parameterized trial state. 
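Before specifying the trial state, the practical payoff of the displaced interaction frame can be checked by brute force: for a single-mode Rabi-type coupling (V = σ_x), the transformed field starts in the vacuum, so a modest Fock cutoff suffices even when the lab-frame mean photon number is large. The sketch below integrates the transformed Schrödinger equation directly with SciPy; all parameter values and the cutoff are illustrative, and this is a check of the frame transformation rather than the variational scheme developed in the following.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: resonance w = w0, Rabi frequency Omega = 0.5*w0.
w0, w, Omega, phi = 1.0, 1.0, 0.5, 0.0
alpha = np.sqrt(1.0e3)
g = Omega / alpha                       # g*|alpha| = Omega is held fixed

nfock = 40                              # Fock cutoff for the displaced field (initially vacuum)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.diag(np.sqrt(np.arange(1, nfock)), k=1)
If = np.eye(nfock)

SZ, SX = np.kron(sz, If), np.kron(sx, If)
SXb, SXbd = np.kron(sx, b), np.kron(sx, b.conj().T)

def H(t):
    # transformed Hamiltonian H'(t) for the single-mode Rabi model (V = sigma_x)
    return (0.5 * w0 * SZ
            + Omega * np.cos(w * t + phi) * SX
            + 0.5 * g * (np.exp(-1j * w * t) * SXb + np.exp(1j * w * t) * SXbd))

psi0 = np.zeros(2 * nfock, dtype=complex)
psi0[nfock] = 1.0                       # |ground> (x) |vacuum> in the displaced frame

tlist = np.linspace(0.0, 200.0, 2001)
sol = solve_ivp(lambda t, y: -1j * (H(t) @ y), (tlist[0], tlist[-1]), psi0,
                t_eval=tlist, rtol=1e-8, atol=1e-10)

P2 = np.sum(np.abs(sol.y[:nfock, :])**2, axis=0)     # excited-state population
P2_sc = np.sin(0.5 * Omega * sol.t)**2               # semiclassical Rabi oscillation at resonance
# For large |alpha| (with g*|alpha| fixed) P2 tracks P2_sc; for the modest alpha
# chosen here, deviations (the collapse discussed later) build up at longer times.
```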
In this work, we use the multiple Davydov D_2 ansatz |D_2^M(t)⟩, which is suitable for the spin-boson-like problems and is parameterized as follows <cit.>: |D_2^M(t)⟩=∑_n=1^M∑_j=1^N_SA_nj|j⟩|f_n⟩, where M is the number of the coherent states, A_nj are time-dependent variational parameters, {|j⟩|j=1,2,…,N_S} represents a set of orthornormal bases for the quantum system, and |f_n⟩ = exp(∑_k=1^Nf_nkb_k^†- h.c.)|0⟩ are multimode coherent states with f_nk being time-dependent variational parameters. The equations of motion for variational parameters are determined by the Dirac-Frenkel time-dependent variational principle <cit.>, that is, ⟨δ D_2^M(t)|[i∂_t-H^'(t)]|D_2^M(t)⟩=0, where ⟨δ D_2^M(t)| represents the variation of the adjoint state. From Eq. (<ref>), the equations of motion for the variational parameters are simply given by i⟨ j|⟨ f_l|Ḋ_2^M(t)⟩=⟨ j|⟨ f_l|H^'(t)|D_2^M(t)⟩, i∑_j=1^N_SA_lj^∗⟨ j|⟨ f_l|b_k|Ḋ_2^M(t)⟩=∑_j=1^N_SA_lj^∗⟨ j|⟨ f_l|b_kH^'(t)|D_2^M(t)⟩. These equations of motion are a set of implicit first-order nonlinear differential equations and can be solved by the fourth-order Runge-Kutta algorithm <cit.>. The detailed derivation and implementing numerical simulation of Eqs. (<ref>) and (<ref>) are presented in Appendix. On solving the equations of motion, we can compute the quantities of interest for both the quantum system and the field. The observable of quantum system can be directly computed since the applied unitary transformations do not influence the operators acting on the Hilbert space of the matter system, e.g., the population of the state |j⟩ can be given by P_j(t) = ⟨ D_2^M(t)|j⟩⟨ j|D_2^M(t)⟩ = ∑_l,n=1^MA_lj^∗ S_lnA_nj, where S_ln=exp[∑_k=1^N(f_lk^∗f_nk-|f_lk|^2/2-|f_nk|^2/2)]. For the field, we calculate the moment-generating function, G_χ⃗(t) = ⟨ e^i∑_k=1^Nχ_kb_k^†b_k⟩_t = ⟨ D_2^M(t)|D^†(α⃗)e^i∑_k=1^Nχ_kb_k^†b_kD(α⃗)|D_2^M(t)⟩ = ∑_n,l=1^M∑_j=1^N_SA_lj^∗ S_lnA_nj ×exp[∑_k=1^N(e^iχ_k-1)(α_k^∗+f_lk^∗)(α_k+f_nk)], where χ_k∈[0,2π), ⟨·⟩_t represents the average taken over the state in the laboratory frame, and we have used the identity of the displacement operators: D(α)D(β)=exp(αβ^∗-α^∗β/2)D(α+β). The moment-generating function contains the information of interest for the field. Particularly, the mean photon number in the mode k can be computed as n_k(t) = ⟨ b_k^†b_k⟩_t = .d/idχ_kG_χ⃗(t)|_χ_k=0 = ∑_l,n=1^M∑_j=1^N_SA_lj^∗ S_lnA_nj(α_k^∗+f_lk^∗)(α_k+f_nk). The variance of the photon number of the mode k is given by σ_k^2(t)=.d^2/i^2dχ_k^2G_χ⃗(t)|_χ_k=0-n_k^2(t). Additionally, the probability distribution of the photon numbers at time t can also be computed, p(n⃗,t)=1/(2π)^N∫_0^2πG_χ⃗(t)e^-iχ⃗·n⃗dχ⃗, where n⃗=(n_1,n_2,…,n_N) with n_k being a non-negative integer that specifies the photon number of the mode k. §.§ Dynamics in the semiclassical limit In this section, we compute the dynamics of the quantum system and field in the semiclassical limit. We first compute the population dynamics of the quantum system with Eq. (<ref>). To this end, we express the time-evolution operator U^'(t)= Texp[-i∫_0^tH^'(τ)dτ] with T the time-ordering operator by using the product form, U^'(t)=U_ S(t)U_ I^'(t), where U_ S(t)= Texp[-i∫_0^tH_ S(τ)dτ] is the time-evolution operator for the semiclassical Hamiltonian. 
U_ I^'(t) can be determined by iteration from the time-dependent Schrödinger equation and is readily given in terms of the Dyson's series, which reads U_ I^'(t) = 1-i∫_0^tH_ I(τ_1)dτ_1 -∫_0^t∫_0^τ_1dτ_1dτ_2H_ I(τ_1)H_ I(τ_2)+…, where H_ I(t)=∑_k=1^Ng_k/2[V_k^†(t)b_ke^-iω_kt+V_k(t)b_k^†e^iω_kt], V_k(t)=U_ S^†(t)V_kU_ S(t). With the above results at hand, we can calculate the population of the state |j⟩ as follows: P_j(t) = ⟨Ψ^'(t)|j⟩⟨ j|Ψ^'(t)⟩ = ⟨Ψ^'(0)|U^'†(t)|j⟩⟨ j|U^'(t)|Ψ^'(0)⟩ = P_j^ sc(t)+P_j^ qc(t), with P_j^ sc(t)=⟨Π_j(t)⟩_0, P_j^ qc(t) = ∑_k=1^Ng_k^2/4∫_0^t∫_0^tdτ_1dτ_2⟨ V_k^†(τ_1)Π_j(t)V_k(τ_2)⟩_0e^-iω_k(τ_1-τ_2) - Re∑_k=1^Ng_k^2/2∫_0^t∫_0^τ_1dτ_1dτ_2⟨Π_j(t)V_k^†(τ_1)V_k(τ_2)⟩_0e^-iω_k(τ_1-τ_2)+O(g_k^4), where ⟨·⟩_0 represents the average taken over the initial state of the quantum system |ψ(0)⟩ and Π_j(t)=U_ S^†(t)|j⟩⟨ j|U_ S(t). In the above derivation, we have used the fact that the fields are in the vacuum state in the initial state |Ψ^'(0)⟩. Note that P_j^ sc(t) is just the semiclassical dynamics, which is fully driven by the semiclassical Hamiltonian, that is, U̇_ S(t)=-iH_ S(t)U_ S(t). P_j^ qc(t) represents quantum corrections to the semiclassical dynamics, which is shown up to the second order in g_k. Clearly, there are higher-order corrections, which can be computed similarly. Such quantum corrections become vanishing in the limit of g_k→0, i.e., in the semiclassical limit. Since in the semiclassical limit |α_k|→∞, we calculate the change in the mean photon number of the mode k, which is simply given by Δ n_k(t) = ⟨Ψ^'(t)|D^†(α⃗)(b_k^†b_k-|α_k|^2)D(α⃗)|Ψ^'(t)⟩ = ⟨Ψ^'(0)|U^'†(t)(b_k^†b_k+α_k^∗b_k+α_kb_k^†)U^'(t)|Ψ^'(0)⟩. Plugging Eq. (<ref>) into (<ref>), using the fact that the fields are in the vacuum state in the initial state |Ψ^'(0)⟩ and U_ S(t) commutes with the field operators, and taking the semiclassical limit, we readily have Δ n_k(t)=-Ω_k Re∫_0^ti⟨ V_k(τ)⟩_0e^i(ω_kτ+ϕ_k)dτ. Similarly, we can calculate the variance in the photon number of the mode k. σ_k^2(t) = ⟨Ψ^'(t)|D^†(α⃗)(b_k^†b_k-|α_k|^2)^2D(α⃗)|Ψ^'(t)⟩-Δ n_k^2(t) = |α_k|^2+Δ n_k(t)-Δ n_k^2(t) +Ω_k^2/2∫_0^t∫_0^t⟨ V_k^†(τ_1)V_k(τ_2)⟩_0e^-iω_k(τ_1-τ_2)dτ_1dτ_2 -Ω_k^2 Re∫_0^tdτ_1∫_0^τ_1dτ_2⟨ V_k(τ_1)V_k(τ_2)⟩_0 × e^iω_k(τ_1+τ_2)+2iϕ_k. When t=0, one simply has σ_k^2(0)=|α_k|^2, which is the feature of a coherent state. In practice, since |α_k|^2 is a large number in the semiclassical limit, we subtract |α_k|^2 from σ_k^2(t) and calculate the change in the variance, i.e., Δσ_k^2(t)=σ_k^2(t)-|α_k|^2. This quantity characterizes the variation of the width of photon statistical distribution. Note that Eqs. (<ref>) and (<ref>) are only justified in the semiclassical limit g_k→ 0, α_k→∞, and g_k|α_k|=Ω_k. In other words, they are inapplicable in the quantum and crossover regimes. As long as the semiclassical time-evolution operator is obtained, we can easily compute Δ n_k(t) and Δσ_k^2(t). The time-evolution operator U_ S(t) can be computed via directly integrating the time-dependent Schrödinger equation. Alternatively, it can be numerically or analytically computed with (generalized) Floquet theory. § APPLICATIONS In this section, we apply the present variational approach and the semiclassical approach to study the dynamics of some light-matter systems. In doing so, we address the consistency between the quantum variational dynamics and semiclassical dynamics in the presence of a large number of photons and explore the role of quantum corrections. 
In the following calculations, the quantum system is assumed to be initially in the ground state while the field is initially in a coherent state. Hereafter, the variational results will be denoted by “M-D_2” with M specifying a concrete number of coherent states used in simulations. The semiclassical results calculated from Eqs. (<ref>), (<ref>), and (<ref>) will be denoted by “SC” in the plots. §.§ Jaynes-Cummings model To begin with, we consider the JC model, the Hamiltonian of which reads H=1/2ω_0σ_z+ω b^†b+g/2(bσ_++b^†σ_-), where ω_0 is the transition frequency between two levels: |1⟩ and |2⟩, and σ_z=|2⟩⟨2|-|1⟩⟨1| and σ_+=σ_-^†=|2⟩⟨1|. The notations for the field part are the same as in Eq. (<ref>) and since there is a single mode, we have neglected the subscripts that label different modes for the field. This model and its semiclassical counterpart are exactly solvable, which provides transparent insights into the system and field dynamics as well as the quantum corrections to the semiclassical dynamics. For the quantized JC model, we use the variational approach to calculate the dynamics of the system and field. In the following, we introduce some analytical results from the semiclassical JC model. The semiclassical Hamiltonian of the JC model is given by H_ S(t)=1/2ω_0σ_z+Ω/2(σ_+e^-iω t+σ_-e^iω t). The time-evolution operator reads <cit.> U_ S(t) = e^-iω tσ_z/2[cos(Ω_ Rt/2)-isin(Ω_ Rt/2)δσ_z+Ωσ_x/Ω_ R], where δ=ω_0-ω is the detuning and Ω_ R=√(Ω^2+δ^2) is the Rabi frequency. With the time-evolution operator and considering the initial state of the two-level system |ψ(0)⟩=|1⟩, we can easily obtain the excited-state population of the two-level system P_2^ sc(t)=Ω^2/Ω_ R^2sin^2(Ω_ Rt/2). This is the celebrated Rabi oscillation. The change in photon number for the semiclassical JC model is obtained as Δ n(t)=-P_2^ sc(t). This means that the change in photon number and the excited-state population of the two-level system oscillate out of phase, which just reflects the conservation of the excitation number of the total system. The change in the variance of the mean photon number is given by Δσ^2(t) = (Ω^2-δ^2)Ω^2/Ω_ R^4sin^2(Ω_ Rt/2)-Ω^4/Ω_ R^4sin^4(Ω_ Rt/2) -Ω^4t/2Ω_ R^3sin(Ω_ Rt). The above analytical results on the field part are obtained in the semiclassical limit and quantum corrections are not involved. We now address how the deviation in the dynamics between the quantum and semiclassical JC model emerges due to the variation of the the initial mean numbers of photons by comparing the variational and analytical results. In Fig. <ref>, we show the dynamics of the system and field by computing the excited-state population P_2(t), the change in photon number Δ n(t), and the change in variance of photon number Δσ^2(t) from the quantum and semiclassical JC models for ω=ω_0. For the quantum model, we consider the three values of the initial mean number of photons α^2 ranging from 10^5 to 10^3. The coupling constant and the Rabi frequency are set by gα=Ω=0.5ω_0. Figures <ref>(a)-<ref>(c) show that when α^2=10^5, the quantum and semiclassical dynamics for either the system or field are in perfect agreement in the time interval, indicating the consisitency between the quantum and semiclassical models in the large mean photon number limit. Figures <ref>(d)-<ref>(f) show that when α^2=10^4, the quantum variational dynamics and the semiclassical dynamics are almost the same when t<200ω_0^-1 while deviate from each other when t>200ω_0^-1. Moreover, Figs. 
<ref>(g)-<ref>(i) show that when α^2=10^3, the quantum variational dynamics just coincides with the semiclassical dynamics in the first few cycles of Rabi oscillation. As the time goes on, one readily notes that for full quantum dynamics, the Rabi oscillation in the population of the system experiences a significant collapse, which is a quantum feature and is absent in the semiclassical limit <cit.>. The striking difference between the quantum and the semiclassical results reflects the role of quantum corrections to the semiclassical dynamics in the quantum-semiclassical crossover. To provide a criterion for assessing the time scale within which the quantum and semiclassical models yield consistent dynamics, we further explore the role of the quantum corrections. We calculate the difference between the variational population dynamics P_2(t) and the semiclassical dynamics P_2^ sc(t), which is just the “exact” quantum correction. Alternatively, we can calculate the quantum correction up to the second order in g for the JC model. It follows from Eqs. (<ref>) and (<ref>) that the second-order quantum correction to the semiclassical population dynamics is given by P_2^ qc(t) = g^2Ω^2/4Ω_ R^4{Ω^2t^2/4cos(Ω_ Rt)+4δ^2-Ω^2/4Ω_Rtsin(Ω_ Rt). .-4δ^2/Ω_ R^2sin^2(Ω_ Rt/2)}. The analytical result shows that the amplitude of oscillation is proportional to t^2. This means that even if g is a small quantity, the quantum correction can contribute significantly to the long-time limit. Moreover, this result is ill-defined as t→∞, suggesting that the second-order quantum correction to the semiclassical dynamics is insufficient in the quantum-semiclassical crossover and higher-order quantum corrections must be involved. In Fig. <ref>, we plot the numerically exact and second-order quantum corrections as a function of time t for α^2=10^4, Ω=gα=0.5ω_0, and ω=ω_0. One readily finds that the second-order quantum correction plays a predominant role and agrees with the exact one in the finite time interval. This finding actually can be used to estimate the upper bound of time t_c below which the quantum dynamics and semiclassical dynamics are consistent. Roughly speaking, we require g^2t_c^2<1 such that the second-order correction is a small quantity. Using g=Ω/|α|, we have t_c<|α|/Ω, that is, the larger the mean photon number is, the more the quantum and semiclassical dynamics coincide with each other over a longer time interval. For instance, when considering the parameters in Fig. <ref>(c), we find that t_c≈63ω_0^-1, which turns out to be a good estimation of the time scale. The present variational approach can also be used to calculate the photon-number distribution at given times. Figures <ref>(a)-<ref>(c) show p(n,t) as a function of n at two given times for ω=ω_0, g=0.5ω_0/α, and three values of α. When t=0, the photon-number distribution is the Poisson distribution peaked at n=|α|^2, which is the nature of the coherent state. When α^2=10^5 or 10^4, we see that there is almost no difference between the initial (t=0) and final (t=500ω_0^-1) photon-number distribution. This finding is consistent with the results in Figs. <ref>(c) and <ref>(f), where one finds that although the oscillation amplitude of the change in the variance of photon number Δσ^2(t) increases with time t, its magnitude is far smaller than the initial variance σ^2(0)=α^2, i.e., Δσ^2(t)≪σ^2(0). Consequently, the width of the photon-number distribution hardly changes in the semiclassical limit. However, when α^2=10^3, Fig. 
<ref>(c) shows that the final photon-number distribution is different from the initial one. The present results suggest that the photon-number distribution is rarely changed in the semiclassical limit but can be changed due to the light-matter interaction in the crossover region between the quantum and semiclassical limit. The present results suggest that the variational approach is applicable to light-matter systems not only in the quantum limit <cit.> but also in the semiclassical limit as well as the crossover in between, and is feasible to tackle a relatively large mean number of photons. On the other hand, the present results confirm that field dynamics can also be calculated in the semiclassical model. §.§ Rabi model §.§.§ Single-mode case The JC model may be inadequate due to the RWA and we move to consider the quantum Rabi model, the Hamiltonian of which reads H=1/2ω_0σ_z+ω b^†b+g/2(b+b^†)σ_x, where σ_x=σ_++σ_-. We apply the variational approach to study the dynamics of the quantum Rabi model. We also calculate the semiclassical dynamics of the system and field, which is based on the numerically calculated time-evolution operator for the semiclassical Rabi model: H_ S(t)=1/2ω_0σ_z+Ωcos(ω t)σ_x. We examine the consistency between the variational dynamics and the semiclassical dynamics under the initial condition of a large number of photons, i.e., α^2≫1. In Fig. <ref>, we show the dynamics of the excited-state population P_2(t), the change in photon number Δ n(t), and the change in the variance of the mean photon number Δσ^2(t) for the quantum and semiclassical Rabi model. We consider the three values of α^2 and ω=ω_0. The Rabi frequency and coupling constant are given by Ω=0.5ω_0 and g=Ω/α, respectively. One notes that the quantum variational dynamics and semiclassical dynamics are almost the same when α^2=10^5. However, when α^2=10^4 or 10^3, the difference between the variational and semiclassical dynamics appears as the time increases, which is attributed to the quantum corrections to the semiclassical dynamics. This situation is similar to that encountered in the JC model. In addition, one notes that there are beat behaviors in Fig. <ref>, which results from the counter-rotating terms <cit.>. To explore the validity of the variational approach in the quantum limit, we calculate the dynamics of the system and field for the quantum Rabi model by using the variational approach and the numerically exact diagonalization (ED) of the quantum Rabi Hamiltonian for ω=ω_0, g=0.2ω_0, and α^2=5. Since g=0.2ω_0 is comparable to ω, the light-matter coupling is ultrastrong. This is a regime where the two-level system ultrastrongly interacts with a few photons, and thus the semiclassical approach is inapplicable. Figure <ref> shows that in comparison with the ED method, the variational approach can produce the accurate dynamics of system and field as well as the photon-number distribution for the quantum Rabi model in an ultrastrong coupling regime. The present findings suggest that the variational approach provides a unified method to tackle light-matter systems in both the quantum and semiclassical limits. More importantly, it captures the dynamics of both system and field. This is of particular importance in potential applications of studying statistical properties of the field. §.§.§ Two-mode case We now exploit the developed formalisms to study the system and field dynamics in the two-mode cases, where multiphoton processes become remarkable. 
The Hamiltonian of the two-mode quantum Rabi model is given by H=ω_0/2σ_z+∑_k=1^Nω_kb_k^†b_k+∑_k=1^Ng_k/2(b_k+b_k^†)σ_x, where the number of modes is N=2. The Hamiltonian of the two-mode semiclassical Rabi model is given by H_ S(t)=ω_0/2σ_z+[Ω_1cos(ω_1 t)+Ω_2cos(ω_2 t)]σ_x, which describes a bichromatically driven two-level system. As is well-known, the two-level system may absorb n+1 photons from one field and emit n photons into the other field, which can lead to the multiphoton resonances occurring at ω_0≈ (n+1)ω_1-nω_2 with n an integer. Such multiphoton resonances have been studied in the semiclassical model with and without the RWA <cit.>, and are illustrated by calculating the transition probabilities of the two-level system. Here we revisit this phenomenon by calculating not only the population of the system but also the photon-number dynamics by making use of variational and semiclassical approaches. In the latter approach, the time-evolution operator of the semiclassical model is numerically calculated by the Runge-Kutta method. The multiphoton resonance condition can be numerically obtained by the generalized Floquet theory <cit.>. To be concrete, we consider ω_1=0.6449ω_0, ω_2=0.8449ω_0, and Ω_1=Ω_2=0.3ω_0 for the semiclassical model, which corresponds to the resonance that the two-level system absorbs two photons from the mode 2 and emit one photon into the mode 1. For the quantized field, we set g_1=g_2=0.3ω_0/α and α_1^2=α_2^2=α^2=10^5. In Figs. <ref>(a)-<ref>(c), we depict the excited-state population P_2(t), the change in photon number Δ n_k(t), and the change in the variance Δσ^2_k(t) for the quantum and semiclassical models. It is evident that the dynamics of the system and fields from the quantum model are in agreement with those from the semiclassical model, confirming that the variational approach can be applied to simulate the semiclassical dynamics in the two-mode cases provided the initial mean photon number is sufficiently large. In Fig. <ref>(a), we see that the excited-state population can reach a maximum value P_2(t)=1, which signifies the occurrence of resonance. In Fig. <ref>(b), the change in photon numbers in the two modes manifests the multiphoton feature of the resonance. Figures <ref>(d)-<ref>(f) show the dynamics of the system and fields for the model with quantized fields in the case of α^2_1=α^2_2=α^2=25, g_1=g_2=Ω_1/α=0.06ω_0. Comparisons between the variational results and those of the ED method confirm that the variational approach is applicable in the quantum limit in the two-mode case. In addition, we see that the system and field dynamics in the case of α^2=25 are apparently different from those in the case of α^2=10^5. Specifically, the excited-state population cannot reach a maximum value P_2(t)=1 in the former case, indicating the disappearance of the multiphoton resonance due to the quantum corrections. In Figs. <ref>(a)-<ref>(b), we compare the photon-number distribution p(n⃗,t) with n⃗=(n_1,n_2) at t=500ω_0^-1 calculated from the variational and the ED methods for the same parameters as in Figs. <ref>(d)-<ref>(f). The two methods predict almost the same photon-number distribution, which is apparently different from the initial two-dimensional Poisson distribution due to the strong light-matter coupling. 
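A minimal semiclassical reproduction of this two-mode setting is sketched below: the bichromatically driven two-level system is integrated numerically, and the change in photon number of each mode is then evaluated from the semiclassical expression Δn_k(t) = -Ω_k Re∫_0^t i⟨V_k(τ)⟩_0 e^{i(ω_kτ+ϕ_k)}dτ with V_k = σ_x. The integration time, tolerances, and the expectation of a roughly 2:1 photon exchange between the modes near the quoted resonance are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

# Illustrative parameters matching the resonance discussed above.
w0 = 1.0
w1, w2 = 0.6449 * w0, 0.8449 * w0
Om1, Om2 = 0.3 * w0, 0.3 * w0
phi1 = phi2 = 0.0

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def H_sc(t):
    return 0.5 * w0 * sz + (Om1 * np.cos(w1 * t) + Om2 * np.cos(w2 * t)) * sx

psi0 = np.array([0.0, 1.0], dtype=complex)             # ground state
t = np.linspace(0.0, 3000.0, 60001)
sol = solve_ivp(lambda tt, y: -1j * (H_sc(tt) @ y), (t[0], t[-1]), psi0,
                t_eval=t, rtol=1e-9, atol=1e-11)
psi = sol.y

P2 = np.abs(psi[0, :])**2                              # excited-state population
sx_exp = 2.0 * np.real(np.conj(psi[0, :]) * psi[1, :]) # <V_k(t)>_0 = <sigma_x(t)>

def delta_n(Om_k, w_k, phi_k):
    # change in photon number of mode k from the semiclassical formula quoted above
    integrand = 1j * sx_exp * np.exp(1j * (w_k * t + phi_k))
    return -Om_k * np.real(cumulative_trapezoid(integrand, t, initial=0.0))

dn1, dn2 = delta_n(Om1, w1, phi1), delta_n(Om2, w2, phi2)
# Near the multiphoton resonance P2 should undergo slow oscillations approaching unity,
# with the two modes exchanging photons in opposite directions (roughly dn2 ~ -2*dn1).
```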
§.§ Dicke model In this section, we consider the Dicke model <cit.>, which is described by the following Hamiltonian H=ω b^† b+1/2∑_j=1^N_q[ω_0,jσ_z,j+g(b+b^†)σ_x,j], where N_q is the number of the two-level systems, ω_0,j is the transition frequency of the jth two-level system, σ_μ,j is the Pauli matrix for the jth two-level system. The semiclassical counterpart of the Dicke model is given by H_ S(t)=∑_j=1^N_q[1/2ω_0,jσ_z,j+Ωcos(ω t)σ_x,j]. Figure <ref> shows the change in photon number Δ n(t) calculated by the variational and semiclassical approaches for N_q=2, ω=ω_0,1, Ω=0.5ω_0,1, and the three values of ω_0,2. For the quantum model, we consider two values of α^2 and g=Ω/α. In Figs. <ref>(a)-<ref>(c), we see that when α=10^5, the variational quantum dynamics coincides with the semiclassical dynamics. Figures <ref>(d)-<ref>(f) show that when α^2=10^3, the oscillation from the quantum model undergoes collapse. The present results on the consistency between the quantum and semiclassical models are similar to that in the JC and Rabi models. Figure <ref> also provides insights into how the field responds due to the presence of multiple two-level systems. When the two-level systems are identical, i.e., ω_0,2=ω_0,1, the photon number dynamics is similar to that of the Rabi model shown in Fig. <ref>(b) except that the amplitude of the oscillation is about 2, which reflects the fact that the two photons can be absorbed by the two subsystems. On the other hand, when the two-level systems have different transition frequencies, there are strong beat behaviors that can be tuned by the frequency difference of the two subsystems, which result from the two independent Rabi oscillations of the two subsystems that have different Rabi frequencies. § CONCLUSIONS In summary, we have presented a time-dependent variational approach to study the system and field dynamics of light-matter systems when the field is in a coherent state and possesses a finite mean number of photons. In addition to the variational approach, we have shown that the field dynamics can also be deduced from the system dynamics in the semiclassical model. By using the variational and semiclassical approaches, we have examined the consistency in the system and field dynamics between the quantum and semiclassical models of light-matter interaction in the large mean photon number regimes. We have illustrated that the variational approach can produce accurate semiclassical dynamics of either the system or the field as long as the initial mean photon number is sufficiently large. Moreover, it can also produce accurate quantum dynamics of the system and field when a few photons strongly interact with the quantum system and can apply to the crossover between the quantum and semiclassical limits. In the crossover region, we have shown that the excited-state population of the quantum system and the change in photon number experiences a collapse in the amplitude of the oscillations, which reflects the quantum feature. The present variational approach provides a unified treatment of light-matter interaction in the quantum and semiclassical limits as well as the crossover in between. The variational approach can also be extended to open quantum systems based on two routines. One is that the dissipation is taken into account by considering a set of harmonic oscillators to be the environment such as the well-known spin-boson model <cit.>. The other is that the dissipation is modeled by non-Hermitian Hamiltonians. 
The variational approach with the Davydov ansatz can be extended to solve the time-dependent Schrödinger equation with a non-Hermitian Hamiltonian. This has the potential to describe cavity-QED systems with a lossy cavity. Support from the National Natural Science Foundation of China (Grant No. 12005188, No. 11774226, and No. 11774311) is gratefully acknowledged. * § THE EQUATIONS OF MOTION FOR THE VARIATIONAL PARAMETERS AND NUMERICAL IMPLEMENTATION The variation of the joint state of |D_2^M(t)⟩ is given by ⟨δ D_2^M(t)| = ∑_l=1^M∑_j=1^N_S⟨ j|⟨ f_l|{δ A_lj^∗+A_lj^∗∑_k=1^N[δ f_lk^∗.. ×..(b_k-1/2f_lk)-1/2δ f_lkf_lk^*]}. Substituting ⟨δ D_2^M(t)| into Eq. (<ref>), one simply derives 0 = ∑_l=1^M∑_j=1^N_Sδ A_lj^∗⟨ j|⟨ f_l|[i∂_t-H^'(t)]|D_2^M(t)⟩ +∑_l=1^M∑_k=1^Nδ f_lk^∗∑_j=1^N_SA_lj^∗⟨ j|⟨ f_l|(b_k-1/2f_lk) ×[i∂_t-H^'(t)]|D_2^M(t)⟩ -∑_l=1^M∑_j=1^N_SA_lj^∗∑_k=1^Nδ f_lkf_lk^*/2⟨ j|⟨ f_l|[i∂_t-H^'(t)]|D_2^M(t)⟩. To ensure the above equation holds for arbitrary variation of the parameters, we simply have ⟨ j|⟨ f_l|[i∂_t-H^'(t)]|D_2^M(t)⟩=0, ∑_j=1^N_SA_lj^∗⟨ j|⟨ f_l|(b_k-1/2f_lk)[i∂_t-H^'(t)]|D_2^M(t)⟩=0. Equation (<ref>) corresponds to Eq. (<ref>) in the main text and can be used to simplify Eq. (<ref>), which leads to Eq. (<ref>) in the main text. To derive the explicit forms of the equations of motion, we use the time derivative of the trial state <cit.> |Ḋ_2^M(t)⟩ = ∑_n=1^M∑_i=1^N_S[a_ni+A_ni∑_p=1^Nḟ_npb_p^†]|i⟩|f_n⟩, where a_ni=Ȧ_ni-1/2A_ni∑_p=1^N(ḟ_npf_np^∗+f_npḟ_np^∗). To proceed, we carry out calculation for ⟨ j|⟨ f_l|Ḋ_2^M(t)⟩, ∑_j=1^N_SA_lj^∗⟨ j|⟨ f_l|b_k|Ḋ_2^M(t)⟩, ⟨ j|⟨ f_l|H^'(t)|D_2^M(t)⟩, and ∑_j=1^N_SA_lj^∗⟨ j|⟨ f_l|b_kH^'(t)|D_2^M(t)⟩ to express them in terms of the variational parameters, which yields ⟨ j|⟨ f_l|Ḋ_2^M(t)⟩=∑_n=1^M∑_i=1^N_S(a_nj+A_nj∑_p=1^Nf_lp^∗ḟ_np) S_ln, ∑_j=1^N_SA_lj^∗⟨ j|⟨ f_l|b_k|Ḋ_2^M(t)⟩=∑_n=1^M∑_j,i=1^N_S(A_lj^∗a_njf_nk+A_lj^∗A_nj∑_p=1^N(δ_k,p+f_lp^∗f_nk)ḟ_np) S_ln, [I⃗_j]_l = ⟨ j|⟨ f_l|H^'(t)|D_2^M(t)⟩ = ∑_n=1^M∑_i=1^N_S⟨ j|H_S(t)|i⟩ A_ni S_ln+∑_n=1^M∑_i=1^N_S∑_p=1^Ng_p/2A_ni(⟨ j|V_p^†|i⟩ e^-iω_ptf_np+⟨ j|V_p|i⟩ e^iω_ptf_lp^∗) S_ln, [I⃗_f]_lk = ∑_j=1^N_SA_lj^∗⟨ j|⟨ f_l|b_kH^'(t)|D_2^M(t)⟩ = ∑_n=1^M∑_j,i=1^N_SA_lj^∗⟨ j|H_S(t)|i⟩ A_ni S_lnf_nk+∑_n=1^M∑_j,i=1^N_S∑_p=1^Ng_p/2A_lj^∗A_ni[⟨ j|V_p^†|i⟩ e^-iω_ptf_npf_nk +⟨ j|V_p|i⟩ e^iω_pt(δ_p,k+f_lp^∗f_nk)] S_ln, where [I⃗_f]_lk is viewed as a column vector with its component position being specified by l and k. By substituting these quantities into Eqs. (<ref>) and (<ref>), we can rewrite the equations of motion in a matrix form i([ S C^(1); S C^(2); ⋱ ⋮; S C^(N_S); C^(1)† C^(2)† ⋯ C^(N_S)† D ])([ a⃗_1; a⃗_⃗2⃗; ⋮; a⃗_N_S; Ḟ⃗ ])=([ I⃗_1; I⃗_2; ⋮; I⃗_N_S; I⃗_f ]), where S is an M× M matrix whose elements are S_ln, a⃗_j=(a_1j,a_2j,…,a_Mj)^T, Ḟ⃗ is a vector whose components are given by ḟ_np, C^(j) is an M× MN matrix whose elements are given by C_l,np^(j)=A_nj S_lnf_lp^∗, and D is an MN× MN matrix whose matrix elements are D_lk,np=∑_j=1^N_SA_lj^∗A_nj(δ_k,p+f_lp^∗f_nk) S_ln. To perform numerical simulation, we first numerically solve the matrix equation (<ref>) as a set of linear equations to obtain the values of a⃗_j and Ḟ⃗. The former can be combined with Eq. (<ref>) to calculate the derivatives of A_nj. The latter are just the derivatives of f_np. On obtaining the derivatives of A_nj and f_np, we can use the 4th-order Runge-Kutta algorithm to update the variational parameters.
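To make the appendix recipe concrete, the sketch below implements the two ingredients it describes: evaluating S_ln and the observables P_j(t) and n_k(t) from the arrays of variational parameters, and advancing an implicit linear system of the form i M(θ,t) θ̇ = I(θ,t) with a fourth-order Runge-Kutta step. The assembly of the full matrix and right-hand side, and the conversion from a_nj back to Ȧ_nj, are left as user-supplied callables; the array shapes and names are our own conventions, not the authors' code.

```python
import numpy as np

def overlap_matrix(f):
    """S_ln = <f_l|f_n> for multimode coherent states; f has shape (M, N)."""
    fl = f.conj()[:, None, :]          # (M, 1, N)
    fn = f[None, :, :]                 # (1, M, N)
    return np.exp(np.sum(fl * fn - 0.5 * np.abs(fl)**2 - 0.5 * np.abs(fn)**2, axis=-1))

def population(A, f, j):
    """P_j(t) = sum_{l,n} A_lj^* S_ln A_nj, with A of shape (M, N_S)."""
    S = overlap_matrix(f)
    return np.real(A[:, j].conj() @ S @ A[:, j])

def mean_photon_number(A, f, alpha, k):
    """n_k(t) in the laboratory frame, following the expression given in Sec. II."""
    S = overlap_matrix(f)
    w = (alpha[k].conj() + f[:, k].conj())[:, None] * (alpha[k] + f[None, :, k])
    return np.real(np.einsum('lj,ln,nj->', A.conj(), S * w, A))

def rk4_step_implicit(theta, t, dt, build_lhs, build_rhs):
    """One RK4 step for the implicit system  i * M(theta, t) * d(theta)/dt = I(theta, t).

    build_lhs / build_rhs are user-supplied callables assembling the matrix and the
    right-hand side of the appendix matrix equation (details omitted here).
    """
    def deriv(th, tt):
        return np.linalg.solve(1j * build_lhs(th, tt), build_rhs(th, tt))
    k1 = deriv(theta, t)
    k2 = deriv(theta + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = deriv(theta + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = deriv(theta + dt * k3, t + dt)
    return theta + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# quick sanity check of the observable helpers with random parameters:
rng = np.random.default_rng(0)
M, NS, N = 3, 2, 1
A = rng.normal(size=(M, NS)) + 1j * rng.normal(size=(M, NS))
f = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
norm = sum(population(A, f, j) for j in range(NS))   # equals <D2|D2>, 1 only if normalized
```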
http://arxiv.org/abs/2407.12263v1
20240717021828
A difference-free conservative phase-field lattice Boltzmann method
[ "Chunheng Zhao", "Saumil Patel", "Taehun Lee" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
a]Chunheng Zhao b]Saumil Patel c]Taehun Lee [a]Sorbonne Université and CNRS, Institut Jean Le Rond d'Alembert UMR 7190, F-75005 Paris, France [b]Computational Science Division, Argonne National Laboratory [c]Department of Mechanical Engineering, The City College of the City University of New York § ABSTRACT We propose an innovative difference-free scheme that combines the one-fluid lattice Boltzmann method (lBM) with the conservative phase-field (CPF) lBM to effectively solve large-scale two-phase fluid flow problems. The difference-free scheme enables the derivation of the derivative of the order parameter and the normal vector through the moments of the particle distribution function (PDF). We further incorporate the surface tension force in a continuous surface stress form into the momentum equations by modifying the equilibrium PDF to eliminate the divergence operator. Consequently, the entire computation process, executed without any inter-grid finite difference formulation, demonstrates improved efficiency, making it an ideal choice for high-performance computing applications. We conduct simulations of a single static droplet to evaluate the intensity of spurious currents and assess the accuracy of the scheme. We then introduce the density or viscosity ratio and apply an external body force to model the Rayleigh-Taylor instability and the behavior of a single rising bubble, respectively. Finally, we employ our method to study the phenomenon of a single bubble breaking up in a Taylor-Green vortex. The comparison between the difference-free scheme and the finite difference method demonstrates the scheme's capability to yield accurate results. Furthermore, based on the performance evaluation, the current scheme exhibits an impressive 47% increase in efficiency compared to the previous method. * We propose a new LB scheme to solve the two-phase fluid flow system without derivative calculation. * The mass-conserving character of the new scheme is validated. * The new scheme highly improves the efficiency for large-scale emulsion problems. lattice Boltzmann method difference-free mass-conservation 0000 1111 0000 1111 § INTRODUCTION Emulsions, which involve the multi-phase flows of two immiscible fluids with similar densities <cit.>, have found numerous applications in industry such as froth flotation <cit.>, medicine delivery <cit.>, and oil production <cit.>. The behaviors of emulsions, such as intricate surface deformation, droplet breakup, and coalescence, pose challenges to numerical simulations. The methodology for modeling the surface effect in this case is of significant importance. When simulating the surface force using the continuum surface force (CSF) method <cit.>, the presence of small droplets with large curvature can lead to computational instability. The continuous surface stress (CSS) method improves the numerical stability since the curvature evaluation is not essential compared to the CSF <cit.>. However, CSS suffers from the problem of spurious currents <cit.>. Based on the diffuse interface method, potential form surface force formulation can alleviate the difficulty of the curvature calculation as well, and it is able to eliminate the spurious currents to round off <cit.>. However, in this case, mass loss is a significant concern due to the large curvature effect <cit.>. Another difficulty of emulsion simulation is that conducting large-scale three-dimensional simulations requires extensive computational resources and time. 
Normally, when derivative computations are needed, one has to employ the message-passing interface (MPI) library <cit.> to transfer the essential data between compute nodes, which is time-consuming. Numerous numerical studies have been conducted on emulsions using the diffuse interface method, but a recurring issue encountered is the problem of mass loss <cit.>. Using the pseudopotential lattice Boltzmann method (lBM), droplet statistics and correlation with turbulence were systematically investigated <cit.>. However, in those studies, significant mass loss was found due to droplet dissolution. The Cahn-Hilliard (C-H) method was employed for modeling emulsions of binary flow <cit.>. In this method, a finite interface thickness is considered, and the evolution of the interface is captured by a fourth-order partial differential equation. Although the C-H equation is in a so-called conservative form, it suffers from the mass loss problem for the simulation of small droplets <cit.>. More recently, the conservative phase-field (CPF) method has been utilized to solve the interface evolution using a second-order partial differential equation. Unlike the C-H method, the CPF method eliminates the curvature-driven term, enhancing its conservative nature for simulations involving large curvature morphologies <cit.>. This characteristic is particularly beneficial for cases in which the C-H method suffers from mass loss of large-curvature droplets <cit.>. Additionally, the CPF method achieves higher efficiency compared to the C-H method, as it avoids higher-order derivative calculations. Geier et al. <cit.> utilized the lBM to solve the CPF equation. Based on this method, ternary fluid flow and adaptive mesh refinement technology have also been applied to the lBM CPF <cit.>. Using asymptotic analysis on a diffusive scale <cit.>, the CPF lBM can evaluate the derivative of the order parameter ϕ and the normal vector 𝐧 from the central moment of the PDF <cit.>, up to 𝒪 (ϵ^2), where ϵ∼Δ x or ϵ^2∼Δ t under diffusive scaling. The lBM has been widely validated as a solver for multi-phase flow simulations using different models. In the single-fluid model, it has been successfully employed to solve the Navier-Stokes (N-S) equations, where phases are separated by pressure. Alternatively, the multi-fluid model couples the N-S equations with phase-field equations, enabling an accurate representation of phase separation <cit.>. The evolution of the particle distribution function (PDF) in the lBM follows a two-step process: propagation and collision. This localized computation of the collision operator makes the lBM well suited for parallel computing. Additionally, the advection, including the material derivative of the PDF, is typically solved using the Crank-Nicolson method, a semi-implicit finite difference approach that ensures numerical stability. An advantageous feature of the lBM is its ability to implicitly recover derivatives without the need for additional finite difference calculations <cit.>. For instance, the original ideal gas lBM computes the pressure gradient and viscous stress force using the moments of the PDF. During simulation, the propagation step provides crucial information for derivative calculations. By applying specific assumptions and considering certain limits, we are able to reconstruct the equilibrium PDF and extract valuable information such as the surface stress, density derivative, and normal vector <cit.>. 
A recent advancement by Reis introduced a one-fluid model that computes the surface stress implicitly by modifying the equilibrium PDF <cit.>. By neglecting a high-order error of 𝒪(Ma^2/Re), this model recovers the N-S momentum equation with the surface force expressed in CSS form, together with the pressure evolution equation. Since the divergence of the surface stress is implicitly solved, the efficiency is greatly improved. We here propose to combine the one-fluid lBM model with the CPF lBM <cit.> to construct a difference-free lBM to solve the binary fluid flow. In this method, unlike the advection-diffusion-sharpening lBM <cit.>, the normal vector and density gradient are implicitly recovered using the central moment, which can be obtained through the CPF lBM. By combining these two methods, we can effectively solve binary fluid flow without the need for the finite difference method. All derivative computations are implicitly resolved using the moments of the PDF. The difference-free method proposed in this study is rigorously validated through a series of tests. In the single-droplet simulation, we assess the accuracy and reliability of the method, and it demonstrates excellent agreement with established research in this field. Subsequently, we expand the scope by incorporating density ratio and viscosity ratio parameters, allowing us to successfully simulate both the Rayleigh-Taylor instability and the single rising bubble benchmarks. These test cases further confirm the effectiveness and consistency of our method, aligning well with previous findings. Furthermore, we investigate the challenging scenario of a single droplet breaking up within a Taylor-Green vortex cube using the derived difference-free scheme. Thanks to the localized evaluation of derivatives, our simulation exhibits enhanced efficiency, especially for large-scale simulations. In particular, the results obtained from this problem show strong consistency with our previously developed approach, providing additional confidence in the accuracy and robustness of the difference-free method. § NUMERICAL METHODOLOGY §.§ governing equations The governing equations consist of the Navier-Stokes equations augmented with the continuous surface stress tensor, which represents the surface tension stress, and the conservative phase-field equation. Mathematically, these equations can be expressed as follows: 1/ρ c_s^2∂ p/∂ t+∇·𝐮=0, ∂ρ𝐮/∂ t+∇·(ρ𝐮⊗𝐮)= ∇·(Π_𝐯+Π_𝐬-p𝐈)+ρ𝐆, ∂ϕ/∂ t+∇·(ϕ𝐮)= ∇· M(∇ϕ-4ϕ(1-ϕ)/δ𝐧). Eq. (<ref>) represents the pressure evolution equation, where ρ denotes the local density, p represents the pressure, 𝐮 signifies the velocity vector, and c_s corresponds to the speed of sound. In the low Mach number regime (Ma=|𝐮|/c_s≪1), it is valid to assume that the velocity field is approximately divergence-free, as established in previous studies <cit.>. On the other hand, Equation (<ref>) introduces the tensor product operation denoted by ⊗, the identity tensor represented by 𝐈, and the external body forcing term denoted by ρ𝐆. The terms Π_𝐯 and Π_𝐬 correspond to the viscous stress and the continuous surface stress, respectively, and they can be expressed as follows: Π_𝐯= η(∇𝐮+(∇𝐮)^T), Π_𝐬= σ|∇ϕ|(𝐈-𝐧⊗𝐧). Here, η denotes the local viscosity, δ represents the interface thickness, σ corresponds to the surface tension between the two phases, and 𝐧 represents the normal vector. 
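To make the stress definitions above concrete, the following NumPy sketch evaluates the CSS tensor Π_𝐬 pointwise from a given gradient field of the order parameter; the array layout and the small regularization constant used where ∇ϕ vanishes are our own illustrative choices rather than part of the original formulation:

import numpy as np

def surface_stress(grad_phi, sigma, eps=1e-12):
    # Continuous surface stress Pi_s = sigma * |grad(phi)| * (I - n (x) n),
    # with n = grad(phi) / |grad(phi)|. grad_phi has shape (2, ny, nx).
    mag = np.sqrt(grad_phi[0]**2 + grad_phi[1]**2)
    n = grad_phi / (mag + eps)                      # unit normal; eps guards the bulk phases
    eye = np.eye(2)[:, :, None, None]               # identity tensor broadcast over the grid
    nn = n[:, None, :, :] * n[None, :, :, :]        # outer product n (x) n at every node
    return sigma * mag * (eye - nn)                 # shape (2, 2, ny, nx)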
In Equation (<ref>), the order parameter ϕ takes on values within the range of [0,1], serving as a distinguishing parameter between different fluids. Specifically, within the interface region, 0<ϕ<1, the order parameter varies, while in the bulk fluids, it takes either the value ϕ=0 or ϕ=1. Additionally, in Equation (<ref>), the coefficient M is referred to as the diffusion coefficient or mobility. Eqs. (<ref>) and (<ref>) in their current form, without the inclusion of a body force, are formulated in what is commonly referred to as the conservative form. This particular formulation is advantageous as it ensures the preservation of both momentum and mass throughout the simulation <cit.>. Using a Chapman-Enskog analysis with convective scaling, it becomes possible to recover the stress force, which includes the contribution of surface tension <cit.>. A comprehensive derivation of the conservative phase-field equation given by Eq. (<ref>), can be found in previous work such as <cit.>. Moreover, a comparative analysis of the mass conservation characteristics between the Cahn-Hilliard method and the conservative phase-field method has been explored in previous research <cit.>. §.§ one-fluid lattice Boltzmann model for N-S equations Governing Eqs. (<ref>) and (<ref>) are solved using the recently developed one-fluid lattice Boltzmann method (lBM) as proposed by Reis <cit.>. In this approach, the surface tension stress is incorporated into the equilibrium particle distribution function (PDF). Notably, the divergence term in Equation (<ref>) associated with surface stress is implicitly resolved by utilizing moments of the PDF. We start with the discrete Boltzmann equation of the PDF f_i with an external forcing term: (∂/∂ t+𝐞_i·∇)f_i=-1/λ_ρ(f_i-f_i^eq)+F_i. The PDF restriction is given as ∑_i f_i=p/c_s^2. In the above equation, 𝐞_i denotes the discrete velocity vector in a D2Q9 lattice <cit.>, which is given as: e_i={ (0,0), i=0 (cosθ_i,sinθ_i), θ_i=(i-1)π/2, i=1,2,3,4 √(2)(cosθ_i,sinθ_i), θ_i=(i-5)π/2+π/4, i=5,6,7,8, . The parameter λ_ρ represents the relaxation time, which is connected to the local kinematic viscosity ν through the equation ν=λ_ρ c_s^2, where c_s=1/√(3). The external forcing term F_i is comprised of two components: the compensation part S_i (provided in <ref>), and the external body force part: R_i=t_i(𝐞_i-𝐮/c_s^2+𝐞_i·𝐮/c_s^4𝐞_i)·(ρ𝐆). As demonstrated in the work by Reis <cit.>, the equilibrium PDF f^eq_i, which incorporates the surface stress, can be expressed as follows: f_i^eq=t_i(p/c_s^2+ρ( 𝐞_i ·𝐮/c_s^2+ (𝐞_i·𝐮)^2/2c_s^4- |𝐮|^2/2c_s^2) +1/2c_s^4Π_s :(𝐞_i⊗𝐞_i-c_s^2𝐈)). Here, the weights t_i associated with the equilibrium PDF have specific values: t_0=4/9, t_1=t_3=t_5=t_7=1/9, and t_2=t_4=t_6=t_8=1/36. Notably, through the modification of the equilibrium PDF, it is observed that the explicit divergence operator is implicitly resolved up to 𝒪(Ma^2) accuracy. Further details and a comprehensive Chapmann-Enskog analysis of this aspect can be found in the appendix. Using the Crank-Nicolson finite difference method, the discrete Boltzmann equation given by Eq. (<ref>) can be effectively solved through the application of the Lattice Boltzmann method <cit.>: f̅_i(𝐱+Δ t 𝐞_i,t+Δ t)-f̅_i(𝐱,t)=-1/τ_ρ+0.5(f̅_i(𝐱,t)-f̅_i^eq(𝐱,t))+Δ t F_i, where τ_ρ=λ_ρ/Δ t denotes the dimensionless relaxation time, and the modified equilibrium PDF is given as f̅_i^eq=f_i^eq-0.5Δ t F_i. The solution to Eq. (<ref>) is obtained by performing a local operator collision and a non-local operator propagation. 
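To illustrate the construction of the modified equilibrium PDF above, a minimal NumPy sketch for a single D2Q9 lattice node follows. The velocity set is ordered with the four axis directions first and the four diagonals last, with the standard weights for that ordering; the function signature and the way Π_𝐬 is passed in are our own choices for exposition.

import numpy as np

# D2Q9 velocities (rest, four axis, four diagonal directions) and their weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
T = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
CS2 = 1.0 / 3.0                                   # speed of sound squared, c_s^2 = 1/3

def f_eq(p, rho, u, Pi_s):
    # f_i^eq = t_i [ p/c_s^2 + rho( e_i.u/c_s^2 + (e_i.u)^2/(2c_s^4) - |u|^2/(2c_s^2) )
    #               + Pi_s : (e_i e_i - c_s^2 I) / (2 c_s^4) ]
    eu = E @ u                                     # e_i . u for all nine directions
    uu = u @ u                                     # |u|^2
    stress = np.einsum('ab,iab->i', Pi_s,
                       E[:, :, None] * E[:, None, :] - CS2 * np.eye(2))
    return T * (p / CS2
                + rho * (eu / CS2 + eu**2 / (2 * CS2**2) - uu / (2 * CS2))
                + stress / (2 * CS2**2))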
The after-collision PDF, denoted as f_i^*, can be calculated using the following equation: f_i^*=f̅_i-1/τ_ρ+0.5(f̅_i-f̅_i^eq)+Δ t F_i. It is followed by a Lax–Wendroff propagation method: f̅_i(𝐱,t+Δt)=f_i^*(𝐱,t)-c[f_i^*(𝐱,t)-f_i^*(𝐱- 𝐞_i,t)] -0.5c(1-c)[f_i^*(𝐱+Δt𝐞_i,t)-2f_i^*(𝐱,t)+f_i^*(𝐱-Δt𝐞_i,t)], f̅_i(𝐱+Δt𝐞_i,t+Δt)= f̅_i(𝐱,t+Δt), where the Courant number c=0.997 is fixed in our current work. Finally, the pressure and momentum can be recovered by the PDFs by: p=c_s^2∑_i f̅_i+c_s^2Δ t/2𝐮·∇ρ, ρ𝐮=∑_i f̅_i 𝐞_i-Δ t/2ρ𝐆. §.§ conservative phase-field lattice Boltzmann method The conservative phase-field equation is solved using the lattice Boltzmann scheme described in Geier et al. <cit.>. The discrete Boltzmann equation for the PDF of the order parameter g_i can be expressed as follows: (∂/∂ t+𝐞_i·∇)g_i=-1/λ_ϕ(g_i-g_i^eq), with g_i^eq=t_iϕ[1+𝐞_i·𝐮/c_s^2+(𝐞_i·𝐮)^2/2c_s^4-|𝐮|^2/2c_s^2]+t_i𝐞_i·𝐒. The source term 𝐒 is incorporated to account for the separation flux term in Eq. (<ref>). It can be expressed as follows: 𝐒=4ϕ(1-ϕ)/δ𝐧. By means of the Crank-Nicholson method, the lattice Boltzmann equation can be shown as: g_i(𝐱+Δ t 𝐞_i,t+Δ t)-g_i(𝐱,t)=-1/τ_ϕ+0.5(g_i(𝐱,t)-g_i^eq(𝐱,t)), where τ_ϕ=λ_ϕ/Δ t denotes the relaxation time which is related to mobility M=τ_ϕ c_s^2Δ t. Similar to the one-fluid lBM, the lattice Boltzmann equation given by Eq. (<ref>) is solved using a collision and Lax-Wendroff propagation method, with a chosen Courant number c=0.997. The order parameter ϕ can be computed as the summation of g_i values, while the local density ρ is determined as ρ=ρ_1ϕ+ρ_2(1-ϕ). The density gradient ∇ρ and the normal vector 𝐧 in Eq. (<ref>) and (<ref>) can be determined using the central moment of the PDF, as demonstrated in Geier et al. <cit.>. It has been established that the normal vector and the derivative of the order parameter can be accurately recovered through the central moment approximation up to 𝒪(Ma^3). Within this method, we represent the first-order central moment vector as 𝐊_1, which can be expressed as follows: 𝐊_1=∑_i g_i(𝐞_i-𝐮 ). The normal vector can be recovered from the central moment thanks to the lattice Boltzmann method: 𝐧=-𝐊_1/|𝐊_1|. Furthermore, once we have the expression for the normal vector, we can obtain the first derivative of the order parameter as follows: ∇ϕ=1/(τ_ϕ+0.5)c_s^2(τ_ϕ c_s^2𝐒-𝐊_1). The density gradient can be calculated as ∇ρ=Δρ∇ϕ, where Δρ=ρ_1-ρ_2 represents the density difference between the two fluids. The detailed derivation of this expression is provided in Appendix <ref>. Finally, let us summarize the computation process of this difference-free method. The order parameter ϕ is calculated from the moments of the PDF g_i, while the gradient ∇ϕ and the normal vector 𝐧 are obtained from the central moment 𝐊_1. With these values, we can then recover the pressure and the momentum of the fluid flow using Eq. (<ref>) and (<ref>). § SIMULATION RESULTS §.§ Single static droplet The objective of the single static droplet simulation is to assess the accuracy and stability of the difference-free scheme. As the initial droplet morphology deviates from a perfect circle, the interface undergoes deformation driven by curvature differences until it reaches equilibrium. It is worth noting that inconsistent discretization schemes or imbalanced surface tension force formulations can lead to the emergence of spurious currents, as described in previous studies <cit.>. 
In the 2-D or 3-D droplet simulation, Eq. (<ref>) provides an approximate solution rather than an exact one-dimensional solution. Consequently, after initialization, the droplet undergoes further deformation due to this approximation. The interface deformations induce spurious currents, which are eventually dissipated by viscous effects, leading to a pressure equilibrium. In our previous work, we evaluated three surface force formulations for the single static droplet: continuum surface force (CSF) <cit.>, finite difference based continuous surface stress (FDCSS) <cit.>, and potential form surface force <cit.>. In this study, we apply the same parameters and compare the results with the previous work using the difference-free scheme. The Laplace number, La = σρ D/η, is used to characterize the balance between surface effects and viscous effects. In this test, a droplet with a diameter D is initialized at the center of a square domain with a length of L_0 = 2D. The density ratio and viscosity ratio are fixed at ρ_1/ρ_2=1 and η_1/η_2=1. It is important to note that in this comparison, the only difference between the current numerical scheme and the previous method described in <cit.> lies in the surface effect. The droplet shape is described by the hyperbolic tangent function: ϕ=1/2[1+tanh2(|𝐳|-D/2)/δ]. The interface thickness δ is characterized by the Cahn number Cn = δ/D = 0.06. The distance from a point to the droplet interface is denoted by |𝐳|-D/2, where |𝐳| represents the distance between the local grid point and the center of the droplet. In this test, we apply periodic boundary conditions. The comparison of the four formulations with different La values is presented in Table <ref>. The intensity of the spurious currents is computed by |𝐮_max|^2, where |𝐮_max| denotes the maximum norm of the velocity vector. These values are obtained at T/t_0=200 to ensure the simulation reaches a steady state, where t_0=η D/2σ is the viscous time scale <cit.>. As discussed in <cit.>, the CSF formulation compensates for the curvature-driven effect, which is subtracted in the conservative phase-field equation. As a result, the intensity of spurious currents is smaller in this formulation compared to the other three formulations. The difference-free scheme yields slightly worse results than the previous approach; however, it exhibits a lower spurious-current intensity as La decreases. It has been argued that the difference-free scheme requires a relatively thicker interface to fully resolve the interface dynamics <cit.>. Therefore, when dealing with interfaces undergoing large deformations, we suggest using a higher resolution to ensure accurate simulations. To further assess the accuracy of the difference-free scheme, we perform a convergence study by comparing the relative error between the numerical results and the analytic solution. The relative error is calculated using the following formula: ||δϕ||_2=√(∑_x,y(ϕ-ϕ_0)^2/∑_x,yϕ_0^2). Here, we set the initial order parameter profile of the droplet as ϕ_0. We maintain a fixed relaxation time τ_ρ=0.02 and Laplace number La=1, while varying the resolution L_0/Δ x in the range of 100 to 400. The interface thickness in the test remains fixed at Cn=0.06. The convergence study results are depicted in Figure <ref>, where it can be observed that as the resolution increases while keeping the same Cn, the relative error decreases and exhibits second-order accuracy. 
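For reference, the droplet initialization and the error norm used in this test can be written compactly as follows; the cell-centred grid construction is our own choice, and the example parameters reproduce the Cn = 0.06, L_0 = 2D setup described above.

import numpy as np

def init_droplet(nx, D, delta, L0):
    # phi = 0.5 * [1 + tanh( 2(|z| - D/2) / delta )], droplet centred in an L0 x L0 box
    x = (np.arange(nx) + 0.5) * L0 / nx
    X, Y = np.meshgrid(x, x, indexing='ij')
    r = np.sqrt((X - 0.5 * L0)**2 + (Y - 0.5 * L0)**2)   # distance |z| to the droplet centre
    return 0.5 * (1.0 + np.tanh(2.0 * (r - 0.5 * D) / delta))

def relative_error(phi, phi0):
    # ||delta phi||_2 = sqrt( sum (phi - phi0)^2 / sum phi0^2 )
    return np.sqrt(np.sum((phi - phi0)**2) / np.sum(phi0**2))

# example: D = 0.5, L0 = 2D, Cn = delta/D = 0.06, resolution L0/dx = 200
phi0 = init_droplet(nx=200, D=0.5, delta=0.06 * 0.5, L0=1.0)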
Notably, for interface thicknesses δ/Δ x > 4, a better-resolved simulation is expected. §.§ Rayleigh-Taylor instability The Rayleigh-Taylor instability is a well-known benchmark problem extensively studied in the field of two-phase flow <cit.>. In contrast to the single droplet simulation, this test introduces an external forcing term in the momentum equations, as shown in Eq. (<ref>), while also considering the effect of density ratio. The objective of this test is to analyze the behavior of the interface between two fluids subjected to gravitational acceleration and evaluate the performance of the difference-free scheme in capturing the instabilities and interface dynamics. In the Rayleigh-Taylor instability test, we initialize two fluids in a rectangular pool with a domain length of L_0× 4L_0. The initial position of the interface is determined by the equation y/L_0=0.1 cos(2π x/L_0), and two center lines of the rectangular pool are indicated by x/L_0=0.5 and y/L_0=0. We set the density ratio and viscosity ratio between the top and bottom fluids as ρ_1/ρ_2=3 and η_1/η_2=1, respectively, which allows us to calculate the Atwood number as At=(ρ_1-ρ_2)/(ρ_1+ρ_2)=0.5. To characterize the simulation, we introduce the Reynolds number Re=ρ_1 U L_0/η_1=2000, where U=√(gL_0) represents the velocity scale. Additionally, we consider the surface effect by incorporating the capillary number Ca=η_1 U/σ=0.1. These parameters enable us to assess the behavior and evolution of the interface under the influence of gravitational acceleration and surface tension. In Figure <ref>, we present the evolution of the Rayleigh-Taylor instability for a time range of T/t_0=[0.5,3], where t_0=√(2L_0/g). As anticipated, the instability progresses over time, resulting in complex interfacial dynamics. To assess the accuracy of our method, we compare the evolution of the interface position with previous studies, as illustrated in Figure <ref>. The agreement between our results and those from other methods is excellent, both for the peak and trough positions of the interface. This demonstrates the reliability and effectiveness of our approach in capturing the essential features of the Rayleigh-Taylor instability. §.§ Rising Bubble We apply our model to the validation of two-phase flow simulations with density ratio by considering the single rising bubble benchmark problem. Specifically, we investigate case 1 from Hysing et al. (2009) <cit.>, which is a well-known and widely used benchmark in the field. This benchmark involves the simulation of a single bubble rising in a fluid domain, and it serves as a reliable test to assess the accuracy and performance of two-phase flow models. We conduct the single rising bubble simulation in a rectangular pool, where a bubble with lower density is introduced into a higher-density liquid. The dimensions of the rectangular pool are set as L_0×2L_0, and the diameter of the bubble is chosen as D/L_0=0.5. The bubble, with a density of ρ_2, is initially positioned at c_m(x,y)/D=(1, 1), while the background fluid is filled with liquid of density ρ_1. To establish a density contrast, we set the density ratio and viscosity ratio as ρ_1/ρ_2=η_1/η_2=10. The rising process of the bubble is characterized by the Bond number, which is defined as Bo=Δρ g D^2/σ, where Δρ is the density difference between the bubble and the liquid, g is the acceleration due to gravity, and σ is the surface tension. In our simulation, we consider a Bond number of Bo=10. 
Additionally, we evaluate the Archimedes number, given by Ar=√(g D^3)/ν_1, where ν_1 represents the kinematic viscosity of the liquid. For this simulation, we set Ar=35. The evolution of the bubble deformation during the rising process is presented in Figure <ref>(a). To study the effect of the interface thickness, we adjust the Cahn number Cn by varying the length of the rectangular pool while keeping a constant interface thickness δ. By decreasing Cn, the morphology of the rising bubble at T/t_0=3 converges to the same shape. This indicates that the bubble shape becomes independent of the pool size and is primarily influenced by the interface thickness. We further analyze the evolution of the mass center C_m=∑(ϕ y)/∑ϕ and the vertical velocity V_d=∑(ϕ u_y)/∑ϕ of the bubble, and compare the results with the reference data from <cit.>. The center-of-mass evolution is highly consistent across the different Cn values. Regarding the vertical velocity, the simulation with lower resolution fails to provide a smooth curve but correctly captures the overall trend. As the resolution increases, the simulations with lower Cn show good agreement with the reference benchmark. The results indicate that by increasing the resolution and reducing the interface thickness, the difference-free scheme produces accurate and reliable simulations, aligning well with the reference data from <cit.>. We then compare the mass conservation of the current difference-free method with that of our previous CPF lBM, which employs the isotropic finite difference method <cit.>. The mass of the droplet is evaluated by m=∑ρϕΔ x^3, and the initial mass is denoted as m_0. Since the mass is evaluated through quadrature, an increase in the interface area between the two liquids results in a smaller computed mass. The comparisons of the results at various resolutions Cn=[0.024,0.047] are presented in Figure <ref>. It is noteworthy that both the isotropic finite difference derivative method and the difference-free method yield very small mass loss. For both methods, the effect of resolution on the mass loss is negligible. §.§ Droplet breakup in a decayed Taylor-Green vortex One notable feature of the difference-free numerical scheme is its implicit computation of all derivatives using the local PDF. This characteristic makes the scheme particularly advantageous for large-scale simulations when integrated into distributed computing systems. In order to validate both the efficiency and accuracy of this scheme in such scenarios, we conduct a comparison between the current scheme and our previous method <cit.> using a large-scale simulation of a single droplet breakup inside a decayed Taylor-Green vortex liquid pool. We initialize a droplet with a diameter of D/L_0=0.4 located at c_m(x,y,z)/L_0=(0.45,0.45,0.45) within a three-dimensional Taylor-Green vortex cube. The cube has a constant side length of L_0. The initial velocity profile and pressure distribution are defined as follows: u(x,y,z,0) =u_0 sin(x/L_0) cos(y/L_0) cos(z/L_0), v(x,y,z,0) =-u_0 cos(x/L_0) sin(y/L_0) cos(z/L_0), w(x,y,z,0) =0, p = p_0+ρ_0 u_0^2/16(cos(2x/L_0)+ cos(2y/L_0))(cos(2z/L_0)+2). Here, u_0 represents the reference velocity, which is determined by the Reynolds number Re=ρ_0 u_0 L_0 /η=1600. The time scale for this problem is defined as t_0=L_0/u_0. Furthermore, a periodic boundary condition is applied to the simulation, ensuring that the flow pattern repeats throughout the computational domain. 
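The initial fields of this test can be generated directly from the profiles above; the following sketch does so on a cell-centred n^3 grid (the grid construction and argument names are our own, and a full 256^3 double-precision field occupies roughly 130 MB, so large cases are better initialized in a distributed setting).

import numpy as np

def taylor_green_init(n, L0, u0, rho0, p0=0.0):
    # Decayed Taylor-Green vortex: u, v, w and p as given above, on an n^3 grid of side L0
    x = (np.arange(n) + 0.5) * L0 / n
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
    u = u0 * np.sin(X / L0) * np.cos(Y / L0) * np.cos(Z / L0)
    v = -u0 * np.cos(X / L0) * np.sin(Y / L0) * np.cos(Z / L0)
    w = np.zeros_like(u)
    p = p0 + rho0 * u0**2 / 16.0 * (np.cos(2 * X / L0) + np.cos(2 * Y / L0)) \
          * (np.cos(2 * Z / L0) + 2.0)
    return u, v, w, p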
In this particular test, the surface effect is characterized by the capillary number Ca=η u_0/σ. Convergence tests for velocity and kinetic energy using this method have already been presented in our previous work <cit.>, and thus we do not repeat those results here. However, it is important to validate the morphology evolution by examining the calculation of the normal vector, which is solved using Eq. (<ref>). To this end, we compare the results of the morphology evolution under different normal vector calculation methods. In Figure <ref>, we present the results obtained using the difference-free method for computing the normal vector. On the other hand, in Figure <ref>, we utilize the isotropic finite difference method for the normal vector computation, following the expression 𝐧=∇ϕ/|∇ϕ| as proposed by Lee et al. <cit.>. From the two figures, it can be observed that the initial droplet deforms and eventually breaks up into smaller droplets. At the late stage, due to the surface effect, the droplets evolve into spherical shapes. Overall, the simulation results obtained from both methods are comparable, although the isotropic finite difference method yields smoother simulation results than the difference-free numerical scheme. The presence of some uneven structures in the difference-free scheme can be attributed to the unresolved interface, as discussed in previous studies <cit.>. These uneven structures can either advance or delay the breakup of droplets. To mitigate these undesired structures, increasing the Cahn number (Cn) and the Courant number (c) can be effective measures. The mass conservation is evaluated in the same manner as for the rising bubble test case, where m=∑ρϕΔ x^3. The relative mass difference is shown in Figure <ref>, which indicates that both methods exhibit similar mass loss during the simulation. We further evaluate the kinetic energy E_k=∑ρ𝐮^2/2 evolution of both methods. Figure <ref> shows the comparison of the kinetic energy evolution for both methods with different Ca=[0.01,1] under different resolutions. Without the surface effect, the initial kinetic energy E_0 would be fully consumed by the viscosity. As the surface effect is incorporated into the system, a portion of the kinetic energy is consumed by the expanding total surface area, leading to relatively rapid dissipation. According to the comparisons, when L_0/Δ x=128 and Ca=[0.1,1], the kinetic energy evolution for both methods is in good agreement. As we increase the resolution to L_0/Δ x=256, both methods obtain a similar kinetic energy evolution for Ca=1. When we further increase the surface effect to Ca=0.01, the dissipation trend of both methods still remains the same, but a slight deviation is observed during T/t_0=[5,10]. The initial and the final evolution of both cases remain consistent. Finally, we conduct a strong scaling (performance) study for the two schemes in terms of million lattice updates per second (MLUPS), which can be expressed as: MLUPS=(lattice points in the whole domain × iterations)/(computing time × 10^6). As shown by this equation, increasing the number of processors reduces the computing time, resulting in improved performance. We conduct the comparison using IMEXLBM <cit.>, an open-source software package for heterogeneous platforms. In addition, the comparisons are carried out on ThetaGPU <cit.>, specifically on the NVIDIA DGX A100. The comparison of the performance is shown in Figure <ref>. 
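For clarity, the throughput metric reduces to a one-line helper; the example numbers below are purely illustrative and do not correspond to any measurement reported here.

def mlups(n_lattice_points, n_iterations, wall_time_seconds):
    # million lattice updates per second = (points * iterations) / (time * 1e6)
    return n_lattice_points * n_iterations / (wall_time_seconds * 1.0e6)

# e.g. a 256^3 domain advanced 10,000 steps in 600 s of wall-clock time
print(mlups(256**3, 10_000, 600.0))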
We maintain a fixed total number of computing grid points at 256^3 and assess the performance by incrementally increasing the number of GPUs. Consequently, as the number of GPUs is gradually increased, the performance of the difference-free scheme improves progressively compared to the isotropic finite difference method. With the utilization of 64 GPUs, a noticeable efficiency increase of 47% is observed. § CONCLUDING REMARKS We have proposed a difference-free multi-phase fluid flow CPF lBM solver. This method offers several advantages, such as the local calculation of density/order parameter derivatives and normal vectors through the central moment of the particle distribution function. Additionally, the surface effect is implicitly incorporated into the equilibrium PDF of the pressure. By combining these numerical schemes, the multi-phase fluid flow can be solved without the need for additional derivative evaluations using finite difference methods. To validate the proposed method, we conducted various benchmark tests. Firstly, we compared the intensity of spurious currents between the current numerical scheme and different surface force formulations. The results showed that the new method achieved similar outcomes to the previous finite difference based continuous surface stress formulation. Moreover, the relative error of the order parameter demonstrated second-order accuracy, indicating the robustness and accuracy of the method. Furthermore, we investigated classical benchmark problems including Rayleigh-Taylor instability and rising bubble scenarios by increasing the density ratio and viscosity ratio. The simulation results of both problems exhibited high consistency with previous research. Additionally, we applied the new method to study droplet breakup in a decayed Taylor-Green vortex liquid pool and compared it with our previous approach. The local calculation of derivatives in the new method resulted in improved computational efficiency compared to the previous approach. The droplet distribution and kinetic energy evolution were also in good agreement with the previous method. However, it is important to acknowledge the limitations of this method, particularly when dealing with sharp interfaces. In cases where the interface thickness is small and cannot adequately resolve the interface region, the method may exhibit some unsmooth structures at the interface. Further research is needed to investigate the mobility and limitations of the method in order to enhance its performance. Overall, the proposed difference-free multi-phase fluid flow LB solver has shown promising results in various benchmark tests, highlighting its potential for accurate and efficient simulations. § ACKNOWLEDGEMENT This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative. This research was supported by the National Science Foundation under Grant No. 1743794, PIRE: Investigation of Multi-Scale, Multi-Phase Phenomena in Complex Fluids for the Energy Industries. 
§ The details of the construction of the one-fluid LBE have been given by Tims Reis <cit.>, and we show the simple derivation of this method. We apply the Chapman-Enskog analysis in convection scaling (Δ t∼Δ x) from the discrete Boltzmann equation without the body force: ∂ f_i/∂ t+𝐞_i·∇ f_i=-1/λ_ρ(f_i-f^eq_i)+S_i. The restriction equations can be given as follows: ∑_i f_i=∑_i f^eq_i=p/c_s^2 ∑_i f_i 𝐞_i=∑_i f^eq_i 𝐞_i=ρ𝐮 ∑_i f_i^neq=0, ∑_i f_i^neq𝐞_i=0 ∑_i S_i =𝐮·∇ρ. In this case, Eq. (<ref>) can recover the macroscopic equation: 1/ρ c_s^2∂ p/∂ t+∇·𝐮=0, by the zeroth order moment of the PDFs. We further consider the following expansion series: f_i=f^eq_i+λ _ρ f_i^1+ ... Π=Π^eq +λ_ρΠ^1+... 𝐐=𝐐^eq +λ_ρ𝐐^1+... ∂ t=∂ t_0 +λ_ρ∂ t_1+..., where tensor Π=∑_i f_i 𝐞_i𝐞_i denotes the second order moment of the PDF, and 𝐐=∑_i f_i 𝐞_i𝐞_i𝐞_i is the third order moment of the PDF. The DBE equation of the zeroth order of τ_ρ can be shown as: ∂ f_i^eq/∂ t_0+𝐞_i·∇ f^eq_i=f_i^1, and its zeroth order moment is: 1/ρ c_s^2∂ p/∂ t_0+∇·𝐮=0. The first order moment can be computed by multiplying 𝐞_i: ∂ρ𝐮/∂ t_0+∇·Π^eq=0, The second moment is then: ∂Π^eq/∂ t_0+∇·𝐐^eq=-Π^1+∑_i S_i 𝐞_i⊗𝐞_i. In order to recover the N-S momentum equation as shown in Eq. (<ref>), the flux term Π^eq in Eq. (<ref>) needs to be constraint by: Π^eq=ρ𝐮⊗𝐮+p𝐈-Π_s. The construction of the equilibrium PDF can be shown that: f_i^eq=w_i(p/c_s^2+ρ( 𝐞_i ·𝐮/c_s^2+ (𝐞_i·𝐮)^2/2c_s^4- |𝐮|^2/2c_s^2) +1/2c_s^4Π_s :(𝐞_i⊗𝐞_i-c_s^2𝐈)). With the equilibrium PDF, we can compute the divergence of the third order moment ∇·𝐐^eq as: ∇·𝐐^eq =[c_s^2∇·(ρ𝐮)]𝐈+c_s^2∇ρ𝐮+c_s^2(∇ρ𝐮)^T =[c_s^2∇·(ρ𝐮)]𝐈+c_s^2ρ∇𝐮 +c_s^2𝐮⊗∇ρ+c_s^2ρ(∇𝐮)^T +c_s^2(𝐮)^T⊗∇ρ =[c_s^2∇·(ρ𝐮)]𝐈+ρc_s^2(∇𝐮+∇(𝐮)^T) +c_s^2𝐮⊗∇ρ+c_s^2∇ρ⊗𝐮. The first order temporal derivative of the tensor Π^eq can be shown as: ∂Π^eq/∂t_0=∂ρ𝐮⊗𝐮/∂t_0+∂p𝐈/∂t_0-∂Π_s/∂t_0, and: ∂ρ𝐮⊗𝐮/∂ t_0 =𝐮⊗∂( ρ𝐮)/∂ t_0+∂ (ρ𝐮) /∂ t_0⊗𝐮- ∂ρ/∂ t_0𝐮⊗𝐮 =-𝐮⊗∇·Π^eq-∇·Π^eq⊗𝐮+(ρ c_s^2∇·𝐮) 𝐮⊗𝐮 =-𝐮⊗∇ p-∇ p ⊗𝐮+𝐮⊗∇·Π_s+∇·Π_s⊗𝐮+𝒪(Ma^3), ∂ p𝐈/∂ t_0=-ρ c_s^2(∇·𝐮) 𝐈. With those relations available, we can rearrange Eq (<ref>) as: Π^1 =ρ c_s^2(∇·𝐮)𝐈+𝐮⊗∇ p+∇ p⊗𝐮-𝐮⊗∇·Π_s-∇·Π_s⊗𝐮+∂Π_s/∂ t_0 -ρ c_s^2(∇𝐮+ (∇𝐮)^T) +∑_i S_i𝐞_i𝐞_i- c_s^2∇·(ρ𝐮)𝐈-c_s^2𝐮⊗∇ρ-c_s^2∇ρ⊗𝐮+𝒪(Ma^3). We then construct the forcing S_i=(Γ_i-w_i)(𝐞_i-𝐮)·∇ρ, the last three terms can be compensated. The full second-order tensor can be computed by Eq. (A8): Π=ρ𝐮⊗𝐮+p𝐈-Π_s-Π_v+𝒪(Ma^3)+𝒪(Ma^2/Re). Finally, Eq. (<ref>) can be recovered. § The details of the derivation can be found in  <cit.>. We here present a simple derivation based on the previous approach. To obtain the first derivative of the order parameter, we apply the asymptotic analysis of Eq. (<ref>) by diffusive scaling (Δ t∼Δ x^2). We first denote the central moment of the PDF g_i by: K_0=∑_i g_i, 𝐊_1=∑_i g_i(𝐞_i-𝐮). The asymptotic series is then: K=K^0+ϵ K^1+ϵ^2 K^2+... where ϵ∼Δ x. The Taylor series expansion of the discrete Boltzmann equation Eq. (<ref>) then can be shown as: ϵ^2∂ g_i/∂ t-ϵ𝐞_i∇· g_i=0, from which we can obtain the moment relation: ∇·𝐊̅_1^0=0, ∂ K_0^0/∂ t=-∇·𝐊̅_1^1, 𝐊_1^1=𝐊_1^*1-∇·𝐊̅_2^0, Where K^*=K-(K-K^eq)/(τ_ϕ+0.5) denotes the post-collision state, τ_ϕ is the relaxation time, and K̅=0.5(K+K^*) represents the average value of moments. 𝐊_2 takes the form 𝐊_2=∑_i g_i(𝐞_i-𝐮)(𝐞_j-𝐮) can be shown to be a tensor, while when i≠ j, the component of the tensor becomes zero. When we insert Eq. (<ref>) into Eq. (<ref>), we can obtain: 𝐊_1^1=𝐊_1^eq,1-(τ_ϕ+0.5)∇·𝐊̅_2^0, 𝐊_1^*1=𝐊_1^eq,1-(τ_ϕ-0.5)∇·𝐊̅_2^0, 𝐊̅_1^1=𝐊_1^eq,1-τ_ϕ∇·𝐊̅_2^0. 
When we construct the equilibrium central moment as: 𝐊_1^eq=τ_ϕ 4ϕ(1-ϕ)/δ∇ϕ/|∇ϕ|, the moment 𝐊_1 is then: 𝐊_1^1=Λ∇ϕ, where Λ is an a priori parameter. Finally, the normal vector can be approximated as: 𝐧=-𝐊_1/|𝐊_1|. From Eq. (<ref>), it is also possible to compute the derivative of the order parameter up to second order ∼𝒪(ϵ^2): ∇ϕ=1/(τ_ϕ+0.5)c_s^2(τ_ϕ c_s^2 4ϕ(1-ϕ)/δ𝐧-𝐊_1).
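To summarize the recovery derived in this appendix in algorithmic form, the sketch below evaluates 𝐧 and ∇ϕ at a single node directly from the PDFs g_i; it works for any discrete velocity set E, and the small guard on |𝐊_1| in the bulk phases (where 𝐊_1 vanishes) is our own addition.

import numpy as np

def recover_interface_quantities(g, E, u, phi, tau_phi, delta, cs2=1.0/3.0):
    # K_1 = sum_i g_i (e_i - u): first-order central moment of the PDFs g_i
    K1 = (g[:, None] * (E - u[None, :])).sum(axis=0)
    n = -K1 / (np.linalg.norm(K1) + 1e-12)           # n = -K_1 / |K_1|
    S = 4.0 * phi * (1.0 - phi) / delta * n          # sharpening term 4*phi*(1-phi)/delta * n
    # grad(phi) = [tau_phi * c_s^2 * S - K_1] / ((tau_phi + 0.5) * c_s^2)
    grad_phi = (tau_phi * cs2 * S - K1) / ((tau_phi + 0.5) * cs2)
    return n, grad_phi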
http://arxiv.org/abs/2407.12195v1
20240716214309
CLUE: Safe Model-Based RL HVAC Control Using Epistemic Uncertainty Estimation
[ "Xianzhong Ding", "Zhiyu An", "Arya Rathee", "Wan Du" ]
eess.SY
[ "eess.SY", "cs.SY" ]
CLUE: Safe Model-Based RL HVAC Control Using Epistemic Uncertainty Estimation Xianzhong Ding 0000-0001-6114-2801,  Zhiyu An 0009-0000-1627-4194,  Arya Rathee 0009-0008-6844-0463,  and Wan Du 0000-0002-2732-6954, Member, IEEE A preliminary version of this work was published in the Proceedings of ACM BuildSys 2023 <cit.>. Xianzhong Ding and Zhiyu An contributed equally to this work. Xianzhong Ding, Zhiyu An and Wan Du are with the Department of Computer Science and Engineering, University of California, Merced, Merced, CA 95343, USA (e-mail: xding5@ucmerced.edu; zan7@ucmerced.edu; wdu3@ucmerced.edu). (Corresponding author: Wan Du.) Arya Rathee is with the Department of Electrical Engineering, University of California, Santa Cruz, Santa Cruz, CA 95064, USA (e-mail: arathee@ucsc.edu). This work was accomplished when Arya Rathee was an undergraduate student at UC Merced and conducted an internship in Dr. Wan Du’s research group. § ABSTRACT Model-Based Reinforcement Learning (MBRL) has been widely studied for Heating, Ventilation, and Air Conditioning (HVAC) control in buildings. One of the critical challenges is the large amount of data required to effectively train neural networks for modeling building dynamics. This paper presents CLUE, an MBRL system for HVAC control in buildings. CLUE optimizes HVAC operations by integrating a Gaussian Process (GP) model to model building dynamics with uncertainty awareness. CLUE utilizes GP to predict state transitions as Gaussian distributions, effectively capturing prediction uncertainty and enhancing decision-making under sparse data conditions. Our approach employs a meta-kernel learning technique to efficiently set GP kernel hyperparameters using domain knowledge from diverse buildings. This drastically reduces the data requirements typically associated with GP models in HVAC applications. Additionally, CLUE incorporates these uncertainty estimates into a Model Predictive Path Integral (MPPI) algorithm, enabling the selection of safe, energy-efficient control actions. This uncertainty-aware control strategy evaluates and selects action trajectories based on their predicted impact on energy consumption and human comfort, optimizing operations even under uncertain conditions. Extensive simulations in a five-zone office building demonstrate that CLUE reduces the required training data from hundreds of days to just seven while maintaining robust control performance. It reduces comfort violations by an average of 12.07% compared to existing MBRL methods, without compromising on energy efficiency. Our code and dataset are available at <https://github.com/ryeii/CLUE>. 
HVAC Control, Model-based Deep Reinforcement Learning, Model Predictive Control, Energy efficiency, Optimal control § INTRODUCTION The field of Heating, Ventilation, and Air Conditioning (HVAC) control has seen extensive research on Reinforcement Learning (RL) techniques <cit.>. RL offers adaptability by learning control policies through direct interactions with the environment <cit.>. The prevalent approach, Model-Free Reinforcement Learning (MFRL), obtains optimal HVAC control policies through trial-and-error interactions with real-world buildings. However, MFRL requires a substantial number of interactions to converge, with experiments indicating a need for 500,000 timesteps (equivalent to 5200 days) to achieve desirable control performance. Although simulated building models can accelerate the training process, they demand highly accurate calibration, posing significant challenges <cit.>. Model-Based Reinforcement Learning (MBRL) offers a significant advantage in optimizing HVAC controls compared to traditional Model Predictive Control (MPC) and MFRL methods <cit.>. Unlike MPC, which relies on an accurately formulated building thermal dynamics model—a challenging and often impractical task <cit.>, MBRL utilizes historical data to train a dynamics model and subsequently determine optimal control actions <cit.>. On the other hand, MFRL, despite its direct policy learning approach through interaction with the building environment, suffers from extensive convergence times, often requiring years to yield practical results <cit.>. A critical issue with MBRL-based HVAC control is the substantial volume of training data needed for model convergence. For instance, the MB^2C (Multi-zone HVAC Control with Model-Based Deep Reinforcement Learning) approach  <cit.> models the building dynamics using an ensemble of deep neural networks combined with a Model Predictive Path Integral (MPPI) algorithm to optimize control actions. This Deep Ensemble (DE) model necessitates 183 days of training data to achieve reliable predictions for effective control <cit.>. Extensive experiments conducted on a simulated five-zone office building reveal that despite using years of training data, the model still exhibits epistemic uncertainty due to biases in the training data, resulting in inaccurate predictions for infrequently encountered states. To counteract this issue, we introduce an MBRL-based HVAC control approach designed to accommodate inaccuracies from models trained on limited datasets. This approach incorporates an awareness of the model’s prediction uncertainty, prompting conservative actions when predictions are uncertain. Enabling safe HVAC control necessitated an initial quantification of the epistemic uncertainty inherent in the building dynamics models. Epistemic uncertainty estimation has significantly enhanced control performance in fields like robotic motion planning <cit.> and reinforcement learning <cit.>. For scenarios with low-dimensional, discrete state spaces, such as multi-armed bandits, the count-based method <cit.> is often employed. This method quantifies model error by counting the frequency of training data instances corresponding to each input. However, this approach is less effective for building dynamics, which involve high-dimensional continuous variables where count-based methods struggle. Conversely, the DE method <cit.> employs multiple neural networks to generate predictions, using the variance among these predictions to measure uncertainty. 
As introduced above, it requires a large amount of data to train the neural networks for HVAC control. To bridge the existing gap in HVAC control, we introduce CLUE, a novel MBRL system that leverages a Gaussian Process (GP) model for uncertainty-aware modeling of building dynamics. The GP model accepts inputs such as the state of the building (zone temperature, outdoor temperature, occupancy, etc.) and action (heating and cooling setpoints) and predicts the next state as a Gaussian distribution with both a mean and variance. The variance, indicative of the prediction's uncertainty, increases in scenarios with sparse data, serving as a measure of epistemic uncertainty. Research has shown that GP models can surpass conventional methods like neural networks, random forests, and support vector machines in minimizing model error <cit.>. The integration of GP models with MPPI for uncertainty-aware control represents a significant advancement in HVAC control. This approach allows for explicit modeling of prediction uncertainty, enhancing decision-making processes under uncertain conditions. By incorporating GP-derived uncertainty estimates, the MPPI algorithm can select safer and more energy-efficient actions, which is particularly important in environments where data is sparse or biased. However, integrating GP into a robust and efficient HVAC control system necessitates novel approaches within CLUE: 1) Despite GP's non-parametric nature, it relies on a predefined kernel function with specific hyperparameters. We have developed a novel training method called meta kernel learning, which significantly enhances the efficiency of setting GP hyperparameters. 2) The GP-derived uncertainty is integrated into an MPPI algorithm, enabling the identification of safe, energy-efficient actions. Selecting appropriate kernel parameters for GP with limited data is a challenge <cit.>. Traditional methods for kernel selection typically require either expert experience to manually determine parameters or the use of gradient descent, which demands substantial data volumes to develop an effective kernel <cit.>. Inadequate kernel parameters can significantly degrade GP performance. To overcome these limitations, we developed a meta kernel learning approach. This technique leverages the availability of extensive datasets from various buildings, recognizing that while obtaining large amounts of historical data for a specific target building may be challenging, similar data from other buildings are often readily accessible. Meta kernel learning uses this reference data to automatically learn an optimal kernel initialization, thereby minimizing the learning requirements without sacrificing model accuracy. This method contrasts with traditional approaches by significantly reducing the need for large volumes of training data, making it more suitable for practical applications where data collection can be expensive and time-consuming. To translate uncertainty estimates into safe control actions effectively, we devised a confidence-based control algorithm that enables the safe and efficient application of MBRL even when model predictions are imprecise. Utilizing the MPPI control algorithm, our system generates an array of potential action trajectories using our building dynamics model. Each trajectory, a sequence predicting future actions, is assessed by MPPI to select the one that optimizes for both energy savings and minimal human comfort violations. 
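To make the role of the GP concrete, the following pure-NumPy sketch performs exact GP regression and returns the predictive mean and standard deviation for a batch of (state, action) feature vectors. The squared-exponential kernel, its hyperparameters, and the feature layout are illustrative placeholders for the meta-learned kernel described above; the predictive standard deviation, which grows where training data are sparse, is the epistemic-uncertainty signal attached to each candidate prediction.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # squared-exponential kernel k(a, b) = variance * exp(-|a - b|^2 / (2 l^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2, **kern):
    # exact GP regression: rows of X are (state, action) features, y is the next zone temperature
    K = rbf_kernel(X_train, X_train, **kern) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train, **kern)
    Kss = rbf_kernel(X_test, X_test, **kern)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                                # predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v ** 2).sum(axis=0)        # predictive variance; grows where data are sparse
    return mean, np.sqrt(np.clip(var, 0.0, None))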
Initially, the controller acts on the first step of this optimal trajectory. Incorporating GP-based uncertainty estimates, we developed a two-stage, uncertainty-aware HVAC control algorithm. The first stage employs a confidence threshold to exclude trajectories whose initial actions exhibit high uncertainty, preventing their execution to avoid potential risks. To establish the optimal confidence threshold for ensuring safe HVAC operations, we analyzed the relationship between the GP model's confidence values and the predicted model errors. This analysis helped in the development of an algorithm capable of optimizing this threshold offline by testing against historical data. The second stage involves refining the selection process using a modified MPPI objective function that considers the uncertainty of each action in all remaining trajectories. Additionally, we introduced a fallback mechanism, a reliable default control strategy activated when no safe actions are identified. This mechanism ensures the system maintains safe operation despite uncertainties. We conducted extensive simulations to evaluate CLUE in a 463m^2 five-zone office building across three cities, using EnergyPlus <cit.>. Our initial tests focused on the accuracy of uncertainty estimation provided by the GP-based building dynamics model within CLUE. The results showed that CLUE's building dynamics model significantly outperformed all baseline models in both modeling accuracy and uncertainty estimation. Subsequently, we applied CLUE to manage HVAC controls within the simulated environment, comparing its performance in terms of human comfort and energy consumption against the state-of-the-art MBRL solutions. Our findings revealed that CLUE consistently reduced comfort violations by an average of 12.07% compared to the baseline MBRL methods. Additionally, CLUE achieved comparable energy savings and showcased remarkable data efficiency by reducing the data requirement from hundreds of days to just seven days. In summary, this paper brings several significant advancements to the field of HVAC control systems. Below are the four main contributions: * Innovative MBRL System for HVAC Control: Introduction of CLUE, a novel MBRL system that utilizes a GP model for uncertainty-aware modeling of building dynamics. This system is tailored to enhance safety and efficiency by incorporating uncertainty directly into the control process. Our approach reduces the data requirements typically associated with GP models in HVAC applications by leveraging meta kernel learning, which optimizes GP hyperparameters using extensive datasets from various buildings, thus minimizing the need for large volumes of training data from a specific building. * Meta Kernel Learning: Development of a new training method called meta kernel learning, which significantly improves the efficiency of setting GP hyperparameters. This method leverages extensive datasets from various buildings to reduce learning requirements without sacrificing model accuracy. This approach effectively reduces the data requirements by utilizing reference data from other buildings to enhance the GP model's performance, thereby addressing the challenge of sparse or noisy data in specific building scenarios. * Uncertainty-Aware Control Algorithm: Design of a control algorithm that integrates GP-based uncertainty estimates with the MPPI algorithm. 
This includes mechanisms for excluding uncertain action trajectories and refining the selection process to ensure optimal and safe control actions. * Extensive Simulation and Evaluation: Comprehensive simulations were conducted in a five-zone office building across three cities to evaluate CLUE's performance. These simulations demonstrated that CLUE significantly outperforms existing baselines in modeling accuracy, uncertainty estimation, human comfort, and energy efficiency. It also showcased remarkable data efficiency, reducing the data requirement from hundreds of days to just seven days. The remainder of this paper is organized as follows. Section <ref> provides an overview of related work in HVAC control systems, focusing on various model-based and model-free reinforcement learning techniques. Section <ref> introduces the application of MBRL to HVAC control, covering state/action variables, optimization strategies, and a reward function balancing comfort and energy efficiency. Section <ref> presents motivation experiments to understand model errors caused by data sparsity in state-of-the-art MBRL methods. Section <ref> details our proposed method, CLUE, including its architecture, modeling of building dynamics with GP, the meta-kernel learning technique, and the implementation of the confidence-based control mechanism. Section <ref> describes the experimental setup, including the simulation environment, performance metrics, and baseline comparisons. Section <ref> discusses the limitations and future research directions. Finally, Section <ref> concludes the paper with a summary of our findings. § RELATED WORK There are a number of approaches to solving HVAC energy optimization problems in the literature, including model predictive control <cit.>, Model-Free RL for HVAC <cit.>, Model-based RL for HVAC control <cit.>, Model-based approaches and Complementary Model-Free Approaches. <cit.>, Safe RL for General and HVAC Applications <cit.>, and Adaptive-Predictive and Event-Based Control Strategies <cit.>. MPC for HVAC control. MPC is an iterative strategy used to solve optimal control problems, utilizing a receding time horizon to make decisions. In HVAC control, MPC has been instrumental in reducing energy consumption while maintaining occupant comfort. For example, Beltran et al. have developed an MPC framework tailored specifically for HVAC systems, focusing on optimizing energy efficiency and comfort <cit.>. More recently, the OFFICE framework has been introduced by Winkler et al., which balances energy costs and occupant comfort using a gray-box model. This model combines first principles with parameters that are dynamically updated, reflecting real-time system dynamics <cit.>. Further building on this concept, Li et al. proposed an economic model predictive control (EMPC) approach. This method optimizes energy consumption and indoor comfort by employing a lattice piecewise linear approximation of the Predicted Mean Vote (PMV) index, enhancing both the economic and comfort aspects of building management <cit.>. However, the broader adoption of MPC in HVAC is hindered by significant challenges, primarily due to its dependency on precise models. Modeling is estimated to consume up to 75% of the resources needed for MPC implementation, a notable barrier given the diverse nature of building environments <cit.>. Each building or thermal zone requires a custom model to effectively manage its unique characteristics, adding complexity and cost to the deployment of MPC systems <cit.>. 
Model-Free RL for HVAC. Recent advancements in MFRL have demonstrated significant potential for optimizing HVAC controls. These techniques utilize learning agents that develop policies through active interaction with the environment and extensive trial and error. For example, traditional RL has been used to effectively adjust thermostat set-points, achieving a balance between energy efficiency and occupant comfort <cit.>. DRL implementations have also been successful in real-life settings, managing radiant heating systems in office buildings <cit.>. In tropical regions, Le et al. <cit.> have deployed DRL to control air-free cooled data centers. Furthermore, Vazquez-Canteli et al. <cit.> developed a multi-agent RL strategy to optimize load shaping in grid-interactive buildings. Additionally, Gao et al. <cit.> used Deep Deterministic Policy Gradients (DDPGs) to formulate thermal comfort control policies, significantly enhancing HVAC system performance. Despite these successes, the primary focus of these studies remains on the HVAC subsystem. Extended research has introduced comprehensive building control frameworks that integrate HVAC, lighting, and window subsystems. These frameworks employ advanced algorithms, such as the Branching Dueling Q-network (BDQ), to improve control efficacy <cit.>. Nonetheless, RL applications in HVAC are challenged by high sample complexity, which necessitates prolonged training periods to develop robust control strategies, particularly in environments with large state-action spaces. In contrast, Gnu-RL utilizes a differentiable MPC policy that integrates domain knowledge, thereby enhancing both planning and system dynamics. This approach is noted for its data efficiency and interpretability <cit.>. However, its reliance on the local linearization of water-based radiant heating system dynamics may restrict its applicability to more complex scenarios. Model-based RL for HVAC control. Collecting large-scale real-world data for HVAC systems presents significant challenges. To address sample complexity, researchers have increasingly adopted MBRL techniques. Zhang et al. <cit.> developed an MBRL strategy where a neural network learns complex system dynamics, which are then integrated into an MPC framework using rolling horizon optimization for executing control actions. Similarly, Chen et al. <cit.> proposed MBRL-MC, a novel control strategy that combines MBRL with MPC. This strategy starts with the supervised learning of thermal dynamics for a specific zone and then designs a neural network planning framework that merges RL with MPC, diverging from traditional MBRL methods. It avoids the compounding error problem by not mimicking the outcomes of MPC's random shooting and excludes the bootstrapping technique from critical network updates, thereby enhancing stability. Despite their potential, MBRL methods typically excel in scenarios with lower state and action dimensions, such as in single-zone buildings. However, their performance often diminishes in more complex multi-zone buildings. Ding et al. <cit.> extended these capabilities to multi-zone buildings employing an ensemble of environment-conditioned neural networks dynamics models and a more efficient MPPI controller, achieving satisfactory results after 183 days of training data. Model-based approaches and Complementary Model-Free Approaches. Baldi et al. <cit.> and Korkas et al. <cit.> have explored model-based strategies for HVAC control, utilizing EnergyPlus for model design. Baldi et al. 
presented a switched self-tuning approach for automating occupant-building interaction via smart zoning of thermostatic loads, demonstrating significant improvements in energy efficiency and occupant comfort <cit.>. Similarly, Korkas et al. proposed a distributed control and human-in-the-loop optimization for grid-connected microgrids, which has applications in HVAC systems <cit.>. These works align with our approach by leveraging EnergyPlus for precise model-based control, highlighting both the benefits and challenges of using simulation-based models for HVAC optimization. Complementary model-free approaches also offer promising solutions for HVAC control to reduce data requirements, particularly in complex and dynamic environments. Michailidis et al. <cit.> demonstrate a proactive control strategy in a high-inertia building that uses simulation-assisted methods to optimize energy usage and thermal comfort, illustrating the effectiveness of model-free approaches in complex settings. Baldi et al. <cit.> introduced Parametrized Cognitive Adaptive Optimization (PCAO), designing efficient "plug-and-play" building optimization and control systems that learn optimal strategies with minimal human input, significantly improving energy efficiency and thermal comfort. These studies not only align with but also enrich our research objectives with innovative approaches to reduce data requirements and increase iteration efficiency. However, despite these advancements, building models often lack uncertainty awareness and require extensive training periods to achieve satisfactory performance. Safe RL for General and HVAC Applications. Safe RL has become essential for ensuring the deployment of RL agents in real-world environments without causing harm <cit.>. One prominent approach is Constrained Policy Optimization, which integrates safety constraints within the RL framework to prevent agents from taking harmful actions <cit.>. Although effective in theory, defining constraints that accurately reflect safety requirements is challenging. Model-Based Safe RL, another approach, uses simulated outcomes from learned environment models to enable safer exploration <cit.>. However, these models can sometimes yield suboptimal decisions due to inaccuracies, especially when applied to real-world scenarios. In the context of HVAC systems, ensuring safety during real-world exploration is critical, as outcomes of certain actions can be unpredictable. Research has suggested using simulators to train RL agents before deploying them in actual building settings <cit.>. However, this method faces challenges due to potential mismatches between simulator predictions and real-world dynamics, which can cause "safe" decisions in simulations to become risky in practice. Alternatively, batch RL methods have been employed, utilizing historical data to refine and enhance control policies without direct interaction with the environment during training <cit.>. This method reduces risk but depends heavily on the data's quality and volume, influencing the effectiveness and safety of the control strategies implemented. Adaptive-Predictive and Event-Based Control Strategies. Adaptive-predictive control strategies adjust control parameters in real-time based on environmental changes and system performance, enhancing the adaptability and responsiveness of HVAC systems. 
Event-based control strategies improve performance by responding to specific triggers or events rather than following a fixed schedule, offering potential for more dynamic and efficient HVAC management. Integrating these strategies with MBRL systems like CLUE could further optimize energy use and occupant comfort. Recent advancements in sensing technologies, communication networks, and embedded systems have enabled the development of sophisticated adaptive-predictive control strategies. These strategies use predictive models to anticipate environmental changes and adjust control parameters accordingly, providing more precise and responsive HVAC management <cit.>. Event-based control strategies, on the other hand, focus on reducing unnecessary control actions by only responding to significant changes or events, thus improving energy efficiency and system performance <cit.>. These methods have shown promise in managing the complexities of HVAC systems by dynamically adjusting to varying conditions, which is particularly beneficial in distributed and large-scale applications <cit.>. The integration of adaptive-predictive and event-based control strategies with our proposed CLUE system offers a promising direction for future research, providing a framework that can leverage real-time data and event-driven triggers to enhance HVAC control performance. § MBRL FOR HVAC CONTROL HVAC control can be modeled as a Markov Decision Process (MDP), represented as ℳ:{𝒮, 𝒜, r, 𝒫, γ}. This includes a state space 𝒮, an action space 𝒜, a reward function r:𝒮×𝒜→ℝ, a dynamics function 𝒫(s'|s, a), and a discount factor γ. In this framework, at each time step t, the controller is in state s_t∈𝒮, takes an action a_t∈𝒜, receives a reward r_t=r(s_t, a_t), and moves to a new state s_t+1 determined by s_t+1∼𝒫(·|s_t, a_t). The objective at every step is to select an action that maximizes the discounted sum of future rewards, calculated as ∑_t'=t^∞γ^t'-tr(s_t', a_t'), where γ∈[0, 1] weights the importance of immediate versus future rewards. MBRL-based control integrates two core components: the dynamics model and the controller. The dynamics model, denoted by f_θ(s_t, a_t) and parameterized by θ, leverages historical data {(s_t, a_t, s_t+1)}_n to predict the next state s_t+1 from the current state s_t and action a_t. This prediction informs the controller's decision-making process, which involves solving the optimization problem: (a_t^*, ⋯, a_t+H-1^*) = arg max_(a_t, ⋯, a_t+H-1)∑_t'=t^t+H-1γ^t'-tr(s_t', a_t') Here, the controller aims to select an action sequence that maximizes the cumulative discounted rewards over the next H time steps. Typically, only the first action in this sequence is executed before re-evaluating and updating the plan at each subsequent time step based on the new state. While the random shooting method <cit.> has traditionally been used to address the optimization challenge in Eq. <ref>, recent advancements indicate that the MPPI strategy offers superior optimization outcomes <cit.>. Our implementation of MBRL-based HVAC control utilizes insights from MB^2C <cit.>, which organizes the formulation into three components: states, actions, and rewards. States: A state in the building dynamics model comprises a set of variables serving as both inputs and outputs. We categorize states into two groups: disturbances and the zone state, detailed in Table <ref>. Disturbances include variables independent of HVAC system actions, such as weather conditions and occupancy levels. 
In contrast, the zone state refers specifically to the temperature of the controlled thermal zone, which is directly influenced by our control actions and is critical for calculating the reward of the building system. Actions: The action within our system is defined as the temperature setpoint for the controlled thermal zone. On our experimental platform, the HVAC system's temperature setpoints range from a minimum of 15^∘ C to a maximum of 30^∘ C. Each controlled thermal zone is equipped with both a heating and a cooling setpoint, leading to two action dimensions per zone. Rewards: Our experimental platform utilizes the reward function outlined in <cit.>, as expressed by Eq. <ref>. The comfort zone is defined between two temperature bounds, z̲ and z̄, representing the lower and upper limits of comfortable zone temperature, respectively. At each timestep t, Z_t indicates the current zone temperature, and E_t quantifies the total energy consumption. To effectively balance comfort against energy usage, a weighting factor w_e∈[0, 1] is employed. This factor adjusts the priority between maintaining comfort and minimizing energy consumption based on system needs. r(s_t) = - w_eE_t - (1-w_e)(|Z_t - z̄|_+ + |z̲ - Z_t|_+) In Eq. <ref>, the weight w_e is set to 0.1 during occupied periods to prioritize comfort, and to 1 during unoccupied periods to focus on energy savings. The energy term E_t is derived using the L-1 norm of the difference between the action (setpoint temperature) and the actual zone temperature, representing the HVAC system’s heating or cooling effort <cit.>. The comfort limits z̲ and z̄ are set to 20^∘ C and 23.5^∘ C during winter, and 23^∘ C and 26^∘ C during summer, respectively. § MOTIVATION To evaluate the predictive performance of state-of-the-art MBRL methods <cit.>, we conducted a series of EnergyPlus simulations <cit.> within a 463 m^2 building comprising five zones <cit.>. Utilizing actual 2021 TMY3 weather data from Pittsburgh, PA <cit.>, we initially focused on a single climate for preliminary experiments, later extending our analysis to encompass three distinct climate zones. Our investigation into the prediction errors of building dynamics models involved the application of the DE method detailed in <cit.>. Specifically, we examined three critical aspects: the correlation between model errors and the distribution of training data, the influence of varying training data sizes, and the temporal distribution of significant model errors. Experiment results: Figure <ref> illustrates the relationship between the training data distribution and the prediction errors of the building dynamics model. The model was trained on 120,000 time steps (3.42 years) using data collected via the default rule-based controller, across 150 epochs with a learning rate of 1e-3 to ensure convergence. After training, the model was tasked to predict the next 3000 time steps, during which we recorded the prediction errors and the initial zone states for each prediction. To enhance the analysis, model errors were categorized based on the input zone temperatures, and average errors were computed for each bin. In Figure <ref>, our analysis revealed that the DE model demonstrated significantly higher prediction errors in regions of the state space with sparse data. Particularly, this model exhibited enhanced precision when the zone state temperatures were between 15^∘ C and 25^∘ C. 
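The per-bin error analysis behind this comparison amounts to a few lines of array code. The sketch below is illustrative only: the arrays are random placeholders standing in for the logged 3000-step evaluation data, and the 1-degree bin width is an assumption rather than the exact setting used for the figure.

```python
import numpy as np

# zone_temps: initial zone temperature for each prediction;
# abs_errors: |predicted next state - ground truth|.
# Both are placeholders for the logged evaluation data.
zone_temps = np.random.uniform(10.0, 35.0, size=3000)
abs_errors = np.abs(np.random.normal(0.3, 0.2, size=3000))

bin_edges = np.arange(10.0, 36.0, 1.0)          # assumed 1-degree bins over the observed range
bin_ids = np.digitize(zone_temps, bin_edges)

# Average absolute model error per temperature bin, comparable against the data density.
mean_err_per_bin = np.array([
    abs_errors[bin_ids == b].mean() if np.any(bin_ids == b) else np.nan
    for b in range(1, len(bin_edges) + 1)
])
```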
While the average model error across all conditions was 0.29^∘ C, we observed that errors consistently exceeded 1^∘ C in scenarios where the zone temperatures rose above 32^∘ C. The highest model error recorded was 9.36× greater than the average and remarkably 15,325 × larger than the smallest model error. To understand the significant prediction errors observed in the DE models, we utilized Kernel Density Estimation (KDE) to examine the density distributions of the training data, which consisted of 10,000 transitions. The results are depicted in Figure <ref>. The blue curve illustrates the typical distribution patterns of the training data for the dynamics model in MBRL-based HVAC control. Notably, it features two distinct peaks around 17.5^∘ C and 21^∘ C, corresponding to the typical night and day temperatures, respectively. During summer, these modes often merge into a single peak. The majority of the transition data tend to cluster around these central modes, which highlights an intrinsic bias in the thermal data. This bias significantly impacts the accuracy of predictions made by data-driven dynamics models. Two Questions Arise: The first question centers on whether augmenting the size of historical data used for training models could reduce catastrophic model errors. Unfortunately, this approach does not yield the desired outcome. We trained a DE model using varied data sizes, from 10 to 1200 days, and evaluated its absolute error over the subsequent 30 days, as depicted in Figure <ref>. We deemed a model error exceeding 2°C as unacceptable because, in the building sensor domain, such a deviation is typically indicative of a sensor fault <cit.>. Despite utilizing nearly four years of data, errors surpassing 2°C were still evident; specifically, 2.1% of all predictions deviated by more than 2°C from actual temperatures. This persistent error is largely due to epistemic uncertainty rooted in intrinsic biases within the thermal data of buildings, which significantly hampers the accuracy of predictions and leads to sub-optimal actions. The second question explores whether temporal independence of model errors allows the thermal system to withstand brief controller discrepancies. Regrettably, this is not feasible. To investigate, we utilized the DE model to analyze each of the 3000 time steps (representing one year) from previous experiments, recording model errors and specifically those exceeding 2^∘ C. We reviewed these errors chronologically to determine the intervals between successive significant errors, plotting the cumulative density function of these intervals. This experiment was replicated with the DE model trained on datasets of varying sizes, from 20 to 1200 days, to observe changes related to training data volume. The results, depicted in Figure <ref>, indicate that a predominant majority (68% to 89%) of error intervals are zero, suggesting that substantial model errors tend to occur in sequences. This clustering of high errors, if overlooked, could result in prolonged periods of malfunction in the thermal control system. Our Key Idea: Based on our observations, our primary objective is to mitigate the negative impacts of high model errors caused by distribution bias. Instead of refining the model or training process—which may be impractical—we have developed a procedure to preemptively alert the controller about potential high errors in the current time step. 
Essentially, our system anticipates the uncertainty in the model's predictions by analyzing its training data and the current input. When a high-error state-action pair is identified, we either disregard the prediction in favor of more reliable alternatives, or, if no prediction is deemed sufficiently reliable, we allow the building’s default controller to take over, thus addressing the lack of data in that region of the state space. § THE DESIGN OF CLUE In this section, we detail the architecture of CLUE, focusing on problem formulation, dynamic modeling of the system using GP, and confidence-based MPPI control approach. §.§ Overview Figure <ref> illustrates the comprehensive framework of CLUE. At a high level, CLUE integrates two key components: a dynamic model of building systems and a control mechanism based on MPPI. Our dynamic model employs a GP to predict the future state of a building's HVAC system based on its current state, outputting both the predicted next state and a corresponding confidence interval. These predictions and their confidence intervals inform the MPPI controller's decisions, enabling it to select the most suitable actions. The workflow of CLUE is initiated by conducting meta-kernel learning (detailed in Section <ref>), which establishes the GP prior using reference data from other buildings or simulations, without prior knowledge of the target building ℬ. This learning phase helps adapt the GP kernel based on externally sourced data. Once sufficient data from the target building ℬ is accumulated, the model fine-tunes the GP based on this building-specific data, incorporating the initial learned GP prior. This tailored GP model then serves as the dynamic model for the building. The system sets an uncertainty threshold ϵ within the MPPI algorithm to manage decision-making risks. During deployment, CLUE executes its confidence-based control strategy by generating multiple MBRL prediction trajectories using the GP model. It disregards any trajectory where the initial time step's uncertainty surpasses the threshold ϵ. If all trajectories are discarded, a default action is dispatched to the building's actuator system. Conversely, if suitable trajectories remain, the MPPI leverages these to determine the optimal sequence of actions a(·), dispatching the first in this sequence to the actuator. Following action deployment, CLUE stops for a predefined interval (e.g., 15 minutes) to acquire new state data, which it then integrates into the historical dataset 𝒟_ℬ, thereby readying the system for the next control cycle. §.§ Modeling Building Dynamics with GP We model the dynamics of the building using GP, interpreting the variance of each prediction as an indication of uncertainty—higher variance indicates higher uncertainty. The system's state, which includes environmental conditions (outdoor air temperature, humidity, occupancy, etc.) and zone-specific variables (zone air temperature), is consolidated into a state vector s_t ∈𝒮. This state vector is then combined with the action vector a_t ∈𝒜, which includes settings like heating and cooling setpoints, to form the input variable denoted as x ∈𝒳 = 𝒮×𝒜. The output variable is the subsequent zone state s_t+1. For simplicity and consistency, the input and output for each instance are denoted as x_i and y_i, respectively, with the historical dataset defined as 𝒟 = (X, Y) = {(x_i,y_i)}_i=1^n. Modeling with GP involves a two-stage process. 
It starts from a pool of candidate functions (GP prior) and computes beliefs conditioned on training data (GP posterior). We will introduce both stages in the following. First, we select a function that intuitively quantifies the similarity between two inputs, denoted as k: 𝒳×𝒳→ℝ. This function is the GP kernel. The idea is that buildings under similar conditions performing similar actions should transition to similar future states. Thus, if k(x_1, x_3) > k(x_2, x_3), it suggests that y_1 is more relevant to predicting y_3 than y_2 is. Notably, each variable in our inputs contributes differently to the similarity to other inputs, depending on their level of impact according to the unknown dynamics function. The kernel function accommodates these differences using adjustable parameters, known as hyperparameters. CLUE employs the Radial Basis Function (RBF) kernel, known for its expressive power in modeling complex systems such as building thermal dynamics, as represented in Eq. <ref> and discussed in <cit.>. The hyperparameters are θ: {θ_scale, Θ}, where θ_scale is a scalar, and Θ is an 8×8 matrix. Overall, these parameters include a total of 65 real numbers. The selection of these parameters is critical as they significantly impact both the modeling accuracy and the uncertainty estimation accuracy. The second stage involves fitting the data and making predictions. Using the exact regression technique, we first compute the covariance matrix K := k(X, X), and then calculate the GP posterior using Eq. <ref> and Eq. <ref>. Here, I represents the identity matrix, and σ_n^2 is the noise variance (set to zero by default) to account for aleatoric uncertainty. From this posterior, the prediction of y_* at x_* follows a Gaussian distribution as specified in Eq. <ref>. k(x,x')=θ_scaleexp(-1/2(x-x')^⊤Θ^-2(x-x')) m_post(x) = m(x)+k(x, X)(K+σ_n^2I)^-1(y-m(X)) k_post(x,x') = k(x, x')-k(x, X)(K+σ_n^2I)^-1k(X, x') 𝒢𝒫(x_*) = 𝒩(m_post(x_*), k_post(x_*,x_*)) The prediction mean m_post(x_*) represents the most likely model outcome, while the variance k_post(x_*,x_*) indicates the level of uncertainty associated with the model outcome given data 𝒟 and input x_*. This variance, serving as the indicator of epistemic uncertainty, is higher if x_* is in a data-scarce region of the input space 𝒳, emphasizing areas of greater uncertainty and guiding decision-making under these conditions. To clearly understand the GP model integration process, we outline the steps involved: Steps for GP Model Integration: * Input Representation: Consolidate state and action variables into input vectors x_t ∈𝒳 = 𝒮×𝒜, where 𝒮 represents state space and 𝒜 represents action space. * Kernel Function Selection: Employ the RBF kernel for its expressive power in modeling complex building dynamics. * GP Training: Use historical data 𝒟 = {(x_i, y_i)}_i=1^n to train the GP model, optimizing kernel hyperparameters through meta-kernel learning. * Prediction and Uncertainty Estimation: For a given input x_*, predict the next state as a Gaussian distribution 𝒢𝒫(x_*) = 𝒩(m_post(x_*), k_post(x_*, x_*)), where m_post(x_*) is the mean and k_post(x_*, x_*) is the variance indicating prediction uncertainty. The steps for GP model integration provide a structured approach to implementing the GP model within CLUE. The next section will discuss the meta-kernel learning technique, which is essential for optimizing the GP model's hyperparameters to enhance prediction accuracy and uncertainty estimation. 
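To make the integration steps above concrete, below is a minimal GPyTorch sketch of such an exact-GP dynamics model with a scaled RBF kernel. It is a simplified illustration rather than the exact implementation: the ARD lengthscales correspond to a diagonal Θ rather than the full 8×8 matrix, the tensors are random placeholders for the historical state-action transitions, and the kernel hyperparameters are assumed to be supplied by the learning procedure described in the next subsection.

```python
import torch
import gpytorch

class BuildingDynamicsGP(gpytorch.models.ExactGP):
    """Exact GP over 8-dim state-action inputs, predicting the next zone temperature."""
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # Scaled RBF kernel: ScaleKernel supplies theta_scale; the per-dimension (ARD)
        # lengthscales play the role of a diagonal Theta in the kernel equation.
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(ard_num_dims=8)
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

# Placeholder data: n historical transitions, 8 input dims, next zone temperature as target.
train_x, train_y = torch.randn(700, 8), torch.randn(700)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = BuildingDynamicsGP(train_x, train_y, likelihood)

# Prediction step: posterior mean and variance at new state-action pairs
# (the m_post and k_post quantities above).
model.eval(); likelihood.eval()
test_x = torch.randn(5, 8)
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    posterior = model(test_x)              # latent (noise-free) GP posterior
    mean, variance = posterior.mean, posterior.variance
```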
§.§ Meta Kernel Learning To model building dynamics with GP effectively, we optimize the kernel parameters' initialization using data from a varied collection of reference buildings. For this purpose, we have developed a novel meta kernel learning method that integrates meta-learning <cit.> with kernel learning <cit.>. With a given set of training data 𝒟:{X, Y}, kernel learning optimizes the kernel parameters through gradient descent. It defines the loss function as the mean squared error (MSE) between the GP model's predictions and the actual data, ℒ(𝒢𝒫_θ_k) = MSE(𝒢𝒫_θ_k(X), Y). The process involves calculating the gradient of the loss with respect to the kernel parameters, ∇_θ_kℒ(𝒢𝒫_θ_k), and updating the parameters accordingly. This procedure repeats for n iterations until a set of kernel parameters with minimized model error is obtained. Meta-learning <cit.> enhances an agent's ability to learn from a variety of tasks, allowing for quick adaptation to new tasks. This method moves beyond the standard optimization of model parameters, traditionally expressed as: θ_k^* = arg min_θ_kℒ(𝒢𝒫_θ_k). Instead, meta-learning focuses on fine-tuning the parameters to reduce the aggregated loss across multiple tasks, which can be formalized as: θ_k^* = arg min_θ_k∑_iℒ_𝒯_i(𝒢𝒫_θ_k) Here, 𝒯_i represents an individual task, and the aim is to minimize the overall model error by optimizing across a diverse set of tasks. Our meta kernel learning process fine-tunes the GP kernel hyperparameters, θ_k, through a tailored kernel learning approach. We redefine tasks typically associated with MDPs to focus on the modeling of building data over specific periods. For example, the task might involve optimizing the GP model's accuracy using building data from Pittsburgh over the summer months, encapsulated by the model loss function in Eq. <ref>. In this equation, X_𝒯_i represents the input features (e.g., historical temperature readings, occupancy levels) and Y_𝒯_i denotes the corresponding output targets (e.g., future temperature readings) for task 𝒯_i. This equation quantifies the discrepancy between GP predictions and actual observations when applying the optimized kernel hyperparameters, θ_k. The essence of our meta-learning approach is to minimize this model error aggregated over an entire suite of such tasks, collectively denoted by p(𝒯). The main objective function, aiming at comprehensive error reduction, is outlined in Eq. <ref>. By executing this strategy, we systematically enhance the kernel's initial parameter set, enabling more precise subsequent adaptations to the specific thermal dynamics of different buildings. ℒ_𝒯_i(𝒢𝒫_θ_k) = MSE(𝒢𝒫_θ_k(X_𝒯_i), Y_𝒯_i) θ_k^* = arg min_θ_k∑_i=1^nℒ_𝒯_i(𝒢𝒫_θ_k)|_𝒯_i∼ p(𝒯) To clearly understand the meta-kernel learning process, we outline the steps involved: Steps for Meta-Kernel Learning: * Data Collection: Gather extensive datasets from various reference buildings, capturing diverse thermal dynamics. * Meta-Learning Initialization: Randomly initialize the GP kernel parameters θ_k. Utilize these reference datasets to further refine the initial parameters. Define tasks 𝒯_i for different building data subsets. * Kernel Parameter Optimization: For each task, compute the loss ℒ_𝒯_i(𝒢𝒫_θ_k) = MSE(𝒢𝒫_θ_k(X_𝒯_i), Y_𝒯_i) and update kernel parameters through gradient descent. * Meta-Optimization: Aggregate losses across tasks and perform meta-optimization to refine kernel parameters, minimizing the overall model error. 
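A schematic, first-order sketch of this aggregated-task objective is given below, using a small hand-rolled differentiable GP posterior mean so that the gradient path through the kernel hyperparameters is explicit. The task sampler is a synthetic placeholder for the reference-building datasets, each task is scored by MSE on a held-out half of its data (one plausible reading of the task loss), and the per-task inner adaptation step with step size α is omitted; the full algorithm presented next may differ in its exact update schedule.

```python
import torch

def rbf_kernel(x1, x2, log_scale, log_lengthscales):
    # ARD RBF kernel (diagonal Theta); the full-matrix form is a straightforward extension.
    diff = (x1.unsqueeze(1) - x2.unsqueeze(0)) / log_lengthscales.exp()
    return log_scale.exp() * torch.exp(-0.5 * (diff ** 2).sum(-1))

def gp_posterior_mean(fit_x, fit_y, query_x, log_scale, log_lengthscales, jitter=1e-2):
    K = rbf_kernel(fit_x, fit_x, log_scale, log_lengthscales) + jitter * torch.eye(len(fit_x))
    K_star = rbf_kernel(query_x, fit_x, log_scale, log_lengthscales)
    return K_star @ torch.linalg.solve(K, fit_y)

def sample_reference_tasks(num_tasks=4, n=64, dim=8):
    # Synthetic stand-in for per-building, per-season datasets (X_Ti, Y_Ti).
    for _ in range(num_tasks):
        x = torch.randn(n, dim)
        yield x, x[:, 0] + 0.1 * torch.randn(n)

# Shared kernel hyperparameters (theta_scale and lengthscales), optimized across tasks.
log_scale = torch.zeros((), requires_grad=True)
log_lengthscales = torch.zeros(8, requires_grad=True)
meta_opt = torch.optim.Adam([log_scale, log_lengthscales], lr=1e-3)  # outer (beta-like) step size

for meta_iter in range(100):
    meta_opt.zero_grad()
    meta_loss = torch.zeros(())
    for task_x, task_y in sample_reference_tasks():
        half = len(task_x) // 2   # fit on one half of the task, score MSE on the other half
        pred = gp_posterior_mean(task_x[:half], task_y[:half], task_x[half:],
                                 log_scale, log_lengthscales)
        meta_loss = meta_loss + torch.mean((pred - task_y[half:]) ** 2)
    meta_loss.backward()
    meta_opt.step()
```

The resulting hyperparameters serve as the initialization that is later fine-tuned on the small target-building dataset.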
Our meta kernel learning approach is detailed in Algorithm <ref>, where kernel parameters are refined using gradients from multiple tasks, thus enhancing the kernel's general applicability and avoiding overfitting to a single data set. For the step sizes of hyperparameters, we have chosen α=β=1e-3. This conservative step size slows the training pace but is essential for ensuring convergence in our experiments. The resulting set of kernel parameters forms the initial configuration for the GP model, which is then fine-tuned on target building data. After applying meta-learning, we fine-tune the pre-trained kernel with a small data subset from the target building, using standard kernel learning methods to achieve a tailored building-specific kernel. The integration of meta-kernel learning into the GP model significantly enhances its performance by optimizing the kernel parameters using diverse datasets. The next section will discuss the confidence-based control approach, which utilizes these GP models to make informed control decisions. §.§ Confidence-based Control CLUE employs online planning through MPPI control to determine actions. Starting from the current building state s_t at time t, and considering a prediction horizon H, CLUE utilizes the GP-based dynamics model 𝒢𝒫(x_*) to predict the future states s_t:t+H based on the proposed action sequence a_t:t+H = {a_t, …, a_t+H}. At each timestep t, the MPPI controller implements the initial action a_t from this sequence. The accuracy of the first predicted state in the sequence is crucial, as it greatly influences the overall performance of the control system. To ensure reliability, the model's uncertainty estimation, which does not always align directly with actual model error, is converted into a tangible model error threshold. This conversion is performed by evaluating the model against historical data. Subsequently, CLUE screens and excludes any prediction trajectories where the uncertainty exceeds this predefined threshold, ensuring that only the most reliable trajectories influence control decisions. To better understand how CLUE implements this process, we outline the steps for the MPPI control approach: Steps for MPPI: * Trajectory Generation: Generate multiple action trajectories from the current state s_t, each predicting future states s_t:t+H based on the GP dynamics model. * Uncertainty Filtering: Exclude trajectories where the initial time step's uncertainty exceeds a predefined threshold ϵ, ensuring only reliable trajectories are considered. * Trajectory Evaluation: For each remaining trajectory, compute the objective function ∑_t=1^Hγ^t(r(x_t)-λσ(x_t)), balancing rewards and uncertainties. * Action Selection: Select the trajectory with the highest objective value and execute the first action a_t. §.§.§ Uncertainty Threshold Translation In our GP dynamics model, uncertainty at a given input x is denoted by σ(x). We define a model error threshold e^* (in degrees Celsius) and an uncertainty flagging threshold ϵ∈ℝ^+. If σ(x) > ϵ, the state-action pair is flagged as having high model error. The goal of our threshold translator is to optimize the identification of true model errors greater than e^*, effectively maximizing the detection of true positives and true negatives. To determine the optimal ϵ, we utilize an offline procedure that leverages historical data from the target building, 𝒟:{X, Y}. 
The GP model, 𝒢𝒫, predicts each input x_i ∈ X, yielding results (μ⃗, σ⃗) = 𝒢𝒫(X), where μ⃗ represents predicted states and σ⃗ signifies associated uncertainties. The absolute model errors e⃗ = |Y - μ⃗| are calculated, testing the model's accuracy against historical data. For all prediction pairs {(μ, σ)}_n, we solve the following optimization problem: minimize: count(|y_i - μ_i| < e^*)-count(|y_i - μ_i| > e^*) s.t. σ_i > ϵ By setting ϵ, we aim to maximize the model's accuracy in classifying errors larger than e^*, ensuring that σ > ϵ accurately reflects high model errors. This objective aligns with our goal to establish the most effective uncertainty threshold ϵ, which will be employed in subsequent predictions, using prediction accuracy as a surrogate measure to evaluate the optimization outcome. To provide HVAC engineers and building managers with greater control, we introduce the following mechanism: Uncertainty threshold control knob. Our method accommodates any model error threshold, e^*∈ℝ^+, as specified by HVAC engineers. Each expected model error threshold is systematically converted into an uncertainty threshold. CLUE performs this translation offline to determine the optimal flagging threshold, which is then utilized in the confidence-aware MPPI control process. This introduces an uncertainty threshold control knob, enabling HVAC engineers or building managers to set the maximum acceptable model error. Consequently, a lower threshold leads CLUE to adopt more conservative actions, resulting in fewer violations but possibly higher energy usage. Conversely, a higher threshold encourages CLUE to take bolder actions to conserve energy, which may increase the rate of violations. The impact of adjusting the uncertainty threshold control knob will be empirically evaluated in our upcoming experiments, as detailed in Section <ref>. §.§.§ Confidence-Aware MPPI During each planning cycle, the MPPI controller generates multiple trajectories. CLUE evaluates the uncertainty of the initial time step for each trajectory, comparing it against a predefined model error threshold established by the uncertainty threshold translator. Trajectories that exceed this threshold are flagged and subsequently discarded. This process ensures that only trajectories with acceptable uncertainty levels are considered for action selection, effectively removing those with high uncertainty from decision-making. After filtering trajectories based on their initial uncertainties, the confidence-based MPPI selects the optimal trajectory from those that remain. While these trajectories have low initial uncertainties, it is crucial to consider uncertainties at future time steps as they impact the accuracy of predicted future rewards. CLUE integrates this aspect into the optimization of the objective function, detailed in Eq. <ref>, by using λ to balance the trade-offs between uncertainty and reward. Specifically, Eq. <ref> is designed to maximize the sum of discounted rewards while minimizing the sum of discounted uncertainties across the prediction horizon. This approach recognizes that as the controller rolls out trajectories step by step, the significance of initial model accuracy gradually decreases. Therefore, each uncertainty value is discounted by a rate γ, reflecting its diminishing influence over time. This adjustment to the objective function allows the controller to optimize simultaneously for high rewards and low uncertainties, enhancing decision-making under uncertainty. 
a(·)^* = arg max_a(·)∑_t=1^Hγ^t(r(x_t)-λσ(x_t)) a_new = a_prev+∑_k=1^K δ a_kexp(1/η∑_t=1^H γ^t (r(x_t)-λσ(x_t)))/∑_k=1^K exp(1/η∑_t=1^H γ^t (r(x_t)-λσ(x_t))) Integrating the newly designed optimization objective function into MPPI controllers is both straightforward and broadly applicable. For example, we demonstrate this process with the MPPI controller used in recent HVAC control research <cit.>. MPPI determines the optimal action sequence by evaluating expected rewards across randomly generated trajectories, then calculates the weighted sum of these trajectories’ action sequences using exponential weighting based on the cumulative discounted rewards <cit.>. A standard MPPI implementation for HVAC control might use a reward function as described in Eq. <ref>, where R = ∑_t=1^H γ^t r(s_t), and H is the length of the prediction horizon. To adapt our optimization method, we modify this reward function to align with the objective outlined in Eq. <ref>. The adaptation results in Eq. <ref>, where a represents an action sequence, K the number of trajectories, δ a_k the action perturbation, and η a hyperparameter. Eq. <ref> illustrates how the weighted exponential sum of action sequences is now calculated with respect to both the discounted rewards and their associated uncertainties. This method of reward evaluation can be similarly adopted by other MBRL controllers by substituting their reward functions with our proposed formulation. §.§.§ Fallback Mechanism In situations where all trajectories generated by CLUE exhibit high uncertainty, the system defaults to a rule-based controller. This default controller, commonly used in current HVAC systems, offers conservative and reliable actions, ensuring safety and dependability when predictive confidence is low. § EVALUATION We conducted two comprehensive sets of experiments to thoroughly evaluate CLUE, focusing on different aspects of its performance. The first set of experiments assessed the modeling accuracy of GP equipped with meta-kernel learning. This stage also tested the reliability of the uncertainty prediction mechanisms provided by the fallback strategy, critical for managing unpredictable scenarios. The second set of experiments involved deploying CLUE within various simulated environments, where its performance was rigorously compared against current state-of-the-art solutions to determine its effectiveness in realistic settings. §.§ Experiment Setting §.§.§ Platform Setup Our experimental platform was carefully chosen to ensure high fidelity and effective integration of software tools. We employed EnergyPlus <cit.>, a sophisticated simulation tool widely recognized for its accuracy in modeling building energy systems. For the deep learning components, PyTorch <cit.> was selected for its flexibility and robustness in handling complex neural network architectures. We utilized GPyTorch <cit.>, an extension to PyTorch that provides advanced capabilities for GP computations, particularly for meta-kernel learning with GPU acceleration. This setup allowed for rapid iterations and real-time processing. Sinergym <cit.>, serving as the virtual testbed, enabled seamless interaction between CLUE and the EnergyPlus environment. It managed the exchange of actions and state observations effectively, ensuring that the decision-making process of CLUE was both reactive and informed based on real-time data feedback. 
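Before moving on to the implementation details, the sketch below illustrates in plain NumPy two pieces of the controller described above: the offline translation of a model-error threshold e^* into a GP-variance threshold ϵ (scored, equivalently to the optimization stated earlier, as true minus false positives among flagged samples), and the uncertainty-penalized, exponentially weighted action update. Array shapes, the candidate-threshold grid, and the hyperparameter values are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def translate_threshold(abs_errors, sigmas, e_star):
    """Pick the GP-uncertainty threshold eps that best separates errors above/below e_star.

    abs_errors: |y_i - mu_i| on historical data; sigmas: the corresponding GP uncertainties.
    Each candidate eps is scored by (true positives - false positives) among flagged samples.
    """
    high_error = abs_errors > e_star
    best_eps, best_score = None, -np.inf
    for eps in np.unique(sigmas):                 # candidate thresholds from observed uncertainties
        flagged = sigmas > eps
        score = np.sum(flagged & high_error) - np.sum(flagged & ~high_error)
        if score > best_score:
            best_eps, best_score = eps, score
    return best_eps

def confidence_weighted_update(a_prev, delta_a, rewards, sigmas, gamma=0.99, lam=1e-2, eta=1.0):
    """Uncertainty-penalized MPPI update over K sampled trajectories.

    a_prev: (H, A) previous action sequence; delta_a: (K, H, A) sampled perturbations;
    rewards, sigmas: (K, H) predicted per-step rewards r(x_t) and GP uncertainties sigma(x_t).
    """
    H = rewards.shape[1]
    discounts = gamma ** np.arange(1, H + 1)
    scores = ((rewards - lam * sigmas) * discounts).sum(axis=1)   # sum_t gamma^t (r - lam * sigma)
    weights = np.exp((scores - scores.max()) / eta)               # max subtracted for stability
    weights /= weights.sum()
    return a_prev + np.tensordot(weights, delta_a, axes=1)        # exponentially weighted sum
```

In the full controller, trajectories whose first-step uncertainty exceeds ϵ are discarded before this weighting, and the default rule-based action is issued if none remain.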
All components used in our experiments are open-source, which promotes transparency and allows other researchers to replicate our results easily or extend them in future studies. §.§.§ Implementation details Throughout our experiments, we maintained consistent hyperparameters to ensure the reliability of our results. For DE models, we set the epochs to 150, the learning rate to 1 × 10^-3, and the weight decay to 1 × 10^-5. For GP kernel learning, we set the number of iterations to 800 with a learning rate of 1 × 10^-2, and for fine-tuning the GP kernels, we used 200 iterations with the same learning rate. MSE served as the loss criterion, and we used the Adam optimizer for all training processes. In the context of meta-kernel learning, the GP model was trained on data spanning 35,040 time steps, equivalent to one year, from each reference building. For the MPPI controller, we adhered to the optimal hyperparameter configuration previously tested, with a sample number of 1000 and a horizon of 20 steps <cit.>. For the confidence-based MPPI, we set λ to 1 × 10^-2. §.§.§ Environment selection Our simulation was conducted on a building with an area of 463 m^2 and five zones, as outlined in Sinergym <cit.>. We selected three climate-distinct cities in the United States—Pittsburgh, Tucson, and New York—for the experiments, which took place from January 1st to January 31st and July 1st to July 31st. Each city was chosen to represent a different climate type, according to ASHRAE standards: Pittsburgh features a continental climate (ASHRAE 4A), Tucson a hot desert climate (ASHRAE 2B), and New York a humid continental climate (ASHRAE 4A) <cit.>. We used actual 2021 TMY3 weather data for these cities <cit.>. This selection strategy was intended to ensure the generalizability of CLUE across diverse climate conditions, providing a robust test of its applicability in varied regional settings. To ensure a comprehensive evaluation, we adopted specific parameters and evaluation metrics tailored to capture different aspects of CLUE's performance. §.§.§ Performance Metrics We use two different sets of performance metrics to evaluate the uncertainty estimation accuracy and building control efficiency respectively. 1) Metrics for the uncertainty estimation: * Accuracy: the sum of true positive and true negative results divided by the total number of results. Higher accuracy indicates that the model makes fewer mistakes in total. * Precision: the number of true positive results divided by the number of all positive results, including those not identified correctly. Higher precision indicates that the model is more efficient, i.e. the model wastes less control steps from falsely flagging low model error predictions. * Recall: the number of true positive results divided by the number of all samples that should have been identified as positive. Higher recall indicates that the model is safer, i.e. it correctly flags a larger portion of all high model error predictions. Note that we did not use the F_1 score metric due to its equal weighting of precision and recall <cit.>, which does not align with our control system's objective. Instead, we directly logged the precision and recall values for informative data representation. 2) Metrics for building control efficiency: * Cumulative reward: the weighted sum of the building system rewards calculated according to Eq. <ref>. Higher cumulative reward indicates overall better constraint compliance and energy efficiency. 
* Violation rate: the ratio of time steps where zone temperature violates comfort constraints to the total number of time steps. A lower violation rate indicates improved constraint compliance and safer operation of the HVAC system. * Energy consumption: total energy consumption in kWh. With a low violation rate, lower total energy consumption indicates higher energy efficiency. §.§.§ Baselines To evaluate our method's performance, we benchmarked it against several baselines in two key areas: uncertainty estimation accuracy and building control efficiency. Uncertainty Estimation Accuracy We compared our method's uncertainty estimation capabilities with the following models: * DE: Utilizes an ensemble of five neural networks. Although the original study <cit.> does not focus on uncertainty estimation, we adopted the uncertainty measurement method described by Lakshminarayanan et al. <cit.>. Here, uncertainty is quantified as the variance of the predictions from the ensemble, calculated using the formula: σ_*^2(x) = 1/M∑_m=1^M μ_θ_m^2(x) - μ_*^2(x), where M=5 represents the number of models, μ_θ_m(x) denotes the prediction of model m, and μ_*(x) is the average prediction across all models. * GP: Employs a GP method with kernel learning, using an uncertainty estimation approach identical to ours. This represents a standard method for regression in building thermal dynamics. Building Control Efficiency For evaluating building control efficiency, we benchmarked against the following systems: * Rule-based: This is the default controller provided by the simulation environment, serving as a basic comparison point. * DE-MBRL <cit.>: An MBRL system that integrates a DE with MPPI control. * CLUE w/o Confidence-Based control (CB): Our system, CLUE, excluding the confidence-based control feature, to evaluate the impact of this component on overall system performance. The comparison criteria focused on evaluating the precision, recall, and accuracy of uncertainty estimation as well as the cumulative reward, violation rate, and energy consumption for control efficiency. This comprehensive evaluation ensured a holistic understanding of CLUE's performance. §.§ Modeling and Uncertainty Estimation §.§.§ Modeling Accuracy We evaluated the performance of our solution, GP with meta-kernel learning (denoted as GP-M in Figure <ref>), against two baseline methods: the DE and a standard GP. The DE model was trained using 2,000 time steps, equivalent to 20.83 days of data from the target building. Similarly, the GP model was trained and fitted using the same dataset. The GP-M approach commenced with meta-kernel learning before being fine-tuned and fitted to the same data as the DE model. Our findings reveal that GP-M achieved a mean absolute model error that was 20.7% lower than that of the DE model and 85.1% less than that of the standard GP model. In all tested environments, GP-M consistently outperformed the GP model and surpassed the DE model in five of the six environments. These results underscore the effectiveness of meta-kernel learning in providing a robust initialization for kernel parameters, thereby significantly enhancing the modeling performance of the GP approach. §.§.§ Uncertainty Estimation Accuracy In this experiment, we evaluated the uncertainty estimation accuracy of GP-M compared to other baseline methods. We integrated our fallback mechanism (described in Section <ref>) into all three models to measure their capability to accurately identify model errors exceeding 1^∘ C. 
This threshold is stricter than the typical sensor fault tolerance of 2^∘ C <cit.>. We then assessed the accuracy, precision, and recall of each model over the following 30 days of the training dataset. To evaluate the stability of each method, we trained new models and repeated the experiment five times, calculating the mean and standard deviation of the results. These results are detailed in Table <ref>, with the best performances highlighted in bold for each metric across different environments. We found that GP-M consistently outperformed both baselines in terms of overall accuracy. Notably, GP-M had a higher recall but lower precision compared to GP, indicating that while GP-M effectively identified the most significant model errors, it also mistakenly flagged some minor errors. This cautious approach ensures that GP-M minimizes the risk of overlooking critical errors larger than the threshold. Regarding stability, both the standard GP and GP-M exhibited minimal instability, with variations less than 0.5% across all metrics. In contrast, the DE model displayed considerable instability during the experiments, with a standard deviation reaching up to 12% for recall. This instability is attributed to the random nature of parameter initialization and the inherent variability in the training process of neural network models. Conversely, GP-based methods showed a stronger resistance to such randomness, underscoring their reliability in uncertainty estimation. §.§.§ Model Convergence We conducted experiments to determine the convergence time step for CLUE and its DE-MBRL counterpart in terms of cumulative reward, as depicted in Figure <ref>. In these tests, DE-MBRL was trained offline with varying amounts of data from the target building and subsequently used to control a 5-zone building in Pittsburgh during January. Conversely, CLUE employed meta kernel learning followed by fine-tuning with different data volumes from the same building. The GP model was fitted to 700 time steps of this building data (aligning with the data size used in <cit.>) to balance effective modeling with computation efficiency. For setting the model error threshold e^*, we conducted an exhaustive search to determine the optimal threshold, ranging from 0.5^∘ C to 3^∘ C. The results showed that DE-MBRL required 50 days of offline training data to exceed the performance of the default controller, and an additional 250 days to achieve peak performance. In contrast, CLUE's performance stabilized after just 7 days of data, with no further improvements in control performance with additional training. Based on this finding, we limited CLUE to 7 days of data for subsequent experiments. For subsequent experiments, DE-MBRL was trained on 120,000 time steps of data (approximately 3.42 years), collected using the default PID controller from the target building. CLUE followed the same training regimen as initially outlined. The model error threshold was set at 0.5^∘ C. Using these settings, we then evaluated CLUE and baseline methods for controlling the building's HVAC system, focusing on thermal comfort and energy efficiency, which are discussed in the following subsections. §.§ Building Control §.§.§ Thermal Comfort In our analysis, we quantitatively assessed the thermal comfort by examining the violation rates of various control methods, as depicted in Figure <ref>. 
Our study reveals that CLUE demonstrates superior performance over baseline methods across all tested locations—Pittsburgh, Tucson, and New York—as well as in the aggregated average. Notably, even when leveraging a GP-M dynamics model with a higher model error, as indicated by our model error comparison in Figure <ref>, CLUE secured lower violation rates than the DE-MBRL approach. This outcome underscores CLUE's robustness and its ability to generate effective control actions despite the presence of model inaccuracies. In contrast, the variant of CLUE w/o CB underperformed relative to DE-MBRL. This disparity was anticipated and illustrates the significant role that CB plays in the efficacy of CLUE. The inclusion of CB helps to mitigate the impact of model inaccuracies, highlighting the importance of uncertainty quantification in achieving high-quality control actions. Through this experiment in simulated environments that emulate diverse climatic conditions, CLUE not only achieves a balance between energy efficiency and occupant comfort but also exhibits remarkable data efficiency. This is crucial for practical applications where extensive data collection is often challenging and costly. §.§.§ Energy Consumption Figure <ref> compares the energy consumption of CLUE with other established methods. Our analysis found that CLUE tends to consume slightly more energy than methods without a confidence-based mechanism. This increase is linked to the fallback mechanism, which, when active, consumes energy comparable to a traditional rule-based system. Despite this, the energy use of CLUE aligns well with other MBRL methods. Importantly, the energy CLUE used during the fallback phase contributes to maintaining a high level of thermal comfort. Thus, CLUE is more effective in providing comfortable conditions without significantly increasing energy use compared to the DE-MBRL approach. The advantage of CLUE is its efficiency in offering more periods of comfort for every kilowatt-hour of energy used, making it a strong candidate for applications where occupant comfort is a critical factor. This efficient trade-off between energy consumption and comfort makes CLUE particularly well-suited for use in settings that prioritize a comfortable environment. Its capability to sustain high comfort levels with a modest uptick in energy demand recommends it for use in locations like healthcare facilities, where comfort is essential, and residential areas that are looking to improve the well-being of residents while managing energy costs effectively. §.§.§ Effect of confidence-based control Our study showed CLUE's performance with its fallback mechanism against CLUE without it (denoted as CLUE w/o CB), as depicted in Figures <ref> and <ref>. CLUE markedly reduced comfort violations compared to all baselines, demonstrating the fallback mechanism’s ability to handle model inaccuracies. Notably, CLUE w/o CB outperformed the DE-MBRL model in two out of the three cities tested, suggesting that CLUE can be effective even without the fallback under certain conditions. In terms of energy usage, CLUE with the fallback mechanism showed a slight increase when compared to CLUE w/o CB, attributable to the fallback's engagement during uncertainty, as shown in Figure <ref>. Despite this, CLUE remains efficient, balancing energy consumption with the assurance of maintained comfort levels. The fallback mechanism's contribution to CLUE’s robust performance across various environmental conditions is evident. 
It allows the system to manage uncertainty intelligently and maintain high control standards. The results support the incorporation of confidence measures into control strategies, significantly strengthening HVAC system performance by providing a calculated balance between energy efficiency and user comfort. §.§.§ Effect of confidence threshold control knob This section examines how adjusting the confidence threshold—a key parameter in the confidence-based controller—affects the performance of CLUE. We tested CLUE with varying confidence thresholds in the same environments used in prior experiments to capture the threshold's impact. Our evaluation comprises two parts: first, assessing the empirical results from translating uncertainty thresholds (see Section <ref>), and second, observing how these translated thresholds influence control outcomes in simulated EnergyPlus environments. Empirical Results from Uncertainty Threshold Translation: Continuing from the previous experiments, after fine-tuning the GP kernel over 7 days of data to obtain the thermal dynamics model, we utilized an uncertainty threshold translation module. This module interprets a set threshold for expected model error into corresponding uncertainty values produced by the GP model. To mimic real-world conditions where only a limited dataset is available, we employed the same 7-day historical dataset, initially used for model fine-tuning, to facilitate the translation process. We processed a range of expected model error thresholds from 0.5^∘ C to 3^∘ C, incrementing in steps of 0.5^∘ C. For each increment, the translation module identified a GP variance threshold that maximized classification accuracy for identifying the predetermined model error. These translation accuracies are summarized in Table <ref>. A notable observation was the positive correlation between the GP variance thresholds and the expected model error thresholds. Additionally, higher thresholds yielded better classification accuracy. This increase in accuracy aligns with the skewed distribution of the 7-day dataset, where sample counts dwindle at elevated error thresholds, leading to a smaller validation set and, consequently, an apparently higher accuracy. Impact of Varying Thresholds on Control Performance: Our investigation into the impact of different GP variance thresholds on the control performance of CLUE utilized three specific values: 0.3, 0.8, and 1.1. These thresholds were applied in simulations of CLUE managing buildings located in Pittsburgh, Tucson, and New York, as shown in Figure <ref>. A clear inverse relationship was observed: lower GP variance thresholds, indicating tighter control, corresponded with a decrease in comfort violations but an uptick in energy consumption. Conversely, higher thresholds, suggesting a more relaxed control approach, resulted in increased comfort violations but reduced energy usage. This inverse trend was consistently observed across all three cities, indicating a robust pattern. In Pittsburgh, the violation rate saw a significant decrease from approximately 0.11% to under 0.095%, while energy consumption rose from about 1124 kWh to nearly 1134 kWh as the threshold tightened. Tucson displayed the most dramatic shift with a reduction in violation rates, accompanied by the sharpest increase in energy demand. In New York, the adjustment in thresholds also resulted in a marked improvement in comfort at the cost of higher energy requirements. 
The consistency of these trends across varied climates suggests that CLUE's fallback mechanism is responsive to the threshold settings, enabling a tunable balance between energy efficiency and thermal comfort. §.§.§ Analysis of the control performance gain of CLUE To investigate the reasons behind the building control performance gain of CLUE compared with the baseline method, we plotted the data from one day of our simulation experiment for CLUE and CLUE w/o CB in Figure <ref>. The simulation scenario was in January, so only the heating temperature setpoint was displayed. Out of the 85 time steps shown in the figure, 26 time steps had the fallback mechanism activated, i.e., the control action was overridden by the default controller about 30% of the time. During a one-day period, an ideal controller is expected to keep the zone temperature within the comfort range during the occupied times and let the zone naturally cool down to the environment temperature during the unoccupied times to save energy. There are usually two origins of performance gain for MBRL approaches <cit.>. First, the MBRL controllers learn to pre-heat the room to desired temperatures before the room is occupied to get a lower violation rate. Second, the MBRL approaches cool and re-heat the room repeatedly, keeping the temperature within the comfort range while saving energy during the time periods when the heating is turned off. We found that CLUE w/o CB made a mistake midway between 16:00 and 21:00. It incorrectly predicted that the zone temperature would take longer to cool down and therefore turned the heating off prematurely. This usually happens when the dynamics model overestimates the room's thermal capacity. The same mistake happened to the DE-MBRL controller. After observing that the temperature had plummeted, it underestimated the amount of heating needed to reheat the zone and eventually caused 135 minutes of comfort violation. Our method, on the other hand, detected high model uncertainty and overrode 2 time steps of control action with the default controller between 16:00 and 21:00. This successfully kept the zone temperature within the comfort range and also within the data distribution, allowing the controller to correctly predict the amount of heating needed for the rest of the occupied times. As a result, the fallback mechanism prevented 135 minutes of comfort violation. §.§.§ Computation Overhead Analysis Figure <ref> presents the computation time analysis for each controller utilized during our experimental evaluation. We observed that CLUE w/o CB exhibited the highest computational overhead, with decision times generally spanning from 1 to 1.5 seconds. This extra time requirement is primarily due to the GP predictions, which tend to be more resource-intensive compared to the DE forward passes used in DE-MBRL. In contrast, CLUE demonstrates computational efficiency by occasionally bypassing complete trajectory rollouts. It utilizes the detection of high uncertainty in the immediate next time step to reduce the need for full prediction sequences, thus reducing processing times significantly. The rule-based controller, unsurprisingly, registered the lowest overhead, reflecting its less complex decision-making process. Although DE-MBRL and CLUE w/o CB both rely on full trajectory rollouts and therefore have overheads of the same order, the more intensive GP predictions in CLUE w/o CB lead to longer processing times. 
The capability of CLUE to streamline decision-making without sacrificing prediction accuracy or control efficacy illustrates an advanced optimization of computational resources, aligning well with the real-time demands of building control systems. This optimization highlights CLUE's practical application advantage where processing efficiency is as critical as control performance. § DISCUSSION Despite the promising results, there are several limitations and challenges to our approach that should be addressed in future research: * Reliance on GP Model Accuracy: The effectiveness of the GP model in predicting building dynamics heavily depends on the quality and quantity of the training data. In scenarios where the data is sparse or noisy, the model's accuracy can significantly degrade. While our meta-kernel learning approach enhances the initial parameter estimation, future work could explore advanced data augmentation techniques to synthetically expand the dataset or transfer learning methods to leverage pre-trained models from similar buildings. Furthermore, building models rarely remain static due to factors such as HVAC degradation and changes in building usage patterns. This necessitates the continuous updating of the model to maintain accuracy. Future research should focus on developing adaptive learning mechanisms that can dynamically update the GP model in response to changing building conditions, ensuring sustained performance over time. * Computational Overhead: The GP predictions, especially during real-time operations, incur substantial computational costs. This overhead can hinder the scalability of the system for large-scale deployments involving multiple buildings. Future research could focus on developing more efficient GP approximations, such as sparse GP or leveraging variational inference techniques. Hybrid models that combine the strengths of GPs with faster machine learning techniques like deep learning could also be investigated to balance accuracy with computational efficiency. Additionally, incorporating the MPPI algorithm into the system, while effective, adds to the computational burden. Reporting the computational time of the proposed approach is crucial for understanding its practical applicability. Future work should include detailed computational time analyses and optimizations, such as parallel processing or leveraging specialized hardware (e.g., GPUs) to reduce the computational load and make the system more feasible for real-time applications. * Implementation Challenges: Several challenges were encountered during the implementation of the CLUE system: * Initial Setup and Calibration: Adapting CLUE to specific building characteristics requires extensive data collection and fine-tuning. This process can be resource-intensive and time-consuming. Automating parts of this process or developing more generalized models that require minimal fine-tuning could alleviate these challenges. * Simulation Limitations: Our current focus was on simulation-based evaluations, and we did not integrate CLUE with various HVAC hardware and control protocols. Future research should explore real-world integrations and address compatibility issues with existing HVAC systems. * Ongoing Maintenance: Regular maintenance and periodic retraining of the model are necessary to adapt to changing building dynamics and external conditions. Establishing automated monitoring and maintenance routines can ensure sustained performance over time. 
* Practical Implications: Deploying CLUE in real-world HVAC systems presents challenges: * Scalability: Leveraging edge computing or cloud-based solutions to manage computational requirements can facilitate scalability across multiple buildings and systems. These approaches can distribute the computational load and provide real-time processing capabilities. * Integration: Developing standardized protocols for integration with various HVAC hardware and control systems is essential. This ensures seamless operation and reduces the complexity of deploying CLUE in different buildings. Addressing these challenges through targeted research and development can pave the way for the successful deployment of CLUE in real-world settings, offering significant benefits in terms of energy efficiency and occupant comfort. § CONCLUSION This paper introduces CLUE, a Model-Based Reinforcement Learning method specifically designed for HVAC control in buildings, focusing on enhancing energy efficiency while improving human comfort. By employing a GP to model building dynamics, CLUE effectively integrates output uncertainty to increase the reliability of HVAC operations. The key innovation of our approach is a meta-kernel learning technique that utilizes domain knowledge from historical data of multiple buildings, significantly reducing the data required for GP hyperparameter tuning. This advanced technique is incorporated into an MPPI framework, enabling the system to compute optimal HVAC actions that prioritize energy efficiency and minimize risks associated with uncertain conditions. Our empirical evaluation conducted in a simulated five-zone building demonstrates that CLUE requires just seven days of training data to achieve performance levels that meet or exceed those of existing state-of-the-art MBRL methods. Notably, it accomplishes this with a 12.07% reduction in comfort violations. These results underscore CLUE’s capability as a scalable and efficient solution for HVAC control, offering significant advancements in both energy conservation and occupant comfort. This positions CLUE as a potent tool for sustainable building management, potentially setting new standards for the integration of MBRL in real-world applications. IEEEtran
http://arxiv.org/abs/2407.12978v1
20240717194827
Force Profiling of a Shoulder Bidirectional Fabric-based Pneumatic Actuator for a Pediatric Exosuit
[ "Mehrnoosh Ayazi", "Ipsita Sahin", "Caio Mucchiani", "Elena Kokkoni", "Konstantinos Karydis" ]
cs.RO
[ "cs.RO" ]
Force Profiling of a Shoulder Bidirectional Fabric-based Pneumatic Actuator for a Pediatric Exosuit ==================================================================================== § ABSTRACT This paper presents a comprehensive analysis of the contact force profile of a single-cell bidirectional soft pneumatic actuator, specifically designed to aid in the abduction and adduction of the shoulder for pediatric exosuits. The actuator was embedded in an infant-scale test rig featuring two degrees of freedom: an actuated revolute joint supporting shoulder abduction/adduction and a passive (but lockable) revolute joint supporting elbow flexion/extension. Integrated load cells and an encoder within the rig were used to measure the force applied by the actuator and the shoulder joint angle, respectively. The actuator's performance was evaluated under various anchoring points and elbow joint angles. Experimental results demonstrate that optimal performance, characterized by maximum range of motion and minimal force applied on the torso and upper arm, can be achieved when the actuator is anchored at two-thirds the length of the upper arm, with the elbow joint positioned at a 90-degree angle. The force-versus-pressure and force-versus-joint-angle curves reveal nonlinear and hysteretic behavior. The findings of this study yield insights into optimal anchoring points and elbow angles that minimize exerted forces without reducing the range of motion. § INTRODUCTION Fabric-based pneumatic actuators <cit.> have been integrated into soft robotic devices for assistance and rehabilitation. Such devices target a range of applications, including lower limb <cit.>, hip <cit.>, and upper limb <cit.> support, grasping <cit.>, supernumerary robotic limbs <cit.>, and haptics <cit.>. Yet, existing efforts have mostly focused on adults <cit.>. Only a few have considered developing devices to support upper-extremity (UE) movement in very young children <cit.>. To ensure comfortable and safe interaction between the user and the device, it is crucial to understand the exerted forces. Exerted forces can be determined based on the device's mechanical components and affordances, such as actuator deformation as a function of input pressure <cit.> and anchoring <cit.>. Desired values for these forces are determined based on the users' needs and tolerances <cit.>. Previous studies have focused on determining desired values to support motion about different joints and then developing adult devices capable of generating such forces. For example, the average peak torque required from a healthy deltoid muscle to completely elevate the arm was found to be 43 Nm <cit.>. To reach maximum arm elevation, a force of at least 270 N must be generated <cit.>. Based on this analysis, a soft wearable deltoid assistance device <cit.> was developed to produce about 300 N of force on the upper arm to elevate the arm to 90^∘. An exomuscle <cit.> can exert forces on the arm in the range of [0-500] N with varying actuator inflation pressure and shoulder abduction angle when tested on a one-degree-of-freedom apparatus. Other studies, such as a soft shoulder assistive device with pneumatic artificial muscles <cit.> and a cable-driven soft orthotic device <cit.>, have targeted a maximum force output of approximately 100 N, which is similar to the compressive force generated by the biological anterior deltoid muscle.
Despite these findings, there have been no studies so far on the forces exerted by actuators or soft wearable UE devices on the infant body. Our prior work has focused on the design and kinematics of actuators to support UE movement in very young children <cit.>, as well as on the development of feedback control methods for exosuit prototypes employing those actuators tested with an engineered mannequin <cit.>. In this work, we study the force generation of our previously developed fabric-based pneumatic shoulder actuator using a static testing rig (Fig. <ref>). We vary two key conditions—anchoring points of the actuator on the torso and upper arm, and elbow joint angle—and observe how the exerted force and range of motion are affected. The goal is to identify which conditions can lead to minimal exerted forces and maximal range of motion. This information, in turn, can help pinpoint how to improve the exosuit's functionality. § MATERIALS AND METHODS §.§ Actuator Description The soft pneumatic actuator used in this study was a rectangular-shaped, single-cell actuator, crafted from flexible thermoplastic polyurethane (TPU) fabric (Oxford 200D heat-sealable coated fabric with a thickness of 0.020 cm). The inflatable portion of the actuator is 15×5 cm. Two non-inflatable segments of about 3×5 cm at each end are used to facilitate attachment to the torso and upper arm using velcro straps. This actuator can support shoulder abduction and adduction without impeding other degrees of freedom (DoFs) at the joint by being placed in a low-profile position within the axilla. Among various actuators evaluated for shoulder assistance <cit.>, this variant was selected based on rigorous criteria, with a primary focus on range of motion (ROM). §.§ Hardware Experimentation Setup We developed a 3D-printed test rig modeled based on the UE of the 50th percentile of a 12-month-old infant <cit.> (Fig. <ref>). The rig comprises a base (torso), an upper arm, a forearm, and a weighted attachment (0.06 kg) at the end of the forearm to emulate the hand. The upper arm is connected to the base via a revolute joint acting as the shoulder joint. The forearm is connected to the upper arm through another revolute joint, acting as the elbow joint, which can be locked in place at four distinct angles (EA∈{0^∘,30^∘,60^∘,90^∘}). The upper arm measures 16.4 cm in length and 14.7 cm in circumference, while the forearm measures 10.85 cm in length and 14.51 cm in circumference. Both the upper arm and forearm are hollow, and sandbags were embedded within to match the weight of an infant's upper arm and forearm (total weight about 0.432 kg), based on the infant anthropometrics for a total body mass of 10.8 kg <cit.>. Actuator inflation/deflation was controlled by an external pneumatic board (Programmable-Air kit). The board has two compressor/vacuum pumps and three pneumatic valves to regulate airflow (2 L/min). Airflow is adjusted via the pump duty cycle ([0-100]%). The pressure range is [-50, 50] kPa. The board also incorporates a pressure sensor (SMPP-03) and an Arduino Nano (ATMega328P) for control and monitoring. The embedded Arduino facilitates connectivity to a computer through a serial-to-USB interface, allowing users to send commands to the pumps and valves, and log sensor data. The developed force measurement system consisted of two 5 kg load cells (YZC-133) embedded in the upper arm and base of the test rig. The upper arm load cell was positioned along the vertical centerline on the inner side of the upper arm. 
A plate measuring 11.1× 1.36 cm was attached to the load cell to measure the force applied to the area covered by the plate. The load cell on the base was aligned with the vertical centerline on the side facing the inner side of the upper arm and was coupled with a 15.6× 1.66 cm plate. Each load cell was connected to an amplifier (HX711, Sparkfun); data were read at a frequency of 10 Hz. A 1024 pulse per rotation rotary encoder (E6B2, Sparkfun) was used to measure the angle of the shoulder joint throughout the experiments. The amplified outputs of load cells and the encoder's data were transmitted to a computer through another microcontroller (Arduino UNO). §.§ Experimental Conditions The performance of the actuator in terms of force exerted on the upper arm and torso was assessed within a total of eight different conditions varying based on the upper arm anchoring point and the locked elbow angle (Table <ref>). A total of 30 trials were conducted for each condition. The actuation control board was operating at a 100% duty cycle to achieve the fastest possible inflation and deflation. A complete reaching motion in infants lasts about two seconds <cit.>; hence, we assessed both inflation and deflation phases each lasting for 5 seconds, starting with inflation. Internal actuator pressure peaked at a maximum of 34 kPa and dropped to a minimum of -34kPa when fully inflated and deflated, respectively. § RESULTS AND DISCUSSION §.§ Assessment of Shoulder Joint ROM We first assessed shoulder joint ROM under the different conditions listed in Table <ref>. It can be observed (Fig. <ref>) that increasing the elbow joint angle (i.e. flexing the elbow) results in a larger maximum shoulder joint angle as well as increased ROM. During elbow flexion, there is a reduction of the moment arm between the point where the mass at the end of the forearm is located and the shoulder joint axis of rotation. Less mechanical work is thus required but the amount of input pressure remains the same, which leads to larger ROM. Further, the shoulder joint has larger ROM when the actuator is anchored at 2/3 the UA length. The moment arm between the actuator force location on the upper arm and shoulder joint increases when the placement of the actuator moves from half to two-thirds of UA length. Consequently, the actuator generates more torque with the same actuator pressure and force on the upper arm. §.§ Assessment of Actuator Force Generation We then examined the variation in the force exerted by the actuator on the torso and upper arm. Generally, the maximum (on average) force applied to the torso and upper arm decreases as the elbow joint angle increases (Fig. <ref>). However, this trend is not observed across all cases. The peak force on the torso and upper arm is created by the interaction between two segments of the actuator, which apply force to these areas. As the elbow flexes and the moment of inertia decreases, the interaction force between the segments decreases, thus reducing the maximum force applied to both the upper arm and torso. Further, when comparing peak forces between the two anchoring points, a notable trend emerges: while the force on the torso increases, the force on the upper arm stays relatively consistent. As the anchoring point shifts from two-thirds to half the length of UA, the area covered by the actuator on the torso expands which results in a larger force applied to the torso with the same actuator inner pressure. 
Since the actuator maintains the same coverage area on the upper arm in both scenarios, the maximum force exerted on the upper arm does not change. The decline in force on both the upper arm and torso during actuator inflation can be associated with varying contacts during the process. Placed under the armpit, a single-cell actuator naturally folds, creating two distinct sections that effectively function as two separate cells until a critical point when they merge. These sections interact by pushing outward while inflating, increasing force on both the torso and upper arm. Around halfway through inflation, these sections merge into a single cell (crucial for maximizing ROM). However, this merging leads to a notable decrease in contact with both the upper arm and torso. Consequently, force at both points decreases abruptly. After unfolding, the actuator remains attached to the torso at its non-inflatable end, causing the force measured at the torso to drop nearly to zero. The force on the upper arm also declines but at a smaller degree since the inflatable part of the actuator still partially covers the upper arm. Hence, the force at the upper arm increases until the end of the inflation period. During deflation, the upper arm force decreases while the torso force remains near zero. We also analyzed the relation between the forces exerted on the upper arm and torso and the actuator's internal pressure. The forces were found to be nonlinear and exhibit hysteresis as the input pressure varied (Fig. <ref>). For pressures below 10 kPa, the relation is linear; however, around 10 kPa, the actuator begins to unfold and a notable drop occurred because of the reduction in the contact area. The actuator pressure also decreases, likely due to the suddenly increased volume from the actuator unfolding. After this point, the force on the torso remains unchanged while the force on the upper arm starts to increase linearly with pressure. During deflation, the force on the torso is constant, whereas the force on the upper arm decreases roughly logarithmically. Further, we analyzed the relation between exerted force and shoulder joint angle (Fig. <ref>). There is a major change when at about half the ROM, marking the point where the actuator begins to unfold. Identifying this angle as the threshold for actuator unfolding, we found that the force behavior aligns with the patterns described previously. Nonlinearity and hysteresis were observed here as well. The maximum force generated by the actuator in this work is significantly lower compared to those developed for adult shoulder abduction/adduction <cit.> which are at the range of [300-500] N. Although a direct comparison of absolute values may not be appropriate, this information may help suggest upper bounds for exerted forces via a mechanics-based scaling. Such an assessment merits further investigation considering that there is limited information in the literature regarding the maximum forces that are safe to exert on infants' arms in the context of pediatric exosuits. § CONCLUSION This study examined the contact forces generated by a single-cell bidirectional soft pneumatic actuator for shoulder abduction/adduction using an infant-scale test rig. 
Extensive experimentation across different conditions related to actuator anchoring and (fixed) elbow joint angle demonstrated the optimal performance range of the considered actuator: the maximum range of motion with minimal force exerted on the torso and upper arm is achieved when the actuator is anchored at two-thirds the length of the upper arm and the elbow joint is fixed at a 90^∘ angle. Nonlinear and hysteretic relations between actuator input pressure, shoulder joint angle, and the exerted forces were identified. These behaviors were attributed to the actuator's rapid unfolding during actuation, suggesting future design improvements to produce smoother force profiles.
http://arxiv.org/abs/2407.13119v1
20240718025759
The domain and prime properties for Koszul rings and algebras
[ "Manuel L. Reyes", "Daniel Rogalski" ]
math.RA
[ "math.RA", "math.CT", "math.RT", "Primary: 16E65, 16N60, 16S37, 16U10, Secondary: 16E30, 16S99" ]
http://arxiv.org/abs/2407.13621v1
20240718155755
Differential Privacy Mechanisms in Neural Tangent Kernel Regression
[ "Jiuxiang Gu", "Yingyu Liang", "Zhizhou Sha", "Zhenmei Shi", "Zhao Song" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CR" ]
Differential Privacy Mechanisms in Neural Tangent Kernel Regression Jiuxiang Gu. Adobe Research. Yingyu Liang. The University of Hong Kong; University of Wisconsin-Madison. Zhizhou Sha. Tsinghua University. Zhenmei Shi. University of Wisconsin-Madison. Zhao Song. Adobe Research. ========================================================================================================================== § ABSTRACT Training data privacy is a fundamental problem in modern Artificial Intelligence (AI) applications, such as face recognition, recommendation systems, language generation, and many others, as it may contain sensitive user information related to legal issues. To fundamentally understand how privacy mechanisms work in AI applications, we study differential privacy (DP) in the Neural Tangent Kernel (NTK) regression setting, where DP is one of the most powerful tools for measuring privacy under statistical learning, and NTK is one of the most popular analysis frameworks for studying the learning mechanisms of deep neural networks. In our work, we can show provable guarantees for both differential privacy and test accuracy of our NTK regression. Furthermore, we conduct experiments on the basic image classification dataset CIFAR10 to demonstrate that NTK regression can preserve good accuracy under a modest privacy budget, supporting the validity of our analysis. To our knowledge, this is the first work to provide a DP guarantee for NTK regression. § INTRODUCTION Artificial Intelligence (AI) applications are widely employed in daily human life and product activities, such as face recognition <cit.>, recommendation systems <cit.>, chat-based language generation <cit.>, and many more.
These applications intrinsically run deep-learning models that are trained on broad datasets, where many contain sensitive user information, e.g., a company trains a model on its user information to provide better-customized service. Consequently, there is a problem with user privacy information data leakage <cit.>, which affects the AI company's reputation <cit.> and may cause severe legal issues <cit.>. Therefore, preserving the privacy of training data becomes a fundamental problem in deep learning. Differential Privacy (DP) <cit.> was proposed to measure privacy rigorously and has been widely studied in many traditional statistical problems. Recently, many brilliant works have applied this powerful tool to machine learning settings. One line of work studies the classic machine learning task, e.g., <cit.> studies DP under the Support Vector Machine (SVM) model. However, these settings are still far from practical deep neural networks nowadays. The other line of work studies the DP in deep learning training, e.g., DP-SGD <cit.> provides DP guarantees for the Stochastic Gradient Descent (SGD) training algorithm. The issue in DP training is that the trade-off between privacy guarantees and test accuracy will worsen as training time increases. DP training may not be practical in today's training paradigm, i.e., pre-training with an adaptation in foundation models <cit.>, as the per-training stage may involve billions of training steps. To bridge the gap between practical deep learning models and practical differential privacy guarantees, in this work, we study DP Mechanisms in Neural Tangent Kernel <cit.> (NTK) Regression. NTK is one of the most standard analysis frameworks for studying optimization and generalization in over-parameterized deep neural networks (DNN). NTK can connect the DNN training by SGD to kernel methods by using the kernel induced by gradient around the neural networks' initialization. Consequently, the DNN optimization can be viewed as NTK regression, and it retains almost the same generalization ability <cit.> as kernel regression even though the DNN is in an over-parameterization regime. Our contributions. In this work, we use the “Gaussian Sampling Mechanism” <cit.> to add a positive semi-definite noise matrix to the neural tangent kernel matrix, and then we can show provable guarantees for both differential privacy and test accuracy of our NTK regression, i.e., Under proper conditions, for any test data x, we have NTK-regression is (ϵ,δ)-DP and has good utility under a large probability. Furthermore, we undertake experiments using the fundamental image classification dataset CIFAR10 to illustrate that NTK regression can maintain high accuracy with a modest privacy budget (see Figure <ref> in Section <ref>). This effectively validates our analysis. To the best of our knowledge, this research is the first effort to offer a differential privacy (DP) guarantee for NTK regression. Roadmap Our paper is organized as follows. In Section <ref>, we provide an overview of Differential Privacy, the Neural Tangent Kernel (NTK), and its applications within the context of Differential Privacy. Section <ref> introduces the definitions of both continuous and discrete versions of the Neural Tangent Kernel, along with the definition of NTK Regression. In Section <ref>, we provide definitions pertinent to the Gaussian Sampling Mechanism and explain how we modify this mechanism to render NTK Regression private. 
An overview of the techniques employed in this paper is discussed in Section <ref>. In Section <ref>, we conduct experiments on the CIFAR-10 dataset, demonstrating that our algorithm performs effectively with a small privacy budget. In Section <ref>, we offer a thorough examination of the benefits and drawbacks of introducing noise into the kernel or gradient, along with an exploration of the proper way for adding noise to NTK Regression. Finally, we conclude in Section <ref>. § RELATED WORK Differential Privacy Guarantee Analysis. Since the introduction of the concept of differential privacy (DP) by <cit.>, it has emerged as a crucial standard for privacy protection, both theoretically and empirically <cit.>. DP offers a robust and measurable privacy definition that supports the design of algorithms with specific guarantees of privacy and accuracy <cit.> and many more. Furthermore, innovative mechanisms have been developed beyond conventional Laplace, Gaussian, and Exponential methods <cit.>. For instance, the truncated Laplace mechanism <cit.> has been demonstrated to achieve the tightest lower and upper bounds on minimum noise amplitude and power among all (ϵ,δ)-DP distributions. Neural Tangent Kernel. Numerous recent studies suggest that the analysis of optimization and generalization in deep learning should be closely integrated. NTK employs the first-order Taylor expansion to examine highly over-parameterized neural networks, from their initial states, as seen in references like <cit.> and others. Consequently, the optimization of neural networks can be approached as a convex problem. The NTK technique has gained widespread application in various contexts, including preprocessing analysis <cit.>, federated learning <cit.>, LoRA adaptation for large language models <cit.>, and estimating scoring functions in diffusion models <cit.>. Differential Privacy in Machine Learning. Differential privacy (DP) is a thriving and potent method with extensive applications in the field of private machine learning <cit.>. This includes its use during the pre-training stage <cit.>, the adaptation stage <cit.>, and the inference stage <cit.>. Recently, <cit.> has integrated DP into large language models, and <cit.> applied DP into diffusion models. DP has also been widely used in classical machine learning and data structure settings, e.g., near-neighbor counting <cit.>, permutation hashing <cit.>, BDD tree <cit.>, counting tree <cit.>, Jaccard similarity <cit.> and many more. § PRELIMINARY In this section, we will first introduce some basic notations in Section <ref>. Then, we will introduce the definitions of the neural tangent kernel, both discrete and continuous versions in Section <ref>, the definitions and key component of NTK regression in Section <ref>. §.§ Notations For any positive n, let [n] denote the set {1,2,⋯, n}. For any vector z∈^n. We define the ℓ_2-norm of a vector z as z_2 := (∑_i=1^n z_i^2)^1/2, the ℓ_1-norm as z_1 := ∑_i=1^n |z_i|, the ℓ_0-norm as the count of non-zero elements in z, and the ℓ_∞-norm as z_∞ := max_i ∈ [n] |z_i|. The transpose of vector z is indicated by z^⊤. The inner product between two vectors is denoted by ⟨·, ·⟩, such that ⟨ a, b ⟩ = ∑_i=1^n a_i b_i. For any matrix A ∈^m × n. We define the Frobenius norm of a matrix A as A_F:=( ∑_i∈ [m], j∈ [n] A_i,j^2 )^1/2. We use A to denote the spectral/operator norm of matrix A. A function f(x) is said to be L-Lipschitz continuous if it satisfies the condition f(x) - f(y) _2 ≤ L · x - y _2 for some constant L. 
Let D represent a given distribution. The notation x ∼ D indicates that x is a random variable drawn from the distribution D. We employ [] to represent the expectation operator and [] to denote probability. Furthermore, we refer to a matrix as PSD to indicate that it is positive semi-definite. As we have multiple indexes, to avoid confusion, we usually use i,j ∈ [n] to index the training data, s, t ∈ [d] to index the feature dimension, r ∈ [m] to index neuron number. §.§ Neural Tangent Kernel Now, we are ready to introduce our key concept, the Neural Tangent Kernel induced by the Quadratic activation function. We will introduce Discrete Quadratic Kernel in Definition <ref>, and Continuous Quadratic Kernel in Definition <ref>. Data. We have n training data points D_n = { (x_i, y_i)}_i=1^n = (X, Y), where x ∈^d and y ∈{0, 1 }. We denote X = [x_1, …, x_n] ∈^d× n and Y = [y_1, …, y_n]^⊤∈{-1, +1 }^n. We assume that x_i_2 ≤ B, ∀ i ∈ [n]. Models. We consider the two-layer neural network with quadratic activation function and m neurons f(x) = ∑_r ∈ [m] a_r ⟨ w_r, x ⟩^2, where w_r ∈^d and a_r ∈{-1, +1} for any r ∈ [m]. We draw weights w_r ∼ N(0, σ^2 I_d× d) for any r ∈ [m] and let them be fixed. Then, we define the discrete quadratic kernel matrix H^dis∈^n × n corresponding to D_n, such that ∀ i, j ∈ [n], we have H_i, j^dis = 1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩. Note that H^dis is a PSD matrix, where a detailed proof can be found in Lemma <ref>. We define the continuous quadratic kernel matrix H^cts∈^n × n corresponding to D, such that ∀ i, j ∈ [n], we have H_i, j^cts = _w ∼ N(0, σ^2 I_d× d)⟨⟨ w , x_i ⟩ x_i, ⟨ w , x_j ⟩ x_j ⟩. §.§ NTK Regression We begin by defining the classical kernel regression problem as follows: If we have the following conditions: * Let feature map ϕ : ^d →ℱ. * λ>0 is the regularization parameter. A classical kernel ridge regression problem can be written as min_w ∈^n1/2 Y - ϕ(X)^⊤ w _2^2 + 1/2λ w _2^2. Then, we are ready to introduce the NTK Regression problem as follows: If the following conditions are met: * Let 𝖪(·, ·): ^d×^d→ be a kernel function, i.e., 𝖪(x, z) = 1/m∑_r=1^m ⟨⟨ w_r , x ⟩ x, ⟨ w_r , z ⟩ z ⟩, ∀ x, z ∈^d. * Let K ∈^n × n be the kernel matrix with K_i,j = 𝖪(x_i,x_j), ∀ i, j ∈ [n] × [n]. * Let α∈^n be the solution to (K + λ I_n) α = Y. Namely, we have α = (K + λ I_n)^-1 Y. Then, for any data x ∈^d, the NTK Kernel Regression can be denoted as f_K^*(x) = 1/n𝖪(x,X)^⊤α. § MAIN RESULTS In the NTK Regression definition, we have the expression α = (K + λ I)^-1 Y (see also Definition <ref>). To ensure the privacy of α, we initially focus on privatizing K. Subsequently, we demonstrate that α remains private through the application of the Post-processing Lemma (see also Lemma <ref>). Privatizing K is non-trivial, as K = H^dis, indicating that K is a positive semi-definite (PSD) matrix (see Lemma <ref>). We denote K as the privacy-preserving counterpart of K. It is essential that K maintains the PSD property, a condition that classical mechanisms such as the Laplace Mechanism and Gaussian Mechanism cannot inherently guarantee for a private matrix. In the work by <cit.>, the “Gaussian Sampling Mechanism" is employed to address this challenge, ensuring that the private version of the Attention Matrix also retains PSD. Adopting the same setup from <cit.>, we present our findings on the privacy and utility of the “Gaussian Sampling Mechanism" in this section. In Section <ref>, we provide several essential definitions used in our algorithm. 
In Section <ref>, we will adhere to the framework established in <cit.> and elaborate on the functioning of the “Gaussian Sampling Mechanism," along with its primary results and requirements. In Section <ref>, we will examine the utility implications of employing the "Gaussian Sampling Mechanism." Additionally, we include a Remark that analyzes the trade-off between privacy and utility inherent in our approach. In Section <ref>, we introduce the final theorem of our algorithm, including both privacy guarantees and utility guarantees. §.§ Key Concepts To begin with, we will introduce the definition of neighboring dataset used in “Gaussian Sampling Mechanism". If we have below conditions, * Let B > 0 be a constant. * Let n be the number of data points. * Let dataset D = {(x_i, y_i)}_i=1^n, where x_i ∈^d and x_i_2 ≤ B for any i ∈ [n]. We define D' be a neighbor dataset with one data point replacement of D. Without loss of generality, we have D' = {(x_i, y_i)}_i=1^n-1∪{(x_n', y_n)}.Namely, we have D and D' only differ in the n-th item. Additionally, we assume that x_n and x_n' are β-close. Namely, we have x_n - x_n' _2 ≤β There are also two essential definitions M and Δ used in the privacy proof of “Gaussian Sampling Mechanism", which need to satisfy M < Δ, which is also the Condition 4 in Condition <ref>. We will begin by presenting the definition of M. Let ℳ: (^n)^d →^n × n be a (randomized) algorithm that given a dataset of d points in ^n outputs a PSD matrix. Let 𝒴, 𝒴' ∈ (^n)^d. Then, we define M := ℳ(𝒴)^1/2ℳ(𝒴^')^-1ℳ(𝒴)^1/2-I _F Afterward, we proceed to define Δ. If we have the following conditions: * Let ϵ∈ (0, 1) and δ∈ (0, 1). * Let k denote the number of i.i.d. samples g_1,g_2,⋯,g_k from 𝒩(0,Σ_1) output by Algorithm <ref>. We define Δ := min{ϵ/√(8k log(1/δ)),ϵ/8 log(1/δ)} §.§ Gaussian Sampling Mechanism In this section, we recapitulate the analytical outcomes of the “Gaussian Sampling Mechanism” as presented in Theorem <ref> from <cit.>. The associated algorithm is detailed in Algorithm <ref>. Firstly, we outline the conditions employed in the “Gaussian Sampling Mechanism” as follows: We need the following conditions for DP: * Condition 1. Let ϵ∈ (0,1), δ∈ (0,1), k ∈ℕ. * Condition 2. Let 𝒴,𝒴' denote neighboring datasets, which differ by a single data element. * Condition 3. Let Δ be defined in Definition <ref> and Δ < 1. * Condition 4. Let M, M be defined in Definition <ref> and M ≤Δ. * Condition 5. Let the input Σ = ℳ(𝒴). * Condition 6. Let ρ = O( √( ( n^2+log(1/γ) ) / k )+ ( n^2+log(1/γ) ) /k ). Prior to delving into the primary analysis of the “Gaussian Sampling Mechanism” , we offer a succinct overview of its underlying intuition. As noted at the outset of Section <ref>, the task of obtaining a private positive semi-definite (PSD) matrix is non-trivial. Nevertheless, by leveraging Covariance estimation within the “Gaussian Sampling Mechanism”, we can guarantee that the estimated matrix will remain PSD. This is because for any i ∈ [k], we have g_i g_i^⊤ is PSD matrix, then (k^-1∑_i=1^k g_i g_i^⊤ is also PSD matrix. With this foundation, we can proceed to introduce the analysis of the “Gaussian Sampling Mechanism’’ with confidence. The analysis is presented as follows: If all conditions hold in Condition <ref> and Condition <ref>, then, there exists an Algorithm <ref> such that * Part 1. Algorithm <ref> is (ϵ,δ)-DP. * Part 2. Outputs Σ̂∈𝕊_+^n denotes the private version of input Σ, such that with probabilities at least 1-γ, Σ^-1/2ΣΣ^-1/2-I_n _F ≤ρ * Part 3. 
(1-ρ) Σ≼Σ≼ (1+ρ) Σ In Lemma <ref>, Part 1 claims the privacy guarantees of the “Gaussian Sampling Mechanism”, Part 2 establishes the critical properties necessary to ensure the utility of the “Gaussian Sampling Mechanism”, and Part 3 presents the ultimate utility outcomes of the algorithm. Note that in our setting, we use Σ = K, where K is non-private Discrete Quadratic NTK Matrix in NTK Regression, and we also have Σ = K, where K denotes the private version of K. We need the following conditions so that we can make Condition 4 in Condition <ref> hold, with probability 1-δ_3. See the detailed proof in Section <ref>. We need the following conditions for the calculation of M (see Definition <ref>). * Condition 1. If D∈^n × d and D' ∈^n × d are neighboring dataset (see Definition <ref>) * Condition 2. Let H^dis denote the discrete NTK kernel matrix generated by D, and H^dis' denotes the discrete NTK kernel matrix generated by neighboring dataset D'. * Condition 3. Let H^dis≽η_min I_n × n, for some η_min∈. * Condition 4. Let β = O(η_min / (n, σ, B)), where β is defined in Definition <ref>. * Condition 5. Let ψ := O (√(n)σ^2 B^3 β ). * Condition 6. Let δ_1, δ_2 , δ_3 ∈ (0, 1). Let δ_1 = δ_2 / (m). Let δ_2 = δ_3 / (n). * Condition 7. Let d = Ω (log (1 / δ_1)). * Condition 8. Let m = Ω(n · d B^2 β^-2log (1 / δ_2)). §.§ Utility of Gaussian Sampling Mechanism In this section, we will provide utility guarantees under “Gaussian Sampling Mechanism". By Lemma <ref>, we will argue that, “Gaussian Sampling Mechanism" provides good utility under differential privacy. We start with introducing the necessary conditions used in proving the utility of “Gaussian Sampling Mechanism". We need the following conditions for Utility guarantees of “Gaussian Sampling Mechanism": * Condition 1. If D∈^n × d and D' ∈^n × d are neighboring dataset (see Definition <ref>) * Condition 2. Let H^dis denote the discrete NTK kernel matrix generated by D (see Definition <ref>). * Condition 3. Let η_max I_n × n≽ H^dis≽η_min I_n × n, for some η_max, η_min∈. * Condition 4. Let H^dis denote the private H^dis generated by Algorithm <ref> with H^dis as the input. * Condition 5. Let K = H^dis, K = H^dis in Definition <ref>. Then we have f_K^*(x) and f_K^*(x). * Condition 6. Let √(n)ψ / η_min < Δ, where Δ is defined in Definition <ref>. * Condition 7. Let ρ = O( √( ( n^2+log(1/γ) ) / k )+ ( n^2+log(1/γ) ) /k ). * Condition 8. Let ω := 6 d σ^2 B^4. * Condition 9. Let γ∈ (0, 1). We then leverage Part 3 of Lemma <ref> to derive the error between the outputs of the private and non-private NTK Regression, thereby demonstrating the utility of our algorithm. If all conditions hold in Condition <ref>, then, with probability 1 - γ, we have | f_K^*(x) - f_K^*(x) | ≤ O( ρ·η_max·ω/(η_min + λ)^2 ) The interplay between privacy and utility guarantees is complex. Our algorithm exhibits a property akin to that of other classical differential privacy algorithms: an increase in privacy typically results in a decrease in utility, and conversely. We will provide a thorough explanation of the privacy-utility trade-off in the subsequent Remark. An inherent trade-off exists between the privacy and utility guarantees of our algorithm. Specifically, enhancing privacy typically results in a degradation of utility. Recall that the variable k represents the number of sampling iterations in the Gaussian Sampling Mechanism. To ensure privacy, we must adhere to Condition 4 as outlined in Condition <ref>, which requires that M < Δ. 
Here, M is a constant defined in Definition <ref>, with its precise value calculated in Lemma <ref>. In contrast, Δ is defined in Definition <ref> and is dependent on the value of k. Consequently, to achieve stronger privacy, namely a smaller DP parameter ϵ or δ, it is necessary to decrease k to meet the M < Δ constraint. On the other hand, for utility considerations, as defined by ρ = O( √( ( n^2+log(1/γ) ) / k ) + ( n^2+log(1/γ) ) / k ), a reduction in k results in an increase in ρ. This, in turn, leads to diminished utility. Due to the limitation of space, we refer the readers to Lemma <ref> in Appendix for the details of proof of Lemma <ref>. A detailed explanation of the trade-off between Privacy and Utility can be found in Remark <ref>. §.§ Main Theorem Then, we are ready to introduce our main result, including the privacy and utility of our algorithm. If all conditions hold in Condition <ref>, Condition <ref>, and Condition <ref>, then, for any test data x, with probability 1 - δ_3 - γ, we have that f_K^*(x) is (ϵ,δ)-DP and | f_K^*(x) - f_K^*(x) | ≤ O( ρ·η_max·ω/(η_min + λ)^2 ). It follows from directly combining Lemma <ref>, Lemma <ref>, and Lemma <ref>. § TECHNICAL OVERVIEW In this section, we offer a succinct overview of the proof techniques employed throughout this paper. We begin by detailing the method for calculating the Sensitivity of the Continuous Quadratic NTK in Section <ref>. Subsequently, in Section <ref>, we utilize concentration inequalities, specifically Bernstein's Inequality, to narrow the gap between the Continuous Quadratic NTK and Discrete Quadratic NTK. In Section <ref>, we will delve into the sensitivity of the PSD matrix K, which is the matrix we aim to privatize. Concluding this overview, we present the approaches for establishing privacy guarantees and utility guarantees in Sections <ref> and <ref>, respectively. §.§ Sensitivity of Continuous Quadratic NTK Given that the Discrete Quadratic NTK is defined as H_i, j^dis = 1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩ (see also Definition <ref>), analyzing its sensitivity when a single data point is altered in the training dataset poses a challenge. In contrast, the Continuous Quadratic NTK, which incorporates Expectation in its definition (refer to Definition <ref>), is significantly much easier to analyze. The discrepancy between the Discrete and Continuous versions of the NTK can be reconciled through Concentration inequalities. Consequently, we will start our analysis of NTK Regression by focusing on the Continuous Quadratic NTK. In the “Gaussian Sampling Mechanism" (see also Section <ref>), we defined the concept of neighboring datasets being β-close (refer to Definition <ref>). We focus on examining the Lipschitz property of individual entries within the Continuous Quadratic NTK. Without loss of generality, let us consider a dataset 𝒟 of length n, and let neighboring datasets 𝒟 and 𝒟' differ solely in their n-th data point. Consequently, as per the definition of the Continuous Quadratic NTK, the respective kernels H^cts and H^cts' will differ exclusively in their n-th row and n-th column. To examine the Lipschitz property for the n-th row and n-th column, we must consider two distinct cases. Initially, we focus on the sole diagonal entry, the (n, n)-th element, in the Continuous Quadratic NTK. 
The Lipschitz property for this entry is given by (see also Lemma <ref>): | H^cts_n, n - H^cts_n, n' | ≤ 4 σ^2 B^3 · x - x' _2 Subsequently, we will address the remaining 2n - 2 non-diagonal entries within the n-th row and n-th column, for which the Lipschitz property is as follows: (see also Lemma <ref>). For all { (i, j) : (i = n, j ≠ n) or (i ≠ n, j = n), i, j ∈ [n] }, we have | H^cts_i,j - H^cts_i,j' | ≤ 2 σ^2 B^3 · x - x' _2 Moreover, we compute the sensitivity of the Continuous Quadratic NTK with respect to β-close neighboring datasets. Leveraging the Lipschitz properties of the diagonal and non-diagonal entries discussed earlier, we derive the sensitivity of the Continuous Quadratic NTK as follows: (see also Lemma <ref>) H^cts - H^cts' _F ≤ O( √(n)σ^2 B^3 β) §.§ Bridge the gap between Continuous and Discrete Quadratic NTKs In the previous section, we established the sensitivity of the Continuous Quadratic NTK. Here, we aim to bridge the divide between the Continuous Quadratic NTK and Discrete Quadratic NTK. The underlying rationale is that the number of neurons, m, in the Discrete Quadratic NTK can be extremely large. With the help of this property, we invoke the strength of concentration inequalities, specifically Bernstein's Inequality, to demonstrate that with high probability, the discrepancy between the Continuous Quadratic NTK and Discrete Quadratic NTK is negligible. Namely, we have (see also Lemma <ref>) H^dis - H^cts_F ≤ n ϵ Further, we examine the sensitivity of the Discrete Quadratic NTK with respect to β-close neighboring datasets. Recall that we have previously established the sensitivity of the Continuous Quadratic NTK in Lemma <ref>. We now incorporate both Bernstein's Inequality and the sensitivity of the Continuous Quadratic NTK into our analysis. We have (see also Lemma <ref>) H^dis - H^dis' _F ≤ O (√(n)σ^2 B^3 β) §.§ Sensitivity of PSD Matrix In alignment with <cit.>, this section will present Lemma <ref>, which offers a calculation for M (refer to Definition <ref>). Utilizing this result and noting that Δ is dependent on k (see Definition <ref>), we can subsequently refine the sampling time k to satisfy Condition 4 as mentioned in Condition <ref>. If all conditions hold in Condition <ref>, then, with probability 1 - δ_3, we have * Part 1. (H^dis)^-1/2H^dis' (H^dis)^-1/2 - I ≤ψ / η_min * Part 2. (H^dis)^-1/2H^dis' (H^dis)^-1/2 - I _F ≤√(n)ψ / η_min In Lemma <ref>, Part 1 and Part 2 respectively establish the bounds for M (as defined in Definition <ref>) under the spectral norm and Frobenius norm. These findings offer insights into how to modify k in order to fulfill Condition 4 as required by Condition <ref>. §.§ Privacy Guarantees for Private NTK Regression In Lemma <ref> and under Condition 4. in Condition <ref>, we need to compute M (as defined in Definition <ref>) for the Discrete Quadratic NTK. The pivotal approach is to first utilize the sensitivity of the Discrete Quadratic NTK (refer to Lemma <ref>) to establish the inequalities for positive semi-definite matrices (see Lemma <ref>). Specifically, we obtain (1 - ψ / η_min) H^dis≼H^dis' ≼ (1 + ψ / η_min) H^dis Building on these inequalities, we can further demonstrate that (as shown in Lemma <ref>) (H^dis)^-1/2H^dis' (H^dis)^-1/2 - I _F ≤√(n)ψ / η_min Consequently, since ψ = O (√(n)σ^2 B^3 β ), by choosing a small β, we achieve M = (H^dis)^-1/2H^dis' (H^dis)^-1/2 - I _F ≤Δ which fulfills the requirements of Condition 4. in Condition <ref>. 
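To make the quantity M concrete, the following NumPy sketch (our own illustrative code, not part of the paper's implementation; the dataset size, feature dimension, neuron count, and β are arbitrary assumptions) builds the discrete quadratic NTK for a dataset and a β-close neighbor that share the same fixed weights, and evaluates ‖(H^dis)^{-1/2} H^dis' (H^dis)^{-1/2} - I‖_F directly. It assumes H^dis is positive definite (η_min > 0, as in Condition 3) so that the inverse square root exists.

```python
import numpy as np

def discrete_quadratic_ntk(X, W):
    # H[i, j] = (1/m) * sum_r <w_r, x_i> <w_r, x_j> <x_i, x_j>
    S = X @ W.T                                     # S[i, r] = <x_i, w_r>
    return (S @ S.T) * (X @ X.T) / W.shape[0]

rng = np.random.default_rng(1)
n, d, m, sigma, beta = 20, 64, 4096, 1.0, 1e-3      # assumed toy sizes
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)       # unit-norm rows, so B ~ 1
W = sigma * rng.standard_normal((m, d))             # fixed w_r shared by both datasets

# Neighboring dataset: move only the n-th point, by exactly beta in L2 norm
delta = rng.standard_normal(d)
X_prime = X.copy()
X_prime[-1] += beta * delta / np.linalg.norm(delta)

H = discrete_quadratic_ntk(X, W)
H_prime = discrete_quadratic_ntk(X_prime, W)

# M = || H^{-1/2} H' H^{-1/2} - I ||_F via an eigendecomposition of H (H assumed PD)
lam, Q = np.linalg.eigh(H)
H_inv_sqrt = Q @ np.diag(1.0 / np.sqrt(lam)) @ Q.T
M = np.linalg.norm(H_inv_sqrt @ H_prime @ H_inv_sqrt - np.eye(n), "fro")
print(f"eta_min ~= {lam.min():.3e}, M ~= {M:.3e}")  # M shrinks roughly linearly with beta
```

Driving β toward zero shrinks M, which is exactly the lever the analysis uses to satisfy M ≤ Δ for a given choice of k.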
Thus, by Lemma <ref>, we establish the privacy guarantees for our algorithm. §.§ Utility Guarantees for Private NTK Regression In this section, we will present a comprehensive analysis of the utility guarantees for our private NTK Regression. Recall that in the definition of NTK Regression (refer to Definition <ref>), we have f_K^*(x) = 𝖪(x,X)^⊤ (K + λ I_n)^-1 Y We commence by establishing a bound on the spectral norm of (K + λ I_n)^-1 as follows (see also Lemma <ref>) (K + λ I)^-1 - (K + λ I)^-1≤ O( ρ·η_max/(η_min + λ)^2) Here, we employ the Moore-Penrose inverse (see also Lemma <ref>) to address the inversion in (K + λ I_n)^-1. Utilizing the spectral norm bound, we then derive bounds for the ℒ_2 norms of 𝖪(x,X) and Y. After selecting a small β, we arrive at the final result that (see also Lemma <ref>) | f_K^*(x) - f_K^*(x) | ≤ O( ρ·η_max·ω/(η_min + λ)^2) Thus, we have demonstrated that our private NTK Regression maintains high utility while simultaneously ensuring privacy. Similar to other differential privacy algorithms, our algorithm encounters a trade-off between privacy and utility, where increased privacy typically results in a reduction in utility, and conversely. An in-depth examination of this trade-off is provided in Remark <ref>. § EXPERIMENTS This section will introduce the experimental methodology employed on the CIFAR-10 dataset. The corresponding results are visualized in Fig. <ref>. In Section <ref>, we enumerate all the parameters and the experimental setup we utilized. Section <ref> presents a detailed analysis of the outcomes from our experiments. §.§ Experiment Setup Dataset. Our experiments are conducted on the CIFAR-10 dataset <cit.>, which comprises 10 distinct classes of colored images, including subjects such as airplanes, cats, and dogs. The dataset is partitioned into 50,000 training samples and 10,000 testing samples, with each image measuring 32 × 32 pixels and featuring RGB channels. Given that our NTK Regression is tailored for binary classification, we select two random classes from the 10 available classes to conduct the binary classification task. We randomly choose 1,000 images for training and 100 for testing. Feature Extraction. CIFAR-10 images possess a high-dimensional nature (32 × 32 × 3 = 3,072 dimensions), which poses a challenge for NTK Regression. To address this, we leverage the power of ResNet <cit.> to reduce the dimension. Following the approach of <cit.>, we employ ResNet-18 to encode the images and extract features from the network's last layer, yielding a 512-dimensional feature representation for each image. Feature Normalization. Prior to training our NTK Regression, we normalize all image features such that the ℒ_2 norm of each feature vector is equal to 1. NTK Regression Setup. For both NTK Regression and the NTK Regression Kernel Matrix, we select m = 256 neurons and a random Gaussian variance of σ = 1. This means that for each r ∈ [m], the weights w_r are drawn from the normal distribution N(0, I_d× d). Additionally, we set the regularization factor λ = 10. §.§ Main Results In accordance with the experimental setup detailed in Section <ref>, we present the results in Fig. <ref>. During the execution of NTK Regression, we initially compute H^dis (as defined in Definition <ref>) based on the quadratic activation between the training data and m neurons. As H^dis is a symmetric matrix, it is also positive semi-definite. Subsequently, in accordance with the notation in Definition <ref>, we define K = H^dis. 
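For concreteness, a minimal NumPy sketch of this kernel computation is given below. It is our own illustration rather than the released implementation: random features stand in for the ResNet-18 embeddings, while the dimensions reuse the setup above (n = 1000 training points, d = 512, m = 256, σ = 1).

```python
import numpy as np

def discrete_quadratic_ntk(X, W):
    # H_dis[i, j] = (1/m) * sum_r <w_r, x_i> <w_r, x_j> <x_i, x_j>
    S = X @ W.T                                     # S[i, r] = <x_i, w_r>
    return (S @ S.T) * (X @ X.T) / W.shape[0]

rng = np.random.default_rng(0)
n, d, m, sigma = 1000, 512, 256, 1.0
X = rng.standard_normal((n, d))                     # stand-in for ResNet-18 features
X /= np.linalg.norm(X, axis=1, keepdims=True)       # L2-normalize, as in the setup
W = sigma * rng.standard_normal((m, d))             # fixed weights w_r ~ N(0, sigma^2 I_d)

K = discrete_quadratic_ntk(X, W)                    # K = H_dis, symmetric
# PSD check: H_dis is a (scaled) sum of matrices B_r B_r^T, so eigenvalues are >= 0
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() >= -1e-8)
```

The Gaussian Sampling Mechanism and the ridge solve for the private α described next then operate on this matrix.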
We then apply the Gaussian Sampling Mechanism, as described in Section <ref>, to privatize K, denoting the private version as K. Due to the properties of the Gaussian Sampling Mechanism, K remains symmetric and thus positive semi-definite. Lemma <ref> guarantees that K is (ϵ, δ)-DP. We then compute the private α, denoted as α, by α = (K + λ I_n × n)^-1 Y. By the Post-processing Lemma of differential privacy (refer to Lemma <ref>), we confirm that α is also (ϵ, δ)-DP Consequently, NTK Regression itself is (ϵ, δ)-DP. In our experiment, we fix the differential privacy parameter δ = 10^-3. We recall that Δ is defined in Definition <ref>, and M is defined in Definition <ref>. To satisfy Condition 5 in Condition <ref>, we must ensure M < Δ. We select k to be greater than or equal to 8 ·log(1 / δ), ensuring that for any ϵ > 0, the condition ϵ / √(8 k log(1 / δ))≤ϵ / (8 log(1 / δ)) holds. Consequently, we have Δ = ϵ / √(8 k log(1 / δ)) (see also Definition <ref>). Under this setup, the condition M < Δ is equivalent to: k ≤ϵ^2 η_min^2/8 log(1 / δ) n^2 σ^4 B^8 β^2 In our experimental setup, we have σ = 1, B = 1, η_min = 7 × 10^-3, and n = 10^3. We assume β = 10^-6. We then compute the upper bound for k using Eq. (<ref>) and adhere to this upper bound when conducting our experiments. The outcomes are presented in Fig. <ref>. § DISCUSSION DP in kernel and gradient. In DP-SGD <cit.>, they add Gaussian noise on the gradient for privacy. As they are a first-order algorithm, their function sensitivity is more robust for single-step training. However, as discussed in Section <ref>, to guarantee DP for the whole training process, their DP Gaussian noise variance will increase as T becomes larger (see Theorem 1 in <cit.>). The DP-SGD will not be practical when T is too large. On the other hand, our NTK regression setting is a second-order algorithm involving kernel matrix inverse. Then, our key technical issues are (1) introducing a PSD noise matrix to keep kernel PSD property and (2) using L_2 regularization to make the kernel sensitivity more robust (see more details in Section <ref>). Where to add noise? The proper way to add noise is a fundamental problem in differential privacy. In the work, we add noise to the kernel function to make the neural tangent kernel private. The other option is to add noise on the parameter α in Definition <ref>, but the issue is that the sensitivity of the matrix inverse operation is not robust, so the privacy guarantee will not be ideal in this case. § CONCLUSION In conclusion, we have presented the first DP guarantees for NTK regression. Our algorithm provides a provable privacy-utility trade-off validated on CIFAR-10. This work opens new avenues for privacy-preserving deep learning using an NTK-based algorithm. § ACKNOWLEDGEMENT Research is partially supported by the National Science Foundation (NSF) Grants 2023239-DMS, CCF-2046710, and Air Force Grant FA9550-18-1-0166. plain Appendix Roadmap. The Appendix is organized as follows. In Section <ref>, we introduce the fundamental probability, algebra, and differential privacy tools that are utilized throughout the paper. Section <ref> contains the proof of the positive semi-definite (PSD) property for the Discrete Quadratic NTK. Section <ref> restates the analysis results for the "Gaussian Sampling Mechanism." Section <ref> presents a comprehensive proof of the sensitivity for both the Continuous Quadratic NTK and the Discrete Quadratic NTK, which we subsequently employ to ensure the privacy of our Private NTK Regression. 
Finally, in Section <ref>, we discuss the utility guarantees of our algorithm. § BASIC TOOLS In this section, we display more fundamental concepts and tools for a better understanding of the readers. In Section <ref>, we introduce a useful tail bound for the Chi-square distribution, as well as the classical concentration inequality. In Section <ref>, we demonstrate useful properties of Gaussian distribution and some useful inequalities for matrix norm and vector norm. In Section <ref>, we provide an essential post-processing Lemma for Differential Privacy. §.§ Probability Tools We state some standard tools from the literature. Firstly, we will state the tail bound of the Chi-Squared distribution in the following Lemma. Let X ∼𝒳_k^2 be a chi-squared distributed random variable with k degrees of freedom. Each one has zero means and σ^2 variance. Then, it holds that [X - kσ^2 ≥ (2√(dt) + 2t) σ^2] ≤  exp(-t) [kσ^2 - X ≥ 2√(dt)σ^2] ≤  exp(-t) Further if k ≥Ω(ϵ^-2 t) and t ≥Ω(log(1/δ)), then we have [ | X - k σ^2 | ≥ϵ k σ^2 ] ≤δ. Then, we will present several useful concentration inequalities. Let X_1, ⋯, X_n be independent zero-mean random variables. Suppose that |X_i| ≤ M almost surely, for all i. Then, for all positive t, [ ∑_i=1^n X_i > t ] ≤exp ( - t^2/2 /∑_j=1^n [X_j^2] + M t /3 ) Let x be a non-negative random variable and t > 0, and let f(x) be the probability density function (pdf) of x. Then, we have [x ≥ t] ≤[x]/t. §.§ Basic Algebra Tools We state a standard fact for the 4-th moment of Gaussian distribution. Let x ∼ N(0,σ^2), then it holds that _x ∼ N(0,σ^2)[x^4] = 3 σ^2. We also list some facts related to matrix norm and vector norm here. We have * Let A∈^n × n, then we have A _F≤√(n) A. * Let A ∈^n × n, then we have A ≤ A _F * For two vectors a,b ∈^n, then we have | a b^⊤ | ≤ a _2 · b _2 §.§ Differential Privacy Tools Here, we introduce the critical post-processing Lemma for Differential Privacy. Let ℳ := ℕ^|χ|→ be a randomized algorithm that is (ϵ, δ)-differentially private. Let f: →' be an arbitrarily random mapping. Then is f ∘ℳ: ℕ^|χ|→' (ϵ, δ)-differentially private. § POSITIVE SEMI-DEFINITE MATRICES In this section, we introduce the proof for the PSD property of the Discrete Quadratic NTK Matrix (see Definition <ref>). Let H^dis denote the discrete quadratic NTK matrix in Definition <ref>. Then, we can show that H^dis is PSD. Recall in Definition <ref>, we have H^dis∈^n × n, where the (i, j)-th entry of H^dis satisfies H_i, j^dis = 1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩. Let b_r, i = ⟨ w_r , x_i ⟩ x_i, where for any i ∈ [n], b_r, i∈^d. Let B_r = [b_r, 1, b_r, 2, ⋯ , b_r, n], where B_r ∈^n × n. Then, we have H^dis = 1/m∑_r = 1^m B_r B_r^⊤ Since B_r B_r^⊤ is PSD matrix, H^dis is also PSD matrix. § GAUSSIAN SAMPLING MECHANISM In this section, we restate the analysis for “Gaussian Sampling Mechanism", which guarantees the privacy of our algorithm, and provides potential tools for demonstrating its utility. If we have the below conditions, * Condition 1. Let ϵ∈ (0,1), δ∈ (0,1), k ∈ℕ. * Condition 2. Neighboring datasets 𝒴,𝒴' differ in a single data element. * Condition 3. Let Δ be denoted as Definition <ref> and Δ < 1. * Condition 4. Let M, M be denoted as Definition <ref> and M ≤Δ. * Condition 5. An input Σ = ℳ(𝒴). * Condition 6. ρ = O( √( ( n^2+log(1/γ) ) / k )+ ( n^2+log(1/γ) ) /k ). Then, there exists an Algorithm <ref> such that * Part 1. Algorithm <ref> is (ϵ,δ)-DP. * Part 2. 
Outputs Σ̂∈𝕊_+^n such that with probabilities at least 1-γ, Σ^-1/2ΣΣ^-1/2-I_n _F ≤ρ * Part 3. (1-ρ) Σ≼Σ≼ (1+ρ) Σ § SENSITIVITY OF NEURAL TANGENT KERNEL In this section, we provide proof of the sensitivity of the NTK Kernel Matrix. In Section <ref>, we demonstrate the sensitivity of Continuous Quadratic NTK under β-close neighboring dataset. Then, in Section <ref>, we use concentration inequalities to bridge the gap between Continuous Quadratic NTK and Discrete Quadratic NTK. Further we prove the sensitivity of the Discrete Quadratic NTK under β-close neighboring dataset. Based on previous Section, we introduce the calculation for (H^dis)^-1/2H^dis' (H^dis)^-1/2 - I _F in Section <ref>, which plays a critical role in satisfying the requirements of privacy guarantees. §.§ Sensitivity of Continuous Quadratic NTK We first introduce the fundamental property of the nested inner product. For any w, a, b ∈ℝ^d, we have ⟨⟨ w , a ⟩ a, ⟨ w , b ⟩ b ⟩ = w^⊤ a a^⊤ b b^⊤ w ⟨⟨ w , a ⟩ a, ⟨ w , b ⟩ b ⟩ =  ⟨⟨ w , a ⟩ a, x ⟨ b, w ⟩⟩ =  ⟨ w^⊤ a a, b b^⊤ w ⟩ =   w^⊤ a a^⊤ b b^⊤ w where the 1st step is because of ⟨ w , b ⟩∈ℝ, the 2nd step is due to ⟨ w , a ⟩ = w^⊤ a, the 3rd step is from basic property of inner product. Then, we are ready to consider the Lipschitz property for each entry in the Continuous Quadratic NTK Matrix. Since diagonal entries and off-diagonal entries have different Lipschitz properties, we need to consider these two cases separately. We first consider the off-diagonal case. If we have the below conditions, * Let B > 0 be a constant. * Let n be the number of data points. * Let m be the number of neurons. * Let dataset D = (X, Y), where X ∈^n × d and Y ∈^n. * Let x_i ∈^d denote X(i, *), and x_i _2 ≤ B, for any i ∈ [n]. * Let D' be the neighbor dataset (see Definition <ref>). Without loss of generality, we assume that D and D' only differ in the n-th item. * Let x := x_n ∈ D and x' := x_n' ∈ D'. * Let H^cts and H^cts' be defined as Definition <ref>, where H_i, j^cts = _w⟨⟨ w , x_i ⟩ x_i, ⟨ w , x_j ⟩ x_j ⟩. And H^cts' ∈^n × n is the kernel corresponding to D'. Then, we can show, for all { (i, j) : (i = n, j ≠ n) or (i ≠ n, j = n), i, j ∈ [n] }, we have | H^cts_i,j - H^cts_i,j' | ≤ 2 σ^2 B^3 · x - x' _2 Without loss of generality, we only consider the i = n, j ≠ n case. The other case follows by symmetry. Let v = x_j ∈^d. Then, we have the following | H^cts_n,j - H^cts_n,j' | =   |_w [⟨⟨ w , v ⟩ v, ⟨ w , x ⟩ x ⟩] - _w [⟨⟨ w , v ⟩ v, ⟨ w , x' ⟩ x' ⟩]| =   | _w [⟨⟨ w , v ⟩ v, ⟨ w , x ⟩ x ⟩ - ⟨⟨ w , v ⟩ v, ⟨ w , x' ⟩ x' ⟩] | where the 1st step is because of the definition of H^cts_i, j, the 2nd step is due to the property of expectation. Let A := v v^⊤ (x x^⊤ - x' x'^⊤) ∈ℝ^d × d. Let A(s, t) ∈ℝ denotes the (s, t)-th entry of A. Let w(s) ∈ℝ denote the s-th entry of w ∈ℝ^d. Further, we can calculate | _w [⟨⟨ w , v ⟩ v, ⟨ w , x ⟩ x ⟩ - ⟨⟨ w , v ⟩ v, ⟨ w , x' ⟩ x' ⟩] | =   | _w [ w^⊤ v v^⊤ (x x^⊤ - x' x'^⊤) w ] | =   | _w [ w^⊤ A w ] | =   | _w [∑_s=1^d w(s)^2 A(s, s)] + _w [∑_s ≠ t w(s) w(t) A(s, t)] | =   | _w [ ∑_s=1^d w(s)^2 A(s, s) ] | =   | _w [ ∑_s=1^d w(s)^2 (v v^⊤ (x x^⊤ - x' x'^⊤))(s, s) ] | where the 1st step is because of Lemma <ref> , the 2nd step is due to definition of A, the 3rd step is from we separate the diagonal term and off-diagonal term, the 4th step comes from [w(s) w(t)] = [w(s)] [w(t)] =0 (when w(s) and w(t) are independent), the fifth step follows from the definition of A. Let x(s) ∈ℝ denotes the s-th entry of x ∈ℝ^d. Let x(s)' ∈ℝ denotes the s-th entry of x' ∈ℝ^d. 
Let v(s) ∈ℝ denotes the s-th entry of v ∈ℝ^d. Then, we consider the _w [ ∑_s=1^d w(s)^2 (v v^⊤ x x^⊤)(s, s)] term. We have _w [ ∑_s=1^d w(s)^2 (v v^⊤ x x^⊤)(s, s)] =  ∑_s=1^d (v v^⊤ x x^⊤)(s, s) _w [ w(s)^2] =  σ^2 ∑_s=1^d (v v^⊤ x x^⊤)(s, s) =  σ^2 (v^⊤ x) ∑_s=1^d (v x^⊤)(s, s) =  σ^2 (v^⊤ x)^2 where the 1st step is because of the property of expectation, the 2nd step is due to _w [w_i^2] = σ^2, the 3rd step is from v^⊤ x ∈ℝ, the 4th step comes from ∑_s=1^d (v · x^⊤)(s, s) = v^⊤ x. Further, we can calculate | _w [ ∑_s=1^d w(s)^2 (v v^⊤ x x^⊤ - x' x'^⊤)(s, s)] | =  | _w [ ∑_s=1^d w(s)^2 (v v^⊤ x x^⊤ )(s, s)] - _w [ ∑_s=1^d w(s)^2 (v v^⊤ x' x'^⊤)(s, s)] | =  σ^2 | (v^⊤ x)^2 - (v^⊤ x')^2 | where the 1st step is because of linearity of expectation, the 2nd step is due to Eq. (<ref>). Further, we can calculate σ^2 | (v^⊤ x)^2 - (v^⊤ x')^2 | =  σ^2 | v^⊤ (x + x') | · | v^⊤ ( x - x') | ≤  σ^2 v _2 · x + x' _2 · v _2 · x - x' _2 ≤  σ^2 B · 2B · B · x - x' _2 =   2 σ^2 B^3 · x - x' _2 where the 1st step is because of |a^2 - b^2| = |a + b| · |a - b| for any a, b ∈ℝ, the 2nd step is due to | v^⊤ (x + x') | ≤ v _2 · x + x' _2, the 3rd step is from the property of inner product, the 4th step comes from x _2 ≤ B, the fifth step follows from basic algebra. Combining all components analysed above, we have | H^cts_i,j - H^cts_i,j' | ≤ 2 σ^2 B^3 · x - x' _2 Similar to off-diagonal entries, we consider the diagonal entries case. If we have the below conditions, * Let B > 0 be a constant. * Let n be the number of data points. * Let m be the number of neurons. * Let dataset D = (X, Y), where X ∈^n × d and Y ∈^n. * Let x_i ∈^d denotes X(i, *), and x_i _2 ≤ B, for any i ∈ [n]. * Let D' be the neighbor dataset. (See Definition <ref>) Without loss of generality, we assume D and D' only differ in the n-th item. * Let x := x_n ∈ D and x' := x_n' ∈ D'. * Let H^cts and H^cts' be defined as Definition <ref>, where H_i, j^cts = _w⟨⟨ w , x_i ⟩ x_i, ⟨ w , x_j ⟩ x_j ⟩. And H^cts' ∈^n × n is the kernel corresponding to D'. Then, we can show that | H^cts_n, n - H^cts_n, n' | ≤ 4 σ^2 B^3 · x - x' _2 We have the following | H^cts_n, n - H^cts_n, n' | =   |_w [⟨⟨ w , x ⟩ x, ⟨ w , x ⟩ x ⟩] - _w [⟨⟨ w , x' ⟩ x', ⟨ w , x' ⟩ x' ⟩]| =   |_w [ ⟨⟨ w , x ⟩ x, ⟨ w , x ⟩ x ⟩ - ⟨⟨ w , x' ⟩ x', ⟨ w , x' ⟩ x' ⟩] | where the 1st step is because of the definition of H^cts_i, j, the 2nd step is due to linearity of expectation. Then, we have | _w [ ⟨⟨ w , x ⟩ x, ⟨ w , x ⟩ x ⟩ - ⟨⟨ w , x' ⟩ x', ⟨ w , x' ⟩ x' ⟩ ] | =   | _w [ w^⊤ x x^⊤ x x^⊤ w - w^⊤ x' x'^⊤ x' x'^⊤ w ] | =   | _w [ w^⊤( x x^⊤ x x^⊤ - x' x'^⊤ x' x'^⊤) w ] | where the 1st step is because of Lemma <ref> , the 2nd step is due to the basic linear algebra. Let A := (x x^⊤ x x^⊤ - x' x'^⊤ x' x'^⊤) ∈ℝ^d × d. Let A(s, t) ∈ℝ denotes the (s, t)-th entry of A. Let w(s) ∈ℝ denotes the the s-th entry of w ∈ℝ^d. Further, we can calculate | _w [ w^⊤( x x^⊤ x x^⊤ - x' x'^⊤ x' x'^⊤) w ] | =  _w [ w^⊤ A w ] =   | _w [ ∑_s=1^d w(s)^2 A(s, s) ] + _w [ ∑_s ≠ t w(s) w(t) A(s, t) ] | =   | _w [∑_s=1^d w(s)^2 A(s, s) ] | =   | _w [ ∑_s=1^d w(s)^2 (x x^⊤ x x^⊤ - x' x'^⊤ x' x'^⊤)(s, s) ] | where the 1st step is because of definition of A, the 2nd step is due to we separate the diagonal and off-diagonal term, the 3rd step is from [w(s) w(t)] = [w(s)] [w(t)] =0 (when w(s) and w(t) are independent), the 4th step comes from definition of A. Let x(s) ∈ℝ denotes the s-th entry of x ∈ℝ^d. Let x(s)' ∈ℝ denotes the s-th entry of x' ∈ℝ^d. Then, we consider the _w [ ∑_s=1^d w(s)^2 (x x^⊤ x x^⊤)(s, s) ] term. 
We have _w [ ∑_s=1^d w(s)^2 (x x^⊤ x x^⊤)(s, s) ] =  ∑_s=1^d (x x^⊤ x x^⊤)(s, s) _w [ w(s)^2 ] =  σ^2 ∑_s=1^d (x x^⊤ x x^⊤)(s, s) =  σ^2 ( x^⊤ x ) ∑_s=1^d (x x^⊤)(s, s) =  σ^2 ( x^⊤ x )^2 where the 1st step is because of linearity of expectation, the 2nd step is due to _w [w(s)^2] = σ^2, the 3rd step is from x^⊤ x ∈ℝ, the 4th step comes from ∑_s=1^d (x · x^⊤)(s, s) = x^⊤ x. Further, we can calculate | _w [ ∑_s=1^d w(s)^2 (x x^⊤ x x^⊤ - x' x'^⊤ x' x'^⊤)(s, s) ] | =   | _w [ ∑_s=1^d w(s)^2 (x x^⊤ x x^⊤)(s, s) ] - _w [ ∑_s=1^d w(s)^2 ( x' x'^⊤ x' x'^⊤)(s, s) ] | =  σ^2 | ( x^⊤ x )^2 - ( x'^⊤ x' )^2 | =  σ^2 | x _2^4 - x' _2^4 | where the 1st step is because of linearity of expectation, the 2nd step is due to Eq. (<ref>), the 3rd step is from x^⊤ x = x _2^2. Further, we can calculate σ^2 | x _2^4 - x' _2^4 | =  σ^2 | x _2^2 + x' _2^2 | · | x _2 + x' _2 | · | x _2 - x' _2 | ≤  σ^2 | x _2^2 + x' _2^2 | · | x _2 + x' _2 | · x - x' _2 ≤  σ^2 · 2B^2 · 2B · x - x' _2 =   4 σ^2 B^3 · x - x' _2 where the 1st step is because of |a^2 - b^2| = |a + b| · |a - b| for any a, b ∈ℝ, the 2nd step is due to triangle inequality | x _2 - x' _2 | ≤ x - x' _2, the 3rd step is from x_i _2 ≤ B, the 4th step comes from basic algebra. Combining all components analysed above, we have | H^cts_n, n - H^cts_n, n' | ≤ 4 σ^2 B^3 · x - x' _2 Having established the Lipschitz properties for both the diagonal and off-diagonal entries, we are now equipped to demonstrate the sensitivity of the Continuous Quadratic NTK Matrix with respect to β-close neighboring datasets. If we have the below conditions, * Let B > 0 be a constant. * Let n be the number of data points. * We have dataset D = {(x_i, y_i)}_i=1^n, where x_i ∈^d and x_i_2 ≤ B for any i ∈ [n]. * Let the neighbor dataset D' be defined Definition <ref>. * Let the continuous quadratic kernel matrices H^cts and H^cts' be defined as Definition <ref>, where H^cts' ∈^n × n is the kernel corresponding to D'. Without loss of generality, we have D and D' only differ in the n-th item. Then, we can show that H^cts - H^cts' _F ≤ O( √(n)σ^2 B^3 β) By Lemma <ref>, we know that, for all { (i, j) : (i = n, j ≠ n) or (i ≠ n, j = n), i, j ∈ [n] }, we have | H^cts_i,j - H^cts_i,j' | ≤ 2 σ^2 B^3 · x - x' _2 By Lemma <ref>, we know that | H^cts_n, n - H^cts_n, n' | ≤ 4 σ^2 B^3 · x - x' _2 By Definition <ref>, we have x - x' _2 ≤β There are 2n - 2 entries satisfy { (i, j) : (i = n, j ≠ n) or (i ≠ n, j = n), i, j ∈ [n] }. And there is 1 entry satisfies (i, j) = (n, n). Therefore, we have H^cts - H^cts' ^2_F =   (2n - 2) ( H^cts_i,j - H^cts_i,j')^2 + ( H^cts_n, n - H^cts_n, n')^2 ≤   (2n - 2) · 4 σ^4 B^6 x - x' _2^2 + 16 σ^4 B^6 x - x' _2^2 ≤   (2n - 2) · 4 σ^4 B^6 ·β^2 + 16 σ^4 B^6 ·β^2 =   O(nσ^4 B^6 β^2) where the 1st step is because of definition of ·_F, the 2nd step is due to Eq. (<ref>) and Eq. (<ref>), the 3rd step is from Eq. (<ref>), the 4th step comes from basic algebra. Therefore, we have H^cts - H^cts' _F ≤ O( √(n)σ^2 B^3 β) §.§ Sensitivity of Discrete Quadratic NTK After calculating the sensitivity of the Continuous Quadratic NTK Matrix in the previous section, we are going to use Bernstein Inequality to bridge the gap between Continuous and Discrete Quadratic NTK Matrices, where eventually we are going to demonstrate the sensitivity of Discrete Quadratic NTK under β-close neighboring dataset. We first introduce a fundamental and useful property of a single entry of the Continuous Quadratic NTK Matrix. If we have the below conditions, * Let H^dis be define in Definition <ref>. 
Namely, H_i, j^dis = 1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩. * Let H^cts be define in Definition <ref>. Namely, H_i, j^cts = _w ∼ N(0, σ^2 I_d× d)⟨⟨ w , x_i ⟩ x_i, ⟨ w , x_j ⟩ x_j ⟩. Then we have _w [H_i, j^dis] =   H_i, j^cts H_i, j^cts =  σ^2 (x_i^⊤ x_j)^2 We have the following equation _w [H_i, j^dis] =  _w [1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩] =  1/m∑_r=1^m _w_r [ ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩] =  _w [ ⟨⟨ w , x_i ⟩ x_i, ⟨ w , x_j ⟩ x_j ⟩] =   H_i, j^cts where the 1st step is because of the definition of H_i, j^dis, the 2nd step is due to the linearity of expectation, the 3rd step is from w_r are i.i.d. , the 4th step comes from the definition of H_i, j^cts. H_i, j^cts≤σ^2 B^4 Then, we can calculate the following H_i, j^cts =  _w [⟨⟨ w, x_i ⟩ x_i, ⟨ w, x_j ⟩ x_j ⟩] =  _w[ w^⊤ x_i x_i^⊤ x_j x_j^⊤ w ] | =  _w[ ∑_s=1^d w(s)^2 (x_i x_i^⊤ x_j x_j^⊤)(s, s) ] + _w[ ∑_s ≠ t w(s)w(t) (x_i x_i^⊤ x_j x_j^⊤)(s, t) ] =  _w[ ∑_s=1^d w(s)^2 (x_i x_i^⊤ x_j x_j^⊤)(s, s) ] =  σ^2 ∑_s=1^d (x_i x_i^⊤ x_j x_j^⊤)(s, s) =  σ^2 ·⟨ x_i, x_j ⟩∑_s=1^d (x_i x_j^⊤)(s, s) =  σ^2 (⟨ x_i, x_j ⟩)^2 where the 1st step is because of the definition of H_i, j^cts, the 2nd step is due to Lemma <ref>, the 3rd step is from we separate the diagonal term and off-diagonal term, the 4th step comes from [w(s) w(t)] = [w(s)] [w(t)] =0 (when w(s) and w(t) are independent), the fifth step follows from _w [w(s)^2] = σ^2, the sixth step follows from x_i^⊤ x_j ∈ℝ, the seventh step follows from ∑_s=1^d (x_i x_j^⊤)(s, s) = x_i^⊤ x_j ∈ℝ. In order to use Bernstein Inequality, we need to bound the value ranges of random variables,as well as the variance of random variables. We first consider the value range of w_r _2^2. If we have the below conditions, * Let w_r ∼ N(0, σ^2 I_ d× d). * Let δ_1 ∈ (0, 1). * Let t ≥Ω( log (1 / δ_1)) Then, with probability 1 - δ_1, we have w_r _2^2 ≤ 3(t + d) σ^2 Let w_r(s) ∈ℝ denotes the s-th entry of w_r ∈ℝ^d. We consider w_r _2^2. We have w_r _2^2 = ∑_s=1^d w_r(s)^2 Since for each s ∈ [d], w_r(s) ∼ N(0, σ), then w_r _2^2 ∼χ^2_d, where each one has zero means and σ^2 variance. By Chi-Square tail bound (Lemma <ref>), we have [ w_r _2^2 - dσ^2 ≥ (2√(dt) + 2t) σ^2] ≤  exp(-t) Our goal is to choose t sufficiently large, such that the above quantity is upper bounded by δ_1. We choose t ≥Ω (log ( m n / δ_1)), and substitute t in Eq. (<ref>). Then we have [ w_r _2^2 ≥ (2√(dt) + 2t + d) σ^2] ≤  δ_1 / (m, n) which is equivalent to [ w_r _2^2 ≤ (2√(dt) + 2t + d) σ^2] ≥   1 -δ_1 / (m, n) Since (2√(dt) + 2t + d) ≤ 2(√(dt) + t + d) ≤ 3(t + d), then we have [ w_r _2^2 ≤ 3(t + d) σ^2] ≥   1 -δ_1 / (m) After calculating the value range of w_r _2^2, we are ready to prove the value range of random variables Z_r. If we have the below conditions, * Let H^dis∈ℝ^n × n be define in Definition <ref>. Namely, H_i, j^dis = 1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩, for any i, j ∈ [n]. * Let Ξ_r := ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩, where Ξ_r ∈ℝ. * Let Z_r := Ξ_r - _w_r [Ξ_r], where [Z_r] = 0. * Let δ_1 ∈ (0, 1) * Let d ≥Ω (log (1 / δ_1)) Then, with probability 1 - δ_1, we have * Part 1. |Ξ_r| ≤ 6 d σ^2 B^4 * Part 2. |Z_r| ≤ 6 d σ^2 B^4 Proof of Part 1. For any r ∈ [m], we have Ξ_r =  ⟨ w_r , x_i ⟩⟨ w_r , x_j ⟩⟨ x_i, x_j ⟩ ≤   w_r _2^2 · x_i _2^2 · x_j _2^2 ≤   w_r _2^2 · B^4 where the 1st step is because of ⟨ a, b ⟩≤a _2 b _2 for any a, b ∈^d holds, the 2nd step is due to x_i _2 ≤ B. 
By Lemma <ref>,for any δ_1 ∈ (0, 1), if t ≥Ω(log (1 / δ_1)) we have the following holds [ w_r _2^2 ≤ 3 (t + d) σ^2 ] ≥   1 - δ_1 Proof of Part 2. Then, we have [ Ξ_r ≤ 3( t + d) σ^2 B^4 ] ≥   1 - δ_1 Since Ξ_r = ⟨ w_r , x_i ⟩⟨ w_r , x_j ⟩⟨ x_i, x_j ⟩, and ⟨ a , b ⟩≥ 0 holds for any a, b ∈ℝ^d, then we have Ξ_r ≥   0 [Ξ_r] ≥   0 Then, we consider |Z_r|. We have |Z_r| =   |Ξ_r - [Ξ_r]| ≤   |Ξ_r| where the 1st step is because of the definition of Z_r, the 2nd step is due to Ξ_r ≥ 0. Combining all analyses above, we have [ |Z_r| ≤ 3 (t + d) σ^2 B^4 ] ≥   1 - δ_1 We choose d ≥Ω (log (m n / δ_1)). We have [ |Z_r| ≤ 6 d σ^2 B^4 ] ≥ 1 - δ_1 So far, we have bounded the value range of random variables. We are going to bound the variance of random variables. Since we have the Expectation of random variables are 0, we only need to consider the second moment of random variables. If we have the below conditions, * Let H^dis∈ℝ^n × n be define in Definition <ref>. Namely, H_i, j^dis = 1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩, for any i, j ∈ [n] * Let Ξ_r := ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩, where Ξ_r ∈ℝ. Then we have _w_r [Ξ_r^2] = (∑_s=1^d x_i(s) x_j(s))^2 · (3 σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + 2 σ^4 (∑_s ≠ t x_i(s)^2 x_j(t)^2 )) Let v(s) ∈ℝ denotes the s-th entry of any v ∈ℝ^d. Then, we have Ξ_r =  ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩ =   w_r^⊤ x_i x_i^⊤ x_j x_j^⊤ w =   (∑_s=1^d x_i(s) w_r(s)) · (∑_s=1^d x_i(s) x_j(s)) · (∑_s=1^d x_j(s) w_r(s)) where the 1st step is because of the definition of Ξ_r, the 2nd step is due to Lemma <ref>, the 3rd step is from ⟨ a, b⟩ = ∑_s=1^d a(s) b(s) holds for any a, b ∈ℝ^d. Then, we have Ξ_r^2 =   (∑_s=1^d x_i(s) w_r(s))^2 · (∑_s=1^d x_i(s) x_j(s))^2 · (∑_s=1^d x_j(s) w_r(s))^2 =   (∑_s=1^d x_i(s) x_j(s))^2  · (∑_s=1^d x_i(s)^2 w_r(s)^2 + ∑_s ≠ t x_i(s) x_i(t) w_r(s) w_r(t) )  · (∑_s=1^d x_j(s)^2 w_r(s)^2 + ∑_s ≠ t x_j(s) x_j(t) w_r(s) w_r(t) ) where the 1st step is because of Eq. (<ref>), the 2nd step is due to we separate the diagonal and off-diagonal term. We reorganize Eq. (<ref>), into the following format Ξ_r^2 = C_0 (T_1 + T_2 + T_3 + T_4) where C_0 =   (∑_s=1^d x_i(s) x_j(s))^2 T_1 =   (∑_s=1^d x_i(s)^2 w_r(s)^2) (∑_s=1^d x_j(s)^2 w_r(s)^2) T_2 =   (∑_s=1^d x_i(s)^2 w_r(s)^2) (∑_s ≠ t x_j(s) x_j(t) w_r(s) w_r(t)) T_3 =   (∑_s ≠ t x_i(s) x_i(t) w_r(s) w_r(t)) (∑_s=1^d x_j(s)^2 w_r(s)^2) T_4 =   (∑_s ≠ t x_i(s) x_i(t) w_r(s) w_r(t)) (∑_s ≠ t x_j(s) x_j(t) w_r(s) w_r(t)) We consider T_1. We have T_1 =  ∑_s=1^d x_i(s)^2 x_j(s)^2 w_r(s)^4 + ∑_s ≠ j x_i(s)^2 x_j(t)^2 w_r(s)^2 w_r(t)^2 Then, we have [T_1] =   3 σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + [∑_s ≠ j x_i(s)^2 x_j(t)^2 w_r(s)^2 w_r(t)^2] =   3 σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + σ^4 (∑_s ≠ j x_i(s)^2 x_j(t)^2 ) where the 1st step is because of [w_r(s)^4] = 2σ^2, the 2nd step is due to [W_r(s)^2] = σ^2. We consider T_2. We know that for each single term in T_2, it must contain one of the following terms * w_r(s)^3w_r(t) * w_r(s)w_r(t)^3 * w_r(s)w_r(t)w_r(k)^2 Since we have [w_r(s)] = 0, then we can have [T_2] = 0 The same analysis can be applied to T_3. Then we have [T_3] = 0 We consider T_4. We have [T_4] =   [(∑_s ≠ t x_i(s) x_i(t) w_r(s) w_r(t)) (∑_s ≠ t x_j(s) x_j(t) w_r(s) w_r(t))] =   [∑_s ≠ t x_i(s)^2 x_i(t)^2 w_r(s)^2 w_r(t)^2] =  σ^4 · (∑_s ≠ t x_i(s)^2 x_i(t)^2) where the 1st step is because of the definition of T_4, the 2nd step is due to similar analysis of T_2 and T_3, the 3rd step is from [w_r(s)] = σ^2. Then we consider Ξ_r^2. 
We have [Ξ_r^2] =   C_0 ( [T_1] + [T_2] + [T_3] + [T_4]) =   (∑_s=1^d x_i(s) x_j(s))^2 · (3 σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + 2 σ^4 (∑_s ≠ j x_i(s)^2 x_j(t)^2 )) where the 1st step is because of linearity of expectation, the 2nd step is due to above analysis about [T_1], [T_2], [T_3], [T_4]. Since we have Z_r := Ξ_r - _w_r [Ξ_r], then [Z_r] = 0. We can bound variance of Z_r by harnessing the power of the bound of second moment of Z_r, which can be formally expressed as follows. If we have the below conditions, * Let H^dis∈ℝ^n × n be define in Definition <ref>. Namely, H_i, j^dis = 1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩, for any i, j ∈ [n]. * Let Ξ_r := ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩, where Ξ_r ∈ℝ. * Let Z_r := Ξ_r - _w_r [Ξ_r], where [Z_r] = 0. Then, we have Var [Z_r] ≤ 6 B^8 σ^4 We consider [Z_r^2]. We have [Z_r^2] =   [(Ξ_r - [Ξ_r])^2] =   [Ξ_r^2 - 2 [Ξ_r] Ξ_r + [Ξ_r]^2] =   [Ξ_r^2] - [Ξ_r] where the 1st step is because of the definition of Z_r, the 2nd step is due to basic algebra, the 3rd step is from linearity of expectation. By Lemma <ref>, we have [Ξ_r] = σ^2 (∑_s=1x_i(s) x_j(s))^2 By Lemma <ref>, we have [Ξ_r^2] = (∑_s=1^d x_i(s) x_j(s))^2 · (3 σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + 2 σ^4 (∑_s ≠ j x_i(s)^2 x_j(t)^2 )) Then, we have [Z_r^2] =   [Ξ_r^2] - [Ξ_r] =   (∑_s=1^d x_i(s) x_j(s))^2 · (3 σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + 2 σ^4 (∑_s ≠ j x_i(s)^2 x_j(t)^2 ) - σ^2) Then, we have Var [Z_r] =   [Z_r^2] - ( [Z_r])^2 =   [Z_r^2] =   (∑_s=1^d x_i(s) x_j(s))^2 · (3 σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + 2 σ^4 (∑_s ≠ j x_i(s)^2 x_j(t)^2 ) - σ^2) where the 1st step is because of Var [X] = [X^2] - [X]^2 holds for any random variable X, the 2nd step is due to [Z_r] = 0, the 3rd step is from Eq. (<ref>). Then, we have Var [Z_r] ≤   3 (∑_s=1^d x_i(s) x_j(s))^2 · ( σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + σ^4 (∑_s ≠ j x_i(s)^2 x_j(t)^2 )) ≤   3 x_i _2^2 · x_j _2^2 · ( σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + σ^4 (∑_s ≠ j x_i(s)^2 x_j(t)^2 )) ≤   3 B^4 · ( σ^2 (∑_s=1^d x_i(s)^2 x_j(s)^2) + σ^4 (∑_s ≠ j x_i(s)^2 x_j(t)^2 )) ≤   3 B^4 · ( σ^2 (∑_s=1^d x_i(s)^2 ) (∑_s=1^d x_j(s)^2) + σ^4 (∑_s ≠ j x_i(s)^2 x_j(t)^2 )) ≤   3 B^4 · ( σ^2 (∑_s=1^d x_i(s)^2 ) (∑_s=1^d x_j(s)^2) + σ^4 (∑_s=1^d x_i(s)^2 ) (∑_s=1^d x_j(s)^2)) ≤   3 B^4 · ( σ^2 B^4 + σ^4 B^4) =   3 B^8 (σ^2 + σ^4) where the 1st step is because of basic inequality, the 2nd step is due to x_i^⊤ x_j ≤ x_i _2 x_j _2, the 3rd step is from x_i _2 ≤ B, the 4th step comes from ∑_s=1^d x_i(s)^2 x_j(s)^2 ≤ (∑_s=1^d x_i(s)^2 ) (∑_s=1^d x_j(s)^2), the fifth step follows from ∑_s ≠ j x_i(s)^2 x_j(t)^2 ≤ (∑_s=1^d x_i(s)^2 ) (∑_s=1^d x_j(s)^2), the sixth step follows from x_i _2 ≤ B, the seventh step follows from basic algebra. We assume δ^2 ≥ 1. Then we have Var [Z_r] ≤ 6 B^8 σ^4 Since we have bounded the value range and variance of Z_r, we are ready to adapt Bernstein inequality to each entry of the Discrete Quadratic NTK Matrix. Therefore, we have the following Lemma. If we have the below conditions, * Let H^dis∈ℝ^n × n be define in Definition <ref>. Namely, H_i, j^dis = 1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩, for any i, j ∈ [n]. * Let H^cts∈ℝ^n × n be define in Definition <ref>. Namely, H_i, j^cts = _w ∼ N(0, σ^2 I_d× d)⟨⟨ w , x_i ⟩ x_i, ⟨ w , x_j ⟩ x_j ⟩, for any i, j ∈ [n]. * Let Ξ_r := ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩, where Ξ_r ∈ℝ. * Let Z_r := Ξ_r - _w_r [Ξ_r], where [Z_r] = 0. * Let δ_1, δ_2 ∈ (0, 1). Let δ_1 = δ_2 / (m). * Let d = Ω (log (1 / δ_1)) * Let m = Ω (ϵ_1^-2 d B^8 σ^4 log (1 / δ_2)). 
Then, with probability 1 - δ_2, we have |H_i, j^dis - H_i, j^cts| ≤ϵ_1 Let X_r := 1/m Z_r. Then we have [X_r] = 0. Let X := ∑_r=1^m X_r. By Lemma <ref>, with probability 1 - δ_1, we have |Z_r| ≤ 6 d σ^2 B^4 which implies |X_r| ≤ 6 d σ^2 B^4 m^-1 By Lemma <ref>, we have Var [Z_r] ≤   6 B^8 σ^4 which implies Var [X_r] ≤   6 B^8 σ^4 m^-2 Applying Lemma <ref>, we have [ X > t ] ≤exp ( - t^2/2 / m [X_r^2] + M t /3 ) where [X_r] =   0 [X_r^2] =   6 B^8 σ^4 m^-2 M =   6 d σ^2 B^4 m^-1 where the first equation follows from the definition of Z_r, the second equation follows from Eq. (<ref>), the third equation follows from Eq. (<ref>). Our goal is to choose m sufficiently large such that Eq. (<ref>) satisfies [ X > ϵ_1 ] ≤δ_2 / 2 We will achieve this in two steps. * Firstly, we will find t such that the probability is no larger than δ_2 / (n). * Secondly, we will choose m such that t ≤ϵ_1. We need to choose t such that Eq. (<ref>) satisfies [ X > t ] ≤δ_2 / 2 Firstly, we need t^2/m [X_r^2] =  mt^2/6 B^8 σ^4 ≥  log (1 / δ_2) This requires t ≥ (6 B^8 σ^4 m^-1log (1 / δ_2))^1/2 Secondly, we need t^2/Mt / 3 =  3t/M =  m t/6 d σ^2 B^4 ≥  log (1 / δ_2 ) This requires t ≥ 6 d σ^2 B^4 m^-1log (1 / δ_2) We define A_1 =   (6 B^8 σ^4 m^-1log (1 / δ_2))^1/2 A_2 =   6 d σ^2 B^4 m^-1log (1 / δ_2) We should choose t ≥ A_1 + A_2 Then, we need to choose m sufficiently large, such that t ≤ϵ_1. Namely, we need A_1 ≤  ϵ_1 / 2 A_2 ≤  ϵ_1 / 2 Firstly, we consider A_1. We have A_1 =   (6 B^8 σ^4 m^-1log (1 / δ_2))^1/2 ≤  ϵ_1 / 2 This requires m ≥ C_1 B^8 σ^4 ϵ_1^-2log(1 / δ_2) Secondly, we consider A_2. We have A_2 =   6 d σ^2 B^4 m^-1log (1 / δ_2) ≤  ϵ_1 / 2 This requires m ≥ C_2 d σ^2 B^4 ϵ_1^-1log (1 / δ_2 ) Finally, we choose m ≥   C_1 B^8 σ^4 ϵ_1^-2log(1 / δ_2 ) + C_2 d σ^2 B^4 ϵ_1^-1log (1 / δ_2 ) Therefore, By Eq. (<ref>), we choose m = Ω (ϵ_1^-2 d B^8 σ^4 log (1 / δ_2 )), then we have [ |X| ≤ϵ_1] ≥ 1 - δ_2 / 2 We also need to consider the possibility that for each r ∈ [m], |X_r| ≤ M holds. We choose δ_1 = δ_2 / (m) in Eq. (<ref>). Then We take union bound over m entries, we have for all r ∈ [m] [ |X_r| ≤ 6 d σ^2 B^4 m^-1 ] ≥   1 - m δ_1 =   1 - m δ_2 / (m) ≥   1 - δ_2 / 2 We take union bound over Eq. (<ref>) and Eq. (<ref>). We have [ |H_i, j^dis - H_i, j^cts| ≤ϵ_1] ≥   1 - (δ_2 / 2 + δ_2 / 2) =   1 - δ_2 As we have already proven the gap between single entry between Continuous and Discrete Quadratic NTK Matrices in Lemma <ref>, we further take union bound over all n^2 entries, to get the gap between Continuous and Discrete Quadratic NTK Matrices. If we have the below conditions, * Let H^dis∈ℝ^n × n be define in Definition <ref>. Namely, H_i, j^dis = 1/m∑_r=1^m ⟨⟨ w_r , x_i ⟩ x_i, ⟨ w_r , x_j ⟩ x_j ⟩, for any i, j ∈ [n]. * Let H^cst∈ℝ^n × n be define in Definition <ref>. Namely, H_i, j^cts = _w ∼ N(0, σ^2 I_d× d)⟨⟨ w , x_i ⟩ x_i, ⟨ w , x_j ⟩ x_j ⟩, for any i, j ∈ [n]. * Let δ_1, δ_2 , δ_3 ∈ (0, 1). Let δ_1 = δ_2 / (m). Let δ_2 = δ_3 / (n). * Let d = Ω (log (1 / δ_1)) * Let m = Ω ( ϵ_1^-2 d B^8 σ^4 log (1 / δ_2)) Then, with probability 1 - δ_3, we have H^dis - H^cts_F ≤ n ϵ_1 By Lemma <ref>, if we choose m = Ω ( ϵ_1^-2 d B^8 σ^4 log (1 / δ_2)). Then, with probability 1 - δ_2 we have |H_i, j^dis - H_i, j^cts| ≤ϵ_1 We choose δ_2 = δ_3 / (n). Then, we take union bound over n^2 entries. 
We have [H^dis - H^cts_F ≤ n ·ϵ_1 ] ≥   1 - n^2 δ_2 =   1 - n^2 δ_3 / (n) ≥   1 - δ_3 With the help of Bernstein Inequality and the sensitivity of Continuous Quadratic NTK Matrix, we are ready to present the sensitivity of Discrete Quadratic NTK Matrix. If we have the below conditions, * Let B > 0 be a constant. * Let n be the number of data points. * We have dataset D = {(x_i, y_i)}_i=1^n, where x_i ∈^d and x_i_2 ≤ B for any i ∈ [n]. * Let the neighbor dataset D' be defined Definition <ref>. * Let the discrete quadratic kernel matrices H^dis∈ℝ^n × n and H^dis' ∈ℝ^n × n be defined as Definition <ref>, where H^dis' ∈^n × n is the kernel corresponding to D'. Without loss of generality, we have D and D' only differ in the n-th item. * Let δ_1, δ_2 , δ_3 ∈ (0, 1). Let δ_1 = δ_2 / (m). Let δ_2 = δ_3 / (n). * Let d = Ω (log (1 / δ_1)) * Let m = Ω (ϵ_1^-2 d B^8 σ^4 log (1 / δ_2)) Then, with probability 1 - δ_3, we can show that H^dis - H^dis' _F ≤ O (n ϵ_1 + √(n)σ^2 B^3 β) Recall in Lemma <ref>, Let δ_2 , δ_3 ∈ (0, 1). Let δ_2 = δ_3 / (n). Then, if we choose m = Ω (ϵ_1^-2 d B^8 σ^4 log (1 / δ_2)), then, with probability 1 - δ_3, we have H^dis - H^cts_F ≤ n ϵ_1 By Lemma <ref>, we have H^cts - H^cts' _F ≤ C √(n)σ^2 B^3 β Then, we have the following H^dis - H^dis' _F =   ( H^dis - H^cts ) + ( H^cts - H^cts' ) + ( H^cts' - H^dis' ) _F ≤   H^dis - H^cts_F + H^cts - H^cts' _F + H^cts' - H^dis' _F ≤   2nϵ_1 + H^cts - H^cts' _F ≤   C (n ϵ_1 + √(n)σ^2 B^3 β) where the 1st step is because of basic algebra, the 2nd step is due to a + b _F ≤ a _F + b _F holds for any a, b ∈ℝ^d × d, the 3rd step is from Eq. (<ref>), the 4th step comes from Eq. (<ref>). We choose an appropriate ϵ_1 to have the final sensitivity of the Discrete Quadratic NTK Matrix. If we have the below conditions, * Let B > 0 be a constant. * Let n be the number of data points. * We have dataset D = {(x_i, y_i)}_i=1^n, where x_i ∈^d and x_i_2 ≤ B for any i ∈ [n]. * Let the neighbor dataset D' be defined Definition <ref>. * Let the discrete quadratic kernel matrices H^dis∈ℝ^n × n and H^dis' ∈ℝ^n × n be defined as Definition <ref>, where H^dis' ∈^n × n is the kernel corresponding to D'. Without loss of generality, we have D and D' only differ in the n-th item. * Let δ_1, δ_2 , δ_3 ∈ (0, 1). Let δ_1 = δ_2 / (m). Let δ_2 = δ_3 / (n). * Let d = Ω (log (1 / δ_1)). * Let m = Ω(n · d B^2 β^-2log (1 / δ_2)). * Let ϵ_1 = O(n^-1/2σ^2 B^3 β). Then, with probability 1 - δ_3, we can show that H^dis - H^dis' _F ≤ O (√(n)σ^2 B^3 β) By Lemma <ref>, with probability 1 - δ_3, we have H^dis - H^dis' _F ≤ O (n ϵ_1 + √(n)σ^2 B^3 β) Since we choose ϵ_1 = O(n^-1/2σ^2 B^3 β), we have O(n ϵ_1) = O ( √(n)σ^2 B^3 β) Also, since we choose ϵ_1 = O(n^-1/2σ^2 B^3 β), we have m = Ω(n · d B^2 β^-2log (1 / δ_2)) Then, we are done. §.§ Sensitivity for Privacy Guarantees Recall that, we have defined M in Definition <ref>. And we have defined neighboring dataset 𝒟 and 𝒟' in Definition <ref>. Then, we have M := ℳ( D)^1/2ℳ( D')^-1ℳ( D)^1/2 - I_n × n_F In this section, we demonstrate that M(D) = H^dis satisfies the assumption specified in Condition 5 of Theorem <ref> for M(D). Namely, we want to prove that M ≤Δ, for some Δ. In order to calculate M, we need the PSD inequality tools, which is proved in the following Lemma. If we have the below conditions, * Let D∈^n × d and D' ∈^n × d are neighboring dataset (see Definition <ref>) * Let H^dis denotes the discrete NTK kernel matrix generated by D, and H^dis' denotes the discrete NTK kernel matrix generated by neighboring dataset D'. 
* Let H^dis≽η_min I_n × n, for some fixed η_min∈. * Let β = O(η_min / (n, σ, B)), where β is defined in Definition <ref>. * Let ψ := O (√(n)σ^2 B^3 β). * Let δ_1, δ_2 , δ_3 ∈ (0, 1). Let δ_1 = δ_2 / (m). Let δ_2 = δ_3 / (n). * Let d = Ω (log (1 / δ_1)). * Let m = Ω(n · d B^2 β^-2log (1 / δ_2)). Then, with probability 1 - δ_3, we have (1 - ψ / η_min) H^dis≼H^dis' ≼ (1 + ψ / η_min) H^dis We need to calculate the spectral norm. Namely, we need H^dis' - H^dis. By Fact <ref>, we have H^dis' - H^dis≤H^dis' - H^dis_F By Lemma <ref>, we have H^dis - H^dis' _F ≤ O (√(n)σ^2 B^3 β) Combining the two analysis mentioned above, we have H^dis - H^dis' ≤ O (√(n)σ^2 B^3 β) Since we have ψ = O (√(n)σ^2 B^3 β) and β = O(η_min / (n, σ, B)). Therefore, we have ψ / η_min < 1 Then, we have H^dis' ≽   H^dis - ψ I_n × n ≽   (1 - ψ / η_min )H^dis where the 1st step is because of Eq. (<ref>), the 2nd step is due to H^dis≽η_min I_n × n. Then, we are ready to calculate the exact value of M based on the Lemma proved above. If we have the below conditions, * If D∈^n × d and D' ∈^n × d are neighboring dataset (see Definition <ref>) * Let H^dis denotes the discrete NTK kernel matrix generated by D, and H^dis' denotes the discrete NTK kernel matrix generated by neighboring dataset D'. * Let H^dis≽η_min I_n × n, for some η_min∈. * Let β = O(η_min / (n, σ, B)), where β is defined in Definition <ref>. * Let ψ := O (√(n)σ^2 B^3 β ). * Let δ_1, δ_2 , δ_3 ∈ (0, 1). Let δ_1 = δ_2 / (m). Let δ_2 = δ_3 / (n). * Let d = Ω (log (1 / δ_1)). * Let m = Ω(n · d B^2 β^-2log (1 / δ_2)). Then, with probability 1 - δ_3, we have * Part 1. (H^dis)^-1/2H^dis' (H^dis)^-1/2 - I ≤ψ / η_min * Part 2. (H^dis)^-1/2H^dis' (H^dis)^-1/2 - I _F ≤√(n)ψ / η_min Proof of Part 1. By Lemma <ref>, we have (1 - ψ / η_min) H^dis≼H^dis' ≼ (1 + ψ / η_min) H^dis which implies (1 - ψ / η_min) I_n × n≼ (H^dis)^1/2H^dis' (H^dis)^1/2≼ (1 + ψ / η_min) I_n × n which implies - ψ / η_min I_n × n≼ (H^dis)^1/2H^dis' (H^dis)^1/2 - I_n × n≼ψ / η_min I_n × n which implies (H^dis)^-1/2H^dis' (H^dis)^-1/2 - I ≤ψ / η_min Proof of Part 2. By Fact <ref>, for any A ∈^n × n, we have A _F ≤√(n) A Combining Eq. (<ref>) and Eq. (<ref>), we have (H^dis)^-1/2H^dis' (H^dis)^-1/2 - I _F ≤√(n)ψ / η_min We can choose β sufficiently small, such that ψ / η_min < 1. § UTILITY GUARANTEES In this section, we establish the utility guarantees for our algorithm. Our proof involves employing the spectral norm of H^dis - H^dis as a pivotal element. Initially, we calculate the spectral norm of (K + λ I)^-1. Subsequently, we determine the ℒ_2 norms for 𝖪(x,X) and Y, ultimately leading to the assessment of our algorithm's utility. We begin by calculating the spectral norm of H^dis - H^dis. If we have the below conditions, * If D∈^n × d and D' ∈^n × d are neighboring dataset (see Definition <ref>) * Let H^dis denotes the discrete NTK kernel matrix generated by D (see Definition <ref>). * Let η_max I_n × n≽ H^dis≽η_min I_n × n, for some η_max, η_min∈. * Let H^dis denote the private H^dis generated by Algorithm <ref> with H^dis as input. * Let ρ = O( √( ( n^2+log(1/γ) ) / k )+ ( n^2+log(1/γ) ) /k ). * Let γ∈ (0, 1). 
Then, with probability 1 - γ, we have H^dis - H^dis≤ρ·η_max By Part 3 of Theorem <ref>, with probability 1 - γ, we have (1-ρ) H^dis≼H^dis≼ (1+ρ) H^dis which implies - ρ H^dis≼H^dis - H^dis≼ρ H^dis Then, we have H^dis - H^dis≤ρ·η_max To calculate the spectral norm of the inverse of the difference between two matrices, we need to involve a classical technique from the literature, where <cit.> presented a perturbation bound of Moore-Penrose inverse the spectral norm, Given two matrices A,B ∈^d_1 × d_2 with full column rank, we have A^† - B^†≲max ( A^†^2, B^†^2 ) · A - B . With the help of the perturbation bound, we can now reach the spectral norm of (K + λ I)^-1. If we have the below conditions, * If D∈^n × d and D' ∈^n × d are neighboring dataset (see Definition <ref>) * Let H^dis denotes the discrete NTK kernel matrix generated by D (see Definition <ref>). * Let H^dis denotes the private H^dis generated by Algorithm <ref> with H^dis as the input. * Let η_max I_n × n≽ H^dis≽η_min I_n × n, for some η_max, η_min∈. * Let K = H^dis, K = H^dis. * Let λ∈. * Let ρ = O( √( ( n^2+log(1/γ) ) / k )+ ( n^2+log(1/γ) ) /k ). * Let γ∈ (0, 1). Then, with probability 1 - γ, we have (K + λ I)^-1 - (K + λ I)^-1≤ O( ρ·η_max/(η_min + λ)^2) We consider the (K + λ I)^-1 term. We have (K + λ I)^-1 =  σ_max ((K + λ I)^-1) =  1/σ_min ((K + λ I)) ≤  1/η_min + λ where the 1st step is because of definition of spectral norm, the 2nd step is due to σ_max(A^-1) = 1 / σ_min(A) holds for any matrix A, the 3rd step is from K ≽η_min I_n × n. Similarly, we can have (K + λ I)^-1≤1/η_min + λ Recall in Lemma <ref>, we have H^dis - H^dis≤ρ·η_max Then, by Lemma <ref>, we have (K + λ I)^-1 - (K + λ I)^-1≤   O( max ( (K + λ I)^-1^2, (K + λ I)^-1^2 ) · K - K) ≤   O( 1/(η_min + λ)^2· K - K) ≤   O( ρ·η_max/(η_min + λ)^2) where the 1st step is because of Lemma <ref>, the 2nd step is due to Eq. (<ref>) and Eq. (<ref>), the 3rd step is from Eq. (<ref>). Then, we can finally reach the utility guarantees for Private NTK Regression, which is discussed in the following Lemma. If we have the below conditions, * If D∈^n × d and D' ∈^n × d are neighboring dataset (see Definition <ref>) * Let H^dis denotes the discrete NTK kernel matrix generated by D (see Definition <ref>). * Let η_max I_n × n≽ H^dis≽η_min I_n × n, for some η_max, η_min∈. * Let H^dis denotes the private H^dis generated by Algorithm <ref> with H^dis as the input. * Let K = H^dis, K = H^dis in Definition <ref>. Then we have f_K^*(x) and f_K^*(x). * Let √(n)ψ / η_min < Δ, where Δ is defined in Definition <ref>. * Let ρ = O( √( ( n^2+log(1/γ) ) / k )+ ( n^2+log(1/γ) ) /k ). * Let ω := 6 d σ^2 B^4. * Let γ∈ (0, 1). Then, with probability 1 - γ, we have | f_K^*(x) - f_K^*(x) | ≤ O( ρ·η_max·ω/(η_min + λ)^2 ) We consider the 𝖪(x,X)_2 term. Let 𝖪(x,X)(s) ∈ denote the s -th entry of 𝖪(x,X) ∈^n. Let Ξ_r := ⟨⟨ w_r , x ⟩ x, ⟨ w_r , x_i ⟩ x_i ⟩. Then, for any s ∈ [n], we have | 𝖪(x,X)(s) | =   | 1/m∑_s=1^m Ξ_r | ≤   6 d σ^2 B^4 =  ω where the 1st step is because of the definition of 𝖪(x,X), the 2nd step is due to Lemma <ref>, the 3rd step is from definition of ω. Then we have 𝖪(x,X) _2 ≤  √(n) | 𝖪(x,X)(s) | ≤  √(n)ω where the 1st step is because of the definition of ·_2, the 2nd step is due to Eq. (<ref>). Then, we consider the Y_2. Since we have Y ∈{ 0, 1 }^n, then we have Y_2 ≤√(n) Let ρ = O( √( ( n^2+log(1/γ) ) / k )+ ( n^2+log(1/γ) ) /k ). 
By Lemma <ref>, with probability 1 - γ, we have (K + λ I)^-1 - (K + λ I)^-1≤ O( ρ·η_max/(η_min + λ)^2) Then, we consider the | f_K^*(x) - f_K^*(x) | term. We have the following | f_K^*(x) - f_K^*(x) | =  1/n | 𝖪(x,X)^⊤ (K + λ I)^-1 Y - 𝖪(x,X)^⊤ (K + λ I)^-1 Y| ≤  1/n𝖪(x,X)_2 (K + λ I)^-1 - (K + λ I)^-1Y_2 ≤  1/n√(n)ω· O( ρ·η_max/(η_min + λ)^2 ) ·√(n) =   O( ρ·η_max·ω/(η_min + λ)^2 ) where the 1st step is because of Definition <ref>, the 2nd step is due to | b^⊤ A c | ≤ b _2 A c _2 holds for any b, c ∈^d , A ∈^d × d, the 3rd step is from Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>), the 4th step comes from basic algebra. alpha
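To make the objects appearing in this utility analysis concrete, the following NumPy sketch evaluates the discrete quadratic NTK matrix H^dis of Definition <ref> and the kernel predictor f_K^*(x) = (1/n) 𝖪(x,X)^⊤ (K + λ I)^-1 Y used in the last proof. This is an illustrative, non-private sketch under our own assumptions about data layout (rows of X are the x_i); the privatized kernel K̂ returned by Algorithm <ref> (the Gaussian sampling mechanism) is not reproduced here, and all function and variable names are ours, not part of the paper.

```python
import numpy as np

def quad_ntk(A, B, W):
    # K[i, j] = (1/m) * sum_r <w_r, a_i> <w_r, b_j> <a_i, b_j>
    # A: (n_a, d), B: (n_b, d), W: (m, d) whose rows are w_r ~ N(0, sigma^2 I_d)
    PA, PB = W @ A.T, W @ B.T                     # PA[r, i] = <w_r, a_i>
    return (PA.T @ PB) / W.shape[0] * (A @ B.T)   # average over r, times Gram entries

def ntk_predict(x, X, Y, K_hat, W, lam):
    # f*(x) = (1/n) * K(x, X)^T (K_hat + lam I)^{-1} Y
    n = X.shape[0]
    k_x = quad_ntk(x[None, :], X, W)[0]           # the n-vector K(x, X)
    return k_x @ np.linalg.solve(K_hat + lam * np.eye(n), Y) / n

# toy usage with the non-private kernel; swapping K for the output of Algorithm <ref>
# gives the private predictor whose deviation is bounded in the lemma above
rng = np.random.default_rng(0)
n, d, m, sigma, lam = 32, 8, 4096, 1.0, 1e-1
X = rng.normal(size=(n, d)); Y = rng.integers(0, 2, size=n).astype(float)
W = rng.normal(0.0, sigma, size=(m, d))
K = quad_ntk(X, X, W)                             # this is H^dis for the dataset X
print(ntk_predict(X[0], X, Y, K, W, lam))
```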
http://arxiv.org/abs/2407.12080v1
20240716180000
Isospectrality in Effective Field Theory Extensions of General Relativity
[ "Pablo A. Cano", "Marina David" ]
hep-th
[ "hep-th", "astro-ph.HE", "gr-qc" ]
http://arxiv.org/abs/2407.13449v1
20240718122357
All Roads Lead to Rome? Exploring Representational Similarities Between Latent Spaces of Generative Image Models
[ "Charumathi Badrinath", "Usha Bhalla", "Alex Oesterling", "Suraj Srinivas", "Himabindu Lakkaraju" ]
cs.LG
[ "cs.LG" ]
[ All Roads Lead to Rome? Investigating Representational Similarities in Generative Image Models Charumathi BadrinathHarvard Usha BhallaHarvard Alex OesterlingHarvard Suraj SrinivasHarvard Himabindu LakkarajuHarvard HarvardHarvard University Charumathi Badrinathcharumathi.badrinath@gmail.com Machine Learning, ICML 0.3in ] § ABSTRACT Do different generative image models secretly learn similar underlying representations? We investigate this by measuring the latent space similarity of four different models: VAEs, GANs, Normalizing Flows (NFs), and Diffusion Models (DMs). Our methodology involves training linear maps between frozen latent spaces to “stitch" arbitrary pairs of encoders and decoders and measuring output-based and probe-based metrics on the resulting “stitched” models. Our main findings are that linear maps between the latent spaces of performant models preserve most visual information even when latent space sizes differ; for CelebA models, gender is the most similarly represented probe-able attribute. Finally we show on a Normalizing Flow that latent space representations converge early in training. § INTRODUCTION Recent literature on deep networks has suggested that all models are converging to the same representation of “reality". Huh et al. () show that the embedding spaces of performant vision and language models learn similar notions of “distance" between datapoints of the appropriate modalities. Bansal et al. () “stitch" together layers of pairs of powerful vision networks via a simple map and show that the combined network has minimal loss in performance. Asperti and Tonelli () investigate whether a similar conclusion holds for latent spaces of generative image models by extending the model stitching framework. They train a linear map between the latent spaces of a Variational Autoencoder (VAE) and Generative Adversarial Network (GAN) and show that this stitched VAE-encoder-GAN-decoder model is able to produce reconstructions of images that look similar to the originals and have a low RMSE. However, despite previous work, it remains unclear how well these conclusions hold for other families of generative image models, especially those with vastly different latent sizes. Further, it is not clear which specific visual attributes are represented similarly by models and which aren't. In our work, we address these drawbacks by (1) extending the study to include NFs and DMs, (2) including probe-based metrics to capture semantics of representations. On CelebA we find that powerful models can be stitched via their latent spaces to reconstruct an image with little loss in quality. In addition to color, pose, and lighting, attributes related to gender are most similarly represented. Finally, we explore the question of how quickly the local minimum corresponding to this shared latent space is found, by seeing how early in training latent space probe accuracy plateaus. On an NF, we find that this happens less than 20% of the way into training. Our overall contributions are: 0em * In <ref>, we propose two sets of metrics (reconstruction-based, and probe-based metrics) grounded in a model “stitching"-based procedure to measure the similarity of generative image model latent spaces. * In <ref>, we perform extensive experiments across four generative image model classes trained on CelebA (VAEs, GANs, NFs and DMs), and find significant similarity among their latent spaces on our metrics, and especially in their representation of gender. 
On the Normalizing Flow model, we find that this common latent space structure is learned early in training. § RELATED WORKS §.§ Exploring Latent Spaces Discovering latent directions. Several approaches have been proposed to discover vectors corresponding to high-level image concepts (e.g. hair color) in the GAN latent space. Supervised methods include subtracting the average latent vectors corresponding to images with and without a binary attribute <cit.> and finding hyperplanes in the latent space separating images with different attribute values <cit.>. One unsupervised method uses PCA on the GAN latent space to discover the most salient edit directions <cit.>. These techniques have been applied to find edit directions in the latent spaces of VAEs <cit.> and NFs <cit.> as well. Our work compares the structure of different models' latent spaces and is thus orthogonal to the exploration of latent spaces in isolation. §.§ Comparing Model Representations Metrics. Canonical correlation analysis <cit.> and centered kernel alignment <cit.> are metrics that have been used to measure representational similarity between layers of different models. Another work compared distances between sets of datapoints in different models' embedding spaces <cit.>. Model Stitching. One branch of work compares representations via “stitching" where a simple map is learned between layers of two different models <cit.>. One work stitched together neural networks of different strengths at an intermediate layer, concluding that good models tended to converge on similar representations and that stitching strong models to weak models improved their performance <cit.>. Work on stitching the latent spaces of generative models has also been explored. One paper showed that it is possible to construct a linear map between the latent space of image encoders and text decoders to obtain text describing an image, demonstrating that models learn similar representations of multi-modal data <cit.>. The work most closely related to ours showed that it is possible to linearly stitch the latent spaces of a VAE and GAN and reconstruct an image with little loss in quality <cit.>. Our work extends this idea to more model architectures, performing a thorough pixel-space and latent-space analysis to characterize what types of models learn similar representations and what specific concepts they represent similarly. § METHODOLOGY §.§ Models and Datasets We experiment on 5 generative image models trained on CelebA <cit.> – a dataset of celebrity faces annotated with 40 binary attributes. Models (summarized in Table <ref>) output images of size 64 × 64 with slightly different crops. We use a GAN-generated dataset CelebA-Synthetic to train linear maps to and from the GAN latent space, since GANs lack an encoder. The “latent space" of the DM is taken to be the noised image after running the forward process using the DDIM scheduler with 50 timesteps <cit.>. As a baseline, we use a “random" encoder which maps images deterministically to arbitrary 512-dimensional standard Gaussian vectors. §.§ Proposed Metrics Model latent spaces are first stitched via linear maps (see Appendix <ref>). Reconstruction-Based Metrics. We qualitatively assess the image reconstructions of stitched models. 
We also compute the latent space MSE of the mapping, pixel-space RMSE of the reconstructed image, LPIPS score <cit.> which measures the perceptual similarity between a pair of images, and the FID <cit.> which measures how similar the distribution of reconstructed images is to the distribution of target images. Lower is better for all metrics. Probe-Based Metrics. We train linear probes (see Appendix <ref>) on the latent spaces of encoder-decoder models to detect the presence of binary attributes and report test set accuracy. We then measure the percentage of times probe predictions on latents encoded by the model whose latent space it was trained on and those mapped from other latent spaces match (higher is better). We also measure the percent change in probe accuracy on these sets of latents (near 0 is better). § RESULTS §.§ Assessing Stitched Model Reconstructions We stitch the latent spaces of a VAE, VQVAE, NF, DM and GAN pairwise and compute the reconstruction-based metrics described in Section <ref>. From Figure <ref> we see that stitching the NF and VQVAE encoders to any model 𝕏's decoder yields reconstructions resembling those generated by 𝕏 (boxed in red) and obtain relatively low latent MSE, RMSE, LPIPS and FID scores (Figure <ref>). Additional examples are in Appendix <ref>. Conversely, the DM and VAE encoders barely outperform the random encoder in terms of stitched reconstruction quality. Poor VAE performance can be attributed to significant information loss during encoding which is reflected in lossy reconstructions even with its own decoder. For the DM, we note that the “latent vectors" we use to construct our mappings are heavily noised versions of the image rather than true encodings. The consequence of this is poor linear probe performance (Figure <ref>). Latent space maps pick up on remaining signal, which is sometimes enough to map to an area of the decoder model's latent space producing images in the visual vicinity of an target image (Figure <ref>). The spatial nature of DM “latents" (the“representation" of a feature is a function of the pixel values in the region of the image that the feature manifests) makes the ability to stitch another model's encoder to its decoder at all unexpected. However, it explains why models stitched to the DM decoder often produce reconstructions preserving the geometry of the target image but not the colors. Finally, Figure <ref> shows reconstructions of a test image from CelebA-Synthetic obtained by stitching various models' encoders to the GAN decoder. Due to the strength of the GAN decoder, all reconstructions resemble a face. Furthermore, mapping to the vicinity of the true GAN latent is sufficient to produce a reconstruction preserving most characteristics of the target image (e.g. age, hair and skin color, expression) supporting results from literature <cit.>. §.§ Identifying Similarly Represented Attributes We stitch the latent spaces of a VAE, VQVAE, NF, DM and GAN pairwise and compute the probe-based metrics described in Section <ref> on 10 interesting CelebA attributes. Results for all 40 attributes are in Appendix <ref>. From Figure <ref> we see that linear probes for the gender-related attributes and achieve 80% accuracy on the NF, VAE and VQVAE latent spaces. In Figure <ref> we see that NF probes produce similar predictions on latents produced by its own decoder and latents encoded and mapped from the VQVAE latent space (and vice versa), in agreement with the results from Section <ref>. 
Figure <ref> also shows a relatively high percentage of matches between probes on NF, VAE, and VQVAE latents, and their counterparts mapped from the GAN latent space. Since the GAN latent space is also known to be linearly separable <cit.>, we conclude that latent space “stitchability" of two models depends on the linearity of both models' latent representations. We hypothesize that attributes that are represented similarly in all models' latent spaces are either uniformly represented in the training dataset (e.g. ), or explain much of the variance in an image (e.g. and related attributes). Interestingly, post-map probe match percentages are high for gender-related attributes, particularly and , even for for maps originating from the non-linear DM latent space indicating that signal corresponding to these attributes is least likely to be lost in the noising process. Finally, we note in Figure <ref> that probes trained on less linearly separable latent spaces are often more accurate when applied on latents mapped from a more linearly separable latent space. This explains the phenomenon of the corresponding reconstructions having more “extreme" versions of the attributes present in the target image. We see an example of this in Figure <ref> where the reconstructions produced by the NF and VQVAE stitched to the DM have very distinct, block-like bangs. §.§ Observing Latent Space Structure Over Training We train the Glow NF <cit.> on CelebA for 26 epochs checkpointing every 5 epochs. We train linear probes for 10 interesting attributes on the frozen latent spaces and measure how their accuracy changes as model training progresses. Figure <ref> shows that probe accuracy begins to plateau for all attributes by epoch 6 of 26. Attributes correlated with gender as well as (the similarly represented attributes identified in Section <ref>) change minimally in their representations after epoch 1 of 26. Figure <ref> shows that there is slight improvement in images reconstructed from latents encoded by increasingly trained NF models, indicating that the latent space still changes over time, albeit minimally after epoch 11. Thus, we hypothesize that a “universal" representation is natural for models to learn and that later training epochs improve the decoder given a somewhat fixed latent representation. § DISCUSSION AND FUTURE WORK This work advances the hypothesis that as (unconditional) generative image models become more expressive, their latent space representations (given that they are meaningful) become more similar to one another. We find that this shared representation is learned early on in model training, indicating that it is quite “natural" in describing the data. This suggests that the latent spaces of increasingly expressive models may be converging to an “optimal" data representation in the limit, which has deep implications for how we construct world models. Similarity of latent space representations also means that edit directions, unsafe regions, or areas of bias found in the latent space of one model, can be mapped onto any other model via a simple linear map. It also allows for more powerful image editing by stitching models with strong encoders to models with strong decoders via their latent spaces. Future work can investigate whether similar results hold for models trained on datasets containing images from multiple classes with more variance, and can probe similarity of representation of non-binary attributes. 
We expect to see similar results given that the models used are able to produce realistic samples from the data distribution. We also wish to extend our analysis to pipelines composed of multiple foundation models, for example, the latent diffusion model <cit.> which trains a DM on the VQVAE latent space. Since it is the first foundation model in this pipeline that learns a latent representation of the data, we expect to see similar results to this paper. Finally, work has shown that it is possible to find a linearly separable latent space for DMs <cit.>; we can validate that this latent space is similarly structured to those studied in our paper. § ACKNOWLEDGEMENTS We would like to thank the GRaM workshop's ELLIS Mobility Grant as well as the SPIGM workshop for supporting CB's travel and attendance at ICML. icml2024 § APPENDIX §.§ Source Code All code for this paper is available in the following GitHub repository: https://github.com/charumathib/thesis-latent-spaceshttps://github.com/charumathib/thesis-latent-spaces. §.§ Background on Models All generative image models learn to “sample" images following their the distribution of their training set. However, the four main families of models – Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), Normalizing Flows (NFs) and Diffusion Models (DMs) – differ drastically in how they do so. A VAE <cit.> consists of an encoder network mapping images to latent vectors of lower dimension, and a decoder neural network that maps latent vectors to images with the objective of minimizing image reconstruction error. A KL-Divergence term is added to the objective to encourage a marginally Gaussian latent space distribution. A GAN <cit.> is comprised of a generator network (decoder) that learns to generate images from marginally Gaussian latent samples and a discriminator network whose objective is to distinguish images from the training set from generated images. Both networks improve by playing a minimax game, ultimately yielding a generator capable of producing realistic images. An NF <cit.> samples a latent variable from a simple distribution and repeatedly applies a series of invertible functions to this variable until it is transformed into an image. The marginal likelihood of each datapoint can be derived via the change of variables theorem for distributions and the objective is the negative data log likelihood. Finally, a DM <cit.> progressively adds Gaussian noise to an image producing a sequence noisy samples that become isotropic to a Gaussian in the limit (“forward process"). It then learns how to produce samples from the “reverse process" to gradually transform pure noise into an image. DMs dont have an obvious latent space in the same way as VAEs, GANs and NFs since marginally Gaussian “latents" are created via progressive noising rather then by a transformation that would force a representation to be learned. §.§ Models Used The pre-trained GAN we use is from the Diffusion-GAN codebase <cit.> and follows the StyleGAN2-ADA architecture <cit.>. The pre-trained VAE is from the DiffuseVAE codebase <cit.>. The pre-trained VQVAE <cit.> from the Hugging Face library <cit.>. The NF implementation is from an independent source <cit.> follows the Glow architecture <cit.>. The DM is from the DDIM codebase <cit.>. §.§ Training Latent Space Linear Maps Many of our experiments rely on the construction of linear maps between model latent spaces. 
For each pair of encoder-decoder models (VAE, VQVAE, NF, DM), we encode the first 9000 images from the CelebA test split into their respective latent spaces and use a closed-form linear regression solver from to predict one set of latents from the other <cit.>. To prevent overfitting in maps originating from the NF and DM latent spaces, we use ridge regression with manually tuned values of α as summarized in Table <ref>. Our training procedure differs slightly for maps to and from the GAN latent space. We use the “style space" as our latent space (a layer connected to the base latent space by an 8 layer MLP) <cit.>. We use the first 9000 images in the CelebA-Synthetic dataset to train maps in the way described earlier; the latent used by the GAN to generate each image is treated as the image's “encoding" in the GAN latent space. All maps are tested on a holdout set of 100 images from either CelebA or CelebA-Synthetic as appropriate. §.§ Training Latent Space Linear Probes We train linear probes to predict the value of each of the 40 binary attributes (e.g. ) CelebA images are annotated with. To construct a balanced training set for an attribute's probe, we first compute the number of images with and without that attribute in the first 9000 CelebA images. 80% of the minimum of these two numbers is how many positive and negative examples are included. Probes are trained using lasso regression in where α = 0.005 for probes on the VAE and VQVAE latent spaces, 0.02 for probes on the DM latent space and 0.1 for probes on the NF latent space. <cit.>. We use a holdout set of 200 latents (100 with attribute, 100 without attribute) to evaluate probe performance. §.§ Full Probe Results §.§ Additional Examples of Stitched Model Reconstructions
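For readers who prefer code to prose, the map- and probe-training procedures described in the two appendix subsections above can be summarized in a short scikit-learn style sketch. The array names (Z_src, Z_tgt, labels), the default α values, and the 0.5 decision threshold are our own illustrative assumptions; the exact settings used in the paper are those in Table <ref> and the released repository.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

def fit_latent_map(Z_src, Z_tgt, alpha=0.0):
    # Closed-form linear "stitching" map between two frozen latent spaces.
    # Z_src, Z_tgt: (n_images, latent_dim) arrays of flattened latent codes.
    reg = Ridge(alpha=alpha) if alpha > 0 else LinearRegression()
    reg.fit(Z_src, Z_tgt)
    return reg

def fit_attribute_probe(Z, labels, alpha=0.005, frac=0.8, seed=0):
    # Balanced lasso probe for one binary CelebA attribute on a frozen latent space.
    rng = np.random.default_rng(seed)
    pos, neg = np.where(labels == 1)[0], np.where(labels == 0)[0]
    k = int(frac * min(len(pos), len(neg)))       # 80% of the minority-class size per class
    idx = np.concatenate([rng.choice(pos, k, replace=False),
                          rng.choice(neg, k, replace=False)])
    probe = Lasso(alpha=alpha)
    probe.fit(Z[idx], labels[idx])
    return probe

def probe_match_rate(probe, Z_own, Z_mapped, threshold=0.5):
    # Fraction of images where the probe agrees on a model's own latents and on
    # latents mapped in from another model's encoder (the metric of Section <ref>).
    own = probe.predict(Z_own) > threshold
    mapped = probe.predict(Z_mapped) > threshold
    return float(np.mean(own == mapped))
```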
http://arxiv.org/abs/2407.12348v1
20240717064802
MM Algorithms for Statistical Estimation in Quantile Regression
[ "Yifan Cheng", "Anthony Yung Cheung Kuk" ]
stat.ME
[ "stat.ME", "stat.AP", "stat.CO" ]
Self-Adaptive Robust Motion Planning for High DoF Robot Manipulator using Deep MPC Ye Zhang^1,*, Kangtong Mo^1, Fangzhou Shen^1, Xuanzhen Xu^1, Xingyu Zhang^2, Jiayue Yu^2 and Chang Yu^2 ^1,*University of Pittsburgh, PA 15213, USA ^1University of Illinois Urbana-Champaign, IL 61820, USA ^1San Jose State University at San Jose, CA 95192, USA ^1Snap Inc. at Seattle, WA 98121, USA ^2George Washington University, DC 20052, USA ^2Warner Bro. Discovery at Culver City, CA 90232, USA ^2Northeastern University at Boston, MA 02115, USA ^1,*yez12@pitt.edu, ^1mokangtong@gmail.com, ^1fangzhou.shen0922@gmail.com ^1xuanzhenxu@gmail.com, ^2xingyu_zhang@gwmail.gwu.edu, ^2jiy048@gmail.com, ^2chang.yu@northeastern.edu July 22, 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Quantile regression Koenker1978 is a robust and practically useful way to efficiently model quantile varying correlation and predict varied response quantiles of interest. This article constructs and tests MM algorithms, which are simple to code and have been suggested superior to some other prominent quantile regression methods in nonregularized problems Pietrosanu2017, in an array of quantile regression settings including linear (modeling different quantile coefficients both separately and simultaneously), nonparametric, regularized, and monotone quantile regression. Applications to various real data sets and two simulation studies comparing MM to existing tested methods have corroborated our algorithms' effectiveness. We have made one key advance by generalizing our MM algorithm to efficiently fit easy-to-predict-and-interpret parametric quantile regression models for data sets exhibiting manifest complicated nonlinear correlation patterns, which has not yet been covered by current literature to the best of our knowledge. § INTRODUCTION Predicting varied quantiles of the response variable is desirable in diverse domains,[e.g., <cit.>; <cit.>; <cit.>; <cit.>] as quantiles closer to the tails are of more interest (compared to 0.5) in some applications. Various quantile estimation approaches (e.g., <cit.>; <cit.>; <cit.>) have been proposed to model non-constant correlation patterns between the dependent and independent variables across different outcome quantiles, of which <cit.>'s method remains foremost. Quantile regression introduced by Koenker1978 not only excels in efficiently portraying a comprehensive picture of the regressor(s)-regressand's dependency structure, but also stays more robust against outliers than ordinary regression of the mean. Denote n,p as the sample size, covariate dimension (including the intercept) respectively and suppose we have observed a sample (x_i,y_i), where x_i=(x_i1,x_i2,…, x_ip)^T and i=1,2,…, n (We usually let x_i1s=1 to denote intercept terms). 
For any given quantile level q∈ (0,1) and any specified regression function f(X, θ),[Q_q(Y|X_i)=f(X_i, θ),i=1,2,…,n, where θ=(θ_1,θ_2,…,θ_p)^T is the parameter vector (corresponding to quantile level q) to be estimated.] let f_i(θ):ℝ^p →ℝ =f(x_i,θ) and r_i(θ)[The i^th residual.]=y_i-f_i(θ), i=1,2,…,n. Koenker1978 defined the regression quantile at q as the p-dimensional vector θ̂ that minimizes the asymmetric L_1 loss function L(θ)=∑_i=1^n ρ_q (r_i(θ)), where ρ_q (r_i) = |r_i|[q1_{r_i≥0}+(1-q)1_{r_i<0}]=qr_i-r_i1_{r_i<0}, i=1,2,…,n. Compared to traditional least-squares estimators, the minimizers of <Ref> are much more robust to an array of non-normal error distributions while performing similarly well for Gaussian linear models Koenker1978. The prevailing most well-adopted quantile regression package in R is probably still developed by Koenker2010, which estimates empirical regression coefficients for specified quantile order(s) via a default simplex algorithmKoenker1987 and an interior point methodPortnoy1997 triggering the Frisch-Newton algorithm for huge data sets.[For data sets with at least several thousand observations, specifying or (Frisch-Newton Interior Point Method with Preprocessing) may save computation time significantly.] Pietrosanu2017 advanced in questing robust accelerated quantile regression methods Portnoy1997 by conducting simulation studies on four prominent approaches[The four approaches are Alternating Direction Method of Multipliers (ADMM) algorithms, Majorize-Minimize/Minorize-Maximize (MM) algorithms, Coordinate Descent (CD) algorithms, and algorithms adopted by .] including MM algorithms[As proposed by Hunter2000 for non-regularized quantile regression and by May2005 for adaptive lasso-regularized quantile regression.] and methods and finding out that MM produced comparably accurate parameter estimates markedly faster than the other three methods in non-regularized quantile and composite[In composite quantile regression, we estimate coefficients for a sequence of quantile orders simultaneously.] quantile regression. Indeed, MM offers much more than speeding up empirical quantile regression coefficient estimation. As will be illustrated by this article, MM can be conveniently extended to parametrically model multiple quantiles simultaneously for nonparametric quantile regression problems with complicated nonlinear correlation, which has not yet been discussed by current literature to the best of our knowledge. After introducing MM algorithms and the pioneering framework put forward by Hunter2000 applying MM algorithms to quantile regression in <Ref>, we start off with linear quantile regression in <Ref>, modeling the regression coefficients for different quantile orders separately and simultaneously by various parametric bases transformations including logistic and natural spline. <Ref> generalizes our MM algorithms to “nonparametric quantile regression" and implements existing widely adopted nonparametric counterpart methods to corroborate our MM algorithms' effectiveness. <Ref> applies MM to quantile regression with special requirements including variable selection and monotonicity preservation. Finally, <Ref> summarizes our achievements and points toward potential future work. 
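Before turning to the MM construction, the check loss in <Ref> and the objective <Ref> can be written in a few lines. The NumPy sketch below (our own helper names, with a linear f_i(θ) = x_i^T θ supplied as the default regression function) is given only to fix notation.

```python
import numpy as np

def check_loss(r, q):
    # rho_q(r) = q*r - r*1{r < 0} = |r| * (q*1{r >= 0} + (1 - q)*1{r < 0})
    r = np.asarray(r, dtype=float)
    return r * (q - (r < 0))

def quantile_loss(theta, X, y, q, f=lambda X, theta: X @ theta):
    # L(theta) = sum_i rho_q(y_i - f_i(theta)); f defaults to a linear model
    return check_loss(y - f(X, theta), q).sum()
```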
§ MM ALGORITHMS §.§ An Overview of the MM Algorithm Although termed “algorithm", MM, which stands for Majorize-Minimize for minimization problems and Minorize-Maximize for maximization problems, is in fact a strategy for creating an easy-to-minimize/maximize surrogate function that majorizes/minorizes the objective function. Given an objective function L(θ):ℝ^p →ℝ that is difficult to be minimized directly and a current iteration's parameter estimate θ^(t) (t∈ℕ), a surrogate function Q(θ | θ^(t)):ℝ^p →ℝ is called a majorizer of L(θ) at θ^(t) if it satisfies Q(θ | θ^(t))≥ L(θ) for all θ and in particular Q(θ^(t) | θ^(t)) = L(θ^(t)). If the value θ^(t+1) at which Q(θ | θ^(t)) is minimized can be conveniently calculated, then Q(θ | θ^(t)) yields a successful MM algorithm which drives the objective function L(θ) further downhill each iteration thanks to the wonderful descent property L(θ^(t+1))≤ Q(θ^(t+1)|θ^(t))≤ Q(θ^(t)|θ^(t))=L(θ^(t)). To construct suitable majorizing/minorizing surrogate functions and thus significantly simplify difficult optimization problems, MM typically exploits convexity/concavity and miscellaneous inequalities Hunter2004 such as Jensen's Inequality (which brings about the well-known EM algorithm, a special case of MM), Supporting Hyperplanes, Quadratic Upper Bound Bohning1988, the Arithmetic-Geometric Mean Inequality, and the Cauchy-Schwartz Inequality. The MM algorithm enjoys implementation simplicity and remarkable stability guaranteed by <Ref> while possessing complementary computation speed merits Hunter2004 compared to the widely-adopted Newton-Raphson algorithm, and has hence gained application attention in diversified fields.[e.g., <cit.>; <cit.>; <cit.> ;<cit.>.] §.§ An MM Algorithm in Quantile Regression For any given quantile level q∈(0,1), the loss function L(θ) in <Ref> is hard to directly minimize since it may accommodate multiple minima and its components <Ref> are non-differentiable when r_i=0 Hunter2000. An MM algorithm may be constructed to bypass the non-differentiability problem by setting (<cit.>; <cit.>) Q(θ|θ^(t))=∑_i=1^n ζ_q(y_i-f_i(θ)|y_i-f_i(θ^(t)))=∑_i=1^n ζ_q(r_i(θ)|r_i(θ^(t))), where ζ_q(r_i(θ)|r_i(θ^(t)))=1/4{r_i^2(θ)/|r_i(θ^(t))|+(4q-2)r_i(θ)+|r_i(θ^(t))|}. It can be shown that the surrogate function in <Ref> is a majorizer of L(θ) at θ^(t) (We leave the proof to Proposition A.1. in <Ref>), yet this MM method suffers from the vexing drawback that Q(θ|θ^(t)) is not defined when one r_i(θ^(t))=0, a realistic concern that should be settled as some f_i(θ^(t)), i∈{1,2,…,n} may well converge to y_i as t→∞.[e.g., in the simplest case when all f_i(θ^(t)), i=1,2,...,n denote the sample mean μ^(t), μ^(t) should → one y_i as t→∞ if nq ∉ℕ Hunter2000.] Hunter2000 addressed this issue by introducing a close-approximate function L_ϵ(θ)[We found out that a perturbation ϵ in the order of 10^-4 or smaller usually generates good results.] (ϵ>0 denotes a perturbation) of the actual loss function <Ref> L_ϵ(θ)=∑_i=1^n ρ_q^ϵ(r_i(θ))=∑_i=1^n {ρ_q(r_i(θ))-ϵ/2ln(ϵ+|r_i(θ)|) } and inventing a corresponding majorizer Q_ϵ(θ|θ^(t))=∑_i=1^n ζ_q^ϵ(r_i(θ)|r_i(θ^(t)))=1/4∑_i=1^n{r_i^2(θ)/ϵ+|r_i(θ^(t))|+(4q-2)r_i(θ)+_i }, where _is are chosen to ensure that ζ_q^ϵ(r_i(θ^(t))|r_i(θ^t))=1/4{r_i^2(θ^(t))/ϵ+|r_i(θ^(t))|+(4q-2)r_i(θ^(t))+_i} =ρ_q^ϵ(r_i(θ^(t))), i=1,2,…,n. By definition, Q_ϵ(θ|θ^(t)) satisfies <Ref> as ζ_q^ϵ(r_i(θ^(t))|r_i(θ^(t)))=ρ_q^ϵ(r_i(θ^(t))) for all i∈{1,2,…,n}. 
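As a concrete illustration of the perturbed loss and its surrogate, the following NumPy sketch (ours; the constant term is resolved numerically from the tangency condition rather than written out symbolically) checks the two defining properties of a majorizer on a grid of residual values:

import numpy as np

def rho_eps(r, q, eps):
    """Perturbed check loss rho_q^eps(r) = rho_q(r) - (eps/2) ln(eps + |r|)."""
    return r * (q - (r < 0)) - 0.5 * eps * np.log(eps + np.abs(r))

def zeta_eps(r, r_t, q, eps):
    """Majorizer zeta_q^eps(r | r_t); the additive constant is chosen so that
    zeta(r_t | r_t) = rho_q^eps(r_t), as the construction requires."""
    quad = r ** 2 / (eps + np.abs(r_t)) + (4 * q - 2) * r
    const = 4 * rho_eps(r_t, q, eps) - (r_t ** 2 / (eps + np.abs(r_t)) + (4 * q - 2) * r_t)
    return 0.25 * (quad + const)

# numeric check of the two majorization properties
q, eps, r_t = 0.3, 1e-4, 0.7
r = np.linspace(-3, 3, 2001)
assert np.all(zeta_eps(r, r_t, q, eps) >= rho_eps(r, q, eps) - 1e-9)   # domination
assert np.isclose(zeta_eps(r_t, r_t, q, eps), rho_eps(r_t, q, eps))    # tangency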
Given current residual value r^(t)=r(θ^(t)) at iteration t, we let h(r)=ζ_q^ϵ(r|r^(t))-ρ_q^ϵ(r)=1/4{r^2/ϵ+|r^(t)|-2r++2ϵ·ln(ϵ+|r|) } +r1_{r<0}. Then For r≥0, h'(r)=1/2( r/ϵ+|r^(t)|-1+ϵ/ϵ+r)=1/2·r(r-|r^(t)|)/(ϵ+|r^(t)|)(ϵ+r); For r<0, h'(r)=1/2( r/ϵ+|r^(t)|+1-ϵ/ϵ-r)=-1/2·r(r+|r^(t)|)/(ϵ+|r^(t)|)(ϵ-r). Hence h'(r)=0 r=0 or r=± |r^(t)| r=0 or r=± r^(t) since the sets {|r^(t)|,-|r^(t)|} and {r^(t),-r^(t)} are the same. We can see from <Ref> that h'(r)≤0 for r∈ (-∞,-|r^(t)|]∪[0,|r^(t)|] and ≥0 for r∈ [-|r^(t)|,0]∪[|r^(t)|,∞), i.e., h(r) is non-increasing for r∈ (-∞,-|r^(t)|]∪[0,|r^(t)|] and non-decreasing for r∈ [-|r^(t)|,0]∪[|r^(t)|,∞). Observe that h(r) is an even function and thus by <Ref>, h(-r^(t))=h(r^(t))=0 h(-|r^(t)|)=h(|r^(t)|)=0⇒ h(r)≥ 0 for all r. Hence, ζ_q^ϵ(r|r^(t))≥ρ_q^ϵ(r) for all r ⇒ Q_ϵ(θ|θ^(t))≥ L_ϵ(θ) for all θ . By the above reasoning, Q_ϵ(θ|θ^(t)) satisfies <Ref> as well and thus indeed majorizes L_ϵ(θ) at θ^(t). When f_i(θ)s are linear, Q_ϵ(θ|θ^(t)) would be quadratic in θ and the MM algorithm can proceed in simplicity by explicitly solving for θ^(t+1) that minimizes Q_ϵ(θ|θ^(t)) and updating the current parameter estimate θ^(t) to θ^(t+1). When f_i(θ)s are nonlinear, a Gauss-Newton approach may be adopted to drive Q_ϵ(θ|θ^(t)) downhill[In which case the descent property in <Ref> still holds and guarantees the algorithm's stability.] Hunter2000. § LINEAR QUANTILE REGRESSION §.§ Estimating Different Quantile Coefficients Separately We first analyze the most extensively discussed and interpretation-wise simplest linear case in quantile regression, a scenario under which the r_i(θ)s in <Ref> can be straightforwardly written as r(θ)=y-Xθ (y, X contains observed sample values) for any given quantile level q∈(0,1), where r(θ)= [ r_1(θ); r_2(θ); ⋮; r_n(θ) ] , y= [ y_1; y_2; ⋮; y_n ] , X= [ x_1^T; x_2^T; ⋮; x_n^T ] = [ x_11 x_12 … x_1p; x_21 x_22 … x_2p; ⋮ ⋮ ⋱ ⋮; x_n1 x_n2 … x_np ] since f_i(θ)=x_i^Tθ for i=1,2,…,n. Typically, the first column of X is a column of 1s denoting the intercept. Given q∈(0,1) and current parameter estimate θ^(t), let W^(t) = [ 1/ϵ+|r_1(θ^(t))| ; 1/ϵ+|r_2(θ^(t))| ; ⋱ ; 1/ϵ+|r_n(θ^(t))| ], c_n× 1= [ 4q-2; 4q-2; ⋮; 4q-2 ]. Then <Ref> becomes Q_ϵ(θ|θ^(t))= 1/4( r(θ)^TW^(t)r(θ)+c^Tr(θ))+ = 1/4( (y-Xθ)^TW^(t)(y-Xθ)+c^T(y-Xθ))+. Thus minimizing Q_ϵ(θ|θ^(t)) is equivalent to minimizing g(θ|θ^(t))=(y-Xθ)^TW^(t)(y-Xθ)+c^T(y-Xθ), which can be done directly as g(θ|θ^(t)) is quadratic in θ. Setting ∂/∂θg(θ|θ^(t))=2X^TW^(t)Xθ-2X^TW^(t)y-X^Tc=0 yields θ^(t+1)=(X^TW^(t)X)^-1(X^TW^(t)y+1/2X^Tc). (X^TW^(t)X)^-1 exists for a full rank X_n× p (n>p), as W^(t) is diagonal whose entries are all strictly positive ⇒ for all p× 1 vector v≠0_p× 1, v^TX^TW^(t)Xv=(Xv)^TW^(t)(Xv)>0 since Xv≠0_n× 1 for a full rank X ⇒ X^TW^(t)X is positive definite for a full rank X ⇒ X^TW^(t)X is invertible for a full rank X. Since for a full rank X, the Hessian matrix ∂^2/∂θ∂θ^Tg(θ|θ^(t))=2X^TW^(t)X is positive definite for all θ and in particular θ^(t+1), θ^(t+1) given in <Ref> indeed minimizes <Ref> and gives the next iteration's parameter estimate for the MM algorithm. We implemented this MM algorithm on the “pollution" data set McDonald1973 in R's package and compared our results with the ones produced by the function in the package Koenker2010. 
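Putting the pieces together, the iteration just derived amounts to repeatedly solving a weighted least-squares system. A compact illustrative sketch in Python/NumPy follows (ours; this is not the R implementation used for the experiments reported here, and the starting value and stopping rule are arbitrary choices):

import numpy as np

def mm_linear_quantile(X, y, q, eps=1e-4, max_iter=500, tol=1e-10):
    """MM iterations for linear quantile regression:
    theta <- (X' W X)^(-1) (X' W y + 0.5 X' c), W = diag(1/(eps + |r_i|)), c = (4q - 2) 1."""
    n, p = X.shape
    theta = np.linalg.lstsq(X, y, rcond=None)[0]      # least-squares starting value
    c = np.full(n, 4 * q - 2)
    for _ in range(max_iter):
        r = y - X @ theta
        w = 1.0 / (eps + np.abs(r))                   # diagonal of W^(t)
        XtW = X.T * w
        theta_new = np.linalg.solve(XtW @ X, XtW @ y + 0.5 * X.T @ c)
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta

# toy usage: median regression on simulated data
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.uniform(size=200)])
y = 1 + 3 * X[:, 1] + rng.standard_t(5, size=200)
print(mm_linear_quantile(X, y, q=0.5))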
The data set consists of 60 observations and conveys information about a mortality measure (y) with 15 explanatory variables, out of which we considered x_1 (precipitation, or “prec"), x_9 (percentage of non-white, or “nonw"), x_10 (percentage of white collar, or “wwdrk"), and x_14 (SO_2 potential, or “so"), as these four variables have been demonstrated by Peng2014 to possess either fairly constant significant (x_1, x_9) or manifest varying (x_10,x_14) effects across different quantile orders. Reassuringly, we arrived at extremely close regression coefficient estimates for various quantile order choices. A comparison of the two methods' estimated coefficients for quantile orders q=0.1, 0.3, 0.5, 0.7, 0.9 is presented in <Ref>. This lends us considerable confidence in the effectiveness of this aforementioned MM algorithm (which also turned out to be fast), and we are equipped to extend our MM algorithm to model linear regression coefficients for different quantile orders concurrently rather than separately. §.§ Estimating Different Quantile Coefficients Simultaneously It is of tremendous interest as well as utmost importance to efficiently discern informative signals from and discard unwanted noises in the empirical sample quantile functions. To date, most relevant efforts either resort to varied smoothing techniques on the empirical quantile functions (e.g.,<cit.>; <cit.>) or adjust nonparametric fitting techniques to quantile regression (e.g.,<cit.>), which more or less suffer from interpretation simplicity and prediction efficiency. Frumento2016 addressed these concerns by bringing forward a parsimonious parametric approach that directly models the linear regression coefficients as smooth functions of q,[With the aid of the Newton-Raphson algorithm to fit models.] which succeeds in effectively pooling information across quantile levels. The method was later built into R via the package Package2020. Rather than estimating the regression coefficients directly but only one q a time as in the previous subsection, Frumento2016 introduced a basis of h∈ℕ[h is small and in most cases less than 10] known functions of the quantile order – b(q)=(b_1(q),b_2(q),…,b_h(q))^T,[b(q) is required to induce a monotonically non-decreasing Q(q|x,θ) for some θ.] wrote the regression coefficient as β_q=A_p× hb(q) for all q,[This way, β_q's are represented as linear combinations of basis functions of q and are no longer estimated separately.] and estimated instead the p× h parameter matrix A= [ θ_11 θ_12 … θ_1h; θ_21 θ_22 … θ_2h; ⋮ ⋮ ⋱ ⋮; θ_p1 θ_p2 … θ_ph ]. Given A, the residual vector can now be expressed as a function of q as well: r(q)=(r_1(q),r_2(q),…,r_n(q))^T=y-XAb(q) , where y,X are as in <Ref>. Our MM algorithm described in <Ref> can be conveniently generalized to fit the model in this section by rewriting the parameter matrix A_p× h in <Ref> as θ_hp× 1=(θ_11,θ_21,…,θ_p1,……,θ_1h,θ_2h,…,θ_ph)^T to form the parameter vector to be estimated and replace the X_n× p in <Ref> by new design matrices D(q)_n× hp for any q∈(0,1): D(q)_n× hp =b(q)^T_1× h⊗X_n× p =[ x_11b_1(q) x_12b_1(q) … x_1pb_1(q) …… x_11b_h(q) x_12b_h(q) … x_1pb_h(q); x_21b_1(q) x_22b_1(q) … x_2pb_1(q) …… x_21b_h(q) x_22b_h(q) … x_2pb_h(q); ⋮ ⋮ ⋱ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮; ⋮ ⋮ ⋱ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮; x_n1b_1(q) x_n2b_1(q) … x_npb_1(q) …… x_n1b_h(q) x_n2b_h(q) … x_npb_h(q) ]. Then r(q,θ)=(r_1(q,θ),r_2(q,θ),…,r_n(q,θ))^T=y-XAb(q)=y-D(q)θ. 
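For illustration, the logistic basis and the Kronecker-structured design D(q) can be assembled as follows (a sketch in Python/NumPy; names are ours, and the column ordering matches the column-wise stacking of A described above):

import numpy as np

def logistic_basis(q):
    """b(q) = (1, ln q, ln(1-q))^T, evaluated at one or several quantile levels."""
    q = np.asarray(q, dtype=float)
    return np.stack([np.ones_like(q), np.log(q), np.log(1 - q)], axis=-1)

def design_Dq(X, bq):
    """D(q) = b(q)^T (Kronecker product) X, so that r(q, theta) = y - D(q) theta
    with theta = (theta_11, ..., theta_p1, ..., theta_1h, ..., theta_ph)^T."""
    return np.kron(bq[None, :], X)      # shape (n, h*p)

# toy shape check
X = np.column_stack([np.ones(5), np.arange(5.0)])
bq = logistic_basis(0.3)                # h = 3
print(design_Dq(X, bq).shape)           # (5, 6)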
Given k∈ℕ quantile orders q_1,q_2,…,q_k that we wish to simultaneously model, where k is typically set large (e.g., 100, 1000) for better performance, the surrogate function with perturbation becomes Q_ϵ(θ|θ^(t))= ∑_a=1^k∑_i=1^n ζ_q_a^ϵ(r_i(q_a,θ)|r_i(q_a,θ^(t))) = 1/4∑_a=1^k∑_i=1^n{r_i^2(q_a,θ)/(ϵ+|r_i(q_a,θ^(t))|)+(4q_a-2)r_i(q_a,θ)+_ia}. Given current parameter estimate θ^(t), for all a∈{1,2,…,k}, let (W_a)_n× n^(t) = [ 1/(ϵ+|r_1(q_a,θ^(t))|) ; 1/(ϵ+|r_2(q_a,θ^(t))|) ; ⋱ ; 1/(ϵ+|r_n(q_a,θ^(t))|) ], (c_a)_n× 1= [ 4q_a-2; 4q_a-2; ⋮; 4q_a-2 ]. Then <Ref> can be written as Q_ϵ(θ|θ^(t))= 1/4∑_a=1^k{r(q_a,θ)^TW_a^(t)r(q_a,θ)+c_a^Tr(q_a,θ)+_a} = 1/4∑_a=1^k{(y-D(q_a)θ)^TW_a^(t)(y-D(q_a)θ)+c_a^T(y-D(q_a)θ)+_a}, which is again quadratic in θ and thus can be solved directly for θ^(t+1). Let g(θ|θ^(t))=∑_a=1^k{(y-D(q_a)θ)^TW_a^(t)(y-D(q_a)θ)+c_a^T(y-D(q_a)θ)}. Then minimizing g(θ|θ^(t)) is equivalent to minimizing Q_ϵ(θ|θ^(t)). ∂/∂θg(θ|θ^(t))|_θ=θ^(t+1)=∑_a=1^k { 2D^T(q_a)W_a^(t)D(q_a)θ^(t+1)-2D^T(q_a)W_a^(t)y-D^T(q_a)c_a } =0 ⇒θ^(t+1) = [ ∑_a=1^k D^T(q_a)W_a^(t)D(q_a)]^-1[ ∑_a=1^k (D^T(q_a)W_a^(t)y+1/2D^T(q_a)c_a ) ], which is well-defined for a full rank X_n× p (n>p), a small h∈ℕ, and a big k∈ℕ. To see this, we note that ∑_a=1^k D^T(q_a)_hp× n(W_a)^(t)_n× nD(q_a)_n× hp can be written as D^T_hp× nkW_nk× nk^(t)D_nk× hp, where D_nk× hp = [ D(q_1)_n× hp; D(q_2)_n× hp; ⋮; D(q_k)_n× hp ] = [ b(q_1)^T_1× h⊗X_n× p; b(q_2)^T_1× h⊗X_n× p; ⋮; b(q_k)^T_1× h⊗X_n× p ] = [ b(q_1)^T; b(q_2)^T; ⋮; b(q_k)^T ]_k× h⊗X_n× p with rank(D) = rank([ b(q_1)^T; b(q_2)^T; ⋮; b(q_k)^T ]) × rank(X) = h× p = hp ⇒D_nk× hp is full-rank, and W_nk× nk^(t)= [ (W_1)^(t)_n× n ; (W_2)^(t)_n× n ; ⋱; (W_k)^(t)_n× n ] is diagonal and positive definite. Hence, ∑_a=1^k D^T(q_a)W_a^(t)D(q_a) = D^TW^(t)D is positive definite ⇒ ∑_a=1^k D^T(q_a)W_a^(t)D(q_a)= D^TW^(t)D is invertible. By <Ref>, θ^(t+1) indeed minimizes <Ref> since ∂^2/∂θ∂θ^Tg(θ|θ^(t))=2∑_a=1^k D^T(q_a)W_a^(t)D(q_a) is positive definite for all θ and in particular θ^(t+1). With no additional assumptions, the parametric logistic basis b(q)=(1,ln(q), ln(1-q))^T has been suggested to perform well generally by Frumento2016 and was thus adopted by us. Since there are situations, though not frequent, when linear combinations of <Ref> drastically deviate from the actual coefficient functions,[See <Ref> when the errors follow a shifted rescaled Β(0.5,0.5) distribution for instance.] we also resorted to flexible nonparametric bases. Natural splines (cubic splines that restrict both tails to be linear, abbreviated as "ns"), which retain the flexibility of ordinary cubic splines while gaining stability from the tail linearity restriction Perperoglou2019 and have thus been extensively utilized in regression analysis, were considered. We followed the truncated power basis form presented by Hess1994 and defined b(q)_h× 1=(1,q,S_1(q),S_2(q),…,S_h-2(q))^T, where for all l∈{1,2,…,h-2}, S_l(q)=(q-q_l)_+^3 - (q-q_h-1)_+^3(q_h-q_l)/(q_h-q_h-1) + (q-q_h)_+^3(q_h-1-q_l)/(q_h-q_h-1) for ns bases with h>2 (h∈ℕ) knots q_1,q_2,…,q_h∈ (0,1). A variety of knot choices were considered, some equally spaced and some with shorter tails.[Sensible as the tails are restricted to be linear.] We implemented this section's MM algorithm on the "pollution" data set with variables y,x_1,x_9,x_10,x_14 again and carried out the corresponding Package2020 routines. The results validated our MM algorithm's appropriateness once more. 
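A sketch of the resulting simultaneous update, accumulating the normal-equation terms over the k quantile levels, is given below (ours, for illustration only; the experiments in this article were run in R):

import numpy as np

def mm_simultaneous(X, y, qs, basis, eps=1e-4, max_iter=200, tol=1e-8):
    """MM for all quantile levels in qs at once:
    theta <- [sum_a D_a' W_a D_a]^(-1) [sum_a (D_a' W_a y + 0.5 D_a' c_a)]."""
    n, p = X.shape
    B = basis(np.asarray(qs, dtype=float))          # k x h matrix of basis values
    D = [np.kron(b[None, :], X) for b in B]         # one n x (h*p) design per level
    hp = D[0].shape[1]
    theta = np.zeros(hp)
    for _ in range(max_iter):
        lhs = np.zeros((hp, hp))
        rhs = np.zeros(hp)
        for Da, qa in zip(D, qs):
            r = y - Da @ theta
            w = 1.0 / (eps + np.abs(r))             # diagonal of W_a^(t)
            DtW = Da.T * w
            lhs += DtW @ Da
            rhs += DtW @ y + 0.5 * (4 * qa - 2) * Da.sum(axis=0)   # Da' c_a = (4q-2) Da' 1
        theta_new = np.linalg.solve(lhs, rhs)
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta

# usage with the logistic basis b(q) = (1, ln q, ln(1-q))^T on simulated data
logistic = lambda q: np.stack([np.ones_like(q), np.log(q), np.log(1 - q)], axis=-1)
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(300), rng.uniform(size=300)])
y = 1 + 3 * X[:, 1] + (1 + 2 * X[:, 1]) * rng.standard_normal(300)
theta_hat = mm_simultaneous(X, y, np.linspace(0.05, 0.95, 19), logistic)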
<Ref> and <Ref> depict the estimated linear regression coefficients β̂(q)_p× 1 as functions of the quantile order q∈ (0,1), from which we can see that MM's estimates capture efficiently empirical estimates' trends and might be slightly better than their counterparts for all models' five coefficients. <Ref> presents estimated response values ŷ(q|x)=Q̂_q(Y|x) at 1000 quantile levels q∈(0,1), taking x=(x_1,x_9,x_10,x_14) as the observed sample mean vector. For all models here, MM effectively smooths the empirical functions produced by and agrees well to . We also made such conditional ŷ(q|x) quantile plots for each of the n=60 sample observations and got similar corroborative results. Those plots generated assuming the logistic basis b(q)=(1,ln (q), ln(1-q)) ^T and natural spline basis with knots 0.2,0.4,0.6,0.8 are displayed in <Ref>. §.§ Simulation Studies We now carry out simulation studies to answer the following questions: * Are our MM estimates close to those obtained from , ? * Does estimating quantile coefficients simultaneously (the idea in <Ref>) improve model fit (compared to modeling quantile coefficients separately as in <Ref>)? * When modeling quantile coefficients as linear combinations of basis functions, do parametric (the logistic basis – <Ref>) or nonparametric (ns bases – <Ref>) basis functions perform better? We adopted the heterogeneous linear regression model Y_i = 1+3X_i+(1+2X_i)e_i, i=1,2,…,n= sample size, where the regressors X_i∼(0,1) are one-dimensional uniform random variables independent of the e_is, and the errors e_i are IID following an array of distributions: * e_i ∼ N(0,1); * e_i ∼ t(10) (heavy-tailed); * e_i = 5(B_i-1/2), where B_i∼Β(α,β) with different representative shape parameters α,β (short-tailed): * B_i∼Β(2,2); * B_i∼Β(1/2,1/2) (bathtub-shaped); * B_i∼Β(2,5) (asymmetric); * e_i=ln(ϵ_i), where ϵ_i ∼exp(1) (asymmetric). The above distributions were chosen to represent differing theoretical quantile coefficient functions (<Ref>) and conditional quantile curves (<Ref>), so as to answer our aforementioned three questions more comprehensively. With the exact relationships between Y_i and X_i, e_i (<Ref>) and theoretical distributions of X_i and e_i for all i∈{1,…,n}, we can derive the theoretical conditional quantile function Q_q(Y|x)=F_y^-1(q|x) =1+3x+(1+2x)Q_q(e) since Q_q(e|x)=Q_q(e) by independence =[1+Q_q(e)]+[3+2Q_q(e)]x, q∈(0,1). In the special case where e_i∼standard logistic, Q_q(e)=ln(q)-ln(1-q) and hence Q_q(Y|x)=[1+ln(q)-ln(1-q)]+[3+2(ln(q)-ln(1-q))]x⇒ the theoretical coefficients are exact linear combinations of logistic basis functions. Since the normal distribution, which is largely similar to t distributions despite being less heavy-tailed, is very close to the logistic distribution, we expect the logistic basis to work well for normal and t(10) errors. This indeed turned out to be the case (<Ref>, <Ref>). We calculated our deviation statistic – the Integrated Mean Squared Error (IMSE) = 1/N∑_l=1^N∑_i=1^n( Q_q(Y|x_i)-Q̂_q^(l)(Y|x_i))^2 using N random samples, each of size n, at quantile levels q=0.1, 0.2, …, 0.9 for a variety of linear quantile regression approaches. These methods include from , (our MM algorithm described in <Ref>), from , and (our MM algorithm detailed in <Ref>) with the logistic basis and natural spline bases. As shown in <Ref>, IMSEs correspond extremely closely to their counterparts at all tested quantile levels for all six aforementioned e_i distributions. 
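For reference, the simulation model, its theoretical conditional quantile function, and the IMSE criterion translate directly into code. The sketch below (ours) covers the normal, t(10), and log-exponential cases; pred_fn stands for any fitted-quantile routine returning predictions at the sample points, e.g. a wrapper around one of the MM fitters sketched earlier.

import numpy as np
from scipy import stats

def simulate(n, rng, err="normal"):
    """One sample of size n from Y = 1 + 3X + (1 + 2X)e with X ~ Uniform(0, 1)."""
    x = rng.uniform(size=n)
    if err == "normal":
        e = rng.standard_normal(n)
    elif err == "t10":
        e = rng.standard_t(10, size=n)
    else:  # err == "logexp": e = ln(Exp(1))
        e = np.log(rng.exponential(size=n))
    return x, 1 + 3 * x + (1 + 2 * x) * e

def true_quantile(x, q, err="normal"):
    """Q_q(Y|x) = [1 + Q_q(e)] + [3 + 2 Q_q(e)] x."""
    Qe = {"normal": stats.norm.ppf(q),
          "t10": stats.t.ppf(q, df=10),
          "logexp": np.log(stats.expon.ppf(q))}[err]
    return (1 + Qe) + (3 + 2 * Qe) * x

def imse(pred_fn, err="normal", n=200, N=50, qs=(0.1, 0.5, 0.9), seed=0):
    """IMSE per quantile level: (1/N) sum_l sum_i (Q_q(Y|x_i) - Qhat_q^(l)(Y|x_i))^2."""
    rng = np.random.default_rng(seed)
    out = np.zeros(len(qs))
    for _ in range(N):
        x, y = simulate(n, rng, err)
        for j, q in enumerate(qs):
            out[j] += np.sum((true_quantile(x, q, err) - pred_fn(x, y, q)) ** 2)
    return out / N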
and are also comparable in general, especially at non-extreme quantiles.[Some bigger deviations at the tails may be due to different model fitting quantile levels. We only used the 9 test quantiles q=0.1,0.2,…,0.9 to fit all our MM models, whereas Frumento2016 typically used 250 quantile levels.] Hence, we now focus on analyzing MM's results. As expected, with the logistic basis significantly outperforms as well as with nonparametric bases for normal, t(10), and log exponential errors. For the short-tailed Β(0.5,0.5) error distribution, however, with the logistic basis loses out. As <Ref>E depicts, with the logistic basis is indeed inadequate for short-tailed Β(0.5,0.5) errors.[Probably because linear combinations of functions in the logistic basis concave up whereas the theoretical quantile coefficient functions for short-tailed Β(0.5,0.5) errors concave down.] Under such scenarios when the parametric logistic basis severely misspecifies the actual correlation patterns, flexible nonparametric ns bases with more than three knots closely estimate theoretical quantile coefficient functions (<Ref>) and do clearly improve upon separate quantile estimates (<Ref>), gaining us assurance in the effectiveness of modeling quantile coefficients as linear combinations of basis functions of q. § NONPARAMETRIC QUANTILE REGRESSION Despite its popularity and general competence, the typical linear quantile regression is inappropriate for data sets exhibiting manifest complex nonlinear patterns. Quantile regression for universal nonlinear data, however, has rarely been discussed. The handful of existing approaches in current literature (e.g., <cit.>) rely on broader nonparametric methods like Kernel Estimation adapted to quantile regression instead of parametric methods convenient for interpretation and prediction. This section attempts to bridge this gap by extending our MM algorithm constructed in <Ref> to parametrically model this type of quantile regression problem, which and algorithms are unable to handle. We start with two real bivariate data sets displaying apparent complicated nonlinear correlations (<Ref>). §.§ MM and Cross Validation It is in fact pretty straightforward to achieve the desired generalization via MM. The first step is fairly customary in ordinary regression settings – replace the original sample observation matrix X_n× p (p=1+1=2 here for the two bivariate data sets) as in <Ref> by some transformations of the original variables, i.e., X=(1_n,x)= [ 1 x_1; 1 x_2; ⋮ ⋮; 1 x_n ]f_1,f_2,…,f_p'-1⟶X_new= [ 1 f_1(x_1) f_2(x_1) … f_p'-1(x_1); 1 f_1(x_2) f_2(x_2) … f_p'-1(x_2); ⋮ ⋮ ⋮ ⋱ ⋮; 1 f_1(x_n) f_2(x_n) … f_p'-1(x_n) ]_n× p', where f_1,f_2,…,f_p'-1 are a set of chosen functions. With this new design matrix X_new, we can proceed to calculate the corresponding D_new(q)_n× hp's in <Ref> and carry out our MM algorithm as delineated in <Ref>. As the correlation types are not specified and the two scatter plots displayed in <Ref> do not suggest any particular simple nonlinear patterns, we would need highly flexible transformations applicable to universal nonlinear data sets. We hence considered the flexible, stable, and extensively utilized natural spline transformations Perperoglou2019 again, which were applied to covariates instead of quantile orders this time. Since we first flexibly transform our covariate vector x via the nonparametric natural spline transformation, we name the subject for this section “nonparametric quantile regression". 
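The covariate transformation itself is the same truncated-power natural-spline construction used earlier as a basis in q. A small sketch (ours) that builds X_new for a bivariate data set:

import numpy as np

def natural_spline_basis(x, knots):
    """Truncated-power natural cubic spline basis (1, x, S_1(x), ..., S_{h-2}(x)),
    with S_l as defined in the text; both tails are linear beyond the boundary knots."""
    x = np.asarray(x, dtype=float)
    knots = np.asarray(knots, dtype=float)
    h = len(knots)
    tp3 = lambda z: np.clip(z, 0, None) ** 3      # (z)_+^3
    cols = [np.ones_like(x), x]
    for l in range(h - 2):
        kl, km1, kh = knots[l], knots[h - 2], knots[h - 1]
        cols.append(tp3(x - kl)
                    - tp3(x - km1) * (kh - kl) / (kh - km1)
                    + tp3(x - kh) * (km1 - kl) / (kh - km1))
    return np.column_stack(cols)

# X_new for a bivariate data set: intercept, x, and h-2 spline columns
x = np.linspace(0.0, 1.0, 7)
X_new = natural_spline_basis(x, knots=[0.1, 0.35, 0.65, 0.9])
print(X_new.shape)                                # (7, 4)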
We conducted our MM algorithms on the two data sets with diverse combinations of explanatory variable knots and quantile order bases. MM did appear to smooth out empirical estimates reasonably well in all quantile plots of ŷ(q|x)=Q̂_q(Y|x) versus q that take x as the sample mean (<Ref>). Since the two data sets are bivariate, we can also efficiently visualize MM's prediction accuracy and smoothing effects via estimated conditional quantile curves. As shown in <Ref> and <Ref> plotting ŷ(x|q) against the concentrated x values at selected representative quantile levels q=0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, our MM algorithm did a good job in efficiently capturing the scatters' trends and smoothing the empirical estimates produced by for all combination choices. To gain a better picture of the relevant out-of-sample prediction powers for different covariate knots and quantile bases choices, we performed 10-fold cross-validation (CV) by randomly splitting the data sets into 10 approximately equal-sized subgroups G_1,G_2,…,G_10⊂{1,2,…,n}, picking J=9 validation quantiles q_1=0.1,q_2=0.2,…,q_J=0.9, and calculating the loss l_CV=∑_g=1^10∑_i∈ G_g∑_j=1^Jρ_q_j(y_i-x_i^Tβ̂^(-G_g)(q_j)), where each x_i denotes the i^th row of X_new in <Ref> and each β̂^(-G_g)(q_j) represents the estimated coefficient vector based on the other 9 subgroups for spline-transformed covariates at quantile level q_j. The smaller l_CV, the better the model. As <Ref> suggests, basis choice for q does not seem to matter as much as ns knots choice to transform x does. §.§ Double Kernel and Likelihood Cross Validation To see how well our MM algorithms perform compared to existing nonparametric methods, we adopted widely-used Kernel Estimation techniques on the two bivariate real data sets to predict ŷ(x|q) at q=0.05,0.1,0.25,0.5,0.75,0.9,0.95. Given a finite sample with observed points (x_i,y_i), i=1,2,…,n and a new point with known x, the estimated conditional CDF of this point's y coordinate is of the form F̂(y|x) = ∑_i=1^n w_i(x,h_1)· I(y_i≤ y), where w_i(x,h_1) = K(x_i-x/h_1) /∑_j=1^n K(x_j-x/h_1). To smooth our estimation in the y direction, we replaced the deterministic form I(y_i≤ y) in <Ref> by a standard normal CDF N(y_i,h_2)=Φ(y-y_i/h_2) and formed a double kernel conditional CDF estimate F̂̂̂(y|x) = ∑_i=1^n w_i(x,h_1)·Φ(y-y_i/h_2) = ∑_i=1^n [ K(x_i-x/h_1) /∑_j=1^n K(x_j-x/h_1)·Φ(y-y_i/h_2)] . The estimated conditional quantile function is thus Q̂̂̂_q(y|x)=F̂̂̂^-1(q|x), q∈(0,1). We adopted the normal kernel ϕ as the kernel function K in <Ref> in light of the normal kernel's various favorable mathematical properties and proceeded to produce quantile curves of ŷ(x|q) against x at each q∈{0.05,0.1,0.25,0.5,0.75,0.9,0.95} after specifying a set of sensible choices for the bandwidths h_1,h_2. We discovered and confirmed by Likelihood cross-validation (LCV) that different h_2 values generated estimates that are extremely alike, if not identical. As such, we present quantile curves with a set of h_1 choices and a fixed adequate h_2=10^-4 in <Ref>. The figures suggest that h_1=10,12.5 for “triceps" and h_1=0.15,0.2 for “WTD" most effectively and smoothly capture the scatters' trends. To decide optimal bandwidth choices for Double Kernel Estimation, we carried out leave-one-out likelihood cross-validation to quantitatively judge different bandwidths' out-of-sample prediction capabilities. 
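Before turning to bandwidth selection in detail, we note that the double-kernel estimator itself is short to write down. The sketch below (ours) uses normal kernels in both directions and inverts the estimated CDF numerically on a grid, which is one convenient way to evaluate F̂^{-1}(q|x):

import numpy as np
from scipy import stats

def cond_cdf(y_grid, x0, xs, ys, h1, h2):
    """Double-kernel estimate Fhat(y|x0) = sum_i w_i(x0, h1) Phi((y - y_i)/h2),
    with normal-kernel weights w_i(x0, h1)."""
    k = stats.norm.pdf((xs - x0) / h1)
    w = k / k.sum()
    return (w[None, :] * stats.norm.cdf((y_grid[:, None] - ys[None, :]) / h2)).sum(axis=1)

def cond_quantile(q, x0, xs, ys, h1, h2, n_grid=2000):
    """Qhat_q(y|x0) = Fhat^{-1}(q|x0), obtained by inverting the CDF on a fine y grid."""
    y_grid = np.linspace(ys.min() - 1.0, ys.max() + 1.0, n_grid)
    F = cond_cdf(y_grid, x0, xs, ys, h1, h2)
    return np.interp(q, F, y_grid)

# toy usage on synthetic heteroscedastic data
rng = np.random.default_rng(2)
xs = rng.uniform(0, 50, size=300)
ys = 5 + 0.3 * xs + rng.normal(scale=2 + 0.05 * xs)
print(cond_quantile(0.5, x0=25.0, xs=xs, ys=ys, h1=10.0, h2=0.5))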
As the estimated conditional PDF is given by f̂̂̂(y|x) = ∑_i=1^n w_i(x,h_1)·ϕ(y-y_i/h_2), a leave-one-out LCV likelihood can be expressed as ℒ_LCV=∏_j=1^nf̂̂̂^(-j)(y_j|x_j), where f̂̂̂^(-j)(y_j|x_j) denotes the estimated PDF for a new observation with x=x_j derived using the entire sample except the point (x_j,y_j). The larger ℒ_LCV, the better the method. As <Ref> suggests, h_1 approximately ∈ (9,13) for “triceps", whose sample x∈(0,52), and h_1 approximately ∈ (0.12,0.22) for “WTD", whose sample x∈(0,1), would be effective x kernel bandwidth choices, corresponding well to what the quantile curves in <Ref> indicate. The values of h_2, on the other hand, do not seem to affect Double Kernel Estimation's prediction power much as long as h_2 is appropriately small. §.§ Simulation Studies The quantile curves produced by Double Kernel seem to appear “rougher" than those obtained from MM. To more convincingly and objectively compare the two approaches, we resort to simulation studies again. We considered the three-parameter generalized gamma (GG) distribution, as it incorporates many common fewer-parameter distributions (e.g., Gamma, Weibull, Exponential distributions). A generalized gamma random variable Y with parameters θ,β,k, or μ,σ,k, where μ=ln(θ)+ln(k)/β, σ=1/β√(k) θ=e^μ/k^σ√(k), β=1/σ√(k), has probability density function f(y;θ,β,k)=β/θ^kβΓ(k)y^kβ-1exp{-(y/θ)^β}, y>0 and quantile function Q(q;μ,σ,k)=e^μ{r(q;k)/k} ^σ√(k), q∈(0,1), where r(q;k) is the quantile function of the (θ=1,k), or (θ=1,β=1,k) distribution. It is virtually the 1/β^th power of a ( θ^β,k) random variable and can be generated either by substituting a (0,1) random variable as q into <Ref> or by generating a ( θ^β,k) random variable and taking the 1/β^th power of it. For implementation in software like R, the first method is preferred, as the second may fail for some parameter choices causing extreme values. We modeled the parameters μ,σ,k as suggested by Noufaily2013 – assume μ(x)=a+bx, σ(x)=e^c+dx, k(x)=e^f+gx, where {a,b,c,d,f,g} is the new set of parameters. For each chosen parameter set {a,b,c,d,f,g}, we formed N bivariate samples of size n by * Sample x_1,x_2,…,x_n independently from a uniform distribution; * for all i∈{1,2,…,n}, generate y_i from (μ(x_i),σ(x_i),k(x_i)) using one of the two methods described in the previous paragraph. We then applied to each sample MM/Double Kernel with a variety of covariate knots and quantile order bases/bandwidths choices. Integrated Mean Squared Error (IMSE), given in <Ref>, was calculated using the theoretical quantile function <Ref> at q=0.1,0.25,0.5,0.75,0.9 for each approach. Estimated quantile curves based on the mean conditional predictions ŷ(x|q)=Q̂_q(y|x) were also made to compare with the theoretical ones. We tried out four sets of parameter choices for {a,b,c,d,f,g} * a = 5.39, b = -4.26, c = -2.52, d = 3.25, f = -4.53, g = 12.96 * a=1.384,b=0.092,c=-1.021,d=0.008,f=-3.493,g=4.766 * a = -1.5, b = -6, c = 0.5, d = 1.5, f = 1.386, g = 0 * a = -2, b = -0.75, c = -0.5, d = -4, f = -0.2, g = -1, among which the first two are approximate parameter estimates obtained by Noufaily2013 when fitting generalized gamma distributions to the aforementioned “WTD" data set and an “IgG" data set Isaacs1983 measuring concentrations of serum immunoglobulin G for 298 kids under six, and the last two were argued by Noufaily2013 to produce particular interesting GG distribution scenarios. 
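A sketch of the data-generating step (ours), using the inverse-CDF method described above with SciPy's gamma quantile function standing in for r(q;k); the parameter values in the usage line are the second set listed above:

import numpy as np
from scipy import stats

def gg_quantile(q, mu, sigma, k):
    """Q(q; mu, sigma, k) = exp(mu) * (r(q; k) / k)^(sigma * sqrt(k)),
    with r(q; k) the Gamma(shape=k, scale=1) quantile function."""
    r = stats.gamma.ppf(q, a=k)
    return np.exp(mu) * (r / k) ** (sigma * np.sqrt(k))

def simulate_gg_regression(n, a, b, c, d, f, g, rng):
    """x ~ Uniform(0,1); y ~ GG(mu(x), sigma(x), k(x)) with mu = a + bx,
    sigma = exp(c + dx), k = exp(f + gx), drawn by plugging Uniform(0,1)
    variates into the quantile function (the first generation method above)."""
    x = rng.uniform(size=n)
    u = rng.uniform(size=n)
    mu, sigma, kk = a + b * x, np.exp(c + d * x), np.exp(f + g * x)
    return x, gg_quantile(u, mu, sigma, kk)

# one sample under the second parameter set listed above
rng = np.random.default_rng(3)
x, y = simulate_gg_regression(500, 1.384, 0.092, -1.021, 0.008, -3.493, 4.766, rng)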
Theoretical quantile curves at q=0.1,0.25,0.5,0.75,0.9 for the four generalized gamma distributions are displayed in <Ref>. IMSE charts produced using both <Ref> (<Ref>) largely favor MM over Double Kernel (DK), as performances of DK with optimal bandwidth choices are worse than those of most MM scenarios we have carried out. MM's superiority is also quite evidently showcased in the estimated quantile curves at q=0.1,0.25,0.5,0.75,0.9 (<Ref> for <Ref> and <Ref> for <Ref>), as MM's plots not only resemble the theoretical ones more but also smooth out 's empirical estimates (<Ref>) more efficiently. We got similar results when using <Ref>, with IMSE chart and quantile curves presented in <Ref> and <Ref>. For <Ref>, although the IMSE chart (<Ref>) leans towards DK with larger h_1, double kernel frequently encounters extreme values when calculating <Ref> Q̂_q(y|x)=F̂̂̂^-1(q|x) and is thus unable to produce estimated quantile curves. This is probably due to the manifest upward shift of the theoretical quantile curve at q=0.9 (<Ref>) for <Ref>, which significantly deviates from other quantile curves. MM, on the other hand, does not run into such troubles. MM Estimated quantile curves corresponding to <Ref> are presented in <Ref>. § REGULARIZED AND MONOTONE QUANTILE REGRESSION We now discuss practical techniques commonly adopted in two quantile regression settings with special requirements – adaptive lasso penalty for regularized regression and monotone B-splines for monotone regression. §.§ Adaptive Lasso for Heterogeneous Variable Selection In diverse sparse modeling settings requiring efficient predictor selection, the adaptive lasso has been proven highly competent in consistent variable selection by imposing an adequately weighted 𝑙_1 penalty λ∑_j=1^p w_j|β_j|,λ≥0 Zou2006. Several authors (e.g., <cit.>, <cit.>) have extended this powerful technique to account for possible quantile-varying covariate effects by defining a penalized quantile regression problem with parameter estimate[We write β=(β_1,β_2,…,β_p)^T, where β_1 denotes the coefficient for the intercept term.] β̂_λ(q)_p× 1= β argmin L_q(β), where L_q(β) = ∑_i=1^n ρ_q(r_i(β)) + λ∑_j=2^p |β_j|/|β̃_j(q)| and β̃(q)_p× 1= β argmin ∑_i=1^n ρ_q(r_i(β)) (the nonregularized quantile regression estimator) at any quantile level q∈(0,1). Thanks to the AM-GM Inequality, Q_q,(β|β^(t))=∑_i=1^n ζ_q(r_i|r_i^(t))+λ∑_j=2^p 1/|β̃_j(q)|[ 1/2|β_j^(t)|+β_j^2/2|β_j^(t)|] is a majorizer of <Ref> at β^(t). To mend the flaw that <Ref> is undefined when r_i^(t)=0 for any i∈{1,2,…,n} or β_j^(t)=0 for any j∈{2,3,…,p}, a realistic concern as r_i(β^(t)), i∈{1,2,…,n} for some f_i(β^(t))s and β_j^(t), j∈{2,3,…,p} for penalized quantile regression may well converge to 0 as t→∞, we can adopt[Note that while it is still ensured that L_q^ϵ(β)≤ Q^ϵ_l_q,(β|β^(t)) ∀ β, introducing a perturbation ϵ_l>0 makes L_q^ϵ(β^(t))≠ Q^ϵ_l_q,(β^(t)|β^(t)). ] Q^ϵ_l_q,(β|β^(t))=∑_i=1^n ζ_q^ϵ(r_i|r_i^(t))+λ∑_j=2^p 1/|β̃_j(q)|[ 1/2(|β_j^(t)|+ϵ_l)+β_j^2/2(|β_j^(t)|+ϵ_l)], a majorizer at β^(t) of L_q^ϵ(β)=∑_i=1^n ρ_q^ϵ(r_i(β)) + λ∑_j=2^p |β_j|/|β̃_j(q)|= ∑_i=1^n {ρ_q(r_i(β))-ϵ/2ln(ϵ+|r_i(β)|) } + λ∑_j=2^p |β_j|/|β̃_j(q)|, which closely approximates L_q(β) in <Ref>. Let V^(t)_p× p = [ 0 ; 1/|β̃_2(q)|·(|β_2^(t)|+ϵ_l) ; ⋱ ; 1/|β̃_p(q)|·(|β_p^(t)|+ϵ_l) ], W^(t)_n× n = [ 1/ϵ+|r_1(β^(t))| ; 1/ϵ+|r_2(β^(t))| ; ⋱ ; 1/ϵ+|r_n(β^(t))| ], and c_n× 1 =(4q-2,4q-2,…,4q-2)^T. Then Q^ϵ_l_q,(β|β^(t))=1/4( (y-Xβ)^TW^(t)(y-Xβ)+c^T(y-Xβ))+λ/2β^TV^(t)β+. 
Hence, minimizing Q^ϵ_l_q,(β|β^(t)) is equivalent to minimizing h(β|β^(t)) =g(β|β^(t))+2λβ^TV^(t)β, where g(β|β^(t)) =(y-Xβ)^TW^(t)(y-Xβ)+c^T(y-Xβ). We have shown in <Ref> that X^TW^(t)X is positive definite and thus invertible for a full rank X_n× p. It is also clear that V^(t) is positive semi-definite, as it is a diagonal matrix whose entries are all non-negative. Hence, for a full rank X_n× p, the Hessian ∂^2/∂β∂β^Th(β|β^(t))=2X^TW^(t)X+4λV^(t) is positive definite since λ≥ 0. We can thus proceed to solve for β^(t+1) by directly minimizing <Ref>: ∂/∂βh(β|β^(t))|_β=β^(t+1) =( 2X^TW^(t)Xβ-2X^TW^(t)y-X^Tc+4λV^(t)β)|_β=β^(t+1) =2( X^TW^(t)X+2λV^(t)) β^(t+1)-2X^TW^(t)y-X^Tc=0 ⇒β^(t+1)=1/2 ( X^TW^(t)X+2λV^(t))^-1X^T[2W^(t)y+c]. ( X^TW^(t)X+2λV^(t))^-1 indeed exists for a full rank X as <Ref> ⇒ 2X^TW^(t)X+4λV^(t)=2(X^TW^(t)X+2λV^(t)) is invertible X^TW^(t)X+2λV^(t) is invertible. It remains to find suitable choices of λ>0 for each q∈(0,1) of interest. We followed <cit.>'s idea and used a quantile regression BIC metric put forward by Lee2014: _q(λ)=_q(β̂_λ(q))=ln( ∑_i=1^nρ_q(r_i(β̂_λ(q)))) + |𝑆_q^λ|ln(n)/2n, where |𝑆_q^λ| represents the size of 𝑆_q^λ={2≤ j≤ p:|(β̂_λ)_j|>0}.[When actually implementing the algorithm in statistical software like R, we set a threshold and take (β̂_λ)_j=0 if |(β̂_λ)_j| is less than this threshold value.] A derivation to <Ref> is presented in <Ref> Proposition A.2. Our algorithm thus proceeds according to the following steps: * Choose a quantile level q∈ (0,1) of interest; * Calculate a nonregularized estimator β̃(q) (using or MM); * Propose a sequence of candidate λ∈ (0,1) choices; * For each candidate λ, obtain a β̂_λ(q) that attempts to minimize L_q^ϵ(β) in <Ref> via our MM algorithm detailed above; * Evaluate _q(λ) in <Ref> for this coefficient estimate β̂_λ(q); * Find λ_(q) among our picked sequence of candidate λ's that yields the smallest _q(λ); * Calculate our regularized estimator β̂_λ_(q) with this λ_ at the desired quantile level q. We started off with the “pollution" data set McDonald1973 first raised in <Ref>, as Peng2014 have produced some “guideline" results obtained by their adaptive lasso “local" quantile regression methods SS(q) at q=0.25,0.5,0.75. As presented in <Ref>, our results obtained from MM do align quite closely with <cit.>'s counterparts (<Ref>). We gained more confidence in our MM algorithm by further comparing our MM penalized estimates with those from 's (<Ref>) with the penalty vector specified as (0,λ̂_(q)/|β̃_2(q)|,…,λ̂_(q)/|β̃_p(q)|), where λ̂_(q) is the λ∈(0,1) we found that yields the smallest _q. We then implemented our MM algorithm on three other high-dimensional data sets at q=0.1,0.25,0.5,0.75,0.9 to uncover potential non-constant predictor effects in diverse fields. Economics Data We constructed our first data set by compiling US monthly time series data for 9 economic variables (unemployment rate y, personal consumption expenditures (PCE) x_1, personal savings rate x_2, population x_3, Industrial Production Total Index x_4, Consumer Price Index (CPI) x_5, Policy Uncertainty Index[Based on US News.] x_6, Producer Price Index by Commodity x_9, and KC Fed Labor Market Conditions Index x_10) downloaded from econdata and US monthly Trade Policy Uncertainty (TPU), Geopolitical Risk (GPR) Indices x_7,x_8 constructed by Caldara2020 and Caldara2018. 
We considered observations (historical months) that have data in all 11 variables described above, and hence our finalized “econ" data set comprises n=347 consecutive months from January 1992 to November 2020 and p-1=10 explanatory variables. Our results, as shown in <Ref>, lend credence to non-constant predictor effects. Although the effects of the most highly correlated predictor, population x_3 (positively influencing y), are relatively constant, two other critical predictors showcase manifest non-constant effects across various quantile orders: CPI x_5 is insignificant at q=0.1 yet matters considerably at higher quantiles; PCE x_1 appears clearly more significant (in negatively affecting y) at lower quantiles. Our finding that the inflation rate (measured by CPI x_5) correlates negatively and significantly with the unemployment rate y when y is larger aligns closely with the widely accepted expectations-augmented Phillips curve theory.[That cyclical unemployment and unanticipated inflation are inversely related.] In addition, x_4 (negative correlation with y) seems to matter more at higher q and x_9 (positive correlation with y) at lower q. These insightful findings, if backed by more evidence, may aid governments considerably in coming up with more effective and adequate macroeconomic policies to combat high unemployment rates in diverse situations.

Social Data Our second data set is a “social" one obtained by combining certain columns of a public data set investigating worldwide country-level life expectancy lifeExp and other potential predictors' per-country data gathered from otherWHO, fertility, unemprate, and cpi. We removed variables with insufficient non-missing data and considered n=111 countries with no missing data on all remaining 21 (1 response variable y, Life Expectancy, and 20 explanatory variables[Hence, p=20+1=21 (including the intercept). The 20 covariates are crude suicide rate x_1, Corruption Perceptions Index (CPI) x_2, fertility rate x_3, mobilization resources x_4, unemployment rate x_5, adult mortality rate x_6, infant death frequency x_7, Hepatitis B immunization coverage x_8, measles frequency x_9, average Body Mass Index (BMI) x_10, under-five deaths x_11, polio and DTP3 immunization coverages x_12 and x_13, HIV/AIDS deaths x_14, GDP per capita x_15, population x_16, thinness frequency among 10-19-year-olds and among 5-9-year-olds x_17 and x_18, income composition of resources x_19, and number of school years x_20.]) variables. Should there be drastic non-constant predictor effects (in which case regularized quantile regression may greatly improve fit), a country's leaders could devise more potent policies to influence its life expectancy as desired, given the worldwide life expectancy quantile at which the country stands, by controlling the corresponding influential variables around that q. It turns out that covariates for this “social" data set do not seem to exhibit notably varying effects for different quantile orders (<Ref>). The most influential predictors (x_19 – positively affects y; x_6,x_1 – negatively correlate to y) seem to possess quite constant effects across quantiles. There are, however, covariates among second-tier influential predictors that do suggest possible varying effects. Effects of x_17 (which affects y negatively) and x_2 (which influences y positively) appear to decrease as q increases, and x_8 (which positively correlates to y) seems less important in middle quantiles. 
Forest Fire Data Our last data set is a real-world “forest fire" data set collected from Northeast Portugal Cortez2007, which contains a response variable “burned area" as well as potential spatial (x_1,x_2: x, y coordinates on the Montesinho park map), temporal (x_3: month of the year; x_4: day of the week), meteorological weather (x_9: temperature; x_10: relative humidity; x_11: wind speed; x_12: rainfall) explanatory variables and key Fire Weather Index (FWI) components including the Fine Fuel Moisture Code (FFMC) x_5, the Duff Moisture Code (DMC) x_6, the Drought Code (DC) x_7, and the Initial Spread Index (ISI) x_8. We used a subset (270 of the 517 instances when there is a forest fire) of the original data set for analysis and transformed the response variable using y=ln(x+0.1), as the original response variable “area" is highly concentrated at 0_+. As small fires are less damaging but much more frequent and large fires are less frequent but much more destructive, it would significantly aid targeted firefighting and thus reduce fires' negative impacts if we can discern potential differentiated predictor effects across different fire scales and make more accurate predictions on a given initiated fire's possible damages. As presented in <Ref>, it is likely that there are indeed noteworthy varying covariate effects. Temperature x_9 appears significant at all quantile levels but its effects switch sign. The Drought Code x_7 showcases prominent negative impacts on y from middle to upper-middle q, and the Duff Moisture Code x_6 appears more influential in positively affecting y at higher quantile levels. This finding, that higher DMC and lower DC result in more severe destructions for larger fires, match well to what Amiro2004 found out. We also observed that smaller (q=0.1) fires' impacts seem to have much fewer key predictors whereas impairments from fires of an upper-middle scale (q=0.75) appear correlated to much more covariates. §.§ Monotone Spline for Quantile Monotonicity Preservation Consider situations when we have a sole explanatory variable x for a response variable y. Sometimes, we know that y should always decrease or increase as x increases, yet sample variation may lead to non-monotone fitted regression curves. At times, fitted curves can present manifest counterintuitive yet unjustifiable characteristics, casting severe doubt on the underlying models' adequacy. One illustrative example is an aforementioned[In <Ref>.] generalized gamma model fitted by Noufaily2013 to the “WTD" dataset, whose scatter plot is given in <Ref>B. As shown in <Ref>, the GG model's fitted 0.9^th quantile curve notably shifts upward for large values of x, violating the perception that water table depth should decrease when more rainfall penetrates into the ground (when flux is bigger). Another case in point is a double kernel nonparametric smoothing fitting to the aforementioned<ref> “IgG" data set (<Ref>) brought forward by Yu2016. As in <Ref>, <cit.>'s quantiles exhibit evident bumps for elder kids, which remain unexplained and disagree with results obtained by Royston1994 and Isaacs1983. 
To address such concerns, we opt to impose a monotonicity constraint via Monotone Cubic Interpolation Fritsch1980, which ensures monotonicity by the following two steps: * Transform x=(x_1,x_2,…,x_n)^T into an n× p matrix X_= [ _1(x_1) _2(x_1) … _p(x_1); _1(x_2) _2(x_2) … _p(x_2); ⋮ ⋮ ⋱ ⋮; _1(x_n) _2(x_n) … _p(x_n) ] using B-spline (bs) with p-2 equally spaced knots (2 boundary knots and p-4 inner knots) and in R; * Restrict the regression coefficients β=(β_1,β_2,…,β_p)^T for the model ŷ_n× 1 = X_β̂ to be strictly increasing or decreasing, i.e., β_1<β_2<…<β_p or β_1>β_2>…>β_p. To guarantee the coefficient restriction <Ref> when implementing monotone regression, we introduce an unrestricted vector of to-be-estimated regression parameters θ=(θ_1,θ_2,…,θ_p)^T specified by θ_1=β_1, θ_2=ln(β_2-β_1), θ_3=ln(β_3-β_2), …, θ_p=ln(β_p-β_p-1) so that β_1=θ_1, β_2=θ_1+e^θ_2, β_3=θ_1+e^θ_2+e^θ_3, …, β_p=θ_1+e^θ_2+… +e^θ_p for monotonically increasing regression (achieved by requiring β_1<β_2<…<β_p) or θ_1=β_1, θ_2=ln(β_1-β_2), θ_3=ln(β_2-β_3), …, θ_p=ln(β_p-1-β_p) so that β_1=θ_1, β_2=θ_1-e^θ_2, β_3=θ_1-e^θ_2-e^θ_3, …, β_p=θ_1-e^θ_2-… -e^θ_p for monotonically decreasing regression (achieved by requiring β_1>β_2>…>β_p). Getting back to our Quantile Regression MM Algorithm Hunter2000 in <Ref>, we note that Q_ϵ(θ|θ^(t)) (<Ref>) is not quadratic in θ anymore since f_i(θ) = ∑_j=1^p β_j·_j(x_i) = θ_1·∑_j=1^p _j(x_i)+e^θ_2·∑_j=2^p _j(x_i) +…+e^θ_p·_p(x_i) θ_1·∑_j=1^p _j(x_i) -e^θ_2·∑_j=2^p_j(x_i) - … - e^θ_p·_p(x_i) , is not linear in θ anymore. Hence, we can no longer directly solve for θ^(t+1) as in <Ref>. We resort to a Gauss-Newton method instead to continuously drive the majorizer downhill. This way, the descent property (<Ref>) still holds and thus the value of our actual objective function is still guaranteed to decrease iteration after iteration. Let r_i(θ)=y_i-f_i(θ) with f_i(θ) as specified in <Ref>. Then [∂/∂θr_i(θ)]_p× 1=( ∂ r_i(θ)/∂θ_1,…,∂ r_i(θ)/∂θ_p)^T = (∑_j=1^p _j(x_i),∓ e^θ_2·∑_j=2^p _j(x_i),…,∓ e^θ_p·_p(x_i) )^T, [∂^2/∂θ∂θ^Tr_i(θ)]_p× p= [ ∂^2r_i(θ)/∂θ_1^2 ∂^2r_i(θ)/∂θ_2θ_1 … ∂^2r_i(θ)/∂θ_pθ_1; ∂^2r_i(θ)/∂θ_1θ_2 ∂^2r_i(θ)/∂θ_2^2 … ∂^2r_i(θ)/∂θ_pθ_2; ⋮ ⋮ ⋱ ⋮; ∂^2r_i(θ)/∂θ_1θ_p ∂^2r_i(θ)/∂θ_2θ_p … ∂^2r_i(θ)/∂θ_p^2 ] = [ 0 ; e^θ_2·∑_j=2^p _j(x_i) ; ⋱ ; e^θ_p·_p(x_i) ]. Denote r_i'(θ)_p× 1 as ∂/∂θr_i(θ) and r_i”(θ)_p× p as ∂^2/∂θ∂θ^Tr_i(θ). Given current parameter estimate θ^(t) and the first, second differentials Q_ϵ'(θ|θ^(t))_p× 1= 1/2∑_i=1^n[r_i(θ)/ϵ+|r_i(θ^(t))|+2q-1 ] r_i'(θ) and Q_ϵ”(θ|θ^(t))_p× p= 1/2∑_i=1^n[1/ϵ+|r_i(θ^(t))|r_i'(θ)r_i'(θ)^T+(r_i(θ)/ϵ+|r_i(θ^(t))|+2q-1 )r_i”(θ) ], we can calculate the Gauss-Newton step direction at iteration t as the p× 1 vector Δ_ϵ^(t)=-[r'(θ^(t))^TW_ϵ(θ^(t))r'(θ^(t))+H(θ^(t))]^-1r'(θ^(t))^Tv_ϵ(θ^(t))^T, where r'(θ)_n× p= [ r'_1(θ)^T; r'_2(θ)^T; ⋮; r'_n(θ)^T; ], W_ϵ(θ^(t))_n× n= [ 1/ϵ+|r_1(θ^(t))| ; 1/ϵ+|r_2(θ^(t))| ; ⋱ ; 1/ϵ+|r_n(θ^(t))| ], H(θ|θ^(t))_p× p=∑_i=1^n(r_i(θ)/ϵ+|r_i(θ^(t))|+2q-1 )r_i”(θ), and v_ϵ(θ|θ^(t))_1× n=(r_1(θ)/ϵ+|r_1(θ^(t))|+2q-1,…, r_n(θ)/ϵ+|r_n(θ^(t))|+2q-1). The next parameter estimate given by θ^(t+1)=θ^(t)+α^(t)Δ_ϵ^(t), where α^(t)= 1, if Δ_ϵ^(t)=0_p× 1, i.e., a stationary point max{2^-v:Q_ϵ(θ^(t)+2^-vΔ_ϵ^(t)|θ^(t))<Q_ϵ(θ^(t)|θ^(t)),v∈ℕ}, otherwise then ensures Q_ϵ(θ^(t+1)|θ^(t))≤ Q_ϵ(θ^(t)|θ^(t)).[When implementing the Gauss-Newton approach in statistical software like R, we take α^(t)=1 (interpreted as exploring some new terrain) if Q_ϵ(θ^(t)+2^-vΔ_ϵ^(t)|θ^(t))≥ Q_ϵ(θ^(t)|θ^(t)) ∀ v∈ℕ,v≤ M for some suitably big M∈ℕ.] 
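The reparameterization at the heart of this construction is easy to state in code. The following sketch (ours) maps an unconstrained θ to strictly monotone coefficients and evaluates the fitted values given a B-spline design matrix X_bs, which is assumed to be supplied by a spline routine:

import numpy as np

def theta_to_beta(theta, increasing=True):
    """Map an unconstrained theta to strictly monotone coefficients:
    beta_1 = theta_1 and beta_j = beta_{j-1} +/- exp(theta_j) for j >= 2."""
    steps = np.exp(theta[1:])
    signed = steps if increasing else -steps
    return np.concatenate([[theta[0]], signed]).cumsum()

def fitted_values(theta, X_bs, increasing=True):
    """f(theta) = X_bs @ beta(theta), where X_bs is a B-spline design matrix."""
    return X_bs @ theta_to_beta(theta, increasing)

# quick check that the implied coefficients are indeed monotone
theta = np.array([0.3, -1.0, 0.2, -0.5])
assert np.all(np.diff(theta_to_beta(theta, increasing=False)) < 0)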
We implemented this MM monotone quantile regression algorithm on the “WTD" and “IgG" data sets mentioned at the beginning of this subsection. As <Ref> suggests, monotone spline interpolation achieved via MM successfully preserved our desired monotonicity while capturing the data sets' patterns reasonably well. § DISCUSSION This article explores MM algorithms in diverse quantile regression settings. Applications of MM to several real data sets and two sets of simulation studies comparing MM with existing tested methods showcase our MM algorithms' effectiveness. One main advance we have made is putting forward a novel parametric fitting method that efficiently models interested quantiles simultaneously for nonparametric quantile regression, which is achieved via MM detailed in <Ref>. MM's computation speed edge over some other prominent algorithms when estimating coefficients at different quantiles separately Pietrosanu2017 has also been supported by our simulation study in <Ref>. One major flaw of our MM is that it gets relatively slow compared to when estimating different quantile coefficients simultaneously, which should result from having to invert an hp× hp matrix ∑_a=1^k D^T(q_a)W_a^(t)D(q_a) (in <Ref>) at each iteration. It is, however, quite promising to address this current weakness, as some authors have successfully invented fast MM algorithms that bypass large matrix inversion problems (<cit.>; <cit.>). In situations where super fast algorithms are needed, a finite smoothing algorithm raised by Chen2007, which operates by directly minimizing a nice smooth approximate function of L_q(θ)=∑_i=1^nρ_q(r_i(θ)), may also come to the rescue. § ADDITIONAL PROOFS Proposition A.1. Q(θ|θ^(t))=∑_i=1^n ζ_q(r_i(θ)|r_i(θ^(t)))=∑_i=1^n1/4{r_i^2(θ)/|r_i(θ^(t))|+(4q-2)r_i(θ)+|r_i(θ^(t))|} majorizes L(θ)=∑_i=1^n ρ_q (r_i(θ))=∑_i=1^n qr_i(θ)-r_i(θ)1_{r_i(θ)<0} at θ=θ^(t). Q( θ^(t)|θ^(t))-L(θ^(t))=∑_i=1^n [ζ_q(r_i(θ^(t))|r_i(θ^(t)))-ρ_q (r_i(θ^(t))) ] = ∑_i=1^n 1/4[|r_i(θ^(t))|-2r_i(θ^(t))+|r_i(θ^(t))|]+r_i(θ^(t))1_{r_i(θ^(t))<0} = ∑_i=1^n { r_i(θ^(t))(- 1/2+1_{r_i(θ^(t))<0})+1/2|r_i(θ^(t))| } =0. It remains to show Q(θ|θ^(t)) satisfies <Ref>. ∀ residual r, letting h(r)= ζ_q(r|r^(t))-ρ_q(r)=1/4( r^2/|r^(t)|-2r+|r^(t)|)+r1_{r<0}, we have h'(r)= ζ_q'(r|r^(t))-ρ_q'(r)=1/2(r/|r^(t)|-1)+1_{r<0}=1/2(r-|r^(t)|(1-2·1_{r<0})/|r^(t)|) = 1/2·r-|r^(t)|/|r^(t)| if r≥ 0 1/2·r+|r^(t)|/|r^(t)| if r< 0, and h”(r)= 1/2|r^(t)|>0 (r^(t)≠ 0) ∀ r. Observe from <Ref> that h(± r^(t))=h'(± r^(t))=0 and h'(r)≤ 0 if r ∈ (-∞, -|r^(t)|]∪ [0,|r^(t)|] and ≥ 0 if r ∈ [-|r^(t)|,0]∪ [|r^(t)|,∞) h(r) is non-increasing for r ∈ (-∞, -|r^(t)|]∪ [0,|r^(t)|] and is non-decreasing for r ∈ [-|r^(t)|,0]∪ [|r^(t)|,∞) ⇒ h(r)≥min{h(r^(t)),h(-r^(t))}=0 ∀ r ζ_q(r|r^(t))≥ρ_q(r) ∀ r. Hence Q(θ | θ^(t))≥ L(θ) ∀ θ and <Ref> is satisfied. Proposition A.2. Consider the linear quantile regression model Y_i=X_i^Tβ_q+U_i, i=1,2,…,n, q∈(0,1) with loss ρ_q(u)=|u|[q1_{u≥0}+(1-q)1_{u<0}]=qu-u1_{u<0} and error distribution U_1,…,U_n∼ an asymmetric Laplace distribution with PDF f_U(u)=q(1-q)/σe^-ρ_q(u)/σ. Given a fixed λ≥ 0 and an obtained (penalized) coefficient estimator β̂_q=β̂_λ(q) that attempts to minimize L_q^ϵ(β) in <Ref>, a BIC is given by _q(β̂_q)=ln( ∑_i=1^nρ_q(r_i(β̂_q))) + |𝑆_q|ln(n)/2n if we approximate σ by σ̂_. Given sample observations {(x_i,y_i):1≤ i≤ n}, the log-likelihood is l(β_q,σ)=ln[∏_i=1^n f_U(y_i-x_i^Tβ_q)]=-nln(σ)-∑_i=1^nρ_q(y_i-x_i^Tβ_q)/σ+. For any given β̂_q, we let g(σ)=l(β̂_q,σ), σ∈(0,∞). 
Since g is differentiable everywhere on (0,∞) and g'(σ) = ∂ l(β_q,σ)/∂σ|_β_q=β̂_q=-n/σ+∑_i=1^nρ_q(y_i-x_i^Tβ̂_q)/σ^2=[-nσ+∑_i=1^nρ_q(y_i-x_i^Tβ̂_q)]/σ^2 has a unique zero (and hence the unique critical point by Fermat's Theorem) at σ =σ̂(β̂_q)=∑_i=1^nρ_q(y_i-x_i^Tβ̂_q)/n with g'(σ)>0 for σ<σ̂(β̂_q) and g'(σ)<0 for σ>σ̂(β̂_q), σ=σ̂(β̂_q) given in <Ref> is the unique local maximum. Since g(σ)→ -∞ as σ→∞ or as σ→ 0_+, σ̂_(β̂_q)=∑_i=1^nρ_q(y_i-x_i^Tβ̂_q)/n∈ (0,∞) satisfies g(σ̂_(β̂_q)) = max_{σ∈(0,∞)} g(σ) = max_{σ∈(0,∞)} l(β̂_q,σ) and is the unique MLE of σ given β̂_q. Setting σ=σ̂_(β̂_q)=∑_i=1^nρ_q(y_i-x_i^Tβ̂_q)/n, we get[note that n is given and hence constant] _q(β̂_q) =-2l(β̂_q,σ̂_(β̂_q))+|𝑆_q|ln(n)=2nln[σ̂_(β̂_q)]+2n-2+|𝑆_q|ln(n) =2n(ln[∑_i=1^nρ_q(y_i-x_i^Tβ̂_q)/n]+|𝑆_q|ln(n)/2n)+2n-2 =2n(ln[∑_i=1^nρ_q(y_i-x_i^Tβ̂_q)]+|𝑆_q|ln(n)/2n)+2n-2-2nln(n) =_1(ln[∑_i=1^nρ_q(y_i-x_i^Tβ̂_q)]+|𝑆_q|ln(n)/2n)+_2, where _1=2n>0. Hence, <Ref> is equivalent to <Ref> as a BIC metric.

§ ADDITIONAL TABLES AND FIGURES
http://arxiv.org/abs/2407.13726v1
20240718172517
Compressing Structured Tensor Algebra
[ "Mahdi Ghorbani", "Emilien Bauer", "Tobias Grosser", "Amir Shaikhha" ]
cs.PL
[ "cs.PL", "cs.LG", "cs.MS" ]
0000-0003-3686-2158 Mahdi.Ghorbani@ed.ac.uk University of Edinburgh Edinburgh UK Emilien.Bauer@ed.ac.uk University of Edinburgh Edinburgh UK tobias.grosser@cst.cam.ac.uk University of Cambridge Cambridge UK 0000-0002-9062-759X amir.shaikhha@ed.ac.uk University of Edinburgh Edinburgh UK

§ ABSTRACT Tensor algebra is a crucial component for data-intensive workloads such as machine learning and scientific computing. As the complexity of data grows, scientists often encounter a dilemma between the highly specialized dense tensor algebra and efficient structure-aware algorithms provided by sparse tensor algebra. In this paper, we introduce DASTAC, a framework to propagate the tensors' captured high-level structure down to low-level code generation by incorporating techniques such as automatic data layout compression, polyhedral analysis, and affine code generation. Our methodology reduces memory footprint by automatically detecting the best data layout, heavily benefits from polyhedral optimizations, leverages further optimizations, and enables parallelization through MLIR. Through extensive experimentation, we show that DASTAC achieves 1 to 2 orders of magnitude speedup over TACO, a state-of-the-art sparse tensor compiler, and StructTensor, a state-of-the-art structured tensor algebra compiler, with a significantly lower memory footprint.

Compressing Structured Tensor Algebra Amir Shaikhha July 22, 2024

*These authors contributed equally to this work.

§ INTRODUCTION Tensor algebra is a fundamental component in several data-intensive workloads, such as signal processing <cit.>, machine learning <cit.>, computer vision <cit.>, quantum chemistry <cit.>, and bioinformatics <cit.>. Numerous lines of research have attempted to specialize and enhance the performance of the tensor algebra computations either at the hardware level (e.g., tensor accelerators <cit.> and TPUs <cit.>) or at the software level (e.g., tuned tensor kernels <cit.> and compilation-based optimizations <cit.>). As the complexity of the data grows, scientists often face a trade-off between highly tuned and specialized frameworks provided for dense tensor algebra computation and efficient and flexible algorithms provided for sparse tensor algebra that leverage the sparsity of input tensors. Dense tensor algebra frameworks <cit.> support a rich set of compile-time optimizations such as vectorization, tiling, and parallelization. This is thanks to well-known memory access patterns and efficient utilization of contiguous memory, despite not knowing the actual data. Computations over structured tensor algebras are at the core of several important workloads. The structure is an expression of static information on the sparsity and redundancy of the tensors. Sparsity information defines the symbolic set of indices of non-zero values (e.g., diagonal) while redundancy defines a symbolic mapping from the set of indices of repetitive elements to their unique equivalent (e.g., symmetric). Structured matrices have many applications in machine learning <cit.> and deep learning <cit.>. The diagonal structure used in the ACDC layer <cit.>, tridiagonal weight matrices mentioned by <cit.>, and structured neural network pruning for large language models <cit.> are examples of applications of matrices with simple structures. 
Furthermore, more intricate structures such as hyper triangular for covariance matrices <cit.> and the butterfly structure used for neural network training <cit.> are examples of structured tensor applications. Sparse tensor algebra frameworks <cit.> provide asymptotically improved algorithms by leveraging the sparsity structure of the data with efficient memory storage requirements and better scalability. However, unlike dense counterparts, sparse frameworks cannot leverage the full computation power of the hardware. This is due to irregular data structures, leading to less predictable memory access patterns and making it challenging to fully leverage the same level of optimizations as their dense counterparts. In this paper, we introduce , a framework that propagates the high-level structural information of tensor computations down to the low-level highly tuned code generation (cf. Figure <ref>). leverages a novel symbolic indexing method that compresses an input structured tensor into a densely packed vector representation. In contrast to well-established sparse data layouts like CSR, CSC, or COO, which involve indirect accesses and incur index storage overhead, our method employs efficient direct symbolic index computations. This direct indexing not only diminishes the overall memory footprint but also opens up opportunities for vectorization. These advantages become more important in scenarios where memory bandwidth is the main bottleneck in program efficiency, as is the case on many widespread contemporary hardware, such as CPUs and GPUs. achieves the densely assembled data layout by relying on a well-established mathematical foundation: the polyhedral model. First, uses to capture and propagate the structure throughout the tensor algebra computation and then proceeds to apply polyhedral set and map optimizations over the captured structure. Second, proceeds to find a densely packed data layout for the tensors involved in the computation and compress them automatically by a novel symbolic indexing algorithm, leading to better cache locality and less memory footprint. Finally, generates structure-aware low-level code for CPU using the affine dialect of MLIR <cit.>, enabling effective optimizations at different levels of abstraction. Consequently, brings out the best of both dense and sparse tensor algebra worlds together by relying on polyhedral analysis, memory, and complexity efficacy of sparse tensor algebra alongside specialization and compile-time optimization of dense tensor algebra based on polyhedral and compiler optimizations. Specifically, we make the following contributions: * We introduce , the first framework that integrates the best of both sparse and dense tensor algebra worlds (Section <ref>). The algorithmic improvements are achieved by leveraging the structure of tensors and this structure is progressively lowered down to the machine-level code. * introduces a densely assembled data layout that enables low-level specialization opportunities (Section <ref>). The central technique is a novel static indexing algorithm that maps the indices of the input tensor to a contiguous memory buffer. * We provide a progressive code generation algorithm (Section <ref>). First, the tensor computations are translated to affine operations provided out-of-the-box by the MLIR framework. Subsequently, at the next stages, the MLIR framework allows to benefit from optimizations such as code motion, common subexpression elimination and parallelization. 
* We experimentally evaluate against state-of-the-art tensor algebra frameworks and show that leveraging structure while benefiting from our proposed compressed data layout and polyhedral optimizations leads to better performance both sequentially and on multi-threaded scenarios (Section <ref>). We show that achieves 1 to 2 orders of magnitude speedup over the TACO sparse tensor compiler and StructTensor, a state-of-the-art structured tensor algebra compiler, with a significantly lower memory footprint. We also show that frameworks such as Polygeist cannot recover the optimizations offered by . § BACKGROUND ON STRUCTURED TENSOR ALGEBRA Matrices and tensors can have several structures, such as diagonal, tridiagonal, symmetric, and triangular. Leveraging such structures can improve the computational cost of tensor operations by orders of magnitude. For example, leveraging the matrix structure in sparse matrix-vector multiplication (SpMV), where the matrix is diagonal, can lower the computational complexity from O(n^2) to O(n). Tensor structures can be classified into two categories: 1) sparsity patterns and 2) redundancy patterns. The sparsity patterns refer to structures such as diagonals where zero and non-zero values are distinguished. Structures with redundancy pattern refers to structures such as symmetry, with unique and repetitive elements. The tensor can be reconstructed using the unique values and a mapping from the indices of redundant elements to the unique ones.  <cit.> is the ’s unified intermediate representation used for representing both arithmetic and structural information. relies on structure inference for tensor computations in the form of unique sets, redundancy maps, and compressed tensors. We explain each component of and how it works using a running example. Unique set It represents the non-zero and unique (non-repetitive) values of the input tensor and corresponds to capturing the sparsity pattern. For example, the unique set of an n × n diagonal matrix M represented as M_U is as follows: M_U := {(x, y) | (0 ≤ x < n) ∧ (0 ≤ y < n) ∧ (x = y)} Any set representation can be shown in the form of tensor computation over a boolean domain by replacing ∧ with * and ∨ with +. Therefore, the unique set can be represented as follows: M_U(x, y) := (0 ≤ x < n) * (0 ≤ y < n) * (x = y) Redundancy map It provides the mapping from redundant elements to their corresponding unique elements and corresponds to capturing the redundancy pattern. For example, the unique set and redundancy map for an n × n symmetric matrix V are as follows: V_U(x, y) := (0 ≤ x < n) * (0 ≤ y < n) * (x ≤ y) V_R(x, y, x', y') := (0 ≤ x < n) * (0 ≤ y < n) * (x > y) * (x' = y) * (y' = x) Here we are assuming that the lower triangular part of the matrix is the unique part, and the upper triangular part of it is repetitive. Variables x, y represent the iteration space, and the mapping from the upper triangular to the lower triangular part is represented through variables x', y'. So this representation means x and y values must be swapped to have access to the unique elements. Structure inference Now imagine the example of sparse matrix-matrix element-wise multiplication where the operands are the aforementioned matrices, M and V. To perform this multiplication while leveraging the structure, applies the program reasoning rules <cit.> to infer this computation's output structure and compressed iteration space. 
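As a purely numeric illustration of what this inference buys on the running example (the framework itself reasons symbolically over index sets rather than materializing matrices), only the diagonal of the element-wise product can be non-zero; the sketch below, in Python/NumPy with names of our choosing, simply verifies this:

import numpy as np

n = 4
rng = np.random.default_rng(0)
M = np.diag(rng.uniform(size=n))            # diagonal structure
A = rng.uniform(size=(n, n))
V = np.tril(A) + np.tril(A, -1).T           # symmetric structure
T = M * V                                    # element-wise product
# only diagonal entries of T can be non-zero, so storing T(x, x), 0 <= x < n, suffices
assert np.allclose(T, np.diag(np.diag(T)))
print(np.diag(T))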
For this example, the computation and the inferred output structure after simplification are:

T(x, y) := M(x, y) * V(x, y)
T_U(x, y) := (0 ≤ x < n) * (0 ≤ y < n) * (x = y)
T_R(x, y, x', y') := ∅

Compressed tensor It is used for code generation and contains the structural information (the unique set and redundancy map) together with the arithmetic operation over tensors, expressed in 's syntax. The computation over sets and the arithmetic are written using the same intermediate representation () for structure inference purposes, but they are treated separately in the code generation process. internally separates the structural information from the arithmetic part. The structural information is used to generate loops, while the arithmetic part generates the computational code inside the loop nests. For example, matrix T will have the following compressed tensor:

T_C(x, y) := (0 ≤ x < n) * (0 ≤ y < n) * (x = y) * M(x, y) * V(x, y)

This means that, in this computation, the iterators x, y iterate between 0 and n and must be equal to each other. The arithmetic operation performed inside the loop nests is M(x, y) * V(x, y). Note that a compressed tensor here refers to a tensor that carries the structural information, so the iteration space can be compressed using this information; the tensor data layout itself is not compressed. generates the following code:

for (int x = 0; x < n; ++x) {
  int y = x;
  T(x, y) = M(x, y) * V(x, y);
}

The first two lines are generated from the structural information, and the third line is generated from the arithmetic part of the compressed tensor.

State-of-the-Art  <cit.> introduces the intermediate language together with the program reasoning rules to infer structures. As a final step, provides C++ code that performs the computation over the compressed iteration space. outperforms sparse and dense tensor algebra competitors by leveraging and propagating the compile-time-known structures throughout the computation.

Limitations Even though infers the structure, it relies on an uncompressed dense representation of tensors that contains zero and repetitive elements. Alternatively, requires the user to specify a compact data layout manually, but this is limited to the data layout of the inputs. The outputs are still stored in an uncompressed dense format; cannot capture a compressed data layout for the outputs. Furthermore, the generated C++ code relies on the low-level compiler (e.g., Clang or GCC) for further optimizations. Thus, it cannot benefit from more advanced optimizations that are not supported directly by the compiler (e.g., parallelization). solves both limitations by relying on the mathematical foundation of the polyhedral model. First, it uses a novel symbolic indexing algorithm to densely pack the input and output tensors. Second, benefiting from the progressive lowering between intermediate languages (dialects) provided by the MLIR framework, it applies additional compiler optimizations, including parallelization and vectorization.

§ OVERVIEW

In this section, we describe the overall architecture of (cf. Figure <ref>).

Input uses as its intermediate language, as it relies on  <cit.> for structure inference as the first step. can express generalized Einsum expressions as a sum of products (SoP). As an example, Figure <ref> shows the representation of the tensor-times-matrix (TTM) computation with an upper-half-cube structure.
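Since Figure <ref> is not reproduced here, the following rough sketch (our reconstruction in the sum-of-products notation used above, consistent with the iteration space and output structure given later in the paper, and not necessarily the exact contents of the figure) illustrates what such a TTM input could look like, where B carries the upper-half-cube sparsity (i ≤ j), C is dense, and both redundancy maps are empty:

A(i, j, k) := B(i, j, l) * C(k, l)
B_U(i, j, l) := (0 ≤ i < M) * (i ≤ j < N) * (0 ≤ l < Q)
C_U(k, l) := (0 ≤ k < P) * (0 ≤ l < Q)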
Structure inference proceeds to propagate the input structure throughout the computation by applying the program reasoning rules provided by . The output structure and compressed computation are inferred during this stage. uses the compressed domain inferred by as input. For the running example, the output structure is as follows:

A_U(i, j, k) := B_U(i, j, l) * C_U(k, l)

Structure Optimizations improves the structured tensor computation by applying structure optimizations. provides a set of optimizations, such as inlining and logical simplifications (e.g., set idempotence and addition/multiplication identity optimizations), on the output structure and compressed computation. A polyhedral tool such as isl <cit.> can easily support and apply the aforementioned optimizations alongside further polyhedral set optimizations. Relying on polyhedral tools therefore reduces the development cost while increasing robustness. For these reasons, relies on isl for further structure optimizations. Figure <ref> shows the structured computation for the running example after applying structure optimizations.

Data Layout Compression leverages the statically known sparsity structure to assemble a densely packed buffer, aiming for cache-friendliness and vectorization opportunities. Our compression technique consists of a symbolic indexing function that maps each original tensor index to a linear index over the unique and non-zero values. For example, the inferred compressed index of B in our running example (Figure <ref>) is:

P_B(i,j,l) = (((-1/2 + N) Q · i - 1/2 · Q · i^2) + Q · j) + l

In this expression, the division is rational, and Q and N correspond to the dimensions in Figure <ref>. Compression and decompression can then be achieved by iterating over the accessed indices and copying the data into and out of the compressed buffer.

Progressive Code Generation then proceeds to code generation. It leverages isl's Abstract Syntax Tree (AST) generation feature, which takes a union of iteration domains and optional validity dependencies and returns an AST enforcing those conditions. This AST is then used to generate code in the Affine dialect, a polyhedral-based intermediate representation defined in the MLIR framework. We drive the compilation from this IR down to executable code through MLIR's provided passes, which generate low-level code implementing the affine operations used, simplify the polynomial computations and hoist each part as high as possible in the loop nest, and parallelize the computation by mapping it to OpenMP runtime calls. Finally, the resulting LLVM IR is passed to Clang, which provides auto-vectorization and profits from the new opportunities offered by the densely packed data layout.

§ DATA LAYOUT COMPRESSION

In this section, we present our method for data layout compression using a novel symbolic indexing algorithm, enabling a densely packed representation of the structured sparse tensor elements. This method relies on a symbolic and linear indexing of the values used in a computation. As opposed to well-established sparse data layouts, such as CSR, CSC, or COO, which incur access indirections and index storage overhead, our method uses direct symbolic index computations. This direct indexing simultaneously reduces the global memory footprint and opens vectorization opportunities, two excellent properties when memory bandwidth is a significant constraint on program efficiency, as is the case on many widespread contemporary platforms, such as CPUs and GPUs.
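To make the contrast with indirection-based formats concrete, the following hand-written C++ sketch (illustrative only; it is not generated code) compares a CSR sparse matrix-vector product, where every access goes through stored index arrays, with direct symbolic indexing of an upper-triangular matrix packed densely row by row, where the linear index is a closed-form function of the loop iterators and the matrix size:

#include <vector>

// CSR-style sparse matrix-vector product: every access to the matrix value
// goes through the col_idx indirection, and the index arrays must be stored.
void spmv_csr(int n, const std::vector<int>& row_ptr,
              const std::vector<int>& col_idx,
              const std::vector<double>& val,
              const std::vector<double>& x, std::vector<double>& y) {
  for (int i = 0; i < n; ++i)
    for (int p = row_ptr[i]; p < row_ptr[i + 1]; ++p)
      y[i] += val[p] * x[col_idx[p]];  // indirect, data-dependent access
}

// Direct symbolic indexing of an upper-triangular matrix (entries with j >= i),
// packed densely row by row: the linear index is a closed-form function of
// (i, j, n), so no index arrays are needed.
inline int upper_tri_index(int i, int j, int n) {
  return i * n - (i * (i - 1)) / 2 + (j - i);
}

void spmv_upper_tri(int n, const std::vector<double>& packed,
                    const std::vector<double>& x, std::vector<double>& y) {
  for (int i = 0; i < n; ++i) {
    int base = upper_tri_index(i, i, n);      // row offset, hoisted out of the j loop
    for (int j = i; j < n; ++j)
      y[i] += packed[base + (j - i)] * x[j];  // direct, unit-stride access
  }
}

The second variant stores no index arrays, accesses the packed buffer with unit stride in the inner loop, and exposes the row offset for hoisting; the symbolic indexing algorithm described next derives such closed-form indices automatically for arbitrary affine structures.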
starts the compression process from a tensor computation expressed in (e.g., Figure <ref>). restricts itself to rules expressed with quasi-affine expressions (affine expressions augmented with modulo operations). This enables the implementation to leverage mature tools to infer this compressed indexing efficiently. To sidestep combinatorial complexity, only works with individual summands of a rule. Take, for example, the following rule:

O(j) := A(i,j) * (i = 1) * (0 ≤ j < N) + A(i,j) * (i = 3) * (0 ≤ j < N)

will first infer linear indices for O(j) and A(i,j) over the first region, where i = 1. Only then does it move on to the i = 3 region. In the second region, with i = 3, a disjoint region of A is read from, so a second compressed buffer is inferred for this region. On the other hand, the same region of O is being written to; thus "deduplicates" the compressed buffer by simply reusing the one inferred for the first region. This design decision avoids a costly compile-time generalization of the process and works effectively on many motivating cases. In full generality, however, it has two limitations. First, some values might be duplicated in different compressed buffers, potentially leading to a suboptimal memory footprint reduction in some specific cases. Second, the output compression is only used when output regions are equal (as in the above example) or disjoint, as any other case would potentially lead to partial results being spread across multiple buffers, requiring additional computation to gather the right values in each buffer.

Symbolic Indexing Algorithm At this stage, uses our symbolic indexing algorithm (Algorithm <ref>) to infer a compressed indexing function. The algorithm takes two inputs. The first input, 𝒟_𝐩, is an ordered iteration space, where the parameter 𝐩 defines symbols such as dimension sizes. In other words, 𝒟_𝐩 represents the set of iterator values with which the tensor access T is used. By ordered we mean a sequential execution of the computation. The second input, denoted by T, is the considered tensor access, in the form of a function from an iteration to the indices used in this access. The process described in Algorithm <ref> is illustrated using buffer B of our example in Figure <ref>.

Iteration space extraction To apply Algorithm <ref>, we define the iteration space of the summand at hand by taking all comparisons of the considered summand. For our running example in Figure <ref>, the iteration space is:

𝒟_Q,N,M,P := { (i,j,k,l) ∈ ℤ^4 | 0 ≤ i < M, i ≤ j < N, 0 ≤ k < P, 0 ≤ l < Q }

Here, the symbols correspond to the boundary information of the dimensions (Q,N,M,P). Because we restrict ourselves to affine expressions, this set is a polyhedron by construction. Because we define the iteration space from a high-level representation, we control the order of the dimensions; we simply order them based on their appearance in the rule, ensuring the first iterators are used to index the output, with the last one of them indexing contiguous elements.

Access mapping For each buffer, we define its access map using the access variables used in the expression. By construction, those are affine maps.
The corresponding map for buffer B in Figure <ref> is as follows:

a_B(i,j,k,l) = (i,j,l)
𝒜_B,p := a_B(𝒟_p) = { j ∈ ℤ^m | ∃ i ∈ ℤ^n : a_B · i = j, 𝒞 · i + c ≥ 0 }

We apply this mapping to the above iteration space, yielding the set of accessed indices:

𝒜^B_Q,N,M,P = a_B(𝒟_Q,N,M,P) = { (i,j,l) ∈ ℤ^3 | 0 ≤ i < M, i ≤ j < N, 0 ≤ l < Q }

Preceding Accesses We proceed to define an order on this accessed domain, using the access map to respect the iteration order. This order is similar to the lexicographic order, except that it uses the indices in their iteration-domain order. For the map a(i,j,k,l) = (k,j,l), this order is:

a(i',j',k',l') < a(i,j,k,l) ⟺ j' < j ∨ (j' = j ∧ k' < k) ∨ (j' = j ∧ k' = k ∧ l' < l)

That is, we use the set of iterators appearing in the access map, in their initial order. Applying this order to the accessed domain, we define the preceding access domain as the set of all accessed indices preceding a symbolic index. For our running example, this yields:

𝒫^B_Q,N,M,P,i,j,l = {(i',j',l') ∈ 𝒜^B_Q,N,M,P | i' < i} ∪ {(i',j',l') ∈ 𝒜^B_Q,N,M,P | i' = i ∧ j' < j} ∪ {(i',j',l') ∈ 𝒜^B_Q,N,M,P | i' = i ∧ j' = j ∧ l' < l}

Access domain linearization Finally, we can synthesize a mapping from a current iteration to the desired compressed linear index by counting the number of effective accesses preceding this iteration, that is, by counting the number of elements of the preceding access domain 𝒫_𝐩,𝐢. The Barvinok algorithm <cit.> is a method for counting the number of integer points within a parametric polyhedron, returning a piecewise quasi-polynomial (a generalization of polynomials having periodic coefficients) in the polyhedron's parameters. We can thus use this algorithm, implemented in isl, on the preceding access domain and obtain a linear indexing of the accessed domain's elements, following our chosen order:

P_B = #𝒫^B_𝐩,𝐢

Piecewise Polynomial Simplification In general, the counting algorithm yields a piecewise quasi-polynomial, which could have a different expression on distinct subsets of the counted set. In practice, it often yields special cases at boundaries that are not desirable for simple code generation. For example, when counting the preceding accesses for a triangle as illustrated in Figure <ref>, the resulting piecewise polynomial is:

P = 1/2 i + 1/2 i^2 + j if j > 0
P = 1/2 i + 1/2 i^2 if j = 0

We apply the piecewise polynomial fusion algorithm presented in Algorithm <ref> to simplify the expression of this polynomial. Let us apply this algorithm to our example. We name the two regions D and D', and their corresponding polynomials P and P': P = 1/2 i + 1/2 i^2 + j, P' = 1/2 i + 1/2 i^2. Following line 4 of Algorithm <ref>, we have P - P' = j. Restricting this difference to D' : j = 0, we have (P - P')|_D' = 0; in other words, P is equal to P' on the latter's domain. We thus remove the special case on this domain (line 6), keeping only P on the union of D and D', yielding a single polynomial expressive enough for the whole buffer:

P = 1/2 i + 1/2 i^2 + j

The computed compressed index for buffer B in Figure <ref> is as follows:

P_B(i,j,l) = (((-1/2 Q + Q · N) · i - 1/2 Q · i^2) + Q · j) + l

In this example, a single polynomial is expressive enough for the whole buffer; we can thus safely ignore the conditions, knowing by construction that we will not index any value out of those bounds. An interesting side effect of our design is that the generated piecewise quasi-polynomials are readily expressed in a form where the slowest-varying indices are evaluated the deepest in the expression, making them amenable to effective loop-invariant code hoisting.
For example, in P_B, the most product-intensive term (-1/2 Q + Q · N) can be hoisted out of all loops in the computation, as it only depends on the sizes of the tensors. As our experimental results show, our symbolic indexing algorithm significantly reduces the memory footprint. Furthermore, we observed performance improvements thanks to improved cache locality and newly enabled vectorization opportunities. Finally, we have not observed significant performance overhead caused by the index computation time on the selected motivating cases.

§ PROGRESSIVE CODE GENERATION

Finally, starts generating code using the MLIR framework. MLIR was chosen as the compiler infrastructure for this work for its ability to express high-level intermediate representations (IR) and lower them progressively to LLVM IR, a low-level IR that the Clang compiler can compile down to a binary. This enables MLIR to provide many optimizations at a high level, where the information needed to apply them safely is still present.

§.§ MLIR Background

MLIR <cit.> is a compiler framework successfully applied to the development of compilers for deep learning <cit.>, fully homomorphic encryption <cit.>, and hardware design <cit.>. It also opens up further opportunities, such as compiling our high-level IR down to GPU kernels or other targets instead of generating specialized code and reinventing the wheel, but this is left as future work. The elementary objects of MLIR IR are Operations, Values, Regions, and Attributes. Values represent runtime values and are constrained by the Static Single-Assignment (SSA) form: they can only be assigned once, at definition time. Operations may take Values as input and may define new Values as output. They can be augmented with Attributes, which represent various kinds of compile-time information. Finally, Operations may contain Regions, which include lists of Operations and possibly entry arguments: Values defined in the region that take their runtime value from the operation. An example is given in Figure <ref>. func.func is an Operation representing a function definition. It has a Region representing its body and a function type. In its region, arith.constant is an operation that only defines a value from a compile-time-known value and type. arith.addi takes this value and the function's argument and returns their sum as a new value. Finally, func.return only consumes this value to represent returning it from the function. MLIR implements this idea of mixing levels of abstraction through dialects: collections of operations and types that represent some level of abstraction but can be combined in a program to allow progressive lowering of different parts in a controllable manner. Operation names are prefixed by their dialect's name: in Figure <ref>, arith and func are used.

§.§ Affine code generation

Parallelization drives isl to generate an AST expressing parallelism over the outermost index if it is not a reduction. We believe that parallel code generation is a benefit of our densely packed data layout; while similar compaction could be achieved by simple linear index increments judiciously placed in the loop nest, those would effectively introduce loop-carried dependencies and necessitate further analysis or treatment to parallelize correctly. 's symbolic indexing, on the other hand, only depends on the current iterators and symbols of the computation; once inferred, it remains a correct mapping however the loops are parallelized or further transformed.
Thus, parallelism, loop transformations, and the use of compression in the generated code become completely orthogonal. Our experiments demonstrate the effectiveness of our compression scheme with simple parallelization (cf. Section <ref>).

Control Flow Code Generation The first step is obtaining from isl an Abstract Syntax Tree (AST) that scans the iteration domain; an example is shown in Figure <ref>. By construction, this AST is guaranteed to have affine bounds and increments. uses MLIR's affine dialect, which expresses affine loops, affine conditions, and affine accesses, to generate IR at the abstraction level closest to its internal one. The main benefit of this approach is that does not need to re-implement the basics: it simply translates affine expressions from isl's representation to MLIR's and defines loops using them. MLIR is equipped to lower those expressions to arithmetic computations and has advanced infrastructure to simplify those computations at a lower level. Finally, generates code corresponding to each statement. For the computation expressed in the language, that means loading each value designated on the right-hand side, multiplying them, and adding that product to the loaded left-hand-side value. Memory accesses are translated in two steps to simplify the use of the compressed indexing polynomial, as explained below.

Uncompressed Compute Code Generation The first step is to apply isl's schedule, an affine mapping from the current iteration to the values to pass to the statement. In the example given in Figure <ref>, this schedule is the identity θ(i,j,k,l) = (i,j,k,l). MLIR's affine dialect provides an operation, affine.apply, that applies a given affine mapping to a list of indices, so we use it to define new values corresponding to the scheduled access variables. It then remains to generate the product and accumulation code from the corresponding accesses in the input rule, using MLIR's basic arithmetic operations. This is all illustrated in Figure <ref>.
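To illustrate how the pieces fit together, the following hand-written C++ sketch (not the actual MLIR-generated output) shows the kind of loop nest that the compressed index P_B enables for the running TTM example, assuming the iteration space 0 ≤ i < M, i ≤ j < N, 0 ≤ k < P, 0 ≤ l < Q and, for readability, an uncompressed output A:

#include <vector>

// TTM with an upper-half-cube input:  A(i,j,k) += B(i,j,l) * C(k,l)  for i <= j.
// B is stored densely packed using P_B(i,j,l) = ((N - 1/2)Q)·i - (Q/2)·i^2 + Q·j + l,
// computed here in integer arithmetic as Q*N*i - Q*i*(i+1)/2 + Q*j + l.
// A is kept uncompressed (dense M x N x P) for readability and is assumed to be
// zero-initialized by the caller; C is dense P x Q.
void ttm_upper_half_cube(int M, int N, int P, int Q,
                         const std::vector<double>& B_packed,
                         const std::vector<double>& C,
                         std::vector<double>& A) {
  // The i loop could be parallelized (e.g., with OpenMP), since the packed index
  // depends only on the iterators and the sizes, not on loop-carried state.
  for (int i = 0; i < M; ++i) {
    // Part of P_B that depends only on i and the sizes, hoisted out of the inner loops.
    long long rowBase = (long long)Q * N * i - (long long)Q * i * (i + 1) / 2;
    for (int j = i; j < N; ++j) {
      long long base = rowBase + (long long)Q * j;  // = P_B(i, j, 0)
      for (int k = 0; k < P; ++k) {
        double acc = 0.0;
        for (int l = 0; l < Q; ++l)
          acc += B_packed[base + l] * C[(long long)k * Q + l];  // unit-stride in l
        A[((long long)i * N + j) * P + k] += acc;
      }
    }
  }
}

In the actual pipeline, the analogous polynomial evaluations are generated from the compressed index and hoisted as high as possible in the loop nest by the MLIR passes described above, rather than being written by hand as in this sketch.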